A container's on-disk files are ephemeral, which creates several problems. First, when a container crashes, the kubelet restarts it, but the data is lost along with the dead container. Second, when multiple containers run in the same Pod, they often need to share data. Kubernetes Volumes solve these problems.
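To make the data-sharing point concrete, here is a minimal sketch (the Pod, container, and volume names are hypothetical) of two containers in one Pod exchanging files through a shared emptyDir volume:

apiVersion: v1
kind: Pod
metadata:
  name: shared-demo               # hypothetical name
spec:
  volumes:
  - name: shared                  # hypothetical name
    emptyDir: {}
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html
  - name: writer                  # sidecar that produces the content nginx serves
    image: busybox
    command: ["sh", "-c", "echo shared-data > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data

Both containers see the same directory, so whatever the writer puts under /data is served by nginx from /usr/share/nginx/html.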
This post demonstrates four kinds of Kubernetes Volumes: emptyDir, hostPath, nfs, and PersistentVolume/PersistentVolumeClaim.
emptyDir

Step 1: Write the YAML file
╭─root@node1 ~
╰─➤ vim nginx-empty.yml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: du                        # must match the volume name below
      mountPath: /usr/share/nginx/html
  volumes:
  - name: du                          # must match the volumeMount name above
    emptyDir: {}
Step 2: Apply the YAML file
╭─root@node1 ~
╰─➤ kubectl apply -f nginx-empty.yml
Step 3: Check the Pod
╭─root@node1 ~
╰─➤ kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          7m18s   10.244.2.14   node3   <none>           <none>
Step 4: Inspect the container on node3
# docker ps   (run first to find the container ID)
╭─root@node3 ~
╰─➤ docker inspect 9c3ed074fb29 | grep "Mounts" -A 8
"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du",
        "Destination": "/usr/share/nginx/html",
        "Mode": "Z",
        "RW": true,
        "Propagation": "rprivate"
    },
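The long UID in that Source path is the Pod's UID, so an emptyDir volume lives under /var/lib/kubelet/pods/<pod-uid>/volumes/ on the node. As a sketch, you can confirm this from the control node:

kubectl get pod nginx -o jsonpath='{.metadata.uid}'
# should print 2ab6183c-eddd-44eb-9e62-ded5106d1d1a, the UID embedded in the Source path above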
Step 5: Write content into the volume
╭─root@node3 ~
╰─➤ cd /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du
╭─root@node3 /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du
╰─➤ ls
╭─root@node3 /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du
╰─➤ echo "empty test" >> index.html
Step 6: Access the Pod
╭─root@node1 ~
╰─➤ curl 10.244.2.14
empty test
Step 7: Stop the container
╭─root@node3 ~
╰─➤ docker stop 9c3ed074fb29
9c3ed074fb29
Step 8: Check the newly started container (the kubelet restarts it automatically)
╭─root@node3 ~
╰─➤ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS   NAMES
14ca410ad737   5a3221f0137b   "nginx -g 'daemon of…"   About a minute ago   Up About a minute           k8s_nginx_nginx_default_2ab6183c-eddd-44eb-9e62-ded5106d1d1a_1
Step 9: Check the Pod info and access it again
╭─root@node1 ~
╰─➤ kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   1          40m   10.244.2.14   node3   <none>           <none>
╭─root@node1 ~
╰─➤ curl 10.244.2.14
empty test
Step 10: Delete the Pod
╭─root@node1 ~
╰─➤ kubectl delete pod nginx
pod "nginx" deleted
Step 11: Check the emptyDir directory
╭─root@node3 ~
╰─➤ ls /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du
ls: cannot access /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du: No such file or directory
emptyDir experiment summary

An emptyDir volume lives exactly as long as its Pod: the data survived the container restart (steps 7 through 9 still served "empty test"), but deleting the Pod removed the directory from the node entirely (step 11).
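emptyDir also supports a memory-backed variant if the scratch data should live in RAM (tmpfs) rather than on the node's disk; a minimal sketch of the relevant fields (the sizeLimit value is illustrative):

volumes:
- name: du
  emptyDir:
    medium: Memory      # back the volume with tmpfs instead of node disk
    sizeLimit: 64Mi     # optional cap on the volume's size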
hostPath

The effect is equivalent to running: docker run -v /tmp:/usr/share/nginx/html
Step 1: Write the YAML file
apiVersion: v1
kind: Pod
metadata:
  name: nginx2                        # matches the "pod/nginx2 created" output below
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: du                        # must match the volume name below
      mountPath: /usr/share/nginx/html
  volumes:
  - name: du                          # must match the volumeMount name above
    hostPath:
      path: /tmp
Step 2: Apply the YAML file
╭─root@node1 ~
╰─➤ kubectl apply -f nginx-hostP.yml
pod/nginx2 created
Step 3: Check the Pods
╭─root@node1 ~
╰─➤ kubectl get pod -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx2   1/1     Running   0          55s   10.244.2.15   node3   <none>           <none>
Step 4: Write a test file on the node where the Pod runs
╭─root@node3 ~
╰─➤ echo "hostPath test " >> /tmp/index.html
Step 5: Access the Pod
╭─root@node1 ~
╰─➤ curl 10.244.2.15
hostPath test
Step 6: Delete the Pod
╭─root@node1 ~
╰─➤ kubectl delete -f nginx-hostP.yml
pod "nginx2" deleted
Step 7: Check the test file
╭─root@node3 ~
╰─➤ cat /tmp/index.html
hostPath test
hostPath experiment summary

With hostPath, the data lives on the node's own filesystem (/tmp on node3 here), so it survives Pod deletion (step 7). The trade-off is that the data is tied to that node: if the Pod is rescheduled elsewhere, it will not see the same files.
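hostPath also takes an optional type field that validates (or creates) the host path before mounting; a sketch using the same layout as above:

volumes:
- name: du
  hostPath:
    path: /tmp
    type: DirectoryOrCreate   # create the directory on the node if it does not already exist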
nfs

Step 1: Deploy NFS (install the NFS packages on every node)
╭─root@node1 ~
╰─➤ yum install nfs-utils rpcbind -y
╭─root@node1 ~
╰─➤ cat /etc/exports
/tmp *(rw)
╭─root@node1 ~
╰─➤ chown -R nfsnobody: /tmp
╭─root@node1 ~
╰─➤ systemctl restart nfs rpcbind
----------------------------------------------
╭─root@node2 ~
╰─➤ yum install nfs-utils -y
----------------------------------------------
╭─root@node3 ~
╰─➤ yum install nfs-utils -y
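Before wiring NFS into a Pod, it is worth verifying the export from a worker node; a quick sketch, assuming the server IP used in the YAML below (192.168.137.3):

showmount -e 192.168.137.3                            # should list /tmp in the export list
mount -t nfs 192.168.137.3:/tmp /mnt && umount /mnt   # optional manual mount test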
Step 2: Write the YAML file
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: du                        # must match the volume name below
      mountPath: /usr/share/nginx/html
  volumes:
  - name: du                          # must match the volumeMount name above
    nfs:
      path: /tmp
      server: 192.168.137.3
Step 3: Apply the YAML file
╭─root@node1 ~
╰─➤ kubectl apply -f nginx-nfs.yml
pod/nginx2 created
Step 4: Check the Pod info
╭─root@node1 ~
╰─➤ kubectl get po -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx2   1/1     Running   0          5m46s   10.244.2.16   node3   <none>           <none>
Step 5: Write a test file
╭─root@node1 ~
╰─➤ echo "nfs-test" >> /tmp/index.html
Step 6: Access the Pod
╭─root@node1 ~
╰─➤ curl 10.244.2.16
nfs-test
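Note that the test file was written on node1 (the NFS server) while the Pod runs on node3, which is exactly what hostPath could not do. As a sketch, assuming df is available in the nginx image, you can confirm the NFS mount from inside the Pod:

kubectl exec nginx2 -- df -h /usr/share/nginx/html
# the Filesystem column should show 192.168.137.3:/tmp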
PV and PVC

Step 1: Deploy NFS
(Omitted; same as the nfs section above.)
Step 2: Write the YAML file for the PV and PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /tmp
    server: 192.168.137.3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc1
spec:
  accessModes:
  - ReadWriteMany
  volumeName: mypv
  resources:
    requests:
      storage: 1Gi
There are three accessModes:
- ReadWriteOnce (RWO): the volume can be mounted read-write by a single node
- ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes
- ReadWriteMany (RWX): the volume can be mounted read-write by many nodes
There are three reclaim policies:
- Retain: keep the volume and its data after the claim is released; an administrator reclaims it manually
- Recycle: scrub the data (rm -rf) and make the volume available again (deprecated)
- Delete: delete the underlying storage asset along with the PV
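The policy is set per PV through spec.persistentVolumeReclaimPolicy; a manually created PV like the one above defaults to Retain, as the kubectl get pv output below shows. As a sketch, it can also be changed on an existing PV:

kubectl patch pv mypv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'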
Step 3: Apply the YAML file to create the PV and PVC
╭─root@node1 ~
╰─➤ vim pv-pvc.yml
╭─root@node1 ~
╰─➤ kubectl apply -f pv-pvc.yml
persistentvolume/mypv created
persistentvolumeclaim/mypvc1 created
Step 4: Check the PV and PVC
╭─root@node1 ~
╰─➤ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
mypv   1Gi        RWX            Retain           Bound    default/mypvc1                           68s
╭─root@node1 ~
╰─➤ kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc1   Bound    mypv     1Gi        RWX                           78s
Step 5: Write the YAML file for the nginx Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx3
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: du                        # must match the volume name below
      mountPath: /usr/share/nginx/html
  volumes:
  - name: du                          # must match the volumeMount name above
    persistentVolumeClaim:
      claimName: mypvc1
Step 6: Apply the YAML file
╭─root@node1 ~
╰─➤ kubectl apply -f nginx-pv.yml
pod/nginx3 created
Step 7: Check the Pod
╭─root@node1 ~
╰─➤ kubectl get pod -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx3   1/1     Running   0          3m45s   10.244.2.17   node3   <none>           <none>
Step 8: Write the file to serve and access the Pod
╭─root@node1 ~
╰─➤ echo "pv test" > /tmp/index.html
╭─root@node1 ~
╰─➤ curl 10.244.2.17
pv test
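With the Retain policy shown earlier, releasing the claim does not scrub the data; a sketch of the teardown sequence and what to expect:

kubectl delete pod nginx3
kubectl delete pvc mypvc1
kubectl get pv            # mypv should now show STATUS Released rather than Available
kubectl delete pv mypv    # /tmp on the NFS server keeps its contents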
Force-deleting a Pod stuck in Terminating

My Pod here depends on NFS; a problem with the NFS mount broke the Pod, and after running the delete command the Pod sat in the Terminating state indefinitely.
In this situation you can use a force-delete command:
kubectl delete pod [pod name] --force --grace-period=0 -n [namespace]
# Note: you must pass -n to specify the namespace, otherwise you may get "pod not found"
Demo:
╭─root@node1 ~
╰─➤ kubectl get pod
NAME     READY   STATUS        RESTARTS   AGE
nginx3   0/1     Terminating   0          7d17h
╭─root@node1 ~
╰─➤ kubectl delete pod nginx3 --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "nginx3" force deleted
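If a force delete still leaves the Pod stuck, clearing its finalizers usually unblocks it; a sketch (use with care, since it skips normal cleanup):

kubectl patch pod nginx3 -p '{"metadata":{"finalizers":null}}' -n default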