This article builds on an environment of "create a PV, create a PVC bound to the PV, create a Pod that mounts the PVC", runs a series of deletion experiments against it, and records and analyzes the resulting state of each resource.
The experiment creates one PV, one PVC bound to that PV, and one Pod that mounts the PVC. Two small scripts were written to quickly create and tear down the environment. The scripts are shown below:
Note that when creating a PV, Kubernetes does not check whether the configured server actually exists, whether a usable NFS service is running on it, or whether the declared storage size is really available. The PV spec is essentially just a declaration; the system performs no validation against the backing storage. Likewise, PVC binding only filters candidate PVs by requested capacity and access mode, and binds to the PV that satisfies the claim at the smallest cost.
[root@k8s-master pv]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/disk1"
    server: 192.168.20.47
    readOnly: false
[root@k8s-master pv]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
[root@k8s-master pv]# cat test-pvc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pvc
  labels:
    name: test-nfs-pvc
spec:
  containers:
  - name: test-nfs-pvc
    image: registry:5000/back_demon:1.0
    ports:
    - name: backdemon
      containerPort: 80
    command:
    - /run.sh
    volumeMounts:
    - name: nfs-vol
      mountPath: /home/laizy/test/nfs-pvc
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-pvc
[root@k8s-master pv]# cat start.sh
#!/bin/bash
kubectl create -f nfs-pv.yaml
kubectl create -f pvc.yaml
kubectl create -f test-pvc-pod.yaml
[root@k8s-master pv]# cat remove.sh
#!/bin/bash
kubectl delete pod test-nfs-pvc
kubectl delete persistentvolumeclaim nfs-pvc
kubectl delete persistentvolume pv0001
[root@k8s-master pv]#
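The binding rule described above (filter PVs by access mode and requested size, then take the smallest fit) can be illustrated with a toy shell pipeline. The PV names, sizes, and modes below are made up for the example; the real binder in kube-controller-manager also considers selectors, storage classes, and pre-bound volumes:

```shell
# Toy model of PVC->PV binding: keep PVs whose access mode matches and
# whose capacity covers the request, then pick the smallest candidate.
request_gi=1
want_mode=RWX
printf '%s\n' \
  'pv0001 5 RWX' \
  'pv0002 10 RWO' \
  'pv0003 20 RWX' |
awk -v req="$request_gi" -v mode="$want_mode" '$3 == mode && $2 >= req' |
sort -k2 -n | head -n1
# -> pv0001 5 RWX  (1Gi requested, but the smallest matching PV is 5Gi)
```

This also shows why the 1Gi claim in pvc.yaml ends up bound to the 5Gi pv0001: capacity is a lower bound for the match, not an exact allocation.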
Create the PV, create the PVC bound to the PV, create the Pod mounting the PVC, then delete the PV. After the PV is deleted, the PVC status changes from Bound to Lost, but the volume inside the Pod is still usable and its data is not deleted.
[root@k8s-master pv]# ./start.sh
persistentvolume "pv0001" created
persistentvolumeclaim "nfs-pvc" created
pod "test-nfs-pvc" created
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv0001    5Gi        RWX           Recycle         Bound     default/nfs-pvc             15s
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Bound     pv0001    5Gi        RWX           18s
[root@k8s-master pv]# kubectl get pod test-nfs-pvc
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          39s
[root@k8s-master pv]# kubectl delete persistentvolume pv0001
persistentvolume "pv0001" deleted
[root@k8s-master pv]# kubectl get persistentvolume
No resources found.
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Lost      pv0001    0                        1m
[root@k8s-master pv]# kubectl get pod test-nfs-pvc
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          1m
[root@k8s-master pv]# kubectl exec -ti test-nfs-pvc /bin/bash
[root@test-nfs-pvc /]# cd /home/laizy/test/nfs-pvc/
[root@test-nfs-pvc nfs-pvc]# ls
2.out
[root@test-nfs-pvc nfs-pvc]# exit
exit
[root@k8s-master pv]#
If a PV's reclaim policy is set to Recycle, then when the PVC is deleted the system (controller-manager) starts a recycler Pod that scrubs the contents of the volume. Each volume type has a different recycler Pod with its own specific logic. Taking NFS as an example, the concrete configuration and Pod spec are as follows:
[root@k8s-master ~]# cat /etc/kubernetes/controller-manager    # restart controller-manager after changing this file
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/recycler.yaml"
[root@k8s-master ~]# cat /etc/kubernetes/recycler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-recycler-
  namespace: default
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    hostPath:
      path: /any/path/it/will/be/replaced
  containers:
  - name: pv-recycler
    image: "docker.io/busybox"
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
    volumeMounts:
    - name: vol
      mountPath: /scrub
[root@k8s-master ~]#
Without this setting, the default recycler Pod uses the busybox image from gcr, which for various reasons cannot be pulled from inside mainland China. Even if the gcr busybox image is already present on the node, that alone is not enough: the image pull policy must also be set to IfNotPresent, otherwise kubelet keeps querying gcr for a newer version of the image, which likewise ends in an image pull error. So the controller-manager must be configured as above, pointing at whichever busybox image you can actually pull.
Create the PV, create the PVC bound to the PV, create the Pod mounting the PVC, then delete the PVC. After the PVC is deleted, the PV status changes from Bound to Available, and the system (controller-manager) invokes the persistent-storage cleanup plugin (recycler-for-pv0001) to empty the PV that backed the PVC. The volume inside the Pod is still usable, but the data in it has been deleted.
[root@k8s-master pv]# ./start.sh
persistentvolume "pv0001" created
persistentvolumeclaim "nfs-pvc" created
pod "test-nfs-pvc" created
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv0001    5Gi        RWX           Recycle         Bound     default/nfs-pvc             11s
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Bound     pv0001    5Gi        RWX           14s
[root@k8s-master pv]# kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          19s
[root@k8s-master pv]# kubectl exec -ti test-nfs-pvc /bin/bash
[root@test-nfs-pvc /]# touch /home/laizy/test/nfs-pvc/1.out
[root@test-nfs-pvc /]# cd /home/laizy/test/nfs-pvc/
[root@test-nfs-pvc nfs-pvc]# ls
1.out
[root@test-nfs-pvc nfs-pvc]# exit
exit
[root@k8s-master pv]# kubectl delete persistentvolumeclaim nfs-pvc
persistentvolumeclaim "nfs-pvc" deleted
[root@k8s-master pv]# kubectl get pod
NAME                  READY     STATUS              RESTARTS   AGE
recycler-for-pv0001   0/1       ContainerCreating   0          1s
test-nfs-pvc          1/1       Running             0          1m
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
pv0001    5Gi        RWX           Recycle         Available                       1m
[root@k8s-master pv]# kubectl get persistentvolumeclaim
No resources found.
[root@k8s-master pv]# kubectl get pod test-nfs-pvc
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          2m
[root@k8s-master pv]# kubectl exec -ti test-nfs-pvc /bin/bash
[root@test-nfs-pvc /]# ls /home/laizy/test/nfs-pvc/
[root@test-nfs-pvc /]#
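If the data is supposed to survive PVC deletion, Recycle is the wrong policy. A variant of the nfs-pv.yaml above using Retain instead is sketched below: with Retain, deleting the PVC moves the PV to the Released state, no recycler Pod is started, and the data stays on the NFS export until an administrator cleans it up and deletes or recreates the PV by hand.

```yaml
# Same PV as nfs-pv.yaml, but the reclaim policy keeps the data.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: "/data/disk1"
    server: 192.168.20.47
    readOnly: false
```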
Create the PV, create the PVC bound to the PV, create the Pod mounting the PVC, then delete the Pod. After the Pod is deleted, the PV and PVC states are unchanged, and the NFS data behind the Pod's volume is not deleted.
[root@k8s-master pv]# ./start.sh
persistentvolume "pv0001" created
persistentvolumeclaim "nfs-pvc" created
pod "test-nfs-pvc" created
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv0001    5Gi        RWX           Recycle         Bound     default/nfs-pvc             11s
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Bound     pv0001    5Gi        RWX           27s
[root@k8s-master pv]# kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          36s
[root@k8s-master pv]# kubectl exec -ti test-nfs-pvc /bin/bash
[root@test-nfs-pvc /]# cat /home/laizy/test/nfs-pvc/1.out
123456
[root@test-nfs-pvc /]# exit
exit
[root@k8s-master pv]# kubectl delete pod test-nfs-pvc
pod "test-nfs-pvc" deleted
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv0001    5Gi        RWX           Recycle         Bound     default/nfs-pvc             8m
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Bound     pv0001    5Gi        RWX           8m
[root@k8s-master pv]# ssh 192.168.20.47    # log in to the remote NFS server
root@192.168.20.47's password:
Last failed login: Mon Mar 27 14:37:19 CST 2017 from :0 on :0
There was 1 failed login attempt since the last successful login.
Last login: Mon Mar 20 10:49:18 2017
[root@localhost ~]# cat /data/disk1/1.out
123456