[TOC]
Background: after running `helm upgrade` to update `stable/sonatype-nexus` from version 1.6 to 1.13, the PVC was deleted and a new one was created in its place. Fortunately the original PV's reclaim policy was Retain, which prompted this study of how to recover data from a retained PV.
Goal: after a PVC is deleted, the PV stays behind in the Released state because of the Retain policy; recover the data by rebinding that PV to a new PVC and mounting it into a Pod.
Environment: Kubernetes with a Ceph RBD StorageClass (`ceph-rbd`, provisioner `ceph.com/rbd`).
Prepare the YAML files:

pvc.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
```
nginx.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ceph-rbd-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: ceph-rbd-volume
        persistentVolumeClaim:
          claimName: pvc-test
```
Create the PVC and Deployment, write some data, then delete the PVC:
```bash
[root@lab1 test]# ll
total 8
-rw-r--r-- 1 root root 533 Oct 24 17:54 nginx.yaml
-rw-r--r-- 1 root root 187 Oct 24 17:55 pvc.yaml
[root@lab1 test]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc-test created
[root@lab1 test]# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test   Bound    pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            ceph-rbd       7s
[root@lab1 test]# kubectl apply -f nginx.yaml
deployment.extensions/nginx-rbd created
[root@lab1 test]# kubectl get pod | grep nginx-rbd
nginx-rbd-7c6449886-thv25   1/1     Running   0          33s
[root@lab1 test]# kubectl exec -it nginx-rbd-7c6449886-thv25 -- /bin/bash -c 'echo ygqygq2 > /usr/share/nginx/html/ygqygq2.html'
[root@lab1 test]# kubectl exec -it nginx-rbd-7c6449886-thv25 -- cat /usr/share/nginx/html/ygqygq2.html
ygqygq2
[root@lab1 test]# kubectl delete -f nginx.yaml
deployment.extensions "nginx-rbd" deleted
[root@lab1 test]# kubectl get pvc pvc-test
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test   Bound    pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            ceph-rbd       4m10s
[root@lab1 test]# kubectl delete pvc pvc-test  # delete the PVC
persistentvolumeclaim "pvc-test" deleted
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM              STORAGECLASS   REASON   AGE
pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            Retain           Released   default/pvc-test   ceph-rbd                4m33s
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938 -o yaml > /tmp/pvc-069c4486-d773-11e8-bd12-000c2931d938.yaml  # save a copy for later
```
As shown above, once the PVC is deleted the PV moves to the Released state.
Recreate a PVC with the same name and check whether it binds to the original PV:
```bash
[root@lab1 test]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc-test created
[root@lab1 test]# kubectl get pvc  # inspect the newly created PVC
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test   Bound    pvc-f2df48ea-d773-11e8-b6c8-000c29ea3e30   1Gi        RWO            ceph-rbd       19s
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938  # inspect the original PV
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM              STORAGECLASS   REASON   AGE
pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            Retain           Released   default/pvc-test   ceph-rbd                7m18s
[root@lab1 test]#
```
As you can see, the new PVC was bound to a brand-new PV, because the original PV's state is not Available.
So how do we get the PV back to Available? Let's look at the saved PV manifest:
```bash
[root@lab1 test]# cat /tmp/pvc-069c4486-d773-11e8-bd12-000c2931d938.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/rbd
    rbdProvisionerIdentity: ceph.com/rbd
  creationTimestamp: 2018-10-24T09:56:06Z
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-069c4486-d773-11e8-bd12-000c2931d938
  resourceVersion: "11752758"
  selfLink: /api/v1/persistentvolumes/pvc-069c4486-d773-11e8-bd12-000c2931d938
  uid: 06b57ef7-d773-11e8-bd12-000c2931d938
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-test
    namespace: default
    resourceVersion: "11751559"
    uid: 069c4486-d773-11e8-bd12-000c2931d938
  persistentVolumeReclaimPolicy: Retain
  rbd:
    fsType: ext4
    image: kubernetes-dynamic-pvc-06a25bd3-d773-11e8-8c3e-0a580af400d5
    keyring: /etc/ceph/keyring
    monitors:
    - 192.168.105.92:6789
    - 192.168.105.93:6789
    - 192.168.105.94:6789
    pool: kube
    secretRef:
      name: ceph-secret
      namespace: kube-system
    user: kube
  storageClassName: ceph-rbd
status:
  phase: Released
```
Notice that the `spec.claimRef` section still holds the binding information of the old PVC.
Let's go ahead and delete the whole `spec.claimRef` section, then check the PV again:

```bash
kubectl edit pv pvc-069c4486-d773-11e8-bd12-000c2931d938
```
```bash
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            Retain           Available           ceph-rbd                10m
```
As shown above, the original PV pvc-069c4486-d773-11e8-bd12-000c2931d938 has become Available.
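The interactive `kubectl edit` above can also be done non-interactively. A minimal sketch (a hypothetical one-liner, not part of the original walkthrough) that removes `spec.claimRef` with a JSON patch:

```bash
# Remove the stale claimRef so the controller marks the PV Available again.
# The PV name here is the one from this walkthrough; adjust for your cluster.
kubectl patch pv pvc-069c4486-d773-11e8-bd12-000c2931d938 \
  --type=json -p='[{"op": "remove", "path": "/spec/claimRef"}]'
```

Either way, the effect is the same: with no claim reference left, the PV controller transitions the volume from Released to Available.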
Create a new PVC and Deployment again, and check the data:
new_pvc.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-new
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
```
new_nginx.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ceph-rbd-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: ceph-rbd-volume
        persistentVolumeClaim:
          claimName: pvc-test-new
```
操做過程:
```bash
[root@lab1 test]# kubectl apply -f new_pvc.yaml
persistentvolumeclaim/pvc-test-new created
[root@lab1 test]# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test       Bound    pvc-f2df48ea-d773-11e8-b6c8-000c29ea3e30   1Gi        RWO            ceph-rbd       31m
pvc-test-new   Bound    pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            ceph-rbd       27m
[root@lab1 test]# kubectl apply -f new_nginx.yaml
[root@lab1 test]# kubectl get pod | grep nginx-rbd
nginx-rbd-79bb766b6c-mv2h8   1/1     Running   0          20m
[root@lab1 test]# kubectl exec -it nginx-rbd-79bb766b6c-mv2h8 -- ls /usr/share/nginx/html
lost+found  ygqygq2.html
[root@lab1 test]# kubectl exec -it nginx-rbd-79bb766b6c-mv2h8 -- cat /usr/share/nginx/html/ygqygq2.html
ygqygq2
```
As shown above, the new PVC was bound to the original PV pvc-069c4486-d773-11e8-bd12-000c2931d938, and the data is fully intact.
In the current Kubernetes version, storage size is the only resource a PVC can set or request. Since we did not change the requested size, once the PV is Available, a PVC requesting the same size matches it and the binding succeeds.
The key to getting the PV back to Available is the PV's `spec.claimRef` field, which records the binding to the original PVC; deleting that binding information releases the PV so it becomes Available again.
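Note that matching by size and StorageClass alone is first-come-first-served, so another pending PVC could grab the freed PV first. To pin the new claim to this exact PV, the PVC can set `spec.volumeName`; a sketch reusing the names from this walkthrough:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-new
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  # Pin this claim to the recovered PV instead of relying on the matcher.
  volumeName: pvc-069c4486-d773-11e8-bd12-000c2931d938
  resources:
    requests:
      storage: 1Gi
```

With `volumeName` set, the claim binds only to that specific PV (or stays Pending if the PV is unavailable).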