[TOC]
The latest stable Kubernetes release at the time of writing is 1.14. So far there is no official solution for migrating persistent storage between different Kubernetes clusters. But from how Kubernetes binds PVs and PVCs, as long as the "storage" --> "PV" --> "PVC" binding relationship is kept identical, different Kubernetes clusters can mount the same storage and see the same data.
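For instance, the binding chain can be inspected with plain kubectl (a quick sketch; `<claim-name>` and `<pv-name>` are placeholders):

```bash
# Which PV is the claim bound to?
kubectl get pvc <claim-name> -o jsonpath='{.spec.volumeName}'

# Which Ceph RBD image backs that PV, and which claim does it point back to?
kubectl get pv <pv-name> -o jsonpath='{.spec.rbd.image}{"\n"}{.spec.claimRef.name}{"\n"}'
```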
My original Kubernetes cluster was self-built on Alibaba Cloud ECS, and I now want to switch to Alibaba Cloud's managed Kubernetes. Since many applications in the cluster use small volumes such as 1G or 2G, I still want to keep using the existing Ceph storage.
Kubernetes: v1.13.4
Ceph: 12.2.10 luminous (stable)
Both Kubernetes clusters manage storage through a storageclass and connect to the same Ceph cluster. See: Kubernetes使用Ceph動態卷部署應用 (deploying applications with Ceph dynamic volumes).
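As a rough sketch of such a StorageClass, pieced together from the PV shown later in this post (provisioner `ceph.com/rbd`, pool `kube`, user `kube`; the admin secret name is an assumption and your monitor addresses will differ):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 172.18.43.220:6789,172.18.138.121:6789,172.18.228.201:6789
  pool: kube
  adminId: admin
  adminSecretName: ceph-secret-admin       # assumed name
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret
  imageFormat: "2"
  imageFeatures: layering
```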
The data itself stays in the Ceph storage; nothing is actually moved. The "migration" is only from the point of view of the two Kubernetes clusters.
To make the effect easy to observe, create a new nginx Deployment that uses a Ceph RBD as persistent storage, then write some data into it.
```bash
vim rbd-claim.yaml
```

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
```
```bash
vim rbd-nginx-dy.yaml
```

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd-dy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: ceph-cephfs-volume
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: ceph-cephfs-volume
          persistentVolumeClaim:
            claimName: rbd-pv-claim
```
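Note that `extensions/v1beta1` Deployments were removed in later Kubernetes releases; on a newer cluster an equivalent `apps/v1` manifest would also need a `selector`, roughly:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rbd-dy
spec:
  replicas: 1
  selector:
    matchLabels:        # required in apps/v1
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: ceph-cephfs-volume
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: ceph-cephfs-volume
          persistentVolumeClaim:
            claimName: rbd-pv-claim
```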
```bash
# Create the PVC and the Deployment
kubectl create -f rbd-claim.yaml
kubectl create -f rbd-nginx-dy.yaml
```
Check the result and write some data into nginx's persistent directory:
```bash
[root@node5 tmp]# kubectl get pvc,pod
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/rbd-pv-claim   Bound    pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   1Gi        RWO            ceph-rbd       4m37s

NAME                                READY   STATUS    RESTARTS   AGE
pod/nginx-rbd-dy-7455884d49-rthzt   1/1     Running   0          4m36s
[root@node5 tmp]# kubectl exec -it nginx-rbd-dy-7455884d49-rthzt /bin/bash
root@nginx-rbd-dy-7455884d49-rthzt:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          40G   23G   15G  62% /
tmpfs            64M     0   64M   0% /dev
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda1        40G   23G   15G  62% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/rbd5       976M  2.6M  958M   1% /usr/share/nginx/html
tmpfs            16G   12K   16G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            16G     0   16G   0% /proc/acpi
tmpfs            16G     0   16G   0% /proc/scsi
tmpfs            16G     0   16G   0% /sys/firmware
root@nginx-rbd-dy-7455884d49-rthzt:/# echo ygqygq2 > /usr/share/nginx/html/ygqygq2.html
root@nginx-rbd-dy-7455884d49-rthzt:/# exit
exit
[root@node5 tmp]#
```
Extract the PV and PVC information:
```bash
[root@node5 tmp]# kubectl get pvc rbd-pv-claim -oyaml --export > rbd-pv-claim-export.yaml
[root@node5 tmp]# kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -oyaml --export > pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
[root@node5 tmp]# more rbd-pv-claim-export.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pvc-protection
  name: rbd-pv-claim
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/rbd-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
  volumeMode: Filesystem
  volumeName: pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
status: {}
[root@node5 tmp]# more pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/rbd
    rbdProvisionerIdentity: ceph.com/rbd
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
  selfLink: /api/v1/persistentvolumes/pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: rbd-pv-claim
    namespace: default
    resourceVersion: "51998402"
    uid: d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
  persistentVolumeReclaimPolicy: Retain
  rbd:
    fsType: ext4
    image: kubernetes-dynamic-pvc-dac8284a-6a1c-11e9-b533-1604a9a8a944
    keyring: /etc/ceph/keyring
    monitors:
    - 172.18.43.220:6789
    - 172.18.138.121:6789
    - 172.18.228.201:6789
    pool: kube
    secretRef:
      name: ceph-secret
      namespace: kube-system
    user: kube
  storageClassName: ceph-rbd
  volumeMode: Filesystem
status: {}
[root@node5 tmp]#
```
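As an aside, `kubectl --export` was deprecated in 1.14 and later removed, so on a newer client the cluster-specific fields have to be stripped by hand. A rough equivalent, assuming `yq` v4 is installed (untested here):

```bash
kubectl get pvc rbd-pv-claim -oyaml \
  | yq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .status)' \
  > rbd-pv-claim-export.yaml
```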
Transfer the PV and PVC files extracted above to the new Kubernetes cluster:
```bash
[root@node5 tmp]# rsync -avz pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml rbd-pv-claim-export.yaml rbd-nginx-dy.yaml 172.18.97.95:/tmp/
sending incremental file list
pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
rbd-nginx-dy.yaml
rbd-pv-claim-export.yaml

sent 1,371 bytes  received 73 bytes  2,888.00 bytes/sec
total size is 2,191  speedup is 1.52
[root@node5 tmp]#
```
Import the PV and PVC into the new Kubernetes cluster:
```bash
[root@iZwz9g5ec0q4fc8iuqawr0Z tmp]# kubectl apply -f pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml -f rbd-pv-claim-export.yaml
persistentvolume/pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee created
persistentvolumeclaim/rbd-pv-claim created
[root@iZwz9g5ec0q4fc8iuqawr0Z tmp]# kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                  STORAGECLASS   REASON   AGE
pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   1Gi        RWO            Retain           Released   default/rbd-pv-claim   ceph-rbd                20s
[root@iZwz9g5ec0q4fc8iuqawr0Z tmp]# kubectl get pvc rbd-pv-claim
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pv-claim   Lost     pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   0                         ceph-rbd       28s
[root@iZwz9g5ec0q4fc8iuqawr0Z tmp]#
```
As you can see, the PVC status shows `Lost`. This is because after the PV and PVC are imported into the new Kubernetes cluster, they are automatically assigned a new `resourceVersion` and `uid`, so the `spec.claimRef` of the newly imported PV still points at the old PVC.
To get rid of the stale `spec.claimRef` information on the newly imported PV, we delete that section and let the provisioner rebind the PV and PVC automatically.
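In principle a single JSON patch that removes `claimRef` outright should achieve the same thing (a minimal sketch, not what was used here):

```bash
kubectl patch pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee \
  --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```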
Here we wrap this up in a script:
```bash
vim unbound.sh
```

```bash
pv=$*

function unbound() {
    # Blank out the stale claimRef fields; with every field empty, the API
    # server serializes the section as a single "claimRef: {}" line
    kubectl patch pv -p '{"spec":{"claimRef":{"apiVersion":"","kind":"","name":"","namespace":"","resourceVersion":"","uid":""}}}' \
        $pv
    # Drop the now-empty claimRef line and replace the PV object
    kubectl get pv $pv -oyaml > /tmp/.pv.yaml
    sed '/claimRef/d' -i /tmp/.pv.yaml
    #kubectl apply -f /tmp/.pv.yaml
    kubectl replace -f /tmp/.pv.yaml
}

unbound
```
```bash
sh unbound.sh pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
```
After the script runs, wait about ten seconds and check the result:
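The original status output is not reproduced here; something along these lines should show both objects back in `Bound` status:

```bash
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
kubectl get pvc rbd-pv-claim
```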
Now verify with the `rbd-nginx-dy.yaml` transferred earlier to the new Kubernetes cluster. Before that, because a Ceph RBD is in use (RWO), the pod on the old Kubernetes cluster must first release the RBD:
Old Kubernetes:
```bash
[root@node5 tmp]# kubectl delete -f rbd-nginx-dy.yaml
deployment.extensions "nginx-rbd-dy" deleted
```
New Kubernetes:
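The original output is not reproduced here; the verification went along these lines (the pod name will differ):

```bash
kubectl create -f rbd-nginx-dy.yaml
kubectl get pvc,pod
# The data written on the old cluster should still be in the volume
kubectl exec -it <nginx-rbd-dy-pod> -- cat /usr/share/nginx/html/ygqygq2.html
```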
The experiment above uses an `RWO` PVC. Imagine an `RWX` one shared by several Kubernetes clusters; in that scenario this approach could be even more useful.
When operating Kubernetes, the PV, PVC, and storage information and their binding relationships are critical, so back them up routinely as needed. With such backups, even if the Kubernetes etcd data is damaged, the persistent data can still be recovered and migrated to another Kubernetes cluster.
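A minimal backup sketch along those lines (the output directory is hypothetical; `--export` is used as above, so adjust for newer kubectl):

```bash
#!/bin/bash
# Dump every PV, and every PVC in all namespaces, to per-object YAML files.
backup_dir=/backup/k8s-volumes   # hypothetical path
mkdir -p "$backup_dir"

for pv in $(kubectl get pv -o jsonpath='{.items[*].metadata.name}'); do
    kubectl get pv "$pv" -oyaml --export > "$backup_dir/pv-$pv.yaml"
done

kubectl get pvc --all-namespaces --no-headers \
| while read -r ns pvc _; do
    kubectl get pvc -n "$ns" "$pvc" -oyaml --export > "$backup_dir/pvc-$ns-$pvc.yaml"
done
```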