In this section, Kubernetes uses NFS remote storage to provide dynamic storage provisioning for the pods it hosts: a pod's creator does not need to care where or how the data is stored, and only has to request the amount of space needed.
The overall flow is: set up an NFS server, install the NFS client packages on every worker node, deploy nfs-client-provisioner (RBAC, StorageClass, and Deployment), and finally create a StatefulSet whose volumeClaimTemplates request storage from the new StorageClass.
NFS server IP: 192.168.10.17
```
[root@work03 ~]# yum install nfs-utils rpcbind -y
[root@work03 ~]# systemctl start nfs
[root@work03 ~]# systemctl start rpcbind
[root@work03 ~]# systemctl enable nfs
[root@work03 ~]# systemctl enable rpcbind
[root@work03 ~]# mkdir -p /data/nfs/
[root@work03 ~]# chmod 777 /data/nfs/
[root@work03 ~]# cat /etc/exports
/data/nfs/ 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
[root@work03 ~]# exportfs -arv
exporting 192.168.10.0/24:/data/nfs
[root@work03 ~]# showmount -e localhost
Export list for localhost:
/data/nfs 192.168.10.0/24
```

Export options:

- sync: writes are committed to both the memory buffer and the disk before returning; slower, but guarantees data consistency
- async: writes are first kept in the memory buffer and flushed to disk only when necessary
Install nfs-utils and rpcbind on all worker nodes:
```
yum install nfs-utils rpcbind -y
systemctl start nfs
systemctl start rpcbind
systemctl enable nfs
systemctl enable rpcbind
```
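Before deploying anything to the cluster, it is worth verifying from a worker node that the export is actually reachable. A minimal sanity check (the mount point /mnt/nfs-test is just a throwaway example):

```bash
# Confirm the worker can see the export list
showmount -e 192.168.10.17

# Test-mount the export, write and remove a file, then unmount
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.10.17:/data/nfs /mnt/nfs-test
touch /mnt/nfs-test/test-file && rm /mnt/nfs-test/test-file
umount /mnt/nfs-test
```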
Download and apply the RBAC manifest for nfs-client-provisioner:

```
# wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
# kubectl apply -f rbac.yaml
```
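For reference, the upstream rbac.yaml creates a ServiceAccount for the provisioner plus the cluster-level permissions it needs to manage PersistentVolumes. An abridged sketch of the kind of objects involved (apply the full upstream file, not this excerpt):

```yaml
# Abridged illustration only -- rbac.yaml also defines the ClusterRole,
# a leader-election Role, and the corresponding RoleBinding
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```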
```
# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
```
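With the StorageClass defined, any workload can request NFS-backed storage through an ordinary PVC. As a quick smoke test before building the StatefulSet, a standalone claim like the following (the name test-claim and the size are just examples) should get a volume provisioned automatically:

```yaml
# test-pvc.yaml -- minimal sketch; claim name and size are arbitrary
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```

After kubectl apply, the claim should reach the Bound state and a matching subdirectory should appear under /data/nfs/ on the server.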
```
# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.10.17
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.10.17
            path: /data/nfs
```
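Apply the StorageClass and the provisioner Deployment, then confirm the provisioner pod reaches Running before creating any claims:

```bash
kubectl apply -f class.yaml
kubectl apply -f deployment.yaml

# No PVC can be provisioned until this pod is up
kubectl get pods -l app=nfs-client-provisioner
kubectl get storageclass managed-nfs-storage
```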
```
# cat statefulset-nfs.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nfs-web # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
        annotations:
          volume.beta.kubernetes.io/storage-class: managed-nfs-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
```
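Note that the volume.beta.kubernetes.io/storage-class annotation is the legacy way of selecting a StorageClass; on current Kubernetes versions the spec.storageClassName field is preferred, so the claim template can equivalently be written as:

```yaml
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        storageClassName: managed-nfs-storage
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
```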
```
[root@master01 ~]# kubectl apply -f statefulset-nfs.yaml
```
View the Pods, PVs, and PVCs:
```
[root@master01 ~]# kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5f5fff65ff-2pmxh   1/1     Running   0          26m
nfs-web-0                                 1/1     Running   0          2m33s
nfs-web-1                                 1/1     Running   0          2m27s
nfs-web-2                                 1/1     Running   0          2m21s

[root@master01 ~]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
www-nfs-web-0   Bound    pvc-62f4868f-c6f7-459e-a280-26010c3a5849   1Gi        RWO            managed-nfs-storage   2m35s
www-nfs-web-1   Bound    pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9   1Gi        RWO            managed-nfs-storage   2m29s
www-nfs-web-2   Bound    pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0   1Gi        RWO            managed-nfs-storage   2m23s

[root@master01 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS          REASON   AGE
pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0   1Gi        RWO            Delete           Bound    default/www-nfs-web-2   managed-nfs-storage            2m25s
pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9   1Gi        RWO            Delete           Bound    default/www-nfs-web-1   managed-nfs-storage            2m31s
pvc-62f4868f-c6f7-459e-a280-26010c3a5849   1Gi        RWO            Delete           Bound    default/www-nfs-web-0   managed-nfs-storage            2m36s
```
Check the NFS server's export directory: one subdirectory has been created per claim (named after the provisioner's namespace-pvcName-pvName pattern), and each is still empty:
```
[root@work03 ~]# ls -l /data/nfs/
total 12
default-www-nfs-web-0-pvc-62f4868f-c6f7-459e-a280-26010c3a5849
default-www-nfs-web-1-pvc-47b68872-35f2-4d3b-bc70-fc59d3bcdbf9
default-www-nfs-web-2-pvc-0af3ac53-56d9-4526-8c60-eb0ce3f281e0
```
Write content into each pod:
```
[root@master01 ~]# for i in 0 1 2; do kubectl exec nfs-web-$i -- sh -c 'echo $(hostname) > /usr/share/nginx/html/index.html'; done
```
The subdirectories on the remote NFS server are no longer empty; content has appeared:
```
[root@work03 ~]# ls /data/nfs/default-www-nfs-web-0-pvc-62f4868f-c6f7-459e-a280-26010c3a5849/
index.html
[root@work03 ~]#
```
Check the content in each container; each file holds its pod's own hostname:
```
[root@master01 ~]# for i in 0 1 2; do kubectl exec -it nfs-web-$i -- cat /usr/share/nginx/html/index.html; done
nfs-web-0
nfs-web-1
nfs-web-2
```
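Because the StatefulSet is attached to the headless nginx Service (clusterIP: None), each replica is also reachable at a stable per-pod DNS name of the form pod-name.service-name. A throwaway busybox pod (the pod name and image tag here are just examples) can confirm this:

```bash
# Fetch each replica's index.html through its stable DNS name
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  sh -c 'for i in 0 1 2; do wget -qO- http://nfs-web-$i.nginx; done'
```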
Delete the pods:
```
[root@master01 ~]# kubectl get pod -l app=nfs-web
NAME        READY   STATUS    RESTARTS   AGE
nfs-web-0   1/1     Running   0          7m7s
nfs-web-1   1/1     Running   0          7m3s
nfs-web-2   1/1     Running   0          7m

[root@master01 ~]# kubectl delete pod -l app=nfs-web
pod "nfs-web-0" deleted
pod "nfs-web-1" deleted
pod "nfs-web-2" deleted
```
We can see that they are automatically recreated:
```
[root@master01 ~]# kubectl get pod -l app=nfs-web
NAME        READY   STATUS    RESTARTS   AGE
nfs-web-0   1/1     Running   0          15s
nfs-web-1   1/1     Running   0          11s
nfs-web-2   1/1     Running   0          8s
```
Check the content of each pod again; the file contents are unchanged:
```
[root@master01 ~]# for i in 0 1 2; do kubectl exec -it nfs-web-$i -- cat /usr/share/nginx/html/index.html; done
nfs-web-0
nfs-web-1
nfs-web-2
```
As the above shows, the StatefulSet controller creates pods in a fixed order, which keeps the topology between pods permanently stable, while nfs-client-provisioner automatically creates a remote storage volume with a fixed mapping to each pod, so data is not lost when a pod is rebuilt.
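One caveat: deleting or scaling down the StatefulSet does not delete its PVCs, so the NFS data is kept until the claims are removed explicitly. And because the dynamically provisioned PVs here use the Delete reclaim policy with archiveOnDelete: "false", deleting a claim also removes its backing directory on the server. A minimal cleanup sketch:

```bash
# Remove the StatefulSet; its PVCs (and the NFS data) survive
kubectl delete statefulset nfs-web

# The claims remain Bound until deleted explicitly
kubectl get pvc

# Deleting the claims releases the PVs; with reclaim policy Delete
# and archiveOnDelete: "false", the backing NFS directories go too
kubectl delete pvc www-nfs-web-0 www-nfs-web-1 www-nfs-web-2
```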