A PersistentVolume (PV) is a piece of storage on some storage system, provisioned by the cluster administrator. It abstracts the underlying shared storage, exposing it as a resource that users can request, implementing a "storage as a consumable" model. Through its storage plugin mechanism, PV supports many back-end storage systems, including network storage and cloud storage, for example NFS, RBD, and Cinder. A PV is a cluster-level resource and does not belong to any namespace. To use a PV, a user submits a request (or claim) through a PersistentVolumeClaim (PVC), which is then bound to the PV. The PVC is the consumer of PV resources: it requests a specific amount of space and an access mode (such as rw or ro) from the PV, producing a PVC-backed volume, which a Pod then mounts through a persistentVolumeClaim volume, as shown in the figure below:
Although PVCs let users access storage resources in an abstract way, in practice users often still care about specific PV attributes, for example performance parameters tuned for different scenarios. Cluster administrators therefore have to provide many different kinds of PVs through various means to satisfy different users' needs, and any mismatch between the two sides inevitably means some requests cannot be satisfied promptly and completely. Starting with version 1.4, Kubernetes introduced a new resource object, StorageClass, which defines storage resources as classes with distinct characteristics rather than as concrete PVs, for example "fast"/"slow" or "gold"/"silver"/"bronze". Through a PVC, the user issues a request directly against the desired class; the claim is either matched to a PV the administrator created in advance, or satisfied by dynamically creating a PV on demand, which can even eliminate the need to create PVs beforehand.
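As a minimal sketch of this model (the class name "fast", the `no-provisioner` placeholder, and the PVC below are illustrative, not part of this article's setup), the administrator defines a class and the user requests storage by class name rather than by a concrete PV:

```yaml
# Hypothetical class defined by the administrator (illustrative values).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/no-provisioner   # placeholder; a real cluster would use a dynamic provisioner
---
# The user requests storage by class name, not by a specific PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: fast
  resources:
    requests:
      storage: 5Gi
```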
PV support for a given storage system is implemented through a plugin. Kubernetes currently supports the plugin types listed in the official documentation.
Official reference: https://kubernetes.io/docs/concepts/storage/storage-classes/
As the official plugin list shows, the in-tree provisioners do not support dynamic provisioning for NFS, but we can implement it with a third-party provisioner, which is what this article covers.
GitHub: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy
1. Download the required files
for file in class.yaml deployment.yaml rbac.yaml ; do wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/$file ; done
2. Create the RBAC authorization
# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
3. Create the StorageClass
# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs  # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
4. Create the provisioner Deployment, changing the NFS server IP and export path to match your environment
# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:v2.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.100
            - name: NFS_PATH
              value: /huoban/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.100
            path: /huoban/k8s
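A quick way to verify the provisioner before moving on is a standalone PVC against the new class. The manifest below mirrors the test-claim shipped in the same upstream deploy directory (this is the `test-claim` that appears in the kubectl output later); the exact file contents are an assumption based on that upstream example:

```yaml
# cat test-claim.yaml — a minimal PVC against the managed-nfs-storage class
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

If the provisioner is working, this claim should go to Bound within a few seconds and a matching PV should appear.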
The figure below illustrates how a StatefulSet application dynamically requests PVs.
For example, create an nginx StatefulSet that obtains PVs dynamically:
# cat nginx.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
        - name: huoban-harbor
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: harbor.huoban.com/open/huoban-nginx:v1.1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "managed-nfs-storage"
        resources:
          requests:
            storage: 1Gi
After everything starts, we can see the following:
# kubectl get pod,pv,pvc
NAME                                         READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-fcb58977d-l5cs4   1/1     Running   0          20h
pod/web-0                                    1/1     Running   0          175m
pod/web-1                                    1/1     Running   0          175m
pod/web-2                                    1/1     Running   0          175m

NAME                                                                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
persistentvolume/default-test-claim-pvc-e5a66781-b46e-4191-8f51-5d1a571ca530   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            20h
persistentvolume/default-www-web-0-pvc-0a578ef2-63e3-49bb-87c0-88166d3e0e65    1Gi        RWO            Delete           Bound    default/www-web-0    managed-nfs-storage            20h
persistentvolume/default-www-web-1-pvc-78061eb6-c36b-44db-9472-f2684f85a4b9    1Gi        RWO            Delete           Bound    default/www-web-1    managed-nfs-storage            20h
persistentvolume/default-www-web-2-pvc-ec760344-a35a-4048-b8aa-6452d6a62337    1Gi        RWO            Delete           Bound    default/www-web-2    managed-nfs-storage            20h

NAME                               STATUS   VOLUME                                                        CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/test-claim   Bound    default-test-claim-pvc-e5a66781-b46e-4191-8f51-5d1a571ca530   1Mi        RWX            managed-nfs-storage   20h
persistentvolumeclaim/www-web-0   Bound    default-www-web-0-pvc-0a578ef2-63e3-49bb-87c0-88166d3e0e65    1Gi        RWO            managed-nfs-storage   20h
persistentvolumeclaim/www-web-1   Bound    default-www-web-1-pvc-78061eb6-c36b-44db-9472-f2684f85a4b9    1Gi        RWO            managed-nfs-storage   20h
persistentvolumeclaim/www-web-2   Bound    default-www-web-2-pvc-ec760344-a35a-4048-b8aa-6452d6a62337    1Gi        RWO            managed-nfs-storage   20h
Now, on the NFS server we can also see that three mount directories were generated automatically, and the data in them remains even after a pod is deleted:
# ll
drwxrwxrwx 2 root root 4096 Oct 23 17:31 default-www-web-0-pvc-0a578ef2-63e3-49bb-87c0-88166d3e0e65
drwxrwxrwx 2 root root 4096 Oct 23 17:31 default-www-web-1-pvc-78061eb6-c36b-44db-9472-f2684f85a4b9
drwxrwxrwx 2 root root 4096 Oct 23 17:40 default-www-web-2-pvc-ec760344-a35a-4048-b8aa-6452d6a62337
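The directory names follow the provisioner's `${namespace}-${pvcName}-${pvName}` naming convention. A small shell sketch (using names taken from the listing above) shows how one directory name is assembled:

```shell
# Reassemble one provisioned directory name from its parts, following
# the ${namespace}-${pvcName}-${pvName} convention used by the
# nfs-client provisioner. Values are taken from the listing above.
namespace="default"
pvc_name="www-web-0"
pv_name="pvc-0a578ef2-63e3-49bb-87c0-88166d3e0e65"
dir="${namespace}-${pvc_name}-${pv_name}"
echo "${dir}"   # prints default-www-web-0-pvc-0a578ef2-63e3-49bb-87c0-88166d3e0e65
```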
StatefulSet applications have the following characteristics:
1. A stable, unique network identity
2. DNS-based access (<statefulsetName-index>.<service-name>.<namespace>.svc.cluster.local), e.g. web-0.nginx.default.svc.cluster.local
3. Dedicated, stable persistent storage
4. Ordered deployment and deletion
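The DNS names in point 2 follow a fixed pattern, so for the three-replica StatefulSet above they can be generated mechanically. This is only a sketch of the naming rule; in a live cluster you would resolve the names with nslookup from inside a pod:

```shell
# Print the stable in-cluster DNS name of every replica of the
# StatefulSet "web" behind the headless Service "nginx" in "default".
statefulset="web"
service="nginx"
namespace="default"
replicas=3
i=0
while [ "$i" -lt "$replicas" ]; do
  echo "${statefulset}-${i}.${service}.${namespace}.svc.cluster.local"
  i=$((i + 1))
done
```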