1. Introduction
The volume types supported by Kubernetes are detailed at: https://kubernetes.io/docs/concepts/storage/volumes/
Kubernetes manages storage with two API resources: PersistentVolume and PersistentVolumeClaim.
PersistentVolume (PV for short): a piece of storage provisioned by an administrator; it is part of the cluster. Just as a Node is a cluster resource, a PV is also a cluster resource. It captures the storage type, storage size, and access modes. Its lifecycle is independent of any Pod; for example, deleting a Pod that uses a PV has no effect on the PV.
PersistentVolumeClaim (PVC for short): a request for storage by a user. It is similar to a Pod: Pods consume Node resources, while PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); PVCs can request a specific size and access mode for a PV.
PVs can be provisioned in two ways: statically or dynamically.
Static PVs: the cluster administrator creates a number of PVs, which carry the details of the real storage available to cluster users.
Dynamic PVs: when none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster dynamically provisions a volume for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class for dynamic provisioning to occur.
For more on PersistentVolumes in Kubernetes, see: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
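As a minimal sketch of static provisioning, the following writes a hypothetical administrator-created PV plus a matching user PVC. The hostPath backend, the names demo-pv/demo-pvc, and the "manual" storage class are illustrative assumptions, separate from the Ceph examples later in this article:

```shell
# Write a hypothetical PV + PVC manifest (hostPath backend, for illustration only)
cat > pv-demo.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /data/demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# On a live cluster this would be submitted with:
#   kubectl create -f pv-demo.yaml
```

The claim binds to the PV because the storage class, access mode, and requested size all match.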
2. PersistentVolume Access Modes
ReadWriteOnce - the volume can be mounted read-write by a single node
ReadOnlyMany - the volume can be mounted read-only by many nodes
ReadWriteMany - the volume can be mounted read-write by many nodes
In the CLI, the access modes are abbreviated as:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
Important: a volume can only be mounted using one access mode at a time, even if it supports many.
3. Reclaim Policies
Retain - manual reclamation
Recycle - basic scrub (rm -rf /thevolume/*)
Delete - the associated storage asset (such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume) is deleted.
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.
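The policy is recorded per PV in spec.persistentVolumeReclaimPolicy. A minimal sketch, assuming a hypothetical hostPath PV named retain-demo-pv (illustration only, not part of the Ceph setup below):

```shell
# Write a hypothetical PV manifest whose data survives PVC deletion
cat > pv-retain-demo.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: retain-demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the volume after its claim is deleted
  hostPath:
    path: /data/retain-demo
EOF
# On a live cluster: kubectl create -f pv-retain-demo.yaml
```

With Retain, a released PV must be cleaned up and made Available again by hand, which is what "manual reclamation" above refers to.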
4. PersistentVolume (PV) Phases
Available - a free resource that is not yet bound to any claim
Bound - the PV has been bound to a PVC
Released - the PVC has been deleted and the PV is awaiting reclamation
Failed - the PV's automatic reclamation failed
5. PersistentVolumeClaim (PVC) Phases
Pending - waiting to be bound to a PV
Bound - a PV has been bound to the PVC
6. Install ceph-common on all Kubernetes nodes
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
priority=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
priority=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
priority=1
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
# yum makecache
# yum -y install ceph-common
7. Configuring a Static PV
1. Create a 1G image in the default RBD pool (on the Ceph cluster)
# ceph osd pool create rbd 128
pool 'rbd' created
# ceph osd lspools
# rbd create ceph-image -s 1G --image-feature layering    ## create a 1G image with the layering feature
# rbd ls
ceph-image
# ceph osd pool application enable rbd ceph-image    ## associate the application with the pool
enabled application 'ceph-image' on pool 'rbd'
# rbd info ceph-image
rbd image 'ceph-image':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        id: 13032ae8944a
        block_name_prefix: rbd_data.13032ae8944a
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Sun Jul 29 13:00:36 2018
2. Configure the Ceph secret (Ceph + Kubernetes)
# ceph auth get-key client.admin | base64    ## get the client.admin key and base64-encode it (on the Ceph cluster)
QVFDOUgxdGJIalc4SWhBQTlCOXRNUCs5RUV3N3hiTlE4NTdLVlE9PQ==
# vim ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: kubernetes.io/rbd
data:
  key: QVFDOUgxdGJIalc4SWhBQTlCOXRNUCs5RUV3N3hiTlE4NTdLVlE9PQ==
# kubectl create -f ceph-secret.yaml
secret/ceph-secret created
# kubectl get secret ceph-secret
NAME          TYPE                DATA      AGE
ceph-secret   kubernetes.io/rbd   1         3s
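The data.key field is nothing more than the base64 encoding of the raw keyring value. This can be checked locally using the decoded form of the key shown above:

```shell
# Base64-encode the admin key exactly as `ceph auth get-key client.admin | base64` would
# (the key is the decoded form of the example value above, not a live credential)
key='AQC9H1tbHjW8IhAA9B9tMP+9EEw7xbNQ857KVQ=='
encoded=$(printf '%s' "$key" | base64)
echo "$encoded"
# prints QVFDOUgxdGJIalc4SWhBQTlCOXRNUCs5RUV3N3hiTlE4NTdLVlE9PQ==
# decoding recovers the original key
printf '%s' "$encoded" | base64 -d
```

Note the printf without a trailing newline: piping `echo` instead would encode an extra \n byte and produce a key Kubernetes cannot use.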
3. Create the PV (Kubernetes)
# vim ceph-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "rbd"
  rbd:
    monitors:
      - 192.168.100.116:6789
      - 192.168.100.117:6789
      - 192.168.100.118:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
# kubectl create -f ceph-pv.yaml
persistentvolume/ceph-pv created
# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
ceph-pv   1Gi        RWO            Recycle          Available             rbd                      1m
4. Create the PVC (Kubernetes)
# vim ceph-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-claim
spec:
  storageClassName: "rbd"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
# kubectl create -f ceph-claim.yaml
persistentvolumeclaim/ceph-claim created
# kubectl get pvc ceph-claim
NAME         STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-claim   Bound     ceph-pv   1Gi        RWO            rbd            20s
# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS   REASON    AGE
ceph-pv   1Gi        RWO            Recycle          Bound     default/ceph-claim   rbd                      1m
5. Create a Pod (Kubernetes)
# vim ceph-pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
# kubectl create -f ceph-pod1.yaml
pod/ceph-pod1 created
# kubectl get pod ceph-pod1
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m
# kubectl get pod ceph-pod1 -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP            NODE
ceph-pod1   1/1       Running   0          2m        10.244.88.2   node3
6. Test
Enter the Pod and write some data to the /usr/share/busybox directory, then delete the Pod, create a new one, and check whether the earlier data still exists.
# kubectl exec -it ceph-pod1 -- /bin/sh
/ # ls
bin    dev    etc    home   proc   root   sys    tmp    usr    var
/ # cd /usr/share/busybox/
/usr/share/busybox # ls
/usr/share/busybox # echo 'Hello from Kubernetes storage' > k8s.txt
/usr/share/busybox # cat k8s.txt
Hello from Kubernetes storage
/usr/share/busybox # exit
# kubectl delete pod ceph-pod1
pod "ceph-pod1" deleted
# kubectl apply -f ceph-pod1.yaml
pod "ceph-pod1" created
# kubectl get pod -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP            NODE
ceph-pod1   1/1       Running   0          15s       10.244.91.3   node02
# kubectl exec ceph-pod1 -- cat /usr/share/busybox/k8s.txt
Hello from Kubernetes storage
8. Configuring Dynamic PVs
1. Create an RBD pool (Ceph)
# ceph osd pool create kube 128
pool 'kube' created
2. Authorize the kube user (Ceph)
# ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read, allow rwx pool=kube' -o ceph.client.kube.keyring
# ceph auth get client.kube
exported keyring for client.kube
[client.kube]
        key = AQB2cFxbYZtRBhAAi6xcvhEW7SYx3PlBY/0O0Q==
        caps mon = "allow r"
        caps osd = "allow class-read, allow rwx pool=kube"
Note:
Ceph uses the term "capabilities" (caps) to describe what an authenticated user is authorized to do with monitors, OSDs, and metadata servers. Capabilities can also restrict access to data within a pool, a namespace within a pool, or a set of pools, based on application tags. A Ceph administrative user sets a user's capabilities when creating or updating that user.
Mon capabilities: include r, w, and x.
OSD capabilities: include r, w, x, class-read, and class-write. In addition, they can be scoped to a pool or namespace.
For more details, see: http://docs.ceph.com/docs/master/rados/operations/user-management/
3. Create the Ceph secrets (Ceph + Kubernetes)
# ceph auth get-key client.admin | base64    ## get the client.admin key and base64-encode it
QVFDOUgxdGJIalc4SWhBQTlCOXRNUCs5RUV3N3hiTlE4NTdLVlE9PQ==
# ceph auth get-key client.kube | base64     ## get the client.kube key and base64-encode it
QVFCMmNGeGJZWnRSQmhBQWk2eGN2aEVXN1NZeDNQbEJZLzBPMFE9PQ==
# vim ceph-kube-secret.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ceph
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: ceph
type: kubernetes.io/rbd
data:
  key: QVFDOUgxdGJIalc4SWhBQTlCOXRNUCs5RUV3N3hiTlE4NTdLVlE9PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-kube-secret
  namespace: ceph
type: kubernetes.io/rbd
data:
  key: QVFCMmNGeGJZWnRSQmhBQWk2eGN2aEVXN1NZeDNQbEJZLzBPMFE9PQ==
# kubectl create -f ceph-kube-secret.yaml
namespace/ceph created
secret/ceph-admin-secret created
secret/ceph-kube-secret created
# kubectl get secret -n ceph
NAME                  TYPE                                  DATA      AGE
ceph-admin-secret     kubernetes.io/rbd                     1         13s
ceph-kube-secret      kubernetes.io/rbd                     1         13s
default-token-tq2rp   kubernetes.io/service-account-token   3         13s
4. Create a dynamic RBD StorageClass (Kubernetes)
# vim ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.100.116:6789,192.168.100.117:6789,192.168.100.118:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: ceph
  pool: kube
  userId: kube
  userSecretName: ceph-kube-secret
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
# kubectl create -f ceph-storageclass.yaml
storageclass.storage.k8s.io/ceph-rbd created
# kubectl get sc
NAME                 PROVISIONER         AGE
ceph-rbd (default)   kubernetes.io/rbd   10s
Note:
storageclass.kubernetes.io/is-default-class: when this annotation is set to "true", the class is marked as the default StorageClass; any other value, or a missing annotation, is interpreted as false.
monitors: Ceph monitors, comma-delimited. This parameter is required.
adminId: Ceph client ID that is capable of creating images in the pool. Defaults to "admin".
adminSecretNamespace: the namespace of adminSecretName. Defaults to "default".
adminSecretName: Secret for adminId. This parameter is required. The provided secret must have type "kubernetes.io/rbd".
pool: Ceph RBD pool. Defaults to "rbd".
userId: Ceph client ID that is used to map the RBD image. Defaults to the same as adminId.
userSecretName: the name of the Ceph Secret used by userId to map the RBD image. It must exist in the same namespace as the PVC. This parameter is required. The provided secret must have type "kubernetes.io/rbd", e.g. created this way:
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \
  --namespace=kube-system
fsType: a filesystem type supported by Kubernetes. Default: "ext4".
imageFormat: Ceph RBD image format, "1" or "2". Defaults to "1".
imageFeatures: this parameter is optional and should only be used when imageFormat is set to "2". Currently the only supported feature is layering. Defaults to "", with no features turned on.
The default StorageClass is marked with (default).
For details, see: https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd
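Since only one class should carry the default marker at a time, an existing class's annotation can be flipped with kubectl patch. This is a sketch requiring a live cluster, and it assumes the ceph-rbd class created above:

```shell
# Stop ceph-rbd from being the default StorageClass
# (sketch; requires a running cluster with the ceph-rbd class)
kubectl patch storageclass ceph-rbd \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# List the classes; the default one is shown with "(default)" after its name
kubectl get sc
```

Setting the annotation to "true" on another class in the same way would make that class the new default.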
5. Create the Persistent Volume Claim (Kubernetes)
Dynamic volume provisioning is implemented on top of the API object StorageClass in the API group storage.k8s.io.
A cluster administrator can define as many StorageClass objects as needed, each specifying a volume plugin (a.k.a. provisioner) that provisions volumes, together with the set of parameters passed to that provisioner at provisioning time. A cluster administrator can define and expose multiple flavors of storage (from the same or different storage systems) within a cluster, each with a custom set of parameters. This design ensures that end users don't have to worry about the complexity and nuance of how storage is provisioned, while still being able to select from multiple storage options.
Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim.
# vim ceph-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pvc
  namespace: ceph
spec:
  storageClassName: ceph-rbd
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
# kubectl create -f ceph-pvc.yaml
persistentvolumeclaim/ceph-pvc created
# kubectl get pvc -n ceph
NAME       STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-pvc   Bound     pvc-e55fdebe-9487-11e8-b987-000c29e75f2a   1Gi        ROX            ceph-rbd       5s
6. Create a Pod and test
# vim ceph-pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
  namespace: ceph
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-pvc
# kubectl create -f ceph-pod2.yaml
pod/ceph-pod2 created
# kubectl -n ceph get pod ceph-pod2 -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP            NODE
ceph-pod2   1/1       Running   0          3m        10.244.88.2   node03
# kubectl -n ceph exec -it ceph-pod2 -- /bin/sh
/ # echo 'Ceph from Kubernetes storage' > /usr/share/busybox/ceph.txt
/ # exit
# kubectl -n ceph delete pod ceph-pod2
pod "ceph-pod2" deleted
# kubectl apply -f ceph-pod2.yaml
pod/ceph-pod2 created
# kubectl -n ceph get pod ceph-pod2 -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP            NODE
ceph-pod2   1/1       Running   0          2m        10.244.88.2   node03
# kubectl -n ceph exec ceph-pod2 -- cat /usr/share/busybox/ceph.txt
Ceph from Kubernetes storage
9. Deploying WordPress and MariaDB with Persistent Volumes
For the full version of this section, see: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
1. Create a Secret for the MariaDB password
# kubectl -n ceph create secret generic mariadb-pass --from-literal=password=zhijian
secret/mariadb-pass created
# kubectl -n ceph get secret mariadb-pass
NAME           TYPE      DATA      AGE
mariadb-pass   Opaque    1         37s
2. Deploy MariaDB
The MariaDB container mounts a PersistentVolume at /var/lib/mysql.
The MYSQL_ROOT_PASSWORD environment variable is set to read the database password from the Secret.
# vim mariadb.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mariadb
  namespace: ceph
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mariadb
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pv-claim
  namespace: ceph
  labels:
    app: wordpress
spec:
  storageClassName: ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mariadb
  namespace: ceph
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mariadb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mariadb
    spec:
      containers:
      - image: 192.168.100.100/library/mariadb:5.5
        name: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-pass
              key: password
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - name: mariadb-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mariadb-persistent-storage
        persistentVolumeClaim:
          claimName: mariadb-pv-claim
# kubectl create -f mariadb.yaml
service/wordpress-mariadb created
persistentvolumeclaim/mariadb-pv-claim created
deployment.apps/wordpress-mariadb created
# kubectl -n ceph get pvc    ## check the PVC
NAME               STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mariadb-pv-claim   Bound     pvc-33dd4ae7-9bb0-11e8-b987-000c29e75f2a   2Gi        RWO            ceph-rbd       2m
# kubectl -n ceph get pod    ## check the Pod
NAME                                READY     STATUS    RESTARTS   AGE
wordpress-mariadb-f4d44db9c-fqchx   1/1       Running   0          1m
3. Deploy WordPress
# vim wordpress.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: ceph
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  namespace: ceph
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: ceph
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: 192.168.100.100/library/wordpress:4.9.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mariadb
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
# kubectl create -f wordpress.yaml
service/wordpress created
persistentvolumeclaim/wp-pv-claim created
deployment.apps/wordpress created
# kubectl -n ceph get pvc wp-pv-claim    ## check the PVC
NAME          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
wp-pv-claim   Bound     pvc-334b3bd9-9bde-11e8-b987-000c29e75f2a   2Gi        RWO            ceph-rbd       46s
# kubectl -n ceph get pod    ## check the Pods
NAME                                READY     STATUS    RESTARTS   AGE
wordpress-5c4ffdcb85-6ftfx          1/1       Running   0          1m
wordpress-mariadb-f4d44db9c-fqchx   1/1       Running   0          5m
# kubectl -n ceph get services wordpress    ## check the Service
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
wordpress   LoadBalancer   10.244.235.175   <pending>     80:32473/TCP   4m
Note: this article drew on another article as a reference; thanks: