References:
https://github.com/kubernetes/examples/tree/master/staging/volumes/rbd
http://docs.ceph.com/docs/mimic/rados/operations/pools/
https://blog.csdn.net/aixiaoyang168/article/details/78999851
https://www.cnblogs.com/keithtt/p/6410302.html
https://kubernetes.io/docs/concepts/storage/volumes/
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
https://blog.csdn.net/wenwenxiong/article/details/78406136
http://www.mamicode.com/info-detail-1701743.html
Ceph supports object storage, file systems, and block storage in a single system. The Kubernetes examples cover two ways of consuming it, CephFS and RBD: CephFS requires Ceph to be installed on the nodes, while RBD only requires ceph-common on the nodes.
The differences in usage are as follows:
Volume Plugin   ReadWriteOnce   ReadOnlyMany   ReadWriteMany
CephFS          ✓               ✓              ✓
RBD             ✓               ✓              -
k8s cluster, version 1.13.1
[root@elasticsearch01 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.2.8.34   Ready    <none>   24d   v1.13.1
10.2.8.65   Ready    <none>   24d   v1.13.1
Ceph cluster, Luminous release
[root@ceph01 ~]# ceph -s
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph03(active), standbys: ceph02, ceph01
    osd: 24 osds: 24 up, 24 in
    rgw: 3 daemons active
[root@ceph01 ~]# ceph osd pool create rbd-k8s 1024 1024
For better initial performance on pools expected to store a large number of objects, consider supplying the expected_num_objects parameter when creating the pool.
[root@ceph01 ~]# ceph osd lspools
1 rbd-es,2 .rgw.root,3 default.rgw.control,4 default.rgw.meta,5 default.rgw.log,6 default.rgw.buckets.index,7 default.rgw.buckets.data,8 default.rgw.buckets.non-ec,9 rbd-k8s,
[root@ceph01 ~]# rbd create rbd-k8s/cephimage1 --size 10240
[root@ceph01 ~]# rbd create rbd-k8s/cephimage2 --size 20480
[root@ceph01 ~]# rbd create rbd-k8s/cephimage3 --size 40960
[root@ceph01 ~]# rbd list rbd-k8s
cephimage1
cephimage2
cephimage3
1. Download the examples
[root@elasticsearch01 ~]# git clone https://github.com/kubernetes/examples.git
Cloning into 'examples'...
remote: Enumerating objects: 11475, done.
remote: Total 11475 (delta 0), reused 0 (delta 0), pack-reused 11475
Receiving objects: 100% (11475/11475), 16.94 MiB | 6.00 MiB/s, done.
Resolving deltas: 100% (6122/6122), done.
[root@elasticsearch01 ~]# cd examples/staging/volumes/rbd
[root@elasticsearch01 rbd]# ls
rbd-with-secret.yaml  rbd.yaml  README.md  secret
[root@elasticsearch01 rbd]# cp -a ./rbd /k8s/yaml/volumes/
2. Install the Ceph client on the k8s cluster nodes
[root@elasticsearch01 ceph]# yum install ceph-common
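To confirm the client is usable on each node, the version can be checked right after installation. If you additionally copy /etc/ceph/ceph.conf and the admin keyring over from a monitor node (an extra step that is not required for the Secret-based mounts used below), ceph -s can also be run from the k8s node. A rough sketch:

rpm -q ceph-common
ceph --version
# optional, only if the config and keyring were copied from a monitor node first, e.g.:
# scp ceph01:/etc/ceph/ceph.conf ceph01:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
ceph -s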
3. Modify the rbd-with-secret.yaml configuration file
The modified configuration is as follows:
[root@elasticsearch01 rbd]# cat rbd-with-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd2
spec:
  containers:
    - image: kubernetes/pause
      name: rbd-rw
      volumeMounts:
        - name: rbdpd
          mountPath: /mnt/rbd
  volumes:
    - name: rbdpd
      rbd:
        monitors:
          - '10.0.4.10:6789'
          - '10.0.4.13:6789'
          - '10.0.4.15:6789'
        pool: rbd-k8s
        image: cephimage1
        fsType: ext4
        readOnly: true
        user: admin
        secretRef:
          name: ceph-secret
Adjust the following parameters to match your environment:
monitors: the monitors of the Ceph cluster; a Ceph cluster can have several monitors, and three are configured here.
pool: used to group the data stored in the Ceph cluster; the pool used here is rbd-k8s.
image: the disk image in the Ceph block device; cephimage1 is used here.
fsType: the filesystem type; the default ext4 is fine.
readOnly: whether the volume is read-only; read-only is sufficient for this test.
user: the user name the Ceph client uses to access the Ceph storage cluster; admin is used here.
keyring: the keyring needed for Ceph authentication, i.e. the ceph.client.admin.keyring generated when the Ceph cluster was deployed (a keyring-based example is sketched after this list).
imageformat: the disk image format; use 2, or the older 1 on comparatively old kernels.
imagefeatures: the features of the disk image; check what the node kernel supports with uname -r. Here CentOS 7.4 with kernel 3.10.0-693.el7.x86_64 only supports layering.
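For reference, a minimal sketch of the keyring-based variant (adapted from rbd.yaml in the examples repo, with the monitors and pool of this environment filled in); it assumes the admin keyring has already been copied to /etc/ceph/ceph.client.admin.keyring on every node:

# sketch only: keyring-based rbd volume, no Secret; assumes the keyring exists on each node
apiVersion: v1
kind: Pod
metadata:
  name: rbd
spec:
  containers:
    - image: kubernetes/pause
      name: rbd-rw
      volumeMounts:
        - name: rbdpd
          mountPath: /mnt/rbd
  volumes:
    - name: rbdpd
      rbd:
        monitors:
          - '10.0.4.10:6789'
          - '10.0.4.13:6789'
          - '10.0.4.15:6789'
        pool: rbd-k8s
        image: cephimage1
        user: admin
        keyring: /etc/ceph/ceph.client.admin.keyring
        fsType: ext4
        readOnly: true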
4. Use the Ceph authentication key
Using a Secret inside the cluster is more convenient, easier to scale, and more secure.
[root@ceph01 ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
        key = AQBHVp9bPirBCRAAUt6Mjw5PUjiy/RDHyHZrUw==
[root@ceph01 ~]# grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
QVFCSFZwOWJQaXJCQ1JBQVV0Nk1qdzVQVWppeS9SREh5SFpyVXc9PQ==
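A quick sanity check: decoding the base64 string should give back exactly the key from the keyring.

echo 'QVFCSFZwOWJQaXJCQ1JBQVV0Nk1qdzVQVWppeS9SREh5SFpyVXc9PQ==' | base64 -d; echo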
5. Create the ceph-secret
[root@elasticsearch01 rbd]# cat secret/ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCSFZwOWJQaXJCQ1JBQVV0Nk1qdzVQVWppeS9SREh5SFpyVXc9PQ==
[root@elasticsearch01 rbd]# kubectl create -f secret/ceph-secret.yaml
secret/ceph-secret created
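The Secret can be checked before it is referenced; kubectl describe hides the value, while -o yaml shows the base64 data:

kubectl get secret ceph-secret -o yaml
kubectl describe secret ceph-secret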
6. Create a pod to test RBD
Simply create it following the official example.
[root@elasticsearch01 rbd]# kubectl create -f rbd-with-secret.yaml
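Once the pod reaches Running, a quick way to confirm that the rbdpd volume was attached is the pod description, for example:

kubectl get pod rbd2
kubectl describe pod rbd2 | grep -A 8 -i volumes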
However, plain volumes are not used this way in production: they are created with the pod and removed when the pod is deleted, so the data is not preserved. To keep data from being lost, PV and PVC are needed.
7. Create the ceph-pv
Note that RBD supports ReadWriteOnce and ReadOnlyMany but does not yet support ReadWriteMany; in everyday use an RBD image is likewise mapped to only one client at a time. CephFS does support ReadWriteMany.
[root@elasticsearch01 rbd]# cat rbd-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - '10.0.4.10:6789'
      - '10.0.4.13:6789'
      - '10.0.4.15:6789'
    pool: rbd-k8s
    image: cephimage2
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
[root@elasticsearch01 rbd]# kubectl create -f rbd-pv.yaml
persistentvolume/ceph-rbd-pv created
[root@elasticsearch01 rbd]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
ceph-rbd-pv   20Gi       RWO            Recycle          Available
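A quick check that the PV was registered with the expected pool, image and reclaim policy (note that the Recycle policy is actually implemented only for NFS and hostPath volumes, so Retain is the safer choice for RBD):

kubectl describe pv ceph-rbd-pv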
8. Create the ceph-pvc
[root@elasticsearch01 rbd]# cat rbd-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
[root@elasticsearch01 rbd]# kubectl create -f rbd-pv-claim.yaml
persistentvolumeclaim/ceph-rbd-pv-claim created
[root@elasticsearch01 rbd]# kubectl get pvc
NAME                STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-rbd-pv-claim   Bound    ceph-rbd-pv   20Gi       RWO                           6s
[root@elasticsearch01 rbd]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
ceph-rbd-pv   20Gi       RWO            Recycle          Bound    default/ceph-rbd-pv-claim                           5m28s
9. Create a pod to test RBD through PV and PVC
Because the RBD image has to be formatted before it is mounted and the image is fairly large, this takes quite a while, roughly a few minutes.
[root@elasticsearch01 rbd]# cat rbd-pv-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-pv-pod1
spec:
  containers:
    - name: ceph-rbd-pv-busybox
      image: busybox
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-rbd-vol1
          mountPath: /mnt/ceph-rbd-pvc/busybox
          readOnly: false
  volumes:
    - name: ceph-rbd-vol1
      persistentVolumeClaim:
        claimName: ceph-rbd-pv-claim
[root@elasticsearch01 rbd]# kubectl create -f rbd-pv-pod.yaml
pod/ceph-rbd-pv-pod1 created
[root@elasticsearch01 rbd]# kubectl get pods
NAME               READY   STATUS              RESTARTS   AGE
busybox            1/1     Running             432        18d
ceph-rbd-pv-pod1   0/1     ContainerCreating   0          19s
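While the pod sits in ContainerCreating, the attach and format progress can be followed from the pod events and, on the node the pod was scheduled to, from the kernel log; a rough sketch:

kubectl describe pod ceph-rbd-pv-pod1 | tail -n 20
kubectl get events --sort-by=.metadata.creationTimestamp | tail -n 10
# on the node the pod was scheduled to:
dmesg | tail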
The following error is reported:
MountVolume.WaitForAttach failed for volume "ceph-rbd-pv" : rbd: map failed exit status 6, rbd output: rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
Solution
Disable the features that are not supported by the CentOS 7.4 kernel. For this reason, in production it is best to run k8s and the Ceph clients on an operating system with a newer kernel.
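Before disabling anything, rbd info shows which features the image currently carries; new images can also be created with only the layering feature so that older kernels can map them (cephimage4 below is just a hypothetical name for illustration):

rbd info rbd-k8s/cephimage2
# create future images with only the layering feature (hypothetical image name):
rbd create rbd-k8s/cephimage4 --size 10240 --image-format 2 --image-feature layering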
[root@ceph01 ~]# rbd feature disable rbd-k8s/cephimage2 exclusive-lock object-map fast-diff deep-flatten
1. Verify on the k8s cluster side
[root@elasticsearch01 rbd]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
busybox            1/1     Running   432        18d     10.254.35.3   10.2.8.65   <none>           <none>
ceph-rbd-pv-pod1   1/1     Running   0          3m39s   10.254.35.8   10.2.8.65   <none>           <none>

[root@elasticsearch02 ceph]# df -h |grep rbd
/dev/rbd0       493G  162G  306G  35% /data
/dev/rbd1        20G   45M   20G   1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2
[root@elasticsearch02 ceph]# cd /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2
[root@elasticsearch02 rbd-k8s-image-cephimage2]# ls
lost+found

[root@elasticsearch01 rbd]# kubectl exec -ti ceph-rbd-pv-pod1 sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  49.1G      7.4G     39.1G  16% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     7.8G         0      7.8G   0% /sys/fs/cgroup
/dev/vda1                49.1G      7.4G     39.1G  16% /dev/termination-log
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/resolv.conf
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/hostname
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
/dev/rbd1                19.6G     44.0M     19.5G   0% /mnt/ceph-rbd-pvc/busybox
tmpfs                     7.8G     12.0K      7.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     7.8G         0      7.8G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/timer_stats
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     7.8G         0      7.8G   0% /proc/scsi
tmpfs                     7.8G         0      7.8G   0% /sys/firmware
/ # cd /mnt/ceph-rbd-pvc/busybox/
/mnt/ceph-rbd-pvc/busybox # ls
lost+found
/mnt/ceph-rbd-pvc/busybox # touch ceph-rbd-pods
/mnt/ceph-rbd-pvc/busybox # ls
ceph-rbd-pods  lost+found
/mnt/ceph-rbd-pvc/busybox # echo busbox>ceph-rbd-pods
/mnt/ceph-rbd-pvc/busybox # cat ceph-rbd-pods
busbox

[root@elasticsearch02 ceph]# cd /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2
[root@elasticsearch02 rbd-k8s-image-cephimage2]# ls
ceph-rbd-pods  lost+found
2. Verify on the Ceph cluster side
[root@ceph01 ~]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    65.9TiB     58.3TiB     7.53TiB      11.43
POOLS:
    NAME                           ID     USED        %USED     MAX AVAIL     OBJECTS
    rbd-es                         1      1.38TiB      7.08       18.1TiB      362911
    .rgw.root                      2      1.14KiB         0       18.1TiB           4
    default.rgw.control            3           0B         0       18.1TiB           8
    default.rgw.meta               4      46.9KiB         0        104GiB         157
    default.rgw.log                5           0B         0       18.1TiB         345
    default.rgw.buckets.index      6           0B         0        104GiB        2012
    default.rgw.buckets.data       7      1.01TiB      5.30       18.1TiB     2090721
    default.rgw.buckets.non-ec     8           0B         0       18.1TiB           0
    rbd-k8s                        9       137MiB         0       18.1TiB          67
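Per-image usage on the Ceph side can also be inspected with rbd du (this may take a moment on images whose fast-diff feature was disabled above):

rbd du rbd-k8s/cephimage2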