There are two ways to mount Ceph RBD in k8s. One is the traditional PV & PVC approach, where an administrator must create the PV and PVC in advance, and the corresponding deployment or replication controller then mounts the PVC. Since k8s 1.4, Kubernetes provides a more convenient way to create PVs dynamically: StorageClass. With a StorageClass there is no need to pre-create fixed-size PVs and wait for users to claim them; users simply create a PVC and can use it right away.
Note that for the k8s nodes to be able to run the commands that map the Ceph RBD device, the ceph-common package must be installed on all nodes. It can be installed directly with yum (yum install -y ceph-common).
# Fetch the admin key and base64-encode it
ceph auth get-key client.admin | base64
Create a file named ceph-secret.yml with the following content:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  # Note: this value is base64 encoded
  # echo "keystring" | base64
  key: QVFDaWtERlpzODcwQWhBQTdxMWRGODBWOFZxMWNGNnZtNmJHVGc9PQo=
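As a quick sanity check (a minimal Python sketch, using the example key from this article), you can verify that the value in the secret is simply the base64 encoding of the raw Ceph key; note the trailing newline that piping through `base64` on the command line picks up:

```python
import base64

# Raw key as returned by `ceph auth get-key client.admin` (example value from this article)
raw_key = "AQCikDFZs870AhAA7q1dF80V8Vq1cF6vm6bGTg=="

# Value stored in ceph-secret.yml
encoded = "QVFDaWtERlpzODcwQWhBQTdxMWRGODBWOFZxMWNGNnZtNmJHVGc9PQo="

# Decoding yields the raw key plus a trailing newline
decoded = base64.b64decode(encoded).decode()
print(decoded.strip() == raw_key)  # True
```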
Create a file named test.pv.yml with the following content (the pool and the RBD image referenced here must already exist on the Ceph side):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    # Ceph monitor nodes
    monitors:
      - 10.5.10.117:6789
      - 10.5.10.236:6789
      - 10.5.10.227:6789
    # Name of the Ceph pool
    pool: data
    # Name of the image created in the pool
    image: data
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  # Recycle is only supported for NFS and HostPath volumes; use Retain for RBD
  persistentVolumeReclaimPolicy: Retain
kubectl create -f test.pv.yml
Create a file named test.pvc.yml with the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
kubectl create -f test.pvc.yml
Create a file named test.dm.yml with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: dk-reg.op.douyuyuba.com/op-base/openresty:1.9.15
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/data"
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: test-pvc
kubectl create -f test.dm.yml
Because StorageClass requires the Ceph secret to be of type kubernetes.io/rbd, the secret created above for the PV & PVC approach cannot be reused; it must be recreated, as follows:
# The key here is the raw Ceph key; it is NOT re-encoded with base64
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='AQCikDFZs870AhAA7q1dF80V8Vq1cF6vm6bGTg==' --namespace=kube-system
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" --from-literal=key='AQCikDFZs870AhAA7q1dF80V8Vq1cF6vm6bGTg==' --namespace=default
Create a file named test.sc.yml with the following content:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-storageclass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789
  # Ceph client user ID (not a k8s user)
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: data
  userId: admin
  userSecretName: ceph-secret
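One detail worth highlighting: in the PV definition, monitors is a YAML list, but the kubernetes.io/rbd provisioner expects a single comma-separated string in parameters.monitors. A minimal Python sketch of the conversion between the two forms:

```python
# Monitor endpoints as they appear in the PV example (one YAML list entry each)
monitors = ["192.168.1.11:6789", "192.168.1.12:6789", "192.168.1.13:6789"]

# The StorageClass expects them joined into one comma-separated string
sc_monitors = ",".join(monitors)
print(sc_monitors)  # 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789
```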
Create a file named test.sc.pvc.yml (a different name from the earlier test.pvc.yml, to avoid overwriting it) with the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-sc-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: test-storageclass
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

kubectl create -f test.sc.pvc.yml
As for mounting the volume, it works the same way as in the PV & PVC approach, so it is not repeated here.
上面大體說明了使用k8s掛載ceph rbd塊設備的方法。這裏簡單說下k8s掛載ceph 文件系統的方法。
首先secret能夠直接與上面複用,不用再單首創建。也不須要再建立pv和pvc。直接在deployment中掛載便可,方法以下:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: dk-reg.op.douyuyuba.com/op-base/openresty:1.9.15
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/data"
              name: data
      volumes:
        - name: data
          cephfs:
            monitors:
              - 10.5.10.117:6789
              - 10.5.10.236:6789
              - 10.5.10.227:6789
            path: /data
            user: admin
            secretRef:
              name: ceph-secret