Official site: https://rook.io/
Project repository: https://github.com/rook/rook
Prepare the OSD storage media
| Device | Size | Role |
| --- | --- | --- |
| sdb | 50GB | OSD Data |
| sdc | 50GB | OSD Data |
| sdd | 50GB | OSD Data |
| sde | 50GB | OSD Metadata |
> Before installing, run `lvm lvs`, `lvm vgs`, and `lvm pvs` to check whether the disks above are already in use. If they are, delete the existing LVM volumes, and make sure the disks carry no partitions or filesystems.
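If a disk does need to be reset, a destructive sketch of that cleanup (assuming `/dev/sdb` is the disk being wiped; repeat per disk, `sgdisk` comes from the gdisk package):

```bash
DISK=/dev/sdb                 # adjust to the disk being cleaned
sgdisk --zap-all "$DISK"      # wipe GPT and MBR partition tables
wipefs --all "$DISK"          # clear filesystem and LVM signatures
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct  # zero the first 100MB for good measure
```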
Make sure the kernel rbd module is loaded and lvm2 is installed
```bash
modprobe rbd
yum install -y lvm2
```
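`modprobe` does not survive a reboot; a sketch for making the module load persistent, assuming a systemd-based host such as CentOS 7:

```bash
echo rbd > /etc/modules-load.d/rbd.conf   # systemd loads modules listed here at boot
```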
Install the operator
```bash
git clone --single-branch --branch release-1.2 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
```
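It is worth waiting until the operator is up before creating the cluster; for example:

```bash
kubectl -n rook-ceph get pods -w   # wait for rook-ceph-operator to reach Running
```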
Install the Ceph cluster
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.5
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 3
    allowMultiplePerNode: true
  mgr:
    modules:
      - name: pg_autoscaler
        enabled: true
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  annotations:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: false
    useAllDevices: false
    config:
    nodes:
      - name: "minikube"
        devices:
          - name: "sdb"
          - name: "sdc"
          - name: "sdd"
        config:
          storeType: bluestore
          metadataDevice: "sde"
          databaseSizeMB: "1024"
          journalSizeMB: "1024"
          osdsPerDevice: "1"
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
```
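Assuming the manifest above is saved as cluster.yaml (any filename works), create the cluster and watch the OSD pods come up:

```bash
kubectl create -f cluster.yaml
kubectl -n rook-ceph get pods -w   # osd-prepare jobs run first, then rook-ceph-osd-* pods appear
```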
Install the command-line toolbox
```bash
kubectl create -f toolbox.yaml
```
Run `ceph -s` in the toolbox to check the cluster status.
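One way to run it without opening an interactive shell first (the `app=rook-ceph-tools` label comes from toolbox.yaml):

```bash
TOOLBOX=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec -it "$TOOLBOX" -- ceph -s
```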
> When reinstalling the Ceph cluster, the Rook data directory (default: /var/lib/rook) must be cleaned up first.
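A destructive sketch of that cleanup on the storage node, assuming the default path (re-zap the OSD disks as in the preparation step as well):

```bash
rm -rf /var/lib/rook   # default dataDirHostPath; adjust if you changed it in cluster.yaml
```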
Add an Ingress route for the ceph-dashboard service
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_verify off;
spec:
  tls:
    - hosts:
        - rook-ceph.minikube.local
      secretName: rook-ceph.minikube.local
  rules:
    - host: rook-ceph.minikube.local
      http:
        paths:
          - path: /
            backend:
              serviceName: rook-ceph-mgr-dashboard
              servicePort: https-dashboard
```
Get the admin password needed to access the dashboard
```bash
kubectl get secret rook-ceph-dashboard-password -n rook-ceph -o jsonpath='{.data.password}' | base64 -d
```
Add the domain rook-ceph.minikube.local to /etc/hosts, then visit https://rook-ceph.minikube.local/ in a browser.
Create an RBD storage pool
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: osd
  replicated:
    size: 3
```
> Since there is only one node with three OSDs, osd is used as the failure domain.
After it is created, run `ceph osd pool ls` in rook-ceph-tools; the new pool replicapool appears in the listing.
Create a StorageClass backed by RBD
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
```
Use a StatefulSet to test mounting RBD storage through the StorageClass
```yaml
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: storageclass-rbd-test
  namespace: default
  labels:
    app: storageclass-rbd-test
spec:
  serviceName: storageclass-rbd-test  # required by apps/v1 StatefulSets
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-rbd-test
  template:
    metadata:
      labels:
        app: storageclass-rbd-test
    spec:
      restartPolicy: Always
      containers:
        - name: storageclass-rbd-test
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: data
              mountPath: /data
          image: 'centos:7'
          args:
            - 'sh'
            - '-c'
            - 'sleep 3600'
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: rook-ceph-block
```
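Once both replicas are Running, each should have its own PVC bound through rook-ceph-block; a quick check (PVC names follow the `<template>-<statefulset>-<ordinal>` convention):

```bash
kubectl get pvc data-storageclass-rbd-test-0 data-storageclass-rbd-test-1
kubectl exec storageclass-rbd-test-0 -- df -h /data   # should show an ext4 filesystem on a /dev/rbd* device
```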
Create the MDS service and a CephFS filesystem
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPools:
    - failureDomain: osd
      replicated:
        size: 3
  preservePoolsOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
    annotations:
    resources:
```
After it is created, run `ceph osd pool ls` in rook-ceph-tools; the new pools myfs-metadata and myfs-data0 appear in the listing.
Create a StorageClass backed by CephFS
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
mountOptions:
```
Use a Deployment to test mounting shared CephFS storage through the StorageClass
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs
  volumeMode: Filesystem
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-cephfs-test
  template:
    metadata:
      labels:
        app: storageclass-cephfs-test
    spec:
      restartPolicy: Always
      containers:
        - name: storageclass-cephfs-test
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: data
              mountPath: /data
          image: 'centos:7'
          args:
            - 'sh'
            - '-c'
            - 'sleep 3600'
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data-storageclass-cephfs-test
```
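Because the PVC is ReadWriteMany, both replicas mount the same CephFS volume; a minimal sketch of verifying the shared view (pod names are picked up dynamically):

```bash
PODS=$(kubectl get pods -l app=storageclass-cephfs-test -o jsonpath='{.items[*].metadata.name}')
kubectl exec ${PODS%% *} -- sh -c 'echo hello > /data/shared.txt'  # write from the first pod
kubectl exec ${PODS##* } -- cat /data/shared.txt                   # read it back from the second pod
```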
Create an object storage gateway
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPool:
    failureDomain: osd
    replicated:
      size: 3
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1
    placement:
    annotations:
    resources:
```
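After applying it, confirm the gateway pod is running before adding the Ingress (the `app=rook-ceph-rgw` label is the Rook convention for gateway pods):

```bash
kubectl -n rook-ceph get pods -l app=rook-ceph-rgw
```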
After it is created, run `ceph osd pool ls` in rook-ceph-tools; the new object-store pools (.rgw.root and several my-store.rgw.* pools) appear in the listing.
Add an Ingress route for the ceph-rgw service
```yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-rgw
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - rook-ceph-rgw.minikube.local
      secretName: rook-ceph-rgw.minikube.local
  rules:
    - host: rook-ceph-rgw.minikube.local
      http:
        paths:
          - path: /
            backend:
              serviceName: rook-ceph-rgw-my-store
              servicePort: http
```
Add the domain rook-ceph-rgw.minikube.local to /etc/hosts, then visit https://rook-ceph-rgw.minikube.local/ in a browser.
Add an object storage user
```yaml
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "my display name"
```
Creating the object storage user also generates a secret named after the pattern {{.metadata.namespace}}-object-user-{{.spec.store}}-{{.metadata.name}}, which stores this S3 user's AccessKey and SecretKey.
Get the AccessKey:

```bash
kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.AccessKey}' | base64 -d
```

Get the SecretKey:

```bash
kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.SecretKey}' | base64 -d
```
With the information obtained in the steps above, connect with any S3 client to start using this S3 user.
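A minimal sketch of such a connection, assuming the AWS CLI is installed and the rgw Ingress above resolves (the bucket name test-bucket is arbitrary; add `--no-verify-ssl` if the certificate is self-signed):

```bash
export AWS_ACCESS_KEY_ID=$(kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.AccessKey}' | base64 -d)
export AWS_SECRET_ACCESS_KEY=$(kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.SecretKey}' | base64 -d)
aws --endpoint-url https://rook-ceph-rgw.minikube.local s3 mb s3://test-bucket  # create a bucket
aws --endpoint-url https://rook-ceph-rgw.minikube.local s3 ls                   # list buckets
```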
Create a StorageClass backed by S3
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  region: default
```
> Creating PVCs backed by S3 storage is not currently supported; this StorageClass can only be used to create buckets.
Create the corresponding bucket claim for the StorageClass
```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
  namespace: rook-ceph  # matches the namespace used by the kubectl commands below
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket
```
Once the bucket is created, a secret with the same name as the bucket claim is generated; it stores the AccessKey and SecretKey used to connect to this bucket.
Get the AccessKey:

```bash
kubectl get secret ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
```

Get the SecretKey:

```bash
kubectl get secret ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d
```
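The generated bucket name (prefix ceph-bkt) and its endpoint are published in a ConfigMap with the same name as the claim, so a consumer can read everything it needs from the API; for example (assuming the claim was created in rook-ceph as above):

```bash
kubectl get cm ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.BUCKET_NAME}'
kubectl get cm ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.BUCKET_HOST}'
```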
> The S3 user obtained this way carries a quota that limits it to a single bucket.
That wraps up this quick hands-on tour of Rook Ceph's trinity of storage types (RBD, CephFS, S3). Compared with ceph-deploy and ceph-ansible it is much simpler and more convenient, and well suited for newcomers trying out Ceph. Its stability still needs time to prove itself, so it is not yet recommended for production use.