Kubernetes cluster three-step installation
git clone https://github.com/rook/rook
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml
Check that the operator came up successfully:
[root@dev-86-201 ~]# kubectl get pod -n rook-ceph-system
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-5z6p7                 1/1     Running   0          88m
rook-ceph-agent-6rj7l                 1/1     Running   0          88m
rook-ceph-agent-8qfpj                 1/1     Running   0          88m
rook-ceph-agent-xbhzh                 1/1     Running   0          88m
rook-ceph-operator-67f4b8f67d-tsnf2   1/1     Running   0          88m
rook-discover-5wghx                   1/1     Running   0          88m
rook-discover-lhwvf                   1/1     Running   0          88m
rook-discover-nl5m2                   1/1     Running   0          88m
rook-discover-qmbx7                   1/1     Running   0          88m
Then create the Ceph cluster:
kubectl create -f cluster.yaml
Check the Ceph cluster:
[root@dev-86-201 ~]# kubectl get pod -n rook-ceph
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-mgr-a-8649f78d9b-jklbv   1/1     Running   0          64m
rook-ceph-mon-a-5d7fcfb6ff-2wq9l   1/1     Running   0          81m
rook-ceph-mon-b-7cfcd567d8-lkqff   1/1     Running   0          80m
rook-ceph-mon-d-65cd79df44-66rgz   1/1     Running   0          79m
rook-ceph-osd-0-56bd7545bd-5k9xk   1/1     Running   0          63m
rook-ceph-osd-1-77f56cd549-7rm4l   1/1     Running   0          63m
rook-ceph-osd-2-6cf58ddb6f-wkwp6   1/1     Running   0          63m
rook-ceph-osd-3-6f8b78c647-8xjzv   1/1     Running   0          63m
Parameter notes:
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
    image: ceph/ceph:v13.2.2-20181023
  dataDirHostPath: /var/lib/rook   # data directory on the host
  mon:
    count: 3
    allowMultiplePerNode: true
  dashboard:
    enabled: true
  storage:
    useAllNodes: true
    useAllDevices: false
    config:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"
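The cluster above lets Rook use every node. If you would rather hand Rook specific nodes and disks, the storage section of this (Rook v0.9-era) CRD can also list them explicitly. A minimal sketch, where the node and device names are placeholders you must replace with your own:

# Sketch only; sits under spec: of the CephCluster, node/device names are placeholders.
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: "dev-86-202"       # placeholder: must match the Kubernetes node name
      devices:
      - name: "sdb"            # placeholder: raw disk on that node to back an OSD
      config:
        storeType: bluestore   # or filestore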
Access the Ceph dashboard:
[root@dev-86-201 ~]# kubectl get svc -n rook-ceph
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr             ClusterIP   10.98.183.33     <none>        9283/TCP         66m
rook-ceph-mgr-dashboard   NodePort    10.103.84.48     <none>        8443:31631/TCP   66m   # change this one to NodePort
rook-ceph-mon-a           ClusterIP   10.99.71.227     <none>        6790/TCP         83m
rook-ceph-mon-b           ClusterIP   10.110.245.119   <none>        6790/TCP         82m
rook-ceph-mon-d           ClusterIP   10.101.79.159    <none>        6790/TCP         81m
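The output above already shows the dashboard Service switched to NodePort; if yours is still the default ClusterIP, one way to flip it (you can also kubectl edit the Service) is a patch like this:

kubectl -n rook-ceph patch svc rook-ceph-mgr-dashboard -p '{"spec":{"type":"NodePort"}}'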
Then open https://10.1.86.201:31631 in a browser.
The admin account is admin; retrieve the login password with:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode
Next, create a CephBlockPool and a StorageClass; PVCs that reference this StorageClass will have their PVs provisioned dynamically:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool   # the operator watches this and creates the pool; once applied it also shows up in the dashboard
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block   # a StorageClass; reference it in a PVC to get PVs created dynamically
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exist
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Retain
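The Rook repo ships a similar manifest under cluster/examples/kubernetes/ceph; assuming you save the YAML above as storageclass.yaml (the file name here is only an assumption), create both objects with:

kubectl create -f storageclass.yaml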
Under the cluster/examples/kubernetes directory, the project provides a WordPress example that you can run directly:
kubectl create -f mysql.yaml
kubectl create -f wordpress.yaml
Check the PVs and PVCs:
[root@dev-86-201 ~]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m
wp-pv-claim      Bound    pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m

[root@dev-86-201 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   rook-ceph-block            145m
pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/wp-pv-claim      rook-ceph-block            145m
Take a look at the YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block   # reference the StorageClass created above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi   # request a 20Gi volume
...
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim   # reference the PVC defined above
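To double-check that the dynamically provisioned RBD volume really is mounted inside the MySQL pod, something like the following works; the label selector assumes the labels used by the official example (app=wordpress, tier=mysql), so adjust it if yours differ:

kubectl exec -it $(kubectl get pod -l app=wordpress,tier=mysql -o jsonpath='{.items[0].metadata.name}') -- df -h /var/lib/mysql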
Pretty simple, isn't it?
To access WordPress, change its Service to NodePort; the official example ships it as a LoadBalancer:
kubectl edit svc wordpress

[root@dev-86-201 kubernetes]# kubectl get svc
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
wordpress   NodePort   10.109.30.99   <none>        80:30130/TCP   148m
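If you prefer not to edit the Service interactively, the same switch can be done with a one-line patch (same idea as the dashboard Service earlier):

kubectl patch svc wordpress -p '{"spec":{"type":"NodePort"}}'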
Distributed storage plays a very important role in a container cluster. A core idea of running a container cluster is to treat the cluster as a single whole; if you still care about individual hosts, for example pinning workloads to a particular node or mounting a particular node's directories, you will never get the full power of the cloud. Once compute and storage are separated, workloads can truly drift to any node, which is a huge win for cluster maintenance.

For example, when machines go out of warranty and need to be decommissioned, nothing in this cloud-style architecture is a single point, so you simply drain the node (as sketched below) and take it offline, without caring what runs on it; stateful and stateless workloads alike recover automatically. The biggest remaining challenge is probably the performance of distributed storage, but for workloads whose performance requirements are not harsh I strongly recommend this compute-storage-separated architecture.
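A concrete sketch of the drain-and-decommission step above; the node name is a placeholder, and the drain flags may differ slightly between kubectl versions:

# stop scheduling new pods onto the node
kubectl cordon dev-86-204
# evict running pods; DaemonSet pods are skipped, local emptyDir data is discarded
kubectl drain dev-86-204 --ignore-daemonsets --delete-local-data
# once the machine is physically removed, delete the node object
kubectl delete node dev-86-204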
For discussion, join QQ group: 98488045.