Hostname | OS | IP address | Role |
---|---|---|---|
ops-k8s-175 | ubuntu16.04 | 192.168.75.175 | k8s-master, glusterfs, heketi |
ops-k8s-176 | ubuntu16.04 | 192.168.75.176 | k8s-node, glusterfs |
ops-k8s-177 | ubuntu16.04 | 192.168.75.177 | k8s-node, glusterfs |
ops-k8s-178 | ubuntu16.04 | 192.168.75.178 | k8s-node, glusterfs |
```bash
# Run on all nodes:
apt-get install glusterfs-server glusterfs-common glusterfs-client fuse
systemctl start glusterfs-server
systemctl enable glusterfs-server

# Run on 175 only:
gluster peer probe 192.168.75.176
gluster peer probe 192.168.75.177
gluster peer probe 192.168.75.178
```
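Before creating any volumes, it is worth confirming that every peer has joined the trusted storage pool; these are standard gluster checks run on 175:

```bash
# Every probed node should report "Peer in Cluster (Connected)"
gluster peer status
gluster pool list
```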
Create a test volume
```bash
# Create the volume
gluster volume create test-volume replica 2 192.168.75.175:/home/glusterfs/data 192.168.75.176:/home/glusterfs/data force
# Start the volume
gluster volume start test-volume
# Mount it
mount -t glusterfs 192.168.75.175:/test-volume /mnt/mytest
```
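The volume definition and brick state can be checked at any time:

```bash
gluster volume info test-volume
gluster volume status test-volume
```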
Expand the test volume
```bash
# Add bricks to the volume
gluster volume add-brick test-volume 192.168.75.177:/home/glusterfs/data 192.168.75.178:/home/glusterfs/data force
```
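Adding bricks does not redistribute existing data by itself; a rebalance is usually started afterwards:

```bash
gluster volume rebalance test-volume start
gluster volume rebalance test-volume status
```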
Delete the test volume
```bash
gluster volume stop test-volume
gluster volume delete test-volume
```
Heketi provides a RESTful management interface that can be used to manage the lifecycle of GlusterFS volumes. With Heketi, consumers such as OpenStack Manila, Kubernetes and OpenShift can request dynamically provisioned GlusterFS volumes. Heketi automatically picks bricks across the cluster to build the requested volumes, ensuring that data replicas land in different failure domains. Heketi also supports any number of GlusterFS clusters, so the servers it serves are not tied to a single GlusterFS cluster.
Heketi project: https://github.com/heketi/heketi
Download the Heketi packages:
https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-client-v5.0.1.linux.amd64.tar.gz
https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz
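A minimal way to unpack and install the downloaded archives might look like this (the unpack layout and the /usr/local/bin target are assumptions; adjust them to your environment):

```bash
# Assuming both tarballs are in the current directory and unpack into a
# heketi/ directory containing the heketi and heketi-cli binaries
tar -xzf heketi-v5.0.1.linux.amd64.tar.gz
tar -xzf heketi-client-v5.0.1.linux.amd64.tar.gz
cp heketi/heketi heketi/heketi-cli /usr/local/bin/
mkdir -p /etc/heketi /var/lib/heketi
```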
Edit the Heketi configuration file /etc/heketi/heketi.json as follows:
```
......
# Change the port to avoid conflicts
"port": "18080",
......
# Enable authentication
"use_auth": true,
......
# Change the admin user's key to adminkey
"key": "adminkey"
......
# Switch the executor to ssh and configure the required ssh credentials.
# The key must allow passwordless ssh login to every machine in the cluster;
# use ssh-copy-id to push the public key to every glusterfs server.
"executor": "ssh",
"sshexec": {
  "keyfile": "/root/.ssh/id_rsa",
  "user": "root",
  "port": "22",
  "fstab": "/etc/fstab"
},
......
# Location of the heketi database file
"db": "/var/lib/heketi/heketi.db"
......
# Adjust the log level
"loglevel" : "warning"
```
Note that heketi has three executors: mock, ssh and kubernetes. mock is recommended for test environments and ssh for production; kubernetes is only used when glusterfs itself runs as containers on kubernetes. Since glusterfs and heketi are deployed independently here, we use the ssh executor.
Because heketi is configured with the ssh executor above, the heketi server must be able to manage every glusterfs node over ssh with key authentication, so an ssh key pair is generated first (make sure the keyfile path in heketi.json points at the private key you actually distribute).
```bash
ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
chmod 600 /etc/heketi/heketi_key.pub
# Copy the ssh public key to each node; only one node is shown here
ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.75.175
# Verify that the ssh key allows logging in to the glusterfs node
ssh -i /etc/heketi/heketi_key root@192.168.75.175
```
nohup heketi -config=/etc/heketi/heketi.json &
In my production setup, heketi is managed with docker-compose instead of being started by hand; a docker-compose example is given below:
version: "2" services: heketi: container_name: heketi image: dk-reg.op.douyuyuba.com/library/heketi:5 volumes: - "/etc/heketi:/etc/heketi" - "/var/lib/heketi:/var/lib/heketi" - "/etc/localtime:/etc/localtime" network_mode: host
```bash
heketi-cli --user admin --server http://192.168.75.175:18080 --secret adminkey --json cluster create
{"id":"d102a74079dd79aceb3c70d6a7e8b7c4","nodes":[],"volumes":[]}
```
Since heketi authentication is enabled, every heketi-cli call needs the full set of authentication flags, which is tedious, so I create an alias to avoid repeating them:
alias heketi-cli='heketi-cli --server "http://192.168.75.175:18080" --user "admin" --secret "adminkey"'
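With the alias in place, a quick sanity check that the client can reach the server and see the cluster created above:

```bash
# Should list the cluster id returned by the create call
heketi-cli cluster list
```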
Next, add the nodes:
```bash
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.175 --storage-host-name 192.168.75.175 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.176 --storage-host-name 192.168.75.176 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.177 --storage-host-name 192.168.75.177 --zone 1
heketi-cli --json node add --cluster "d102a74079dd79aceb3c70d6a7e8b7c4" --management-host-name 192.168.75.178 --storage-host-name 192.168.75.178 --zone 1
```
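Each node add call returns the node id in its JSON output; the ids can also be listed afterwards, which is handy because the device add step below needs them:

```bash
heketi-cli node list
```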
Some documents note that when deploying on CentOS you have to comment out `Defaults requiretty` in /etc/sudoers on every glusterfs node, otherwise adding the second node keeps failing; only after raising the log level do the logs show sudo complaining that a tty is required. Since I deploy directly on ubuntu, this problem does not occur here. If you hit it, apply that workaround.
Note in particular that heketi currently only accepts raw partitions or raw disks as devices; devices that already carry a file system are not supported.
```bash
# The id passed to --node is the one generated when the node was added in the previous step.
# Only one example is shown; in a real setup, add every storage disk of every node.
heketi-cli --json device add --name="/dev/vda2" --node "c3638f57b5c5302c6f7cd5136c8fdc5e"
```
上面展現瞭如何手動一步步生成cluster,往cluster中添加節點,添加device的操做,在咱們實際生產配置中,能夠直接經過配置文件完成。
Create a file /etc/heketi/topology-sample.json with the following content:
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "192.168.75.175" ], "storage": [ "192.168.75.175" ] }, "zone": 1 }, "devices": [ "/dev/vda2" ] }, { "node": { "hostnames": { "manage": [ "192.168.75.176" ], "storage": [ "192.168.75.176" ] }, "zone": 1 }, "devices": [ "/dev/vda2" ] }, { "node": { "hostnames": { "manage": [ "192.168.75.177" ], "storage": [ "192.168.75.177" ] }, "zone": 1 }, "devices": [ "/dev/vda2" ] }, { "node": { "hostnames": { "manage": [ "192.168.75.178" ], "storage": [ "192.168.75.178" ] }, "zone": 1 }, "devices": [ "/dev/vda2" ] } ] } ] }
Load it:
heketi-cli topology load --json topology-sample.json
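Once the load completes, the resulting clusters, nodes and devices can be inspected with:

```bash
heketi-cli topology info
```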
The following volume creation is only a test; in real use, kubernetes creates the volumes automatically through PVCs.
If the requested volume is small, the creation may fail with a "No Space" error. To work around it, add "brick_min_size_gb" : 1 to heketi.json (the value is in GiB, so 1 means 1 GiB):
...... "brick_min_size_gb" : 1, "db": "/var/lib/heketi/heketi.db" ......
The requested size must be larger than brick_min_size_gb; a size of 1 still fails with a min brick limit error, and replica must be greater than 1.
heketi-cli --json volume create --size 3 --replica 2
Running the create command threw the following error:
```
Error: /usr/sbin/thin_check: execvp failed: No such file or directory
WARNING: Integrity check of metadata for pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e failed.
/usr/sbin/thin_check: execvp failed: No such file or directory
Check of pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e failed (status:2). Manual repair required!
Failed to activate thin pool vg_d9fb2bec56cfdf73e21d612b1b3c1feb/tp_e94d763a9b687bfc8769ac43b57fa41e.
```
This requires installing the thin-provisioning-tools package on all glusterfs nodes:
apt-get -y install thin-provisioning-tools
The output of a successful creation looks like this:
```bash
heketi-cli --json volume create --size 3 --replica 2
{"size":3,"name":"vol_7fc61913851227ca2c1237b4c4d51997","durability":{"type":"replicate","replicate":{"replica":2},"disperse":{"data":4,"redundancy":2}},"snapshot":{"enable":false,"factor":1},"id":"7fc61913851227ca2c1237b4c4d51997","cluster":"dae1ab512dfad0001c3911850cecbd61","mount":{"glusterfs":{"hosts":["10.1.61.175","10.1.61.178"],"device":"10.1.61.175:vol_7fc61913851227ca2c1237b4c4d51997","options":{"backup-volfile-servers":"10.1.61.178"}}},"bricks":[{"id":"004f34fd4eb9e04ca3e1ca7cc1a2dd2c","path":"/var/lib/heketi/mounts/vg_d9fb2bec56cfdf73e21d612b1b3c1feb/brick_004f34fd4eb9e04ca3e1ca7cc1a2dd2c/brick","device":"d9fb2bec56cfdf73e21d612b1b3c1feb","node":"20d14c78691d9caef050b5dc78079947","volume":"7fc61913851227ca2c1237b4c4d51997","size":3145728},{"id":"2876e9a7574b0381dc0479aaa2b64d46","path":"/var/lib/heketi/mounts/vg_b7fd866d3ba90759d0226e26a790d71f/brick_2876e9a7574b0381dc0479aaa2b64d46/brick","device":"b7fd866d3ba90759d0226e26a790d71f","node":"9cddf0ac7899676c86cb135be16649f5","volume":"7fc61913851227ca2c1237b4c4d51997","size":3145728}]}
```
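The mount.glusterfs section of the output shows how the volume can be mounted by hand if you want to inspect it (the mount point below is an arbitrary choice):

```bash
mkdir -p /mnt/heketi-test
mount -t glusterfs -o backup-volfile-servers=10.1.61.178 10.1.61.175:vol_7fc61913851227ca2c1237b4c4d51997 /mnt/heketi-test
```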
See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
Create a storageclass-glusterfs.yaml file with the following content:
```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.75.175:18080"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"
  volumetype: "replicate:2"
```

```bash
kubectl apply -f storageclass-glusterfs.yaml
```
This writes the user key in plain text into the StorageClass; the official recommendation is to keep the key in a Secret instead. For example:
```yaml
# glusterfs-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: TFRTTkd6TlZJOEpjUndZNg==
type: kubernetes.io/glusterfs
```

```yaml
# storageclass-glusterfs.yaml becomes:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.1.61.175:18080"
  clusterid: "dae1ab512dfad0001c3911850cecbd61"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  #restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
```
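Instead of base64-encoding the key by hand, the same Secret can also be created straight from the literal value (assuming the admin key is still adminkey):

```bash
kubectl create secret generic heketi-secret \
  --type=kubernetes.io/glusterfs \
  --from-literal=key=adminkey \
  --namespace=default
```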
更詳細的用法參考:https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs
glusterfs-pvc.yaml:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-mysql1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```

```bash
kubectl create -f glusterfs-pvc.yaml
```
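Once the provisioner has done its work, the claim should show as Bound with a dynamically created PV behind it:

```bash
kubectl get pvc glusterfs-mysql1
kubectl get pv
```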
mysql-deployment.yaml:
```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mysql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root123456
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: glusterfs-mysql-data
              mountPath: "/var/lib/mysql"
      volumes:
        - name: glusterfs-mysql-data
          persistentVolumeClaim:
            claimName: glusterfs-mysql1
```

```bash
kubectl create -f /etc/kubernetes/mysql-deployment.yaml
```
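To confirm the volume is actually mounted inside the pod, you can check the pod and its mount (replace the placeholder with the real pod name):

```bash
kubectl get pods -l name=mysql
kubectl exec <mysql-pod> -- df -h /var/lib/mysql
```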
Note that I use dynamically provisioned PVCs here to create the glusterfs volumes; there is also a way to create the PV/PVC by hand, see: http://rdc.hundsun.com/portal/article/826.html