Setting up GlusterFS storage with gluster-kubernetes
In traditional operations, an administrator often has to allocate space on the storage cluster by hand before it can be mounted into an application. In recent Kubernetes releases, dynamic provisioning has been promoted to beta and supports dynamic provisioning for many storage services, so the capacity of the storage environment can be used more efficiently, on demand. This article introduces the dynamic provisioning feature and, using GlusterFS as the example, shows how to wire a storage service up to Kubernetes.
⚠️ If you are already familiar with these concepts, feel free to skip ahead.
dynamic provisioning:
Storage is a very important part of container orchestration. Since v1.2, Kubernetes has offered dynamic provisioning, a powerful feature that gives the cluster on-demand storage allocation and supports many cloud storage backends, including AWS EBS, GCE PD, OpenStack Cinder, Ceph, and GlusterFS. Storage that is not officially supported can still be integrated by writing a plugin.
Without dynamic provisioning, a volume had to be allocated on the storage side before a container could use it, and that step was usually performed manually by an administrator. With dynamic provisioning, Kubernetes calls the storage service's API and dynamically creates storage matching the size the container's volume requests.
StorageClass:
An administrator can configure StorageClasses to describe the kinds of storage on offer. Taking AWS EBS as an example, the administrator could define two StorageClasses, slow and fast: slow backed by sc1 (spinning disks) and fast backed by gp2 (SSDs). An application can then pick whichever class matches its performance needs.
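As a sketch of the two classes described above (the `type` parameter is the documented Kubernetes AWS-EBS provisioner parameter; the names slow/fast are just illustrative):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: sc1   # spinning disk
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2   # general-purpose SSD
```

A PVC then selects one of these classes by name, and the matching EBS volume type is provisioned on demand.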
GlusterFS:
An open-source distributed file system with strong scale-out capability: it can grow to several petabytes of capacity and serve thousands of clients. Over TCP/IP or InfiniBand RDMA, GlusterFS aggregates physically distributed storage resources and manages the data under a single global namespace.
⚠️ The most distinctive design decision in the GlusterFS architecture is that there is no metadata server component. There is no master/slave distinction between servers: every node can act as a master.
Heketi:
Heketi (https://github.com/heketi/heketi) is a RESTful-API-based volume management framework for GlusterFS.
Heketi integrates easily with cloud platforms, exposing a RESTful API that Kubernetes can call to manage volumes across multiple GlusterFS clusters. In addition, heketi has the advantage of keeping bricks and their replicas evenly spread across the cluster's availability zones.
The heketi site recommends deploying through gluster-kubernetes; in production you can use the scripts gluster-kubernetes provides directly, which reduces complexity. That is my personal view; to each their own.
1 master node
1. At least 3 Kubernetes worker nodes are needed to deploy the GlusterFS cluster, and each of those 3 nodes needs at least one spare disk.
2. Check whether the thin-provisioning kernel module is loaded with `lsmod | grep thin`; if it is not, run `modprobe dm_thin_pool` on every node of the Kubernetes cluster to load it.
```bash
git clone https://github.com/gluster/gluster-kubernetes.git
cd xxx/gluster-kubernetes/deploy
cp topology.json.sample topology.json
```
Edit the hostnames (nodes), IPs, and data devices to match your environment:
```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [ "10.8.4.92" ],
              "storage": [ "10.8.4.92" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/vdb" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "10.8.4.93" ],
              "storage": [ "10.8.4.93" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/vdb" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "10.8.4.131" ],
              "storage": [ "10.8.4.131" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/vdb" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "10.8.4.132" ],
              "storage": [ "10.8.4.132" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/vdb" ]
        }
      ]
    }
  ]
}
```
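Before running the deploy script, it can be worth sanity-checking the topology. The small script below is not part of gluster-kubernetes, just a quick check of the structure; it mirrors the four-node topology above (note all four nodes share zone 1, so heketi cannot spread replicas across zones here):

```python
# Inline copy of the four-node topology from topology.json above.
TOPOLOGY = {
    "clusters": [{
        "nodes": [
            {
                "node": {"hostnames": {"manage": [ip], "storage": [ip]}, "zone": 1},
                "devices": ["/dev/vdb"],
            }
            for ip in ("10.8.4.92", "10.8.4.93", "10.8.4.131", "10.8.4.132")
        ]
    }]
}

def summarize_topology(topology):
    """Return per-cluster node count, device count, and zone list."""
    summary = []
    for cluster in topology["clusters"]:
        nodes = cluster["nodes"]
        summary.append({
            "nodes": len(nodes),
            "devices": sum(len(n["devices"]) for n in nodes),
            "zones": sorted({n["node"]["zone"] for n in nodes}),
        })
    return summary

print(summarize_topology(TOPOLOGY))  # → [{'nodes': 4, 'devices': 4, 'zones': [1]}]
```

If the node count is below 3 or the devices are missing, gk-deploy will fail later, so catching it here is cheaper.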
The heketi configuration file (heketi.json):

```json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,                      # enable user authentication

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "adminkey"                  # admin password
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "userkey"                   # user password
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "${HEKETI_EXECUTOR}",    # this article uses the kubernetes executor

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",   # heketi data store

    "kubeexec": {
      "rebalance_on_expansion": true
    },

    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "port": "${SSH_PORT}",
      "user": "${SSH_USER}",
      "sudo": ${SSH_SUDO}
    }
  },

  "backup_db_to_kube_secret": false
}
```
./gk-deploy -h
An overview of the main flags:
```
-g, --deploy-gluster       # deploy GlusterFS as pods
-s, --ssh-keyfile          # manage GlusterFS over SSH, /root/.ssh/id_rsa.pub
--admin-key ADMIN_KEY      # set the admin secret
--user-key USER_KEY        # set the user secret
--abort                    # tear down the heketi resources
```
vi gk-deploy
The key content of the script:
⚠️ To understand in depth what the script does, see https://www.kubernetes.org.cn/3893.html
```bash
# load the GlusterFS device nodes
heketi_cli="${CLI} exec -i ${heketi_pod} -- heketi-cli -s http://localhost:8080 --user admin --secret '${ADMIN_KEY}'"

load_temp=$(mktemp)
eval_output "${heketi_cli} topology load --json=/etc/heketi/topology.json 2>&1" | tee "${load_temp}"
```
⚠️ The "Adding device" steps are fairly slow; be patient.
```bash
kubectl create ns glusterfs
./gk-deploy -y -n glusterfs -g --user-key=userkey --admin-key=adminkey
```

```
Using namespace "glusterfs".
Checking that heketi pod is not running ... OK
serviceaccount "heketi-service-account" created
clusterrolebinding "heketi-sa-view" created
node "10.8.4.92" labeled
node "10.8.4.93" labeled
node "10.8.4.131" labeled
node "10.8.4.132" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
service "deploy-heketi" created
deployment "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 4cfe35ce3cdc64b8afb8dbc46cad0e09
Creating node 10.8.4.92 ... ID: 1d323ddf243fd4d8c7f0ed58eb0e2c0ab
Adding device /dev/vdb ... OK
Creating node 10.8.4.93 ... ID: 12df23f339dj4jf8jdk3oodd31ba9e12c52
Adding device /dev/vdb ... OK
Creating node 10.8.4.131 ... ID: 1c529sd3ewewed1286e29e260668a1
Adding device /dev/vdb ... OK
Creating node 10.8.4.132 ... ID: 12ff323cd1121232323fddf9e260668a1
Adding device /dev/vdb ... OK
heketi topology loaded.
Saving heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
deployment "deploy-heketi" deleted
secret "heketi-storage-secret" deleted
service "heketi" created
deployment "heketi" created
Waiting for heketi pod to start ... OK
heketi is now running and accessible via http://10.10.23.148:8080/
Ready to create and provide GlusterFS volumes.
```
kubectl get po -o wide -n glusterfs
```
[root@k8s1-master1 deploy]# export HEKETI_CLI_SERVER=$(kubectl get svc/heketi -n glusterfs --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')
[root@k8s1-master1 deploy]# echo $HEKETI_CLI_SERVER
http://10.0.0.131:8080
[root@k8s1-master1 deploy]# curl $HEKETI_CLI_SERVER/hello
Hello from Heketi
```
```bash
kubectl delete -f kube-templates/deploy-heketi-deployment.yaml
kubectl delete -f kube-templates/heketi-deployment.yaml
kubectl delete -f kube-templates/heketi-service-account.yaml
kubectl delete -f kube-templates/glusterfs-daemonset.yaml

# run on every node
rm -rf /var/lib/heketi
rm -rf /var/lib/glusterd
```
```bash
# run on every node: wipe the start of the device and re-read the partition table
dd if=/dev/zero of=/dev/vdb bs=1k count=1
blockdev --rereadpt /dev/vdb
```
Check that the peers are in the Connected state:
```
[root@k8s1-master2 ~]# kubectl exec -ti glusterfs-sb7l9 -n glusterfs bash
[root@k8s1-master2 /]# gluster peer status
Number of Peers: 3

Hostname: 10.8.4.93
Uuid: 52824c41-2fce-468a-b9c9-7c3827ed7a34
State: Peer in Cluster (Connected)

Hostname: 10.8.4.131
Uuid: 6a27b31f-dbd9-4de5-aefd-73c1ac9b81c5
State: Peer in Cluster (Connected)

Hostname: 10.8.4.132
Uuid: 7b7b53ff-af7f-49aa-b371-29dd1e784ad1
State: Peer in Cluster (Connected)
```
The storage has been mounted:
```
[root@k8s1-master2 ~]# kubectl exec -ti glusterfs-sb7l9 -n glusterfs bash
[root@k8s1-master2 /]# gluster volume info

Volume Name: heketidbstorage
Type: Replicate
Volume ID: 02fd891f-dd43-4c1b-a2ba-87e1be7c706f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.8.4.132:/var/lib/heketi/mounts/vg_5634269dc08edd964032871801920f1e/brick_b980d3f5ce7b1b4314c4b57c8aaf35fa/brick
Brick2: 10.8.4.93:/var/lib/heketi/mounts/vg_1d2cf75ab474dd63edb917a78096e429/brick_b375443687051038234e50fe3cd5fe12/brick
Brick3: 10.8.4.92:/var/lib/heketi/mounts/vg_a5d145795d59c51d2335153880049760/brick_e8f9ec722a235448fbf6730c25d7441a/brick
Options Reconfigured:
user.heketi.id: dfed68e6dca82c7cd5911c8ddda7746b
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
```
vi storageclass-dev-glusterfs.yaml
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: glusterfs
data:
  # base64 encoded password. E.g.: echo -n "adminkey" | base64
  key: YWRtaW5rZXk=
type: kubernetes.io/glusterfs
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.8.4.91:42951"
  clusterid: "364a0a72b3343c537c20db5576ffd46c"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "glusterfs"
  secretName: "heketi-secret"
  #restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "none"
```
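The Secret's `data.key` field must be the base64 encoding of the heketi admin key, i.e. the value passed to gk-deploy via `--admin-key` ("adminkey" in this walkthrough). A quick check:

```python
import base64

# data.key in the Secret must equal base64("adminkey")
key = base64.b64encode(b"adminkey").decode()
print(key)  # → YWRtaW5rZXk=
```

If the encoded value does not match what is in the Secret, dynamic provisioning will fail with an authentication error from heketi.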
To obtain the clusterid, exec into the pod heketi-549c999b6f-5l8sp and run:

```bash
heketi-cli --user admin --secret adminkey cluster list
```

The parameter that mainly deserves discussion is volumetype.
volumetype
volumetype : The volume type and its parameters can be configured with this optional value. If the volume type is not mentioned, it’s up to the provisioner to decide the volume type.
For example:
Replica volume: volumetype: replicate:3 where ‘3’ is replica count.
Disperse/EC volume: volumetype: disperse:4:2 where ‘4’ is data and ‘2’ is the redundancy count.
Distribute volume: volumetype: none
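The capacity trade-off between these types can be made concrete. The sketch below just encodes the semantics quoted above (N full copies for replicate:N; D data plus R redundancy bricks for disperse:D:R). Note that the `gluster volume create` CLI counts differently: there, `disperse 4 redundancy 1` means 4 bricks total, 3 data + 1 redundancy, as the "1 x (3 + 1) = 4" output later in this article shows.

```python
def usable_fraction(volumetype):
    """Usable-capacity fraction implied by a StorageClass volumetype string.

    replicate:N  -> 1/N        (N full copies of the data)
    disperse:D:R -> D/(D+R)    (D data bricks + R redundancy bricks)
    none         -> 1          (pure distribution, no redundancy at all)
    """
    if volumetype == "none":
        return 1.0
    kind, *nums = volumetype.split(":")
    if kind == "replicate":
        return 1 / int(nums[0])
    if kind == "disperse":
        data, redundancy = int(nums[0]), int(nums[1])
        return data / (data + redundancy)
    raise ValueError(f"unsupported volumetype: {volumetype}")

for vt in ("replicate:3", "disperse:4:2", "none"):
    print(f"{vt}: {usable_fraction(vt):.0%} of raw capacity usable")
```

So replicate:3 keeps only a third of the raw space, disperse:4:2 keeps two thirds while still surviving two brick failures, and none keeps everything but survives nothing.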
volumetype: disperse:4:2
An erasure-coded (disperse) volume with disperse:4:2 should require 6 servers, and I only have 4, so I experimented with volumetype: disperse:4:1. The PV was not created automatically, but creating the volume manually did succeed. You can run the following inside the pod glusterfs-5jzdh; note the Type field.
```bash
gluster volume create gv1 disperse 4 redundancy 1 10.8.4.92:/var/lib/heketi/mounts/gv1 10.8.4.93:/var/lib/heketi/mounts/gv1 10.8.4.131:/var/lib/heketi/mounts/gv1 10.8.4.132:/var/lib/heketi/mounts/gv1
gluster volume start gv1
gluster volume info
```
The output is as follows:
```
Volume Name: gv2
Type: Disperse
Volume ID: e072f9fa-6139-4471-a163-0e0dde0265ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (3 + 1) = 4
Transport-type: tcp
Bricks:
Brick1: 10.8.4.92:/var/lib/heketi/mounts/gv2
Brick2: 10.8.4.93:/var/lib/heketi/mounts/gv2
Brick3: 10.8.4.131:/var/lib/heketi/mounts/gv2
Brick4: 10.8.4.132:/var/lib/heketi/mounts/gv2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
```
volumetype: replicate:3
This creates a replicated volume with 3 replicas. It consumes more resources, but the volume stays usable when a disk is damaged or a node goes down. `gluster volume info` shows the following; note the Type field:
```
Volume Name: vol_d78f449dbeab2286267c7e1842086a8f
Type: Replicate
Volume ID: 02fd891f-dd43-4c1b-a2ba-87e1be7c706f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.8.4.132:/var/lib/heketi/mounts/vg_5634269dc08edd964032871801920f1e/brick_b980d3f5ce7b1b4314c4b57c8aaf35fa/brick
Brick2: 10.8.4.93:/var/lib/heketi/mounts/vg_1d2cf75ab474dd63edb917a78096e429/brick_b375443687051038234e50fe3cd5fe12/brick
Brick3: 10.8.4.92:/var/lib/heketi/mounts/vg_a5d145795d59c51d2335153880049760/brick_e8f9ec722a235448fbf6730c25d7441a/brick
Options Reconfigured:
user.heketi.id: dfed68e6dca82c7cd5911c8ddda7746b
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
```
volumetype: none
A distributed volume: each file is placed on a single brick by a hash algorithm, so the data becomes unavailable if that disk is damaged or its node goes down. `gluster volume info` shows the following; note the Type field:
```
Volume Name: vol_e1b27d580cbe18a96b0fdf7cbfe69cc2
Type: Distribute
Volume ID: cb4a7e4f-3850-4809-b159-fc8000527d71
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.8.4.93:/var/lib/heketi/mounts/vg_1d2cf75ab474dd63edb917a78096e429/brick_8f62218753db589204b753295a318795/brick
Options Reconfigured:
user.heketi.id: e1b27d580cbe18a96b0fdf7cbfe69cc2
transport.address-family: inet
nfs.disable: on
```
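The "placed on one brick by a hash of the file name" behavior is easy to illustrate. GlusterFS actually uses its own elastic hashing (DHT over hash ranges assigned to bricks); the toy sketch below, with a hypothetical brick list, only shows the consequence that matters here: each file lands on exactly one brick, so losing that brick loses those files.

```python
import hashlib

# Hypothetical brick list, for illustration only.
BRICKS = ["10.8.4.92:/brick", "10.8.4.93:/brick", "10.8.4.131:/brick"]

def place(filename, bricks=BRICKS):
    """Map a file name to exactly one brick via a hash of the name."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    return bricks[int(digest, 16) % len(bricks)]

for name in ("a.log", "b.log", "c.log"):
    print(name, "->", place(name))
```

The placement is deterministic in the file name, which is what lets clients find files without any metadata server.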
vi glusterfs-pv.yaml
```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```
Dear reader, you should choose based on your actual situation. To learn more about volume types and how to use them, see "Analysis, Creation and Usage of GlusterFS Volume Types (in the Context of a Kubernetes Cluster)" (to be uploaded once formatted).
Everything here was typed and tested by hand. If you hit any problems, feel free to reach out, and do leave a like!! Likes are free!!!