Kubernetes persistent storage falls into two categories: static and dynamic. With static storage (commonly hostPath local storage, NFS, GlusterFS, and so on), the storage volumes and PVs must be provisioned in advance, and a PVC then claims space from them. With dynamic storage, a GlusterFS cluster and Heketi are deployed ahead of time and work together, so that simply creating a PVC is enough to obtain storage dynamically, with no need to create the underlying volumes and PVs by hand.
Prepare three virtual machines (2 cores, 4 GB RAM each) running a highly available Kubernetes cluster installed with kubeadm:
172.30.0.74 k8smaster1 hostname: k8smaster1
172.30.0.82 k8smaster2 hostname: k8smaster2
172.30.0.90 k8snode hostname: k8snode
Creating the GlusterFS cluster
Configure the GlusterFS yum repository:
# CentOS-Gluster-4.1.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
# information
[centos-gluster41]
name=CentOS-$releasever - Gluster 4.1 (Long Term Maintenance)
baseurl=http://mirror.centos.org/centos-7/7/storage/x86_64/gluster-4.1/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
[centos-gluster41-test]
name=CentOS-$releasever - Gluster 4.1 Testing (Long Term Maintenance)
baseurl=http://buildlogs.centos.org/centos/$releasever/storage/$basearch/gluster-4.1/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Install GlusterFS:
# yum install glusterfs-server
Start the services:
# systemctl start glusterd
# systemctl start glusterfsd
Install the packages and start the services on every GlusterFS node.
Using GlusterFS
# gluster peer probe k8smaster2
# gluster peer probe k8snode
This joins the GlusterFS nodes into the trusted storage pool.
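To confirm the peers joined the pool, you can check the peer status on any node:
# gluster peer status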
Set up a GlusterFS volume:
# mkdir -p /data/brick1/gv2
Run this on all three machines.
Create a replicated volume:
# gluster volume create gv2 replica 2 172.30.0.74:/data/brick1/gv2 172.30.0.82:/data/brick1/gv2 force
Start the GlusterFS volume:
# gluster volume start gv2
# gluster volume info
Check the volume details.
The GlusterFS cluster is now ready.
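As an optional sanity check, the volume can be mounted from any machine with the GlusterFS client installed (using the first node's address here as an example):
# mount -t glusterfs 172.30.0.74:/gv2 /mnt
# umount /mnt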
Heketi configuration
Heketi provides a RESTful management interface for managing the lifecycle of GlusterFS volumes. With Heketi, GlusterFS volumes can be requested and provisioned dynamically, the same way as with OpenStack Manila, Kubernetes, and OpenShift. Heketi automatically selects bricks across the cluster to build the requested volumes, ensuring that data replicas are spread across different failure domains. Heketi also supports any number of GlusterFS clusters, so the servers consuming the storage are not limited to a single GlusterFS cluster.
Download the release package:
# wget https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz
# tar xf heketi-v5.0.1.linux.amd64.tar.gz
# ln -s /root/heketi/heketi /bin/heketi
# ln -s /root/heketi/heketi-cli /bin/heketi-cli
Modify the Heketi configuration file
Edit the Heketi configuration file /etc/heketi/heketi.json as follows:
......
# Change the port to avoid conflicts (the rest of this article uses 28080)
"port": "28080",
......
# Enable authentication
"use_auth": true,
......
# Set the admin user's key to adminkey
"key": "adminkey"
......
# Set the executor to ssh and configure the required SSH credentials. Heketi must be able to SSH into every machine in the cluster without a password, so use ssh-copy-id to copy the public key to each GlusterFS server
"executor": "ssh",
"sshexec": {
"keyfile": "/root/.ssh/id_rsa",
"user": "root",
"port": "22",
"fstab": "/etc/fstab"
},
......
# Location of the Heketi database file
"db": "/var/lib/heketi/heketi.db"
......
# Adjust the log level
"loglevel" : "warning"
Note: Heketi has three executors: mock, ssh, and kubernetes. mock is recommended for test environments and ssh for production; the kubernetes executor is only used when GlusterFS itself is deployed as containers on Kubernetes. Since GlusterFS and Heketi are deployed independently here, the ssh executor is used.
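Because the ssh executor is used, Heketi needs passwordless root SSH access to every GlusterFS node. A minimal sketch of setting that up (assuming root logins and the three node addresses used in this article):
# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa     # skip if the key already exists
# for host in 172.30.0.74 172.30.0.82 172.30.0.90; do ssh-copy-id root@$host; done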
Start Heketi:
# nohup heketi --config=/etc/heketi/heketi.json &
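To confirm Heketi is listening, you can query its hello endpoint (assuming the 28080 port configured above); it should return a short greeting if the service is up:
# curl http://172.30.0.74:28080/hello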
Adding GlusterFS to Heketi
Create the cluster:
# heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey --json cluster create
{"id":"7e320f3f04068c0564eb92e865263bd4","nodes":[],"volumes":[]}
Use the returned cluster ID to add the three nodes to the cluster:
# heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey --json node add --cluster "7e320f3f04068c0564eb92e865263bd4" --management-host-name 172.30.0.73 --storage-host-name 172.30.0.74 --zone 1
# heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey --json node add --cluster "7e320f3f04068c0564eb92e865263bd4" --management-host-name 172.30.0.81 --storage-host-name 172.30.0.82 --zone 1
# heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey --json node add --cluster "7e320f3f04068c0564eb92e865263bd4" --management-host-name 172.30.0.89 --storage-host-name 172.30.0.90 --zone 1
Create a logical volume on each of the three nodes to serve as Heketi's device, which makes later expansion easier. Note that Heketi only accepts raw partitions or raw disks; they must not be formatted with a filesystem.
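A minimal sketch of creating the /dev/myvg/mylv logical volume on each node (the spare disk /dev/vdb and the 20G size are assumptions; adjust them to your environment, and leave the LV unformatted since Heketi manages it):
# pvcreate /dev/vdb                  # /dev/vdb is an assumed spare disk
# vgcreate myvg /dev/vdb
# lvcreate -n mylv -L 20G myvg       # size is an example value
Then register the logical volume on each node as a Heketi device: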
# heketi-cli --user admin --secret adminkey --server http://172.30.0.74:28080 --json device add --name="/dev/myvg/mylv" --node "78850cf6d4a44964b1fdf09970feb0"
# heketi-cli --user admin --secret adminkey --server http://172.30.0.74:28080 --json device add --name="/dev/myvg/mylv" --node "560f238695f64479298429c062dc4c"
# heketi-cli --user admin --secret adminkey --server http://172.30.0.74:28080 --json device add --name="/dev/myvg/mylv" --node "4e3e965421d26e6858d18e6ccaf19f"
Production configuration
The steps above show how to manually create a cluster, add nodes to it, and add devices one at a time. In an actual production setup, all of this can be done directly through a configuration file.
Create a file /etc/heketi/topology-sample.json with the following content:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.75.175"
              ],
              "storage": [
                "192.168.75.175"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vda2"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.75.176"
              ],
              "storage": [
                "192.168.75.176"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vda2"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.75.177"
              ],
              "storage": [
                "192.168.75.177"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vda2"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.75.178"
              ],
              "storage": [
                "192.168.75.178"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vda2"
          ]
        }
      ]
    }
  ]
}
Load the topology:
# heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey topology load --json=/etc/heketi/topology-sample.json
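The loaded nodes and devices can then be reviewed with the topology info command, using the same authentication flags:
# heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey topology info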
At this point the GlusterFS and Heketi preparation work is complete.
Create the Kubernetes StorageClass so that Kubernetes can call Heketi to create the underlying PVs.
Create the StorageClass:
[root@consolefan-1 yaml]# cat glusterfs/storageclass-glusterfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://172.30.0.74:28080"
  clusterid: "7e320f3f04068c0564eb92e865263bd4"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
# kubectl apply -f glusterfs/storageclass-glusterfs.yaml
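A quick check that the StorageClass exists:
# kubectl get storageclass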
Create a PVC to verify:
[root@consolefan-1 yaml]# cat glusterfs/pvc-glusterfs.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-test1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
# kubectl apply -f glusterfs/pvc-glusterfs.yaml
Provisioning succeeds.
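To confirm that dynamic provisioning worked, check that the PVC is Bound and that a PV was created automatically (the exact output depends on your environment):
# kubectl get pvc glusterfs-test1
# kubectl get pv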