Kubernetes provides first-class support for stateful applications such as ZooKeeper and etcd through StatefulSets. A StatefulSet provides:

- stable, unique network identifiers for each pod
- stable, persistent storage that follows a pod across rescheduling
- ordered, graceful deployment, scaling, and rolling updates
This article shows how to deploy ZooKeeper and etcd as stateful services on a Kubernetes cluster, using Ceph RBD for data persistence. A few observations up front:
If you decide to delete the PV and PVC objects, you must also manually delete the corresponding RBD image on the Ceph side.
The Ceph client user referenced in the StorageClass must have mon rw and rbd rwx capabilities. Without mon write permission, releasing the RBD lock fails and the RBD image cannot be mounted on another Kubernetes worker node (a sketch of granting these capabilities follows these notes).
Because ZooKeeper 3.4 configures cluster membership through the statically loaded zoo.cfg file, every node in the ZooKeeper cluster must be restarted after a ZooKeeper pod's IP changes.
This exercise deploys the etcd cluster statically, so whenever etcd membership changes, etcdctl member remove/add commands must be run by hand to reconfigure the cluster. This severely limits etcd's ability to recover from failures and to scale in or out automatically. The deployment should therefore be reworked to use DNS-based or etcd discovery so that etcd runs well on Kubernetes.
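For reference, the capabilities mentioned above can be granted when the Ceph client user is created. This is a minimal sketch: the user name client.k8s (taken from the keyring path in the error log later in this article) and the pool name k8s are assumptions to adjust for your cluster.

# Create a client with monitor read/write and rwx on the k8s pool,
# which is what the kubelet needs to add and release RBD locks:
ceph auth get-or-create client.k8s mon 'allow rw' osd 'allow rwx pool=k8s'
# Verify the resulting capabilities:
ceph auth get client.k8s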
docker pull gcr.mirrors.ustc.edu.cn/google_containers/kubernetes-zookeeper:1.0-3.4.10
docker tag gcr.mirrors.ustc.edu.cn/google_containers/kubernetes-zookeeper:1.0-3.4.10 172.16.18.100:5000/gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10
docker push 172.16.18.100:5000/gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10
cat << EOF | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
type: kubernetes.io/rbd
data:
  key: QVFBYy9ndGFRUno4QlJBQXMxTjR3WnlqN29PK3VrMzI1a05aZ3c9PQo=
EOF
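The key value is the base64-encoded secret of the Ceph user that the StorageClass references. Assuming the admin user used throughout this article, it can be generated like this:

ceph auth get-key client.admin | base64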
cat << EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
provisioner: kubernetes.io/rbd
reclaimPolicy: Delete
parameters:
  monitors: 172.16.13.223
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s
  userId: admin
  userSecretName: ceph-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering
EOF
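Before building anything on top of the StorageClass, dynamic provisioning can be verified with a throwaway claim. This is a hypothetical test object (the name test-claim is not part of the original deployment):

cat << EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ceph
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-claim      # should reach Bound within a few seconds
kubectl delete pvc test-claim   # clean up; reclaimPolicy Delete removes the RBD image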
Using RBD to store ZooKeeper node data
cat << EOF | kubectl create -f -
---
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "172.16.18.100:5000/gcr.io/google_containers/kubernetes-zookeeper:1.0-3.4.10"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: ceph
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF
Check the created objects:
[root@172 zookeeper]# kubectl get no
NAME           STATUS    ROLES     AGE       VERSION
172.16.20.10   Ready     <none>    50m       v1.8.2
172.16.20.11   Ready     <none>    2h        v1.8.2
172.16.20.12   Ready     <none>    1h        v1.8.2
[root@172 zookeeper]# kubectl get po -owide
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
zk-0      1/1       Running   0          8m        192.168.5.162   172.16.20.10
zk-1      1/1       Running   0          1h        192.168.2.146   172.16.20.11
[root@172 zookeeper]# kubectl get pv,pvc
NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                  STORAGECLASS   REASON    AGE
pv/pvc-226cb8f0-d322-11e7-9581-000c29f99475   1Gi        RWO            Delete           Bound     default/datadir-zk-0   ceph                     1h
pv/pvc-22703ece-d322-11e7-9581-000c29f99475   1Gi        RWO            Delete           Bound     default/datadir-zk-1   ceph                     1h
NAME               STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/datadir-zk-0   Bound     pvc-226cb8f0-d322-11e7-9581-000c29f99475   1Gi        RWO            ceph           1h
pvc/datadir-zk-1   Bound     pvc-22703ece-d322-11e7-9581-000c29f99475   1Gi        RWO            ceph           1h
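At this point a quick sanity check can confirm that the ensemble replicates data. A sketch, assuming the zkCli.sh client shipped in the kubernetes-zookeeper tutorial image:

# Write a znode through zk-0 and read it back through zk-1:
kubectl exec zk-0 -- zkCli.sh create /hello world
kubectl exec zk-1 -- zkCli.sh get /hello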
The RBD lock information for the zk-0 pod's image is:
[root@ceph1 ceph]# rbd lock list kubernetes-dynamic-pvc-227b45e5-d322-11e7-90ab-000c29f99475 -p k8s --user admin
There is 1 exclusive lock on this image.
Locker         ID                                Address
client.24146   kubelet_lock_magic_172.16.20.10   172.16.20.10:0/1606152350
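The node side can be cross-checked as well. A sketch, run directly on the worker node 172.16.20.10 that currently hosts zk-0:

# List the RBD images the kernel has mapped on this node:
rbd showmapped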
Next, cordon node 172.16.20.10 to make it unschedulable, then delete zk-0 so that the pod migrates to 172.16.20.12:
kubectl cordon 172.16.20.10

[root@172 zookeeper]# kubectl get no
NAME           STATUS                     ROLES     AGE       VERSION
172.16.20.10   Ready,SchedulingDisabled   <none>    58m       v1.8.2
172.16.20.11   Ready                      <none>    2h        v1.8.2
172.16.20.12   Ready                      <none>    1h        v1.8.2

kubectl delete po zk-0
Watch zk-0 migrate:
[root@172 zookeeper]# kubectl get po -owide -w
NAME      READY     STATUS              RESTARTS   AGE       IP              NODE
zk-0      1/1       Running             0          14m       192.168.5.162   172.16.20.10
zk-1      1/1       Running             0          1h        192.168.2.146   172.16.20.11
zk-0      1/1       Terminating         0          16m       192.168.5.162   172.16.20.10
zk-0      0/1       Terminating         0          16m       <none>          172.16.20.10
zk-0      0/1       Terminating         0          16m       <none>          172.16.20.10
zk-0      0/1       Terminating         0          16m       <none>          172.16.20.10
zk-0      0/1       Terminating         0          16m       <none>          172.16.20.10
zk-0      0/1       Terminating         0          16m       <none>          172.16.20.10
zk-0      0/1       Pending             0          0s        <none>          <none>
zk-0      0/1       Pending             0          0s        <none>          172.16.20.12
zk-0      0/1       ContainerCreating   0          0s        <none>          172.16.20.12
zk-0      0/1       Running             0          3s        192.168.3.4     172.16.20.12
zk-0 migrated to 172.16.20.12 without issue.
Check the RBD lock information again:
[root@ceph1 ceph]# rbd lock list kubernetes-dynamic-pvc-227b45e5-d322-11e7-90ab-000c29f99475 -p k8s --user admin
There is 1 exclusive lock on this image.
Locker         ID                                Address
client.24146   kubelet_lock_magic_172.16.20.10   172.16.20.10:0/1606152350
[root@ceph1 ceph]# rbd lock list kubernetes-dynamic-pvc-227b45e5-d322-11e7-90ab-000c29f99475 -p k8s --user admin
There is 1 exclusive lock on this image.
Locker         ID                                Address
client.24154   kubelet_lock_magic_172.16.20.12   172.16.20.12:0/3715989358
When this zk pod migration was tested earlier on a different Ceph cluster, it always failed with an error that the lock could not be released. Analysis showed that the Ceph account in use lacked the required permissions. The recorded errors were:
Nov 27 10:45:55 172 kubelet: W1127 10:45:55.551768   11556 rbd_util.go:471] rbd: no watchers on kubernetes-dynamic-pvc-f35a411e-d317-11e7-90ab-000c29f99475
Nov 27 10:45:55 172 kubelet: I1127 10:45:55.694126   11556 rbd_util.go:181] remove orphaned locker kubelet_lock_magic_172.16.20.12 from client client.171490: err exit status 13, output: 2017-11-27 10:45:55.570483 7fbdbe922d40 -1 did not load config file, using default settings.
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600816 7fbdbe922d40 -1 Errors while parsing config file!
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600824 7fbdbe922d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600825 7fbdbe922d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.600825 7fbdbe922d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602492 7fbdbe922d40 -1 Errors while parsing config file!
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602494 7fbdbe922d40 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602495 7fbdbe922d40 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.602496 7fbdbe922d40 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.651594 7fbdbe922d40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
Nov 27 10:45:55 172 kubelet: rbd: releasing lock failed: (13) Permission denied
Nov 27 10:45:55 172 kubelet: 2017-11-27 10:45:55.682470 7fbdbe922d40 -1 librbd: unable to blacklist client: (13) Permission denied
The Kubernetes RBD volume implementation (rbd_util.go) for this code path:
if lock {
	// check if lock is already held for this host by matching lock_id and rbd lock id
	if strings.Contains(output, lock_id) {
		// this host already holds the lock, exit
		glog.V(1).Infof("rbd: lock already held for %s", lock_id)
		return nil
	}
	// clean up orphaned lock if no watcher on the image
	used, statusErr := util.rbdStatus(&b)
	if statusErr == nil && !used {
		re := regexp.MustCompile("client.* " + kubeLockMagic + ".*")
		locks := re.FindAllStringSubmatch(output, -1)
		for _, v := range locks {
			if len(v) > 0 {
				lockInfo := strings.Split(v[0], " ")
				if len(lockInfo) > 2 {
					args := []string{"lock", "remove", b.Image, lockInfo[1], lockInfo[0], "--pool", b.Pool, "--id", b.Id, "-m", mon}
					args = append(args, secret_opt...)
					cmd, err = b.exec.Run("rbd", args...)
					// the "rbd lock remove" command returned an error here
					glog.Infof("remove orphaned locker %s from client %s: err %v, output: %s", lockInfo[1], lockInfo[0], err, string(cmd))
				}
			}
		}
	}
	// hold a lock: rbd lock add
	args := []string{"lock", "add", b.Image, lock_id, "--pool", b.Pool, "--id", b.Id, "-m", mon}
	args = append(args, secret_opt...)
	cmd, err = b.exec.Run("rbd", args...)
}
As the log shows, the rbd lock remove operation was rejected for lack of permission: rbd: releasing lock failed: (13) Permission denied.
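A plausible fix, assuming the under-privileged user is client.k8s (the keyring name visible in the log above): inspect its current capabilities and raise them to the mon rw / rbd rwx level noted at the start of this article.

# Show the user's current caps:
ceph auth get client.k8s
# Raise them so the kubelet can blacklist the stale client and release the lock:
ceph auth caps client.k8s mon 'allow rw' osd 'allow rwx pool=k8s'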
Scaling the ZooKeeper cluster from 2 nodes to 3
With 2 nodes in the cluster, zoo.cfg defines two servers:
zookeeper@zk-0:/opt/zookeeper/conf$ cat zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
Use kubectl edit statefulset zk to change replicas to 3 and start-zookeeper --servers=3 (see the excerpt below).
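For reference, these are the two fields the edit touches (an abbreviated, illustrative excerpt of the StatefulSet spec, not the full manifest):

spec:
  replicas: 3                                 # was 2
  ...
      containers:
      - name: kubernetes-zookeeper
        command:
        - sh
        - -c
        - "start-zookeeper --servers=3 ..."   # was --servers=2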
Now watch the pods:
[root@172 zookeeper]# kubectl get po -owide -w
NAME      READY     STATUS              RESTARTS   AGE       IP              NODE
zk-0      1/1       Running             0          1h        192.168.5.170   172.16.20.10
zk-1      1/1       Running             0          1h        192.168.3.12    172.16.20.12
zk-2      0/1       Pending             0          0s        <none>          <none>
zk-2      0/1       Pending             0          0s        <none>          172.16.20.11
zk-2      0/1       ContainerCreating   0          0s        <none>          172.16.20.11
zk-2      0/1       Running             0          1s        192.168.2.154   172.16.20.11
zk-2      1/1       Running             0          11s       192.168.2.154   172.16.20.11
zk-1      1/1       Terminating         0          1h        192.168.3.12    172.16.20.12
zk-1      0/1       Terminating         0          1h        <none>          172.16.20.12
zk-1      0/1       Terminating         0          1h        <none>          172.16.20.12
zk-1      0/1       Terminating         0          1h        <none>          172.16.20.12
zk-1      0/1       Terminating         0          1h        <none>          172.16.20.12
zk-1      0/1       Pending             0          0s        <none>          <none>
zk-1      0/1       Pending             0          0s        <none>          172.16.20.12
zk-1      0/1       ContainerCreating   0          0s        <none>          172.16.20.12
zk-1      0/1       Running             0          2s        192.168.3.13    172.16.20.12
zk-1      1/1       Running             0          20s       192.168.3.13    172.16.20.12
zk-0      1/1       Terminating         0          1h        192.168.5.170   172.16.20.10
zk-0      0/1       Terminating         0          1h        <none>          172.16.20.10
zk-0      0/1       Terminating         0          1h        <none>          172.16.20.10
zk-0      0/1       Terminating         0          1h        <none>          172.16.20.10
zk-0      0/1       Terminating         0          1h        <none>          172.16.20.10
zk-0      0/1       Pending             0          0s        <none>          <none>
zk-0      0/1       Pending             0          0s        <none>          172.16.20.10
zk-0      0/1       ContainerCreating   0          0s        <none>          172.16.20.10
zk-0      0/1       Running             0          2s        192.168.5.171   172.16.20.10
zk-0      1/1       Running             0          12s       192.168.5.171   172.16.20.10
As shown, zk-0 and zk-1 were both restarted, so they pick up the new zoo.cfg and the cluster stays correctly configured.
The new zoo.cfg lists 3 servers:
[root@172 ~]# kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
When scaling in, the ZooKeeper cluster likewise restarts all of its nodes automatically. The scale-in process:
[root@172 ~]# kubectl get po -owide -w
NAME      READY     STATUS              RESTARTS   AGE       IP              NODE
zk-0      1/1       Running             0          5m        192.168.5.171   172.16.20.10
zk-1      1/1       Running             0          6m        192.168.3.13    172.16.20.12
zk-2      1/1       Running             0          7m        192.168.2.154   172.16.20.11
zk-2      1/1       Terminating         0          7m        192.168.2.154   172.16.20.11
zk-1      1/1       Terminating         0          7m        192.168.3.13    172.16.20.12
zk-2      0/1       Terminating         0          8m        <none>          172.16.20.11
zk-1      0/1       Terminating         0          7m        <none>          172.16.20.12
zk-2      0/1       Terminating         0          8m        <none>          172.16.20.11
zk-1      0/1       Terminating         0          7m        <none>          172.16.20.12
zk-1      0/1       Terminating         0          7m        <none>          172.16.20.12
zk-1      0/1       Terminating         0          7m        <none>          172.16.20.12
zk-1      0/1       Pending             0          0s        <none>          <none>
zk-1      0/1       Pending             0          0s        <none>          172.16.20.12
zk-1      0/1       ContainerCreating   0          0s        <none>          172.16.20.12
zk-1      0/1       Running             0          2s        192.168.3.14    172.16.20.12
zk-2      0/1       Terminating         0          8m        <none>          172.16.20.11
zk-2      0/1       Terminating         0          8m        <none>          172.16.20.11
zk-1      1/1       Running             0          19s       192.168.3.14    172.16.20.12
zk-0      1/1       Terminating         0          7m        192.168.5.171   172.16.20.10
zk-0      0/1       Terminating         0          7m        <none>          172.16.20.10
zk-0      0/1       Terminating         0          7m        <none>          172.16.20.10
zk-0      0/1       Terminating         0          7m        <none>          172.16.20.10
zk-0      0/1       Pending             0          0s        <none>          <none>
zk-0      0/1       Pending             0          0s        <none>          172.16.20.10
zk-0      0/1       ContainerCreating   0          0s        <none>          172.16.20.10
zk-0      0/1       Running             0          3s        192.168.5.172   172.16.20.10
zk-0      1/1       Running             0          13s       192.168.5.172   172.16.20.10
cat << 'EOF' | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: "etcd"
  annotations:
    # Create endpoints also if the related pod isn't ready
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - port: 2379
    name: client
  - port: 2380
    name: peer
  clusterIP: None
  selector:
    component: "etcd"
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "etcd"
  labels:
    component: "etcd"
spec:
  serviceName: "etcd"
  # changing replicas value will require a manual etcdctl member remove/add
  # command (remove before decreasing and add after increasing)
  replicas: 3
  template:
    metadata:
      name: "etcd"
      labels:
        component: "etcd"
    spec:
      containers:
      - name: "etcd"
        image: "172.16.18.100:5000/quay.io/coreos/etcd:v3.2.3"
        ports:
        - containerPort: 2379
          name: client
        - containerPort: 2380
          name: peer
        env:
        - name: CLUSTER_SIZE
          value: "3"
        - name: SET_NAME
          value: "etcd"
        volumeMounts:
        - name: data
          mountPath: /var/run/etcd
        command:
        - "/bin/sh"
        - "-ecx"
        - |
          IP=$(hostname -i)
          for i in $(seq 0 $((${CLUSTER_SIZE} - 1))); do
            while true; do
              echo "Waiting for ${SET_NAME}-${i}.${SET_NAME} to come up"
              ping -W 1 -c 1 ${SET_NAME}-${i}.${SET_NAME}.default.svc.cluster.local > /dev/null && break
              sleep 1s
            done
          done
          PEERS=""
          for i in $(seq 0 $((${CLUSTER_SIZE} - 1))); do
            PEERS="${PEERS}${PEERS:+,}${SET_NAME}-${i}=http://${SET_NAME}-${i}.${SET_NAME}.default.svc.cluster.local:2380"
          done
          # start etcd. If cluster is already initialized the `--initial-*` options will be ignored.
          exec etcd --name ${HOSTNAME} \
            --listen-peer-urls http://${IP}:2380 \
            --listen-client-urls http://${IP}:2379,http://127.0.0.1:2379 \
            --advertise-client-urls http://${HOSTNAME}.${SET_NAME}:2379 \
            --initial-advertise-peer-urls http://${HOSTNAME}.${SET_NAME}:2380 \
            --initial-cluster-token etcd-cluster-1 \
            --initial-cluster ${PEERS} \
            --initial-cluster-state new \
            --data-dir /var/run/etcd/default.etcd
  ## We are using dynamic pv provisioning using the "standard" storage class so
  ## this resource can be directly deployed without changes to minikube (since
  ## minikube defines this class for its minikube hostpath provisioner). In
  ## production define your own way to use pv claims.
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: ceph
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: 1Gi
EOF
The pods after creation:
[root@172 etcd]# kubectl get po -owide
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-0    1/1       Running   0          15m       192.168.5.174   172.16.20.10
etcd-1    1/1       Running   0          15m       192.168.3.16    172.16.20.12
etcd-2    1/1       Running   0          5s        192.168.5.176   172.16.20.10
Scale the etcd cluster in from 3 replicas to 2:

kubectl scale statefulset etcd --replicas=2

[root@172 ~]# kubectl get po -owide -w
NAME      READY     STATUS        RESTARTS   AGE       IP              NODE
etcd-0    1/1       Running       0          17m       192.168.5.174   172.16.20.10
etcd-1    1/1       Running       0          17m       192.168.3.16    172.16.20.12
etcd-2    1/1       Running       0          1m        192.168.5.176   172.16.20.10
etcd-2    1/1       Terminating   0          1m        192.168.5.176   172.16.20.10
etcd-2    0/1       Terminating   0          1m        <none>          172.16.20.10
Check cluster health:
kubectl exec etcd-0 -- etcdctl cluster-health
failed to check the health of member 42c8b94265b9b79a on http://etcd-2.etcd:2379: Get http://etcd-2.etcd:2379/health: dial tcp: lookup etcd-2.etcd on 10.96.0.10:53: no such host
member 42c8b94265b9b79a is unreachable: [http://etcd-2.etcd:2379] are all unreachable
member 9869f0647883a00d is healthy: got healthy result from http://etcd-1.etcd:2379
member c799a6ef06bc8c14 is healthy: got healthy result from http://etcd-0.etcd:2379
cluster is healthy
After scaling in, etcd-2 is not automatically removed from the etcd cluster, so this etcd image's support for automatic scale-out and scale-in is clearly limited.
Remove etcd-2 manually:
[root@172 etcd]# kubectl exec etcd-0 -- etcdctl member remove 42c8b94265b9b79a
Removed member 42c8b94265b9b79a from cluster
[root@172 etcd]# kubectl exec etcd-0 -- etcdctl cluster-health
member 9869f0647883a00d is healthy: got healthy result from http://etcd-1.etcd:2379
member c799a6ef06bc8c14 is healthy: got healthy result from http://etcd-0.etcd:2379
cluster is healthy
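Conversely, before scaling back out, the new member has to be registered first. A sketch under the naming scheme of this manifest (note the new pod will still start with --initial-cluster-state new, which is exactly the limitation discussed below):

# Register the future member, then raise the replica count:
kubectl exec etcd-0 -- etcdctl member add etcd-2 http://etcd-2.etcd:2380
kubectl scale statefulset etcd --replicas=3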
The startup script in etcd.yaml shows that a newly started etcd pod is always passed --initial-cluster-state new during scale-out, so this etcd image does not support dynamic scaling. To let the etcd cluster scale dynamically, the startup script would have to be rewritten around DNS-based (or etcd discovery-based) dynamic deployment; a rough sketch follows.
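This is an untested sketch of what DNS-based discovery could look like here, and the port names are an assumption: etcd's --discovery-srv flag looks up _etcd-server._tcp SRV records, and a Kubernetes headless service publishes SRV records per named port, so renaming the service ports and dropping the static --initial-cluster flags would let each member discover its peers.

# In the Service spec, rename the ports so the SRV records match what etcd queries:
#   - port: 2380 / name: etcd-server
#   - port: 2379 / name: etcd-client
# Then the container start command becomes:
exec etcd --name ${HOSTNAME} \
  --discovery-srv ${SET_NAME}.default.svc.cluster.local \
  --initial-advertise-peer-urls http://${HOSTNAME}.${SET_NAME}.default.svc.cluster.local:2380 \
  --advertise-client-urls http://${HOSTNAME}.${SET_NAME}.default.svc.cluster.local:2379 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --listen-client-urls http://0.0.0.0:2379 \
  --data-dir /var/run/etcd/default.etcd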