Deploying an etcd Cluster on Kubernetes with StatefulSets

With the rise of microservice architectures, etcd shows up more and more often as the foundation for service discovery and distributed storage, and the need to stand up a highly available etcd cluster quickly is growing with it. In this post we will use the Kubernetes StatefulSet feature to deploy an etcd cluster quickly.

What is Kubernetes?

Kubernetes is an open-source platform for automating the deployment, scaling, and operation of container clusters.

With Kubernetes, you can respond to customer needs quickly and efficiently:

  • Deploy your applications quickly and predictably.
  • Scale your applications on the fly.
  • Roll out new features seamlessly.
  • Optimize hardware usage by using only the resources you need.

What is etcd?

etcd provides a distributed key-value store that maintains a "configuration registry". This registry is one of the foundations of Kubernetes cluster discovery and centralized configuration management. In some ways it resembles Redis, classic LDAP configuration backends, and the Windows Registry.

etcd's design goals are:

  • Simple: a well-defined, user-facing API (JSON and gRPC)
  • Secure: automatic TLS with optional client certificate authentication
  • Fast: benchmarked at 10,000 writes/second
  • Reliable: built on the Raft consensus protocol

The official Etcd-Operator already exists, so why deploy this way?

First, the advantages:

  • The Etcd-Operator deployment has version requirements for both etcd and Kubernetes; see the official documentation for details.
  • With the etcd v2 API there is no way to back up the data; the official backup for Etcd-Operator clusters only supports etcd v3.
  • Data is more durable with a StatefulSet. If you deploy with Etcd-Operator and your Kubernetes cluster one day has a failure that takes down every etcd Pod, then without a backup the data is simply gone. Even with the etcd v3 API and backups enabled you can still lose some data, because Etcd-Operator backups run on a schedule rather than continuously.
  • A StatefulSet is more flexible to configure; affinity rules, for example, can be added directly to the StatefulSet.

Of course, nothing good is perfect. Deploying etcd with a StatefulSet has its own requirements:

  • Reliable network storage must be available to back the data.
  • Creating the cluster is slightly more involved than with Etcd-Operator.

With that out of the way, let's get to the main topic:

How to quickly deploy an etcd cluster on Kubernetes.

First, create a headless Service on Kubernetes:

apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: infra-etcd-cluster
    app: infra-etcd
  name: infra-etcd-cluster
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: infra-etcd-cluster-2379
    port: 2379
    protocol: TCP
    targetPort: 2379
  - name: infra-etcd-cluster-2380
    port: 2380
    protocol: TCP
    targetPort: 2380
  selector:
    k8s-app: infra-etcd-cluster
    app: infra-etcd
  type: ClusterIP

We create a headless Service so that the etcd members can be reached by DNS name. Ports 2379 and 2380 are etcd's client port and peer port, respectively.
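A quick way to sanity-check this later (a sketch; it assumes the default cluster DNS suffix cluster.local and that a busybox image can be pulled) is to resolve one of the per-member records once the StatefulSet below is running:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
    nslookup infra-etcd-cluster-0.infra-etcd-cluster.default.svc.cluster.local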

Next, let's create the StatefulSet resource.

Prerequisite: the cluster must already have PVs available so that the Pods generated by the StatefulSet can use them. If you manage PVs with a StorageClass, there is no need to create them by hand. This example uses Ceph RBD.
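If you take the StorageClass route instead, a minimal Ceph RBD StorageClass might look like the sketch below; the monitor address, pool, and secret names are placeholders for your own Ceph environment. When using it, reference it from the volumeClaimTemplates via storageClassName instead of the label selector shown further down.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.0.1:6789            # placeholder: your Ceph monitor address(es)
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering

With storage in place, here is the StatefulSet manifest: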

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    k8s-app: infra-etcd-cluster
    app: etcd
  name: infra-etcd-cluster
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: infra-etcd-cluster
      app: etcd
  serviceName: infra-etcd-cluster
  template:
    metadata:
      labels:
        k8s-app: infra-etcd-cluster
        app: etcd
      name: infra-etcd-cluster
    spec:
      containers:
      - command:
        - /bin/sh
        - -ec
        - |
          HOSTNAME=$(hostname)
          echo "etcd api version is ${ETCDAPI_VERSION}"

          eps() {
              EPS=""
              for i in $(seq 0 $((${INITIAL_CLUSTER_SIZE} - 1))); do
                  EPS="${EPS}${EPS:+,}http://${SET_NAME}-${i}.${SET_NAME}.${CLUSTER_NAMESPACE}:2379"
              done
              echo ${EPS}
          }

          member_hash() {
              etcdctl member list | grep http://${HOSTNAME}.${SET_NAME}.${CLUSTER_NAMESPACE}:2380 | cut -d':' -f1 | cut -d'[' -f1
          }

          initial_peers() {
                PEERS=""
                for i in $(seq 0 $((${INITIAL_CLUSTER_SIZE} - 1))); do
                PEERS="${PEERS}${PEERS:+,}${SET_NAME}-${i}=http://${SET_NAME}-${i}.${SET_NAME}.${CLUSTER_NAMESPACE}:2380"
                done
                echo ${PEERS}
          }

          # etcd-SET_ID
          SET_ID=${HOSTNAME##*-}
          # adding a new member to existing cluster (assuming all initial pods are available)
          if [ "${SET_ID}" -ge ${INITIAL_CLUSTER_SIZE} ]; then
              export ETCDCTL_ENDPOINTS=$(eps)

              # member already added?
              MEMBER_HASH=$(member_hash)
              if [ -n "${MEMBER_HASH}" ]; then
                  # the member hash exists but etcd failed for some reason;
                  # since the data dir has not been created yet, we can remove the member
                  # and retrieve a new hash
                  if [ "${ETCDAPI_VERSION}" -eq 3 ]; then
                      ETCDCTL_API=3 etcdctl --user=root:${ROOT_PASSWORD} member remove ${MEMBER_HASH}
                  else
                      etcdctl --username=root:${ROOT_PASSWORD} member remove ${MEMBER_HASH}
                  fi
              fi
              echo "Adding new member"
              rm -rf /var/run/etcd/*
              # ensure etcd dir exist
              mkdir -p /var/run/etcd/
              # wait 60 seconds for the peer endpoints to become resolvable and ready
              echo "waiting 60s for the endpoints to become ready..."
              sleep 60

              if [ "${ETCDAPI_VERSION}" -eq 3 ]; then
                  ETCDCTL_API=3 etcdctl --user=root:${ROOT_PASSWORD} member add ${HOSTNAME} --peer-urls=http://${HOSTNAME}.${SET_NAME}.${CLUSTER_NAMESPACE}:2380 | grep "^ETCD_" > /var/run/etcd/new_member_envs
              else
                  etcdctl --username=root:${ROOT_PASSWORD} member add ${HOSTNAME} http://${HOSTNAME}.${SET_NAME}.${CLUSTER_NAMESPACE}:2380 | grep "^ETCD_" > /var/run/etcd/new_member_envs
              fi
              
              

              if [ $? -ne 0 ]; then
                  echo "member add ${HOSTNAME} error."
                  rm -f /var/run/etcd/new_member_envs
                  exit 1
              fi

              cat /var/run/etcd/new_member_envs
              source /var/run/etcd/new_member_envs

              exec etcd --name ${HOSTNAME} \
                  --initial-advertise-peer-urls http://${HOSTNAME}.${SET_NAME}.${CLUSTER_NAMESPACE}:2380 \
                  --listen-peer-urls http://0.0.0.0:2380 \
                  --listen-client-urls http://0.0.0.0:2379 \
                  --advertise-client-urls http://${HOSTNAME}.${SET_NAME}.${CLUSTER_NAMESPACE}:2379 \
                  --data-dir /var/run/etcd/default.etcd \
                  --initial-cluster ${ETCD_INITIAL_CLUSTER} \
                  --initial-cluster-state ${ETCD_INITIAL_CLUSTER_STATE}
          fi

          for i in $(seq 0 $((${INITIAL_CLUSTER_SIZE} - 1))); do
              while true; do
                  echo "Waiting for ${SET_NAME}-${i}.${SET_NAME}.${CLUSTER_NAMESPACE} to come up"
                  ping -W 1 -c 1 ${SET_NAME}-${i}.${SET_NAME}.${CLUSTER_NAMESPACE} > /dev/null && break
                  sleep 1s
              done
          done

          echo "join member ${HOSTNAME}"
          # join member
          exec etcd --name ${HOSTNAME} \
              --initial-advertise-peer-urls http://${HOSTNAME}.${SET_NAME}.${CLUSTER_NAMESPACE}:2380 \
              --listen-peer-urls http://0.0.0.0:2380 \
              --listen-client-urls http://0.0.0.0:2379 \
              --advertise-client-urls http://${HOSTNAME}.${SET_NAME}.${CLUSTER_NAMESPACE}:2379 \
              --initial-cluster-token etcd-cluster-1 \
              --data-dir /var/run/etcd/default.etcd \
              --initial-cluster $(initial_peers) \
              --initial-cluster-state new

        env:
        - name: INITIAL_CLUSTER_SIZE
          value: "3"
        - name: CLUSTER_NAMESPACE
          valueFrom: 
            fieldRef:
              fieldPath: metadata.namespace
        - name: ETCDAPI_VERSION
          value: "3"
        - name: ROOT_PASSWORD
          value: '@123#'
        - name: SET_NAME
          value: "infra-etcd-cluster"
        - name: GOMAXPROCS
          value: "4"
        image: gcr.io/etcd-development/etcd:v3.3.8
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -ec
              - |
                HOSTNAME=$(hostname)

                member_hash() {
                    etcdctl member list | grep http://${HOSTNAME}.${SET_NAME}.${CLUSTER_NAMESPACE}:2380 | cut -d':' -f1 | cut -d'[' -f1
                }

                eps() {
                    EPS=""
                    for i in $(seq 0 $((${INITIAL_CLUSTER_SIZE} - 1))); do
                        EPS="${EPS}${EPS:+,}http://${SET_NAME}-${i}.${SET_NAME}.${CLUSTER_NAMESPACE}:2379"
                    done
                    echo ${EPS}
                }
                
                export ETCDCTL_ENDPOINTS=$(eps)

                SET_ID=${HOSTNAME##*-}
                # Removing member from cluster
                if [ "${SET_ID}" -ge ${INITIAL_CLUSTER_SIZE} ]; then
                    echo "Removing ${HOSTNAME} from etcd cluster"
                    if [ "${ETCDAPI_VERSION}" -eq 3 ]; then
                        ETCDCTL_API=3 etcdctl --user=root:${ROOT_PASSWORD} member remove $(member_hash)
                    else
                        etcdctl --username=root:${ROOT_PASSWORD} member remove $(member_hash)
                    fi
                    if [ $? -eq 0 ]; then
                        # Remove everything otherwise the cluster will no longer scale-up
                        rm -rf /var/run/etcd/*
                    fi
                fi
        name: infra-etcd-cluster
        ports:
        - containerPort: 2380
          name: peer
          protocol: TCP
        - containerPort: 2379
          name: client
          protocol: TCP
        resources:
          limits:
            cpu: "4"
            memory: 4Gi
          requests:
            cpu: "4"
            memory: 4Gi
        volumeMounts:
        - mountPath: /var/run/etcd
          name: datadir
  updateStrategy:
    type: OnDelete
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      selector:
        matchLabels:
          k8s.cloud/storage-type: ceph-rbd

Note: SET_NAME must match the name of the StatefulSet.
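The manifest file names below are examples; apply the Service and the StatefulSet, then watch the members come up one at a time (a StatefulSet creates its Pods in order):

kubectl apply -f infra-etcd-cluster-service.yaml -f infra-etcd-cluster-statefulset.yaml
kubectl get pods -l k8s-app=infra-etcd-cluster -w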

At this point your etcd cluster is already reachable from inside the Kubernetes cluster at:

http://${SET_NAME}-${i}.${SET_NAME}.${CLUSTER_NAMESPACE}:2379
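To verify this from inside the cluster, you can exec into one of the members and check endpoint health (a sketch; add --user=root:<password> if you have enabled etcd authentication as the startup script assumes):

kubectl exec -it infra-etcd-cluster-0 -- sh -c \
    'ETCDCTL_API=3 etcdctl \
        --endpoints=http://infra-etcd-cluster-0.infra-etcd-cluster.default:2379,http://infra-etcd-cluster-1.infra-etcd-cluster.default:2379,http://infra-etcd-cluster-2.infra-etcd-cluster.default:2379 \
        endpoint health'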

Final step: create a client Service

If your cluster's networking already makes Pods reachable from outside the Kubernetes cluster, or your etcd cluster only needs to be accessed from inside the cluster, you can skip this step.

apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: infra-etcd-cluster-client
    app: infra-etcd
  name: infra-etcd-cluster-client
  namespace: default
spec:
  ports:
  - name: infra-etcd-cluster-2379
    port: 2379
    protocol: TCP
    targetPort: 2379
  selector:
    k8s-app: infra-etcd-cluster
    app: infra-etcd
  sessionAffinity: None
  type: NodePort

Done! You can now reach the etcd cluster through the NodePort.
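For example (a sketch; <node-ip> stands for the address of any node in your cluster, and the NodePort is whatever value Kubernetes assigned):

kubectl get svc infra-etcd-cluster-client -o jsonpath='{.spec.ports[0].nodePort}'
ETCDCTL_API=3 etcdctl --endpoints=http://<node-ip>:<nodeport> endpoint status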

Scaling up and down

Scaling up

Just change the replicas of the StatefulSet. For example, to scale the cluster to five members:

kubectl scale --replicas=5 statefulset infra-etcd-cluster
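Once infra-etcd-cluster-3 and infra-etcd-cluster-4 are Running, you can confirm that they registered themselves with the cluster (a sketch; run against any existing member, adding --user=root:<password> if authentication is enabled):

kubectl exec -it infra-etcd-cluster-0 -- sh -c \
    'ETCDCTL_API=3 etcdctl --endpoints=http://localhost:2379 member list'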

Scaling down

If you later find that five members are more than you need and want to go back to three, just run:

kubectl scale --replicas=3 statefulset infra-etcd-cluster
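The preStop hook in the StatefulSet deregisters each extra member before its Pod terminates, so the member list drops back to three on its own. Note that a StatefulSet does not delete the PersistentVolumeClaims of removed Pods; if you do not plan to scale up again, clean them up manually (the claim names follow the <template>-<pod> pattern):

kubectl delete pvc datadir-infra-etcd-cluster-3 datadir-infra-etcd-cluster-4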

All of the source can be found on my GitHub. If you found this article useful, please give the repository a star; if you spot any problems, feel free to submit a PR and contribute to open source together.
