Exploring Kubernetes: etcd state data and its backup

Kubernetes uses etcd to store the cluster's live runtime data (such as node status information), while the other pods are stateless and can be rescheduled, drifting across nodes according to load. etcd itself can be deployed as a decentralized, mutually-redundant multi-node cluster, eliminating the single point of failure for the whole cluster. Under kubeadm's default deployment, only one etcd instance (etcd-xxx) runs on the master; you can check its status with kubectl get pod -n kube-system.
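For example, you can filter by the component=etcd label (a sketch; the pod name suffix is the master node's name and will differ per cluster):

```shell
# List the etcd pod among the control-plane pods;
# its name has the form etcd-<master-node-name>
kubectl get pod -n kube-system -l component=etcd -o wide
```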

1. Viewing the etcd service container information

Let's explore how the Kubernetes etcd instance is actually implemented and managed. On the Kubernetes master node, run:

kubectl describe pod/etcd-podc01 -n kube-system > etcd.txt

The output is as follows:

Name:               etcd-podc01
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               podc01/10.1.1.181
Start Time:         Mon, 03 Dec 2018 10:42:05 +0800
Labels:             component=etcd
                    tier=control-plane
Annotations:        kubernetes.io/config.hash: bcc0eea4c53f3b70d13b771ad88e31b7
                    kubernetes.io/config.mirror: bcc0eea4c53f3b70d13b771ad88e31b7
                    kubernetes.io/config.seen: 2018-12-05T11:05:31.8690622+08:00
                    kubernetes.io/config.source: file
                    scheduler.alpha.kubernetes.io/critical-pod: 
Status:             Running
IP:                 10.1.1.181
Containers:
  etcd:
    Container ID:  docker://8f301c91902a9399f144943013166a09dd0766a9b96c26fe2d8e335418a55cab
    Image:         k8s.gcr.io/etcd:3.2.24
    Image ID:      docker-pullable://registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-etcd@sha256:7b073bdab8c52dc23dfb3e2101597d30304437869ad8c0b425301e96a066c408
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://127.0.0.1:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --initial-advertise-peer-urls=https://127.0.0.1:2380
      --initial-cluster=podc01=https://127.0.0.1:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379
      --listen-peer-urls=https://127.0.0.1:2380
      --name=podc01
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Wed, 05 Dec 2018 11:05:35 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       exec [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo] delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:    <none>
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>

As you can see, etcd uses the host network and maps its parameters and data onto directories of the host machine: the data directory is /var/lib/etcd on the host, and the certificate files are under /etc/kubernetes/pki/etcd.

2. Viewing etcd's data and certificate files

On the host, run sudo ls -l /var/lib/etcd/member/snap to see the snapshot files produced by the etcd service, as shown below:

supermap@podc01:~/openthings/kubernetes-tools/jupyter$ sudo ls -l /var/lib/etcd/member/snap
total 8924
-rw-r--r-- 1 root root     8160 Dec  5 09:19 0000000000000005-00000000001fbdd0.snap
-rw-r--r-- 1 root root     8160 Dec  5 10:37 0000000000000005-00000000001fe4e1.snap
-rw-r--r-- 1 root root     8508 Dec  5 11:42 0000000000000006-0000000000200bf2.snap
-rw-r--r-- 1 root root     8509 Dec  5 12:49 0000000000000006-0000000000203303.snap
-rw-r--r-- 1 root root     8509 Dec  5 13:56 0000000000000006-0000000000205a14.snap
-rw------- 1 root root 24977408 Dec  5 14:13 db
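The db file is etcd's live backing store. As a sketch (assuming a v3-capable etcdctl is installed on the host), you can verify a copy of it with etcdctl snapshot status; work on a copy rather than the in-use file:

```shell
# Copy the live db file aside, then print its hash, revision,
# total key count and size
sudo cp /var/lib/etcd/member/snap/db /tmp/etcd-db-copy
sudo ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd-db-copy --write-out=table
```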

Viewing etcd's certificate files:

supermap@podc01:~/openthings/kubernetes-tools/jupyter$ ls -l /etc/kubernetes/pki/etcd
total 32
-rw-r--r-- 1 root root 1017 Nov 23 10:08 ca.crt
-rw------- 1 root root 1679 Nov 23 10:08 ca.key
-rw-r--r-- 1 root root 1094 Nov 23 10:08 healthcheck-client.crt
-rw------- 1 root root 1679 Nov 23 10:08 healthcheck-client.key
-rw-r--r-- 1 root root 1127 Nov 23 10:08 peer.crt
-rw------- 1 root root 1679 Nov 23 10:08 peer.key
-rw-r--r-- 1 root root 1119 Nov 23 10:08 server.crt
-rw------- 1 root root 1675 Nov 23 10:08 server.key

These files are exactly the same as what you see when you go in through the pod's command line (it is in fact the same directory).
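You can confirm this by listing the same directory from inside the container (a sketch; substitute your own pod name, and it assumes the etcd image ships an ls binary):

```shell
# Should print the same ca/peer/server/healthcheck-client files
# as ls -l /etc/kubernetes/pki/etcd on the host
kubectl exec -n kube-system etcd-podc01 -- ls -l /etc/kubernetes/pki/etcd
```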

3. Accessing the etcd service directly

Next, let's connect to this instance and look at its runtime details.

First, install etcd-client, the standalone etcd client:

sudo apt install etcd-client

Then connect to the etcd instance (endpoints is the address shown above in the advertise-client-urls parameter):

sudo etcdctl --endpoints https://127.0.0.1:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --key-file=/etc/kubernetes/pki/etcd/server.key --ca-file=/etc/kubernetes/pki/etcd/ca.crt member list
  • Note: since the kubernetes cluster uses https, you must specify the --cert-file, --key-file and --ca-file parameters; the files are all located under /etc/kubernetes/pki/etcd.

My output here is:

a874c87fd42044f: name=podc01 peerURLs=https://127.0.0.1:2380 clientURLs=https://127.0.0.1:2379 isLeader=true

You can issue other commands in the same way to access the instance started by Kubernetes (at runtime it is managed by the kubelet service).
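For example (a sketch using the same certificate files; note that the flag names differ between API versions: --cert-file/--key-file/--ca-file under API 2 become --cert/--key/--cacert under API 3):

```shell
# API 2: overall cluster health
sudo etcdctl --endpoints https://127.0.0.1:2379 \
  --cert-file=/etc/kubernetes/pki/etcd/server.crt \
  --key-file=/etc/kubernetes/pki/etcd/server.key \
  --ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  cluster-health

# API 3: list the keys Kubernetes stores under its /registry prefix
sudo ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  get /registry --prefix --keys-only | head
```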

4. Backup and restore

Knowing the secrets above, backing up etcd is not hard. There are three approaches:

  • Back up the files under /etc/kubernetes/pki/etcd and /var/lib/etcd directly.
    • For a multi-node etcd cluster, this direct copy-and-restore of the directories must not be used.
    • Stop the service with docker stop before backing up, then start it again afterwards.
      • If you stop the etcd service, the service is interrupted during the backup.
    • With the default configuration, etcd produces a snapshot every 10000 changes.
      • If you back up only the files under /var/lib/etcd/member/snap, the service does not need to be stopped.
  • Back up through the etcd-client. As below (note: snapshot is supported in API 3, and the cert/key/cacert parameter names differ from the API 2 command):
sudo ETCDCTL_API=3 etcdctl snapshot save "/home/supermap/k8s-backup/data/etcd-snapshot/$(date +%Y%m%d_%H%M%S)_snapshot.db" --endpoints=127.0.0.1:2379 --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key" --cacert="/etc/kubernetes/pki/etcd/ca.crt"
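Restoring from such a snapshot is the reverse operation. A minimal single-node sketch (the snapshot filename is illustrative; stop the kube-apiserver and etcd first, and the target --data-dir must not already exist):

```shell
# Rebuild a data directory from the snapshot; the --name and
# --initial-cluster values must match the original etcd flags
sudo ETCDCTL_API=3 etcdctl snapshot restore /home/supermap/k8s-backup/data/etcd-snapshot/20181205_140000_snapshot.db \
  --name=podc01 \
  --initial-cluster=podc01=https://127.0.0.1:2380 \
  --initial-advertise-peer-urls=https://127.0.0.1:2380 \
  --data-dir=/var/lib/etcd-restored
# Then move /var/lib/etcd-restored into place as /var/lib/etcd
# and restart the service
```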

Using a Kubernetes CronJob for periodic automated backups requires some adjustments to the image and startup parameters; my modified yaml file is as follows:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: etcd-disaster-recovery
  namespace: cronjob
spec:
  schedule: "0 22 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: etcd-disaster-recovery
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                    - podc01
          containers:
          - name: etcd
            image: k8s.gcr.io/etcd:3.2.24
            imagePullPolicy: "IfNotPresent"
            command:
            - sh
            - -c
            - "export ETCDCTL_API=3; \
               etcdctl --endpoints=$ENDPOINT \
               --cert=/etc/kubernetes/pki/etcd/server.crt \
               --key=/etc/kubernetes/pki/etcd/server.key \
               --cacert=/etc/kubernetes/pki/etcd/ca.crt \
               snapshot save /snapshot/$(date +%Y%m%d_%H%M%S)_snapshot.db; \
               echo etcd backup success"
            env:
            - name: ENDPOINT
              value: "https://127.0.0.1:2379"
            volumeMounts:
            - mountPath: "/etc/kubernetes/pki/etcd"
              name: etcd-certs
            - mountPath: "/var/lib/etcd"
              name: etcd-data
            - mountPath: "/snapshot"
              name: snapshot
              subPath: data/etcd-snapshot
            - mountPath: /etc/localtime
              name: lt-config
            - mountPath: /etc/timezone
              name: tz-config
          restartPolicy: OnFailure
          volumes:
          - name: etcd-certs
            hostPath:
              path: /etc/kubernetes/pki/etcd
          - name: etcd-data
            hostPath:
              path: /var/lib/etcd
          - name: snapshot
            hostPath:
              path: /home/supermap/k8s-backup
          - name: lt-config
            hostPath:
              path: /etc/localtime
          - name: tz-config
            hostPath:
              path: /etc/timezone
          hostNetwork: true
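To put it into effect (a sketch; the manifest file name is illustrative, and the cronjob namespace referenced in the manifest must exist first):

```shell
# Create the namespace, submit the CronJob, and confirm its schedule
kubectl create namespace cronjob
kubectl apply -f etcd-disaster-recovery.yaml
kubectl get cronjob etcd-disaster-recovery -n cronjob
```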

With that, the backup of Kubernetes' etcd primary database is complete.

However, fully backing up and restoring a kubernetes cluster requires some additional operations, and every running application still needs its own separate backup procedure.

5. Further references
