Unless otherwise noted, all operations in this section are performed on the zhaoyixin-k8s-01 node.
etcd is a distributed key-value store based on the Raft consensus algorithm. Developed by CoreOS, it is commonly used for service discovery, shared configuration, and concurrency control (e.g. leader election and distributed locks).
Kubernetes uses an etcd cluster to persist all API objects and runtime data.
This section deploys a three-node, highly available etcd cluster.
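As a quick taste of the key-value model, here is a minimal `etcdctl` read/write, runnable once the cluster built below is up (the key name `/demo/hello` is arbitrary):

```bash
# Write a key and read it back over TLS; the endpoint and certificate
# paths match the cluster deployed in this section.
/opt/k8s/bin/etcdctl \
  --endpoints=https://192.168.16.8:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  put /demo/hello world        # prints: OK
/opt/k8s/bin/etcdctl \
  --endpoints=https://192.168.16.8:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  get /demo/hello              # prints the key, then the value
```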
Download and extract the etcd v3.4.3 release:

```bash
cd /opt/k8s/work
wget https://github.com/coreos/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz
tar -xvf etcd-v3.4.3-linux-amd64.tar.gz
```
Distribute the binaries to all cluster nodes:
```bash
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp etcd-v3.4.3-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done
```
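This and the following steps source `/opt/k8s/bin/environment.sh`, which is not reproduced in this section. It is assumed to define variables along these lines (node names other than zhaoyixin-k8s-01, and the data/WAL paths, are illustrative):

```bash
# Assumed sketch of /opt/k8s/bin/environment.sh -- only the variables
# used in this section; adapt names and paths to your environment.
NODE_IPS=(192.168.16.8 192.168.16.10 192.168.16.6)
NODE_NAMES=(zhaoyixin-k8s-01 zhaoyixin-k8s-02 zhaoyixin-k8s-03)   # 02/03 are illustrative
ETCD_NODES="zhaoyixin-k8s-01=https://192.168.16.8:2380,zhaoyixin-k8s-02=https://192.168.16.10:2380,zhaoyixin-k8s-03=https://192.168.16.6:2380"
ETCD_ENDPOINTS="https://192.168.16.8:2379,https://192.168.16.10:2379,https://192.168.16.6:2379"
ETCD_DATA_DIR="/data/k8s/etcd/data"   # illustrative path
ETCD_WAL_DIR="/data/k8s/etcd/wal"     # illustrative path
```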
Create the certificate signing request (CSR) file:
```bash
cd /opt/k8s/work
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.16.8",
    "192.168.16.10",
    "192.168.16.6"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "zhaoyixin"
    }
  ]
}
EOF
```
- `hosts`: the list of etcd node IPs authorized to use this certificate; it must include the IPs of every node in the etcd cluster.

Generate the certificate and private key:
```bash
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
    -ca-key=/opt/k8s/work/ca-key.pem \
    -config=/opt/k8s/work/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*pem
```
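`cfssljson -bare etcd` writes `etcd.pem` and `etcd-key.pem`, so on success the final `ls` should show:

```
etcd-key.pem  etcd.pem
```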
Distribute the generated certificate and private key to each etcd node:
```bash
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
    scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
  done
```
Create the systemd unit template for etcd:

```bash
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
- `WorkingDirectory`, `--data-dir`: set the working directory and data directory to `${ETCD_DATA_DIR}`; this directory must be created before the service is started.
- `--wal-dir`: the WAL directory; for better performance it is usually placed on an SSD, or at least on a different disk than `--data-dir`.
- `--name`: the node name; when `--initial-cluster-state` is `new`, the value of `--name` must appear in the `--initial-cluster` list.
- `--cert-file`, `--key-file`: the certificate and private key the etcd server uses when communicating with clients.
- `--trusted-ca-file`: the CA certificate that signed the client certificates; used to verify client certificates.
- `--peer-cert-file`, `--peer-key-file`: the certificate and private key etcd uses when communicating with its peers.
- `--peer-trusted-ca-file`: the CA certificate that signed the peer certificates; used to verify peer certificates.

Substitute the variables in the template to generate a systemd unit file for each node:
```bash
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service
  done
ls *.service
```
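For illustration, with the array values assumed earlier (zhaoyixin-k8s-01 / 192.168.16.8 as the first entries), `etcd-192.168.16.8.service` differs from the template only in the substituted lines:

```
--name=zhaoyixin-k8s-01 \
--listen-peer-urls=https://192.168.16.8:2380 \
--initial-advertise-peer-urls=https://192.168.16.8:2380 \
--listen-client-urls=https://192.168.16.8:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://192.168.16.8:2379 \
```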
- `NODE_NAMES` and `NODE_IPS` are bash arrays of equal length, holding the node names and their corresponding IPs.

Distribute the generated systemd unit files:
```bash
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
  done
```
Start the etcd service on every node:

```bash
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
  done
```
- The etcd data and working directories must be created before the service starts (handled by the `mkdir -p` above).
- On first startup, each etcd process waits for the other nodes to join the cluster, so `systemctl start etcd` appears to hang for a while; this is normal.
Check the startup result
```bash
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd|grep Active"
  done
```
Make sure the status is `active (running)`; otherwise inspect the logs with `journalctl -u etcd` to find the cause.
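As an extra sanity check (assuming `ss` from iproute2 is installed on the nodes), confirm that etcd is listening on its client and peer ports:

```bash
# 2379 is the client port, 2380 the peer port (see the unit template above)
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "ss -lnt | grep -E ':(2379|2380)'"
  done
```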
After the etcd cluster is deployed, run the following on any etcd node to check the health of every endpoint:
```bash
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    /opt/k8s/bin/etcdctl \
      --endpoints=https://${node_ip}:2379 \
      --cacert=/etc/kubernetes/cert/ca.pem \
      --cert=/etc/etcd/cert/etcd.pem \
      --key=/etc/etcd/cert/etcd-key.pem endpoint health
  done
```
When every endpoint reports `healthy`, the cluster is working correctly.
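Each endpoint should print a line similar to the following (latency will vary):

```
>>> 192.168.16.8
https://192.168.16.8:2379 is healthy: successfully committed proposal: took = 2.756451ms
```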
Check the current leader:
```bash
source /opt/k8s/bin/environment.sh
/opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status
```
Output:
```
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.16.8:2379  | 75d7781bdcb5dee5 |   3.4.3 |   20 kB |      true |      false |         2 |          8 |                  8 |        |
| https://192.168.16.10:2379 | d61a20f584018b21 |   3.4.3 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
| https://192.168.16.6:2379  | 6ee27caaed0bfb6b |   3.4.3 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
```

The `IS LEADER` column shows that the node at https://192.168.16.8:2379 is the current leader.