Deploying an etcd Cluster

Note: this deployment guide is based on https://github.com/opsnull/follow-me-install-kubernetes-cluster ; feel free to give the author a star.

 

etcd is a Raft-based distributed key-value store developed by CoreOS. It is commonly used for service discovery, shared configuration, and concurrency control (e.g. leader election and distributed locks). Kubernetes uses etcd to store all of its runtime data.
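As a minimal illustration of the key-value model, the following is runnable once the cluster below is up (the endpoint and certificate paths follow this document's layout; the key /demo/greeting is just a hypothetical example):

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=https://192.168.161.150:2379 \
  --cacert=/opt/k8s/work/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  put /demo/greeting "hello"    # write a key
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=https://192.168.161.150:2379 \
  --cacert=/opt/k8s/work/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  get /demo/greeting            # read it back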

This document walks through deploying a three-node, highly available etcd cluster:

  • downloading and distributing the etcd binaries;
  • creating an x509 certificate for the etcd cluster nodes, used to encrypt traffic between clients (e.g. etcdctl) and the cluster, and between cluster members;
  • creating the etcd systemd unit file and configuring service parameters;
  • checking the cluster's health.

 

The names and IPs of the etcd cluster nodes are as follows:

  • k8s-master1:192.168.161.150
  • k8s-master2:192.168.161.151
  • k8s-master3:192.168.161.152

Note: unless otherwise specified, all operations in this document are performed on the master1 node, which then distributes files and runs commands on the other nodes remotely.

 

1. Download and distribute the etcd binaries

Download the release package from the https://github.com/coreos/etcd/releases page (v3.3.10 is used here):

cd /opt/k8s/work
wget https://github.com/coreos/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
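Before distributing, you can optionally confirm the extracted binaries run (etcd --version and the API-3 etcdctl version subcommand are standard):

cd /opt/k8s/work
./etcd-v3.3.10-linux-amd64/etcd --version    # should report etcd Version: 3.3.10
ETCDCTL_API=3 ./etcd-v3.3.10-linux-amd64/etcdctl version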

Distribute the binaries to the cluster's master nodes:

cd /opt/k8s/work
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    scp etcd-v3.3.10-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
    ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
  done
 

 

2. Create the etcd certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.161.150",
    "192.168.161.151",
    "192.168.161.152"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

The hosts field lists the IPs or domain names of the etcd nodes authorized to use this certificate; here the IPs of all three master nodes in the etcd cluster are included.

 

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
    -ca-key=/opt/k8s/work/ca-key.pem \
    -config=/opt/k8s/work/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*pem
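To confirm that all three node IPs actually made it into the certificate's Subject Alternative Name list, you can inspect the generated certificate with openssl (a generic x509 check, not part of the original procedure):

cd /opt/k8s/work
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'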

Distribute the generated certificate and private key to each etcd node:

cd /opt/k8s/work
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
    scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
  done

 

3. Create the etcd systemd unit template

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • WorkingDirectory and --data-dir: set the working directory and data directory to ${ETCD_DATA_DIR}; this directory must be created before starting the service;
  • --wal-dir: the WAL directory; for better performance it is usually placed on an SSD or on a different disk than --data-dir;
  • --name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
  • --cert-file and --key-file: the certificate and private key etcd uses when talking to clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
  • --peer-cert-file and --peer-key-file: the certificate and private key etcd uses when talking to peers;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;
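The unit template relies on variables sourced from /opt/k8s/bin/environment.sh, which this document does not show. A minimal sketch of what it might define for this three-node layout (all paths and values here are assumptions; adjust them to your environment):

#!/usr/bin/env bash
# Hypothetical environment.sh matching this document's three-node layout.
NODE_NAMES=(k8s-master1 k8s-master2 k8s-master3)              # node names, used in step 4
NODE_IPS=(192.168.161.150 192.168.161.151 192.168.161.152)    # matching IPs, same order
export ETCD_DATA_DIR="/data/k8s/etcd/data"                    # assumed data directory
export ETCD_WAL_DIR="/data/k8s/etcd/wal"                      # assumed WAL directory (ideally on SSD)
export ETCD_NODES="k8s-master1=https://192.168.161.150:2380,k8s-master2=https://192.168.161.151:2380,k8s-master3=https://192.168.161.152:2380"
export ETCD_ENDPOINTS="https://192.168.161.150:2379,https://192.168.161.151:2379,https://192.168.161.152:2379"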

 

4. Create and distribute the etcd systemd unit file for each node

Substitute the variables in the template to create a systemd unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service 
  done
ls *.service
  • NODE_NAMES and NODE_IPS are bash arrays of equal length holding the node names and their corresponding IPs (see the environment.sh sketch above);

 

Distribute the generated systemd unit files:

cd /opt/k8s/work
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
  done
  • The file is renamed to etcd.service on the target node;

For the complete unit file, see: etcd.service

 

5. Start the etcd service

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
  done
  • The etcd data and working directories must be created before starting the service;
  • On first start, each etcd process waits for the other nodes to join the cluster, so systemctl start etcd appears to hang for a while; this is normal (hence the trailing & in the loop, which starts all nodes in parallel).

6. Check the startup result

cd /opt/k8s/work
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd|grep Active"
  done

Make sure the status is active (running); otherwise inspect the logs to find the cause:

$ journalctl -u etcd

 

7. Verify the service status

After deploying the etcd cluster, run the following on the master1 node (192.168.161.150):

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.150 192.168.161.151 192.168.161.152
  do
    echo ">>> ${node_ip}"
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/opt/k8s/work/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
  done

Expected output:

>>> 192.168.161.150
https://192.168.161.150:2379 is healthy: successfully committed proposal: took = 4.183308ms
>>> 192.168.161.151
https://192.168.161.151:2379 is healthy: successfully committed proposal: took = 5.532617ms
>>> 192.168.161.152
https://192.168.161.152:2379 is healthy: successfully committed proposal: took = 4.016865ms
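You can also list the cluster members to see each node's ID, name, and peer/client URLs (member list is a standard etcdctl API-3 subcommand; the flags follow this document's layout):

source /opt/k8s/bin/environment.sh
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/opt/k8s/work/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem member list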

 

8. Check the current leader

source /opt/k8s/bin/environment.sh
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  -w table --cacert=/opt/k8s/work/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status 

Output:

  • The IS LEADER column of the -w table output shows that the current leader is 192.168.161.151 (master2).