Continuing the deployment from Chapter 1.
4. Deploy the etcd cluster
4.1 Kubernetes stores all of its data in etcd. This section deploys a two-node high-availability etcd cluster, reusing the master nodes from Chapter 1:
192.168.56.20 k8s-m1
192.168.56.21 k8s-m2
4.2 Download and distribute the etcd binaries
[k8s@k8s-m1 ~]$ cd /home/k8s/k8s
[k8s@k8s-m1 k8s]$ wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz
[k8s@k8s-m1 k8s]$ tar -xzf etcd-v3.3.7-linux-amd64.tar.gz
[k8s@k8s-m1 k8s]$ source /opt/k8s/bin/environment.sh
[k8s@k8s-m1 k8s]$ for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp etcd-v3.3.7-linux-amd64/etcd* k8s@${master_ip}:/opt/k8s/bin
    ssh k8s@${master_ip} "chmod +x /opt/k8s/bin/*"
  done
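As an optional sanity check (not part of the original steps; it assumes the same environment.sh and MASTER_IPS used above), you can confirm the binaries were copied and are executable by printing the etcd version on every node:

# Optional: verify the distributed binaries on each master node
source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "/opt/k8s/bin/etcd --version"
  done

Each node should report etcd Version: 3.3.7.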
4.3 Create the etcd certificate and private key
Create the certificate signing request:
[k8s@k8s-m1 k8s]$ cd /opt/k8s/cert/
[k8s@k8s-m1 cert]$ cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.56.20",
    "192.168.56.21"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
4.4 Generate the certificate and private key
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*
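Before distributing the certificate, it can be worth confirming that the Subject Alternative Names match the hosts listed in etcd-csr.json. This openssl check is an addition, not part of the original procedure:

# The SANs should include 127.0.0.1 and both master IPs
openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"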
4.5 Distribute them to each etcd node
[k8s@k8s-m1 cert]$ source /opt/k8s/bin/environment.sh
[k8s@k8s-m1 cert]$ for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /etc/etcd/cert && chown -R k8s /etc/etcd/cert"
    scp etcd*.pem k8s@${master_ip}:/etc/etcd/cert/
  done
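Optionally, verify that the certificate and key landed on every node (a small check loop added here, assuming the same MASTER_IPS as above):

# Optional: confirm the certificate files exist on each node
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "ls -l /etc/etcd/cert/etcd*.pem"
  done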
4.6 Create the etcd systemd unit template file
[k8s@k8s-m1 cert]$ mkdir -p /opt/k8s/template && cd /opt/k8s/template
[k8s@k8s-m1 template]$ source /opt/k8s/bin/environment.sh
[k8s@k8s-m1 template]$ cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
User=k8s
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=/var/lib/etcd \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##master_ip##:2380 \\
  --initial-advertise-peer-urls=https://##master_ip##:2380 \\
  --listen-client-urls=https://##master_ip##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##master_ip##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
4.7 Fill the template with the correct values and distribute it to each etcd node
# Fill in the correct values for each node
[k8s@k8s-m1 template]$ for (( i=0; i < 2; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##master_ip##/${MASTER_IPS[i]}/" etcd.service.template > etcd-${MASTER_IPS[i]}.service
  done

# Create etcd's data and working directory, then distribute the rendered files to each etcd node, renaming them to etcd.service
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /var/lib/etcd && chown -R k8s /var/lib/etcd"
    scp etcd-${master_ip}.service root@${master_ip}:/etc/systemd/system/etcd.service
  done
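A quick grep over the rendered files can confirm that both placeholders were actually substituted before (or after) distribution; this check is an addition to the original text:

# Any remaining ##...## placeholder means a sed substitution did not apply
grep "##" etcd-*.service || echo "all placeholders substituted"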
4.8 Complete etcd configuration file for reference (k8s-m1 node)
[root@k8s-m1 ~]# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
User=k8s
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/var/lib/etcd \
  --name=kube-node1 \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://192.168.56.20:2380 \
  --initial-advertise-peer-urls=https://192.168.56.20:2380 \
  --listen-client-urls=https://192.168.56.20:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://192.168.56.20:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=kube-node1=https://192.168.56.20:2380,kube-node2=https://192.168.56.21:2380 \
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
4.9 Start the etcd service (the restart command is backgrounded with & because the first etcd process blocks until the other member joins the cluster)
source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd &"
  done
4.10 Check that the etcd service is running
source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status etcd|grep Active"
  done
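If a node does not report Active: active (running), its journal usually shows the cause (wrong certificate paths, unreachable peer URLs, and so on). A troubleshooting command added here for reference; substitute the failing node's IP for ${master_ip}:

# Inspect the most recent etcd log lines on a node that failed to start
ssh root@${master_ip} "journalctl -u etcd --no-pager | tail -n 30"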
Verify the service status
[k8s@k8s-m1 ~]$ for master_ip in ${MASTER_IPS[@]}
> do
>   echo ">>> ${master_ip}"
>   ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
>   --endpoints=https://${master_ip}:2379 \
>   --cacert=/etc/kubernetes/cert/ca.pem \
>   --cert=/etc/etcd/cert/etcd.pem \
>   --key=/etc/etcd/cert/etcd-key.pem endpoint health
> done
>>> 192.168.56.20
https://192.168.56.20:2379 is healthy: successfully committed proposal: took = 5.458846ms
>>> 192.168.56.21
https://192.168.56.21:2379 is healthy: successfully committed proposal: took = 3.662995ms
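Beyond the per-endpoint health check, listing the cluster members (with the same certificates) confirms that both nodes joined a single cluster; this extra check is an addition to the original text:

# List cluster members; both nodes should appear with their peer and client URLs
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=https://192.168.56.20:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem member list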