This article continues the series on manually deploying Kubernetes, this time building a v1.10.x High Availability cluster. The main goal is to learn how the Kubernetes components relate to one another and how an installation flows. If you would rather not go through all this work, refer to Picking the Right Solution to choose the approach that suits you best.
Software versions used in this installation: Kubernetes v1.10.0, CNI plugins v0.6.0, CFSSL R1.2, Docker CE (latest), Calico v3.1.0, Helm v2.8.1.
This tutorial deploys the Kubernetes cluster on the following nodes and specifications; the operating system can be Ubuntu 16.x or CentOS 7.x:
IP Address | Hostname | CPU | Memory |
---|---|---|---|
192.16.35.11 | k8s-m1 | 1 | 4G |
192.16.35.12 | k8s-m2 | 1 | 4G |
192.16.35.13 | k8s-m3 | 1 | 4G |
192.16.35.14 | k8s-n1 | 1 | 4G |
192.16.35.15 | k8s-n2 | 1 | 4G |
192.16.35.16 | k8s-n3 | 1 | 4G |
In addition, the master nodes together provide a VIP, 192.16.35.10.
Before starting the installation, make sure the following prerequisites are met:
# CentOS: stop the firewall and disable SELinux
$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0
$ vim /etc/selinux/config
SELINUX=disabled
# /etc/hosts — all nodes must be able to resolve each other by hostname
...
192.16.35.11 k8s-m1
192.16.35.12 k8s-m2
192.16.35.13 k8s-m3
192.16.35.14 k8s-n1
192.16.35.15 k8s-n2
192.16.35.16 k8s-n3
$ curl -fsSL "https://get.docker.com/" | sh
On both Ubuntu and CentOS, this single command installs the latest version of Docker automatically.
On CentOS, after the installation completes, you also need to run:
$ systemctl enable docker && systemctl start docker
All nodes need the kernel parameters in /etc/sysctl.d/k8s.conf configured.
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl -p /etc/sysctl.d/k8s.conf
$ swapoff -a && sysctl -w vm.swappiness=0
Remember to also comment out the swap mount in /etc/fstab.
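A minimal non-interactive way to do this (assuming the swap entry in /etc/fstab contains the word "swap" surrounded by whitespace) is:

$ sed -i '/ swap / s/^/#/' /etc/fstab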
$ export KUBE_URL="https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64"
$ wget "${KUBE_URL}/kubelet" -O /usr/local/bin/kubelet
$ chmod +x /usr/local/bin/kubelet
# nodes can skip downloading kubectl
$ wget "${KUBE_URL}/kubectl" -O /usr/local/bin/kubectl
$ chmod +x /usr/local/bin/kubectl
$ mkdir -p /opt/cni/bin && cd /opt/cni/bin
$ export CNI_URL="https://github.com/containernetworking/plugins/releases/download"
$ wget -qO- --show-progress "${CNI_URL}/v0.6.0/cni-plugins-amd64-v0.6.0.tgz" | tar -zx
$ export CFSSL_URL="https://pkg.cfssl.org/R1.2"
$ wget "${CFSSL_URL}/cfssl_linux-amd64" -O /usr/local/bin/cfssl
$ wget "${CFSSL_URL}/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
$ chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
In this part we generate certificates for multiple components, including Etcd and the Kubernetes components. Each cluster has a Root Certificate Authority used to sign the certificates for the API Server and the Kubelets.
P.S. Note that the CN (Common Name) and O (Organization) fields in the CA JSON files affect how Kubernetes authenticates components.
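For reference, a CA CSR JSON looks roughly like the following (an illustrative sketch only; the authoritative field values are in the files downloaded below):

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    { "O": "Kubernetes" }
  ]
}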
First, on k8s-m1, create the /etc/etcd/ssl directory and change into it for the following steps.
$ mkdir -p /etc/etcd/ssl && cd /etc/etcd/ssl
$ export PKI_URL="https://kairen.github.io/files/manual-v1.10/pki"
Download ca-config.json and etcd-ca-csr.json, then generate the CA key and certificate from the CSR JSON:
$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/etcd-ca-csr.json" $ cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
Download etcd-csr.json and generate the Etcd certificate:
$ wget "${PKI_URL}/etcd-csr.json" $ cfssl gencert \ -ca=etcd-ca.pem \ -ca-key=etcd-ca-key.pem \ -config=ca-config.json \ -hostname=127.0.0.1,192.16.35.11,192.16.35.12,192.16.35.13 \ -profile=kubernetes \ etcd-csr.json | cfssljson -bare etcd
-hostname must be changed to include all master nodes.
When done, delete the unneeded files:
$ rm -rf *.json *.csr
Confirm that /etc/etcd/ssl contains the following files:
$ ls /etc/etcd/ssl
etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem
Copy the files to the other Etcd nodes (here, all master nodes):
$ for NODE in k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    ssh ${NODE} "mkdir -p /etc/etcd/ssl"
    for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
      scp /etc/etcd/ssl/${FILE} ${NODE}:/etc/etcd/ssl/${FILE}
    done
  done
On k8s-m1, create the pki directory and change into it for the steps in the following sections.
$ mkdir -p /etc/kubernetes/pki && cd /etc/kubernetes/pki
$ export PKI_URL="https://kairen.github.io/files/manual-v1.10/pki"
$ export KUBE_APISERVER="https://192.16.35.10:6443"
Download ca-config.json and ca-csr.json, then generate the CA key:
$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/ca-csr.json" $ cfssl gencert -initca ca-csr.json | cfssljson -bare ca $ ls ca*.pem ca-key.pem ca.pem
Download apiserver-csr.json and generate the kube-apiserver certificate:
$ wget "${PKI_URL}/apiserver-csr.json" $ cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -hostname=10.96.0.1,192.16.35.10,127.0.0.1,kubernetes.default \ -profile=kubernetes \ apiserver-csr.json | cfssljson -bare apiserver $ ls apiserver*.pem apiserver-key.pem apiserver.pem
Download front-proxy-ca-csr.json and generate the Front Proxy CA key; the Front Proxy is mainly used by the API aggregator:
$ wget "${PKI_URL}/front-proxy-ca-csr.json" $ cfssl gencert \ -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca $ ls front-proxy-ca*.pem front-proxy-ca-key.pem front-proxy-ca.pem
Download front-proxy-client-csr.json and generate the front-proxy-client certificate:
$ wget "${PKI_URL}/front-proxy-client-csr.json" $ cfssl gencert \ -ca=front-proxy-ca.pem \ -ca-key=front-proxy-ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ front-proxy-client-csr.json | cfssljson -bare front-proxy-client $ ls front-proxy-client*.pem front-proxy-client-key.pem front-proxy-client.pem
Download admin-csr.json and generate the admin certificate:
$ wget "${PKI_URL}/admin-csr.json" $ cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ admin-csr.json | cfssljson -bare admin $ ls admin*.pem admin-key.pem admin.pem
Then generate a kubeconfig file named admin.conf with the following commands:
# admin set cluster
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../admin.conf
# admin set credentials
$ kubectl config set-credentials kubernetes-admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=../admin.conf
# admin set context
$ kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=../admin.conf
# admin set default context
$ kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=../admin.conf
Download manager-csr.json and generate the kube-controller-manager certificate:
$ wget "${PKI_URL}/manager-csr.json" $ cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ manager-csr.json | cfssljson -bare controller-manager $ ls controller-manager*.pem controller-manager-key.pem controller-manager.pem
If your node IPs differ, modify the hosts field in manager-csr.json.
Then generate a kubeconfig file named controller-manager.conf with the following commands:
# controller-manager set cluster
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../controller-manager.conf
# controller-manager set credentials
$ kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=controller-manager.pem \
    --client-key=controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=../controller-manager.conf
# controller-manager set context
$ kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=../controller-manager.conf
# controller-manager set default context
$ kubectl config use-context system:kube-controller-manager@kubernetes \
    --kubeconfig=../controller-manager.conf
Download scheduler-csr.json and generate the kube-scheduler certificate:
$ wget "${PKI_URL}/scheduler-csr.json" $ cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes \ scheduler-csr.json | cfssljson -bare scheduler $ ls scheduler*.pem scheduler-key.pem scheduler.pem
If your node IPs differ, modify the hosts field in scheduler-csr.json.
Then generate a kubeconfig file named scheduler.conf with the following commands:
# scheduler set cluster
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../scheduler.conf
# scheduler set credentials
$ kubectl config set-credentials system:kube-scheduler \
    --client-certificate=scheduler.pem \
    --client-key=scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=../scheduler.conf
# scheduler set context
$ kubectl config set-context system:kube-scheduler@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=../scheduler.conf
# scheduler use default context
$ kubectl config use-context system:kube-scheduler@kubernetes \
    --kubeconfig=../scheduler.conf
Next, on k8s-m1, download kubelet-csr.json and generate a certificate for each master node:
$ wget "${PKI_URL}/kubelet-csr.json" $ for NODE in k8s-m1 k8s-m2 k8s-m3; do echo "--- $NODE ---" cp kubelet-csr.json kubelet-$NODE-csr.json; sed -i "s/\$NODE/$NODE/g" kubelet-$NODE-csr.json; cfssl gencert \ -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -hostname=$NODE \ -profile=kubernetes \ kubelet-$NODE-csr.json | cfssljson -bare kubelet-$NODE done $ ls kubelet*.pem kubelet-k8s-m1-key.pem kubelet-k8s-m1.pem kubelet-k8s-m2-key.pem kubelet-k8s-m2.pem kubelet-k8s-m3-key.pem kubelet-k8s-m3.pem
Adjust -hostname and $NODE according to your nodes.
When done, copy the kubelet certificates to the other master nodes:
$ for NODE in k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    ssh ${NODE} "mkdir -p /etc/kubernetes/pki"
    for FILE in kubelet-$NODE-key.pem kubelet-$NODE.pem ca.pem; do
      scp /etc/kubernetes/pki/${FILE} ${NODE}:/etc/kubernetes/pki/${FILE}
    done
  done
Then run the following to generate a kubeconfig file named kubelet.conf on each master:
$ for NODE in k8s-m1 k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    ssh ${NODE} "cd /etc/kubernetes/pki && \
      kubectl config set-cluster kubernetes \
        --certificate-authority=ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=../kubelet.conf && \
      kubectl config set-credentials system:node:${NODE} \
        --client-certificate=kubelet-${NODE}.pem \
        --client-key=kubelet-${NODE}-key.pem \
        --embed-certs=true \
        --kubeconfig=../kubelet.conf && \
      kubectl config set-context system:node:${NODE}@kubernetes \
        --cluster=kubernetes \
        --user=system:node:${NODE} \
        --kubeconfig=../kubelet.conf && \
      kubectl config use-context system:node:${NODE}@kubernetes \
        --kubeconfig=../kubelet.conf && \
      rm kubelet-${NODE}.pem kubelet-${NODE}-key.pem"
  done
Service accounts are not authenticated through the CA, so rather than using the CA for service-account key verification, we create a private/public key pair for the service account keys:
$ openssl genrsa -out sa.key 2048
$ openssl rsa -in sa.key -pubout -out sa.pub
$ ls sa.*
sa.key  sa.pub
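For context, these keys are consumed by control-plane flags along the following lines (a hedged sketch; the static-pod manifests downloaded later are assumed to reference them already):

# kube-apiserver verifies service-account tokens with the public key
--service-account-key-file=/etc/kubernetes/pki/sa.pub
# kube-controller-manager signs new tokens with the private key
--service-account-private-key-file=/etc/kubernetes/pki/sa.key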
With everything prepared, delete the unneeded files:
$ rm -rf *.json *.csr scheduler*.pem controller-manager*.pem admin*.pem kubelet*.pem
Copy the certificate files to the other master nodes:
$ for NODE in k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    for FILE in $(ls /etc/kubernetes/pki/); do
      scp /etc/kubernetes/pki/${FILE} ${NODE}:/etc/kubernetes/pki/${FILE}
    done
  done
Copy the Kubernetes config files to the other master nodes:
$ for NODE in k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    for FILE in admin.conf controller-manager.conf scheduler.conf; do
      scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
    done
  done
This part explains how to set up and configure the Kubernetes master role; the process deploys the following components: kube-apiserver, kube-controller-manager, kube-scheduler, Etcd, HAProxy, and Keepalived.
First, on all master nodes, download the YAML files for the components. Rather than managing these components with binaries and systemd, we run them all as Static Pods. Download the files into /etc/kubernetes/manifests:
$ export CORE_URL="https://kairen.github.io/files/manual-v1.10/master"
$ mkdir -p /etc/kubernetes/manifests && cd /etc/kubernetes/manifests
$ for FILE in kube-apiserver kube-controller-manager kube-scheduler haproxy keepalived etcd etcd.config; do
    wget "${CORE_URL}/${FILE}.yml.conf" -O ${FILE}.yml
    if [ ${FILE} == "etcd.config" ]; then
      mv etcd.config.yml /etc/etcd/etcd.config.yml
      sed -i "s/\${HOSTNAME}/${HOSTNAME}/g" /etc/etcd/etcd.config.yml
      sed -i "s/\${PUBLIC_IP}/$(hostname -i)/g" /etc/etcd/etcd.config.yml
    fi
  done
$ ls /etc/kubernetes/manifests
etcd.yml  haproxy.yml  keepalived.yml  kube-apiserver.yml  kube-controller-manager.yml  kube-scheduler.yml
Generate a key used to encrypt Etcd data:
$ head -c 32 /dev/urandom | base64
SUpbL4juUYyvxj3/gonV5xVEx8j769/99TSAf8YT/sQ=
Note: every master node must use the same key.
Create the encryption YAML file encryption.yml under /etc/kubernetes/:
$ cat <<EOF > /etc/kubernetes/encryption.yml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: SUpbL4juUYyvxj3/gonV5xVEx8j769/99TSAf8YT/sQ=
      - identity: {}
EOF
For Etcd data encryption, see Encrypting data at rest.
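For reference, in v1.10 the kube-apiserver points at this file through a still-experimental flag along these lines (the downloaded kube-apiserver.yml is assumed to include it already):

--experimental-encryption-provider-config=/etc/kubernetes/encryption.yml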
Create the advanced audit policy YAML file audit-policy.yml under /etc/kubernetes/:
$ cat <<EOF > /etc/kubernetes/audit-policy.yml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF
For the audit policy, see Auditing.
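For reference, the kube-apiserver consumes this policy through flags along these lines (a sketch; the log path is an assumption consistent with the /var/log/kubernetes directory created below):

--audit-policy-file=/etc/kubernetes/audit-policy.yml
--audit-log-path=/var/log/kubernetes/audit.log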
Download haproxy.cfg for the HAProxy container to use:
$ mkdir -p /etc/haproxy/
$ wget "${CORE_URL}/haproxy.cfg" -O /etc/haproxy/haproxy.cfg
If your IPs differ from this tutorial's, remember to modify the configuration file.
Download the kubelet.service files to manage kubelet:
$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${CORE_URL}/kubelet.service" -O /lib/systemd/system/kubelet.service
$ wget "${CORE_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf
If your cluster dns or domain differs, modify 10-kubelet.conf.
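As an illustrative example only (the values are assumptions matching this tutorial's 10.96.0.0/12 service network), the relevant kubelet flags in 10-kubelet.conf look like:

--cluster-dns=10.96.0.10 --cluster-domain=cluster.local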
Finally, create the var directories and start the kubelet service:
$ mkdir -p /var/lib/kubelet /var/log/kubernetes /var/lib/etcd
$ systemctl enable kubelet.service && systemctl start kubelet.service
It takes a while to pull the images and start the components; monitor progress with:
$ watch netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      10344/kubelet
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      11324/kube-schedule
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      11416/haproxy
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      11235/kube-controll
tcp        0      0 0.0.0.0:9090            0.0.0.0:*               LISTEN      11416/haproxy
tcp6       0      0 :::2379                 :::*                    LISTEN      10479/etcd
tcp6       0      0 :::2380                 :::*                    LISTEN      10479/etcd
tcp6       0      0 :::10255                :::*                    LISTEN      10344/kubelet
tcp6       0      0 :::5443                 :::*                    LISTEN      11295/kube-apiserve
If you see output like the above, the services started correctly; if something goes wrong, use docker commands to investigate.
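For example, a generic way to dig into a failing component (the container ID here is a placeholder):

$ docker ps -a | grep kube-apiserver
$ docker logs <container-id>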
When done, on any master node, copy the admin kubeconfig and verify with a few simple commands:
$ mkdir -p ~/.kube
$ cp /etc/kubernetes/admin.conf ~/.kube/config
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

$ kubectl get node
NAME      STATUS     ROLES     AGE       VERSION
k8s-m1    NotReady   master    52s       v1.10.0
k8s-m2    NotReady   master    51s       v1.10.0
k8s-m3    NotReady   master    50s       v1.10.0

$ kubectl -n kube-system get po
NAME             READY     STATUS    RESTARTS   AGE
etcd-k8s-m1      1/1       Running   0          7s
etcd-k8s-m2      1/1       Running   0          57s
haproxy-k8s-m3   1/1       Running   0          1m
...
Next, confirm whether commands such as logs work:
$ kubectl -n kube-system logs -f kube-scheduler-k8s-m2
Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log kube-scheduler-k8s-m2)
You will hit a 403 Forbidden here because the kube-apiserver user has no access to the nodes resource; this is expected.
Because of this permission issue, create apiserver-to-kubelet-rbac.yml to define the permissions needed to run logs, exec, and similar commands against containers on the nodes. Run the following on any master node:
$ kubectl apply -f "${CORE_URL}/apiserver-to-kubelet-rbac.yml.conf" clusterrole.rbac.authorization.k8s.io "system:kube-apiserver-to-kubelet" configured clusterrolebinding.rbac.authorization.k8s.io "system:kube-apiserver" configured # 測試 logs $ kubectl -n kube-system logs -f kube-scheduler-k8s-m2... I0403 02:30:36.375935 1 server.go:555] Version: v1.10.0 I0403 02:30:36.378208 1 server.go:574] starting healthz server on 127.0.0.1:10251 # 設定master節點容許 Taint: $ kubectl taint nodes node-role.kubernetes.io/master="":NoSchedule --all node "k8s-m1" tainted node "k8s-m2" tainted node "k8s-m3" tainted
Since this installation enables TLS authentication, each node's kubelet must present a certificate signed by the kube-apiserver's CA before it can communicate with the kube-apiserver. Manually signing a certificate for every node is tedious, and as nodes are added it becomes hard to manage. TLS bootstrapping solves this: the kubelet first connects to the kube-apiserver as a predefined low-privilege user and requests a signed certificate; when the authorization token matches, the kube-apiserver dynamically signs and issues the node kubelet's certificate. For details, see TLS Bootstrapping and Authenticating with Bootstrap Tokens.
First, on k8s-m1, create variables to generate a BOOTSTRAP_TOKEN, and create the bootstrap-kubelet.conf Kubernetes config file:
$ cd /etc/kubernetes/pki
$ export TOKEN_ID=$(openssl rand 3 -hex)
$ export TOKEN_SECRET=$(openssl rand 8 -hex)
$ export BOOTSTRAP_TOKEN=${TOKEN_ID}.${TOKEN_SECRET}
$ export KUBE_APISERVER="https://192.16.35.10:6443"

# bootstrap set cluster
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../bootstrap-kubelet.conf
# bootstrap set credentials
$ kubectl config set-credentials tls-bootstrap-token-user \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=../bootstrap-kubelet.conf
# bootstrap set context
$ kubectl config set-context tls-bootstrap-token-user@kubernetes \
    --cluster=kubernetes \
    --user=tls-bootstrap-token-user \
    --kubeconfig=../bootstrap-kubelet.conf
# bootstrap use default context
$ kubectl config use-context tls-bootstrap-token-user@kubernetes \
    --kubeconfig=../bootstrap-kubelet.conf
If you would rather authorize nodes by manually signing certificates, see Certificate.
Then, on k8s-m1, create the TLS bootstrap secret used for automatic certificate signing:
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: ${TOKEN_ID}
  token-secret: ${TOKEN_SECRET}
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token
EOF
secret "bootstrap-token-65a3a9" created

# create the TLS Bootstrap auto-approve RBAC on k8s-m1:
$ kubectl apply -f "${CORE_URL}/kubelet-bootstrap-rbac.yml.conf"
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created
clusterrolebinding.rbac.authorization.k8s.io "node-autoapprove-bootstrap" created
clusterrolebinding.rbac.authorization.k8s.io "node-autoapprove-certificate-rotation" created
This part explains how to set up and configure the Kubernetes node role; nodes are the worker machines that run the container instances (Pods).
Before starting, copy the needed files from k8s-m1 to all node machines:
$ cd /etc/kubernetes/pki
$ for NODE in k8s-n1 k8s-n2 k8s-n3; do
    echo "--- $NODE ---"
    ssh ${NODE} "mkdir -p /etc/kubernetes/pki/"
    ssh ${NODE} "mkdir -p /etc/etcd/ssl"
    # Etcd
    for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
      scp /etc/etcd/ssl/${FILE} ${NODE}:/etc/etcd/ssl/${FILE}
    done
    # Kubernetes
    for FILE in pki/ca.pem pki/ca-key.pem bootstrap-kubelet.conf; do
      scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
    done
  done
On every node machine, download the kubelet.service files to manage kubelet:
$ export CORE_URL="https://kairen.github.io/files/manual-v1.10/node"
$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${CORE_URL}/kubelet.service" -O /lib/systemd/system/kubelet.service
$ wget "${CORE_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf
If your cluster dns or domain differs, modify 10-kubelet.conf.
Finally, create the var directories and start the kubelet service:
$ mkdir -p /var/lib/kubelet /var/log/kubernetes
$ systemctl enable kubelet.service && systemctl start kubelet.service
When done, verify from any master node with a few simple commands:
$ kubectl get csr
NAME                                                   AGE       REQUESTOR                 CONDITION
csr-bvz9l                                              11m       system:node:k8s-m1        Approved,Issued
csr-jwr8k                                              11m       system:node:k8s-m2        Approved,Issued
csr-q867w                                              11m       system:node:k8s-m3        Approved,Issued
node-csr-Y-FGvxZWJqI-8RIK_IrpgdsvjGQVGW0E4UJOuaU8ogk   17s       system:bootstrap:dca3e1   Approved,Issued
node-csr-cnX9T1xp1LdxVDc9QW43W0pYkhEigjwgceRshKuI82c   19s       system:bootstrap:dca3e1   Approved,Issued
node-csr-m7SBA9RAGCnsgYWJB-u2HoB2qLSfiQZeAxWFI2WYN7Y   18s       system:bootstrap:dca3e1   Approved,Issued

$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
k8s-m1    NotReady   master    12m       v1.10.0
k8s-m2    NotReady   master    11m       v1.10.0
k8s-m3    NotReady   master    11m       v1.10.0
k8s-n1    NotReady   node      32s       v1.10.0
k8s-n2    NotReady   node      31s       v1.10.0
k8s-n3    NotReady   node      29s       v1.10.0
With all the steps above complete, the next task is to deploy some add-ons; of these, Kubernetes DNS and Kubernetes Proxy are essential.
Kube-proxy is the key component that implements Services. It runs on every node, watches the API Server for changes to Service and Endpoint objects, and applies iptables rules to realize the network forwarding. Here we deploy it as a DaemonSet and create the certificates it needs.
On k8s-m1, download kube-proxy.yml to create the Kubernetes Proxy add-on:
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-proxy.yml.conf" serviceaccount "kube-proxy" created clusterrolebinding.rbac.authorization.k8s.io "system:kube-proxy" created configmap "kube-proxy" created daemonset.apps "kube-proxy" created $ kubectl -n kube-system get po -o wide -l k8s-app=kube-proxy NAME READY STATUS RESTARTS AGE IP NODE kube-proxy-8j5w8 1/1 Running 0 29s 192.16.35.16 k8s-n3 kube-proxy-c4zvt 1/1 Running 0 29s 192.16.35.11 k8s-m1 kube-proxy-clpl6 1/1 Running 0 29s 192.16.35.12 k8s-m2...
Kube DNS is the add-on that lets Pods in the cluster talk to each other: it allows Pods to reach Services by domain name. It is composed of Kube DNS and Sky DNS; Kube DNS watches for Service and Endpoint changes and feeds that information to Sky DNS to keep name resolution up to date.
On k8s-m1, download kube-dns.yml to create the Kubernetes DNS add-on:
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-dns.yml.conf" serviceaccount "kube-dns" created service "kube-dns" created deployment.extensions "kube-dns" created $ kubectl -n kube-system get po -l k8s-app=kube-dns NAME READY STATUS RESTARTS AGE kube-dns-654684d656-zq5t8 0/3 Pending 0 1m
The Pod stays Pending because the Kubernetes Pod network is not yet established: all nodes are NotReady, so the Pod cannot be scheduled to a node. To solve this, the next section explains how to set up the Pod network.
Calico is a pure Layer 3 data-center networking solution (no overlay network required). Its strengths are that it integrates with various cloud-native platforms, and on each node it uses the Linux kernel to implement an efficient vRouter for packet forwarding; as data-center complexity grows, BGP route reflectors can take over route distribution.
This tutorial does not build the Calico network manually; if you want to learn how, see the Integration Guide.
On k8s-m1, download calico.yaml to create the Calico network:
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/network/calico.yml.conf" configmap "calico-config" created daemonset "calico-node" created deployment "calico-kube-controllers" created clusterrolebinding "calico-cni-plugin" created clusterrole "calico-cni-plugin" created serviceaccount "calico-cni-plugin" created clusterrolebinding "calico-kube-controllers" created clusterrole "calico-kube-controllers" created serviceaccount "calico-kube-controllers" created $ kubectl -n kube-system get po -l k8s-app=calico-node -o wide NAME READY STATUS RESTARTS AGE IP NODE calico-node-22mbb 2/2 Running 0 1m 192.16.35.12 k8s-m2 calico-node-2qwf5 2/2 Running 0 1m 192.16.35.11 k8s-m1 calico-node-g2sp8 2/2 Running 0 1m 192.16.35.13 k8s-m3 calico-node-hghp4 2/2 Running 0 1m 192.16.35.14 k8s-n1 calico-node-qp6gf 2/2 Running 0 1m 192.16.35.15 k8s-n2 calico-node-zfx4n 2/2 Running 0 1m 192.16.35.16 k8s-n3
If your node IPs or NIC names differ, modify the calico.yml file.
On k8s-m1, download the Calico CLI to check the Calico nodes:
$ wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.0/calicoctl -O /usr/local/bin/calicoctl
$ chmod u+x /usr/local/bin/calicoctl
$ cat <<EOF > ~/calico-rc
export ETCD_ENDPOINTS="https://192.16.35.11:2379,https://192.16.35.12:2379,https://192.16.35.13:2379"
export ETCD_CA_CERT_FILE="/etc/etcd/ssl/etcd-ca.pem"
export ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
export ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
EOF
$ . ~/calico-rc
$ calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 192.16.35.12 | node-to-node mesh | up    | 04:42:37 | Established |
| 192.16.35.13 | node-to-node mesh | up    | 04:42:42 | Established |
| 192.16.35.14 | node-to-node mesh | up    | 04:42:37 | Established |
| 192.16.35.15 | node-to-node mesh | up    | 04:42:41 | Established |
| 192.16.35.16 | node-to-node mesh | up    | 04:42:36 | Established |
+--------------+-------------------+-------+----------+-------------+
...

Check whether the previously pending Pods are now running:

$ kubectl -n kube-system get po -l k8s-app=kube-dns
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-654684d656-j8xzx   3/3       Running   0          10m
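With the Pod network and DNS both up, you can optionally verify in-cluster name resolution with a throwaway Pod (a quick sketch using the public busybox image):

$ kubectl run -it --rm busybox --image=busybox --restart=Never -- nslookup kubernetes.default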
This section covers deploying some common official add-ons, such as Dashboard and Heapster.
Dashboard is the official web dashboard developed by the Kubernetes community. With it, administrators can manage the Kubernetes cluster through a web UI; besides being convenient, it visualizes resources so system state is much more intuitive to read.
On k8s-m1, create the Kubernetes Dashboard with kubectl:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl -n kube-system get po,svc -l k8s-app=kubernetes-dashboard
NAME                                    READY     STATUS    RESTARTS   AGE
kubernetes-dashboard-7d5dcdb6d9-j492l   1/1       Running   0          12s

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.111.22.111   <none>        443/TCP   12s
Here we additionally create a Cluster Role Binding named open-api. This is purely for testing convenience; do not enable it under normal circumstances, or all APIs become directly accessible:
$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: open-api
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:anonymous
EOF
Note! An administrator can grant API access to specific users; here, for convenience, we bind directly to the cluster-admin cluster role.
Once complete, the Dashboard can be reached from a browser.
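One simple way in is kubectl proxy (assuming the default service name kubernetes-dashboard in kube-system, as shown above):

$ kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/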
From version 1.7 onward, the Dashboard no longer grants full permissions by default, so create a service account and bind it to the cluster-admin role:
$ kubectl -n kube-system create sa dashboard
$ kubectl create clusterrolebinding dashboard --clusterrole cluster-admin --serviceaccount=kube-system:dashboard
$ SECRET=$(kubectl -n kube-system get sa dashboard -o yaml | awk '/dashboard-token/ {print $3}')
$ kubectl -n kube-system describe secrets ${SECRET} | awk '/token:/{print $2}'
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tdzVocmgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYWJmMTFjYzMtZjRlYi0xMWU3LTgzYWUtMDgwMDI3NjdkOWI5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZCJ9.Xuyq34ci7Mk8bI97o4IldDyKySOOqRXRsxVWIJkPNiVUxKT4wpQZtikNJe2mfUBBD-JvoXTzwqyeSSTsAy2CiKQhekW8QgPLYelkBPBibySjBhJpiCD38J1u7yru4P0Pww2ZQJDjIxY4vqT46ywBklReGVqY3ogtUQg-eXueBmz-o7lJYMjw8L14692OJuhBjzTRSaKW8U2MPluBVnD7M2SOekDff7KpSxgOwXHsLVQoMrVNbspUCvtIiEI1EiXkyCNRGwfnd2my3uzUABIHFhm0_RZSmGwExPbxflr8Fc6bxmuz-_jSdOtUidYkFIzvEWw2vRovPgs3MXTv59RwUw
Copy the token and paste it into the Kubernetes dashboard. Note that in general you should grant specific access rights per user.
Heapster is a container-cluster monitoring and performance-analysis tool maintained by the Kubernetes community. Heapster fetches the list of all Nodes from the Kubernetes apiserver, then collects metrics from the kubelet on each node, stores everything in its InfluxDB backend, and Grafana pulls from InfluxDB to visualize the data.
On k8s-m1, create the Kubernetes monitoring stack with kubectl:
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-monitor.yml.conf" $ kubectl -n kube-system get po,svc NAME READY STATUS RESTARTS AGE... po/heapster-74fb5c8cdc-62xzc 4/4 Running 0 7m po/influxdb-grafana-55bd7df44-nw4nc 2/2 Running 0 7m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE... svc/heapster ClusterIP 10.100.242.225 <none> 80/TCP 7m svc/monitoring-grafana ClusterIP 10.101.106.180 <none> 80/TCP 7m svc/monitoring-influxdb ClusterIP 10.109.245.142 <none> 8083/TCP,8086/TCP 7m···
Once complete, the Grafana dashboard can be reached from a browser.
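As with the Dashboard, kubectl proxy offers a quick way in (assuming the monitoring-grafana service name shown above):

$ kubectl proxy
# http://localhost:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/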
Ingress exposes in-cluster services through a load balancer such as Nginx or HAProxy. Ingress rules map domain names to internal Kubernetes Services, which avoids burning through large numbers of NodePorts.
On k8s-m1, create the Ingress Controller with kubectl:
$ kubectl create ns ingress-nginx
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/ingress-controller.yml.conf"
$ kubectl -n ingress-nginx get po
NAME                                       READY     STATUS    RESTARTS   AGE
default-http-backend-5c6d95c48-rzxfb       1/1       Running   0          7m
nginx-ingress-controller-699cdf846-982n4   1/1       Running   0          7m
You can also choose the Traefik Ingress Controller here instead.
First create an Nginx HTTP server Deployment and Service:
$ kubectl run nginx-dp --image nginx --port 80
$ kubectl expose deploy nginx-dp --port 80
$ kubectl get po,svc
$ cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-nginx-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.nginx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-dp
          servicePort: 80
EOF
Test it with curl:
$ curl 192.16.35.10 -H 'Host: test.nginx.com'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
# test whether other domain names return 404
$ curl 192.16.35.10 -H 'Host: test.nginx.com1'
default backend - 404
Helm is the management tool for Kubernetes Charts, and a Chart is a set of pre-configured Kubernetes resources. The Tiller Server receives commands from the client, talks to the Kubernetes cluster through the kube-apiserver, and, based on a Chart's definitions, generates and manages the Kubernetes deployment documents (called Releases) for the corresponding API objects.
First, install the Helm tool on k8s-m1:
$ wget -qO- https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz | tar -zx
$ sudo mv linux-amd64/helm /usr/local/bin/
Also install socat on all node machines:
$ sudo apt-get install -y socat
Then initialize Helm (this installs the Tiller Server):
$ kubectl -n kube-system create sa tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
...
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

$ kubectl -n kube-system get po -l app=helm
NAME                             READY     STATUS    RESTARTS   AGE
tiller-deploy-5f789bd9f7-tzss6   1/1       Running   0          29s

$ helm version
Client: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
Deploy a simple Jenkins release to test functionality:
$ helm install --name demo --set Persistence.Enabled=false stable/jenkins
$ kubectl get po,svc -l app=demo-jenkins
NAME                           READY     STATUS    RESTARTS   AGE
demo-jenkins-7bf4bfcff-q74nt   1/1       Running   0          2m

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
demo-jenkins         LoadBalancer   10.103.15.129    <pending>     8080:31161/TCP   2m
demo-jenkins-agent   ClusterIP      10.103.160.126   <none>        50000/TCP        2m
# get the admin account's password
$ printf $(kubectl get secret --namespace default demo-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
r6y9FMuF2u
Once it is up, the Jenkins web UI can be reached from a browser.
When testing is done, the release can be deleted:
$ helm ls
NAME    REVISION        UPDATED                         STATUS          CHART           NAMESPACE
demo    1               Tue Apr 10 07:29:51 2018        DEPLOYED        jenkins-0.14.4  default

$ helm delete demo --purge
release "demo" deleted
More Helm apps can be found on Kubeapps Hub.
To test high availability, SSH into the k8s-m1 node and shut it down:
$ sudo poweroff
Then go to the k8s-m2 node and use kubectl to check whether the cluster still works:
# first check etcd status; etcd-0 is down because the node was powered off
$ kubectl get cs
NAME                 STATUS      MESSAGE                                                                                                                ERROR
scheduler            Healthy     ok
controller-manager   Healthy     ok
etcd-1               Healthy     {"health": "true"}
etcd-2               Healthy     {"health": "true"}
etcd-0               Unhealthy   Get https://192.16.35.11:2379/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

# test whether a Pod can still be created
$ kubectl run nginx --image nginx --restart=Never --port 80
$ kubectl get po
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          22s
Further reading: