The control-plane components run as static Pods or DaemonSets, so this tutorial should apply, for the most part, to any host OS that can run Docker.
The finished installation is intended for learning and experimentation only.
Versions installed in this guide:
- Kubernetes v1.10.0 (both 1.10.0 and 1.10.3 verified to work)
- CNI v0.6.0
- Etcd v3.1.13
- Calico v3.0.4
- Docker CE latest version (18.03)
Node information
This tutorial deploys the Kubernetes cluster with the node count and specs listed below; the OS can be Ubuntu 16.x or CentOS 7.x.
| IP | Hostname | CPU | Memory |
| --- | --- | --- | --- |
| 192.16.35.11 | K8S-M1 | 1 | 4G |
| 192.16.35.12 | K8S-M2 | 1 | 4G |
| 192.16.35.13 | K8S-M3 | 1 | 4G |
| 192.16.35.14 | K8S-N1 | 1 | 4G |
| 192.16.35.15 | K8S-N2 | 1 | 4G |
| 192.16.35.16 | K8S-N3 | 1 | 4G |
In addition, the master nodes together provide a virtual IP (VIP): 192.16.35.10.
- All operations are performed as the root user (for convenience); from an SRE point of view this is not recommended.
- You can download a Vagrantfile to build the cluster as VirtualBox VMs, but make sure your machine has enough resources.
Prerequisites
- All machines can reach one another over the network, and `k8s-m1` can SSH into every other node without a password.
- Firewalls and SELinux are disabled on all machines. On CentOS, for example:
```bash
$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0
$ vim /etc/selinux/config
SELINUX=disabled
```
- Every machine needs `/etc/hosts` entries resolving all cluster hosts:

```bash
...
192.16.35.11 k8s-m1
192.16.35.12 k8s-m2
192.16.35.13 k8s-m3
192.16.35.14 k8s-n1
192.16.35.15 k8s-n2
192.16.35.16 k8s-n3
```
- Every machine needs the Docker CE container engine installed:

```bash
$ curl -fsSL "https://get.docker.com/" | sh
```

- On both Ubuntu and CentOS, this single command installs the latest Docker automatically.
- On CentOS, run the following after the installation completes:

```bash
$ systemctl enable docker && systemctl start docker
```
The following images will be needed later:

```bash
REPOSITORY                 TAG       IMAGE ID       CREATED        SIZE
quay.io/calico/node        v3.0.4    5361c5a52912   8 weeks ago    278MB
quay.io/calico/cni         v2.0.3    cef0252b1749   2 months ago   69.1MB
k8s.gcr.io/pause-amd64     3.1       da86e6ba6ca1   5 months ago   742kB
```

These three images cannot be pulled from behind the GFW, so I have saved them to files (if you have a proxy, you can pull the images above directly).
The files are at https://pan.baidu.com/s/1v7uN4ht-7qvA1uk9ZMmuMA
That is a Baidu Cloud link; if it is unavailable or throttled, use my Qiniu Cloud URLs below to download and import the images.
```bash
quay.io/calico/kube-controllers    v2.0.2    0754e1c707e7    2 months ago    55.1MB
```

This image is blocked as well; if you cannot pull it, import it from my Qiniu Cloud URL.
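To import a downloaded image archive, something along these lines works (the file name is only an example; use whatever you actually downloaded):

```bash
# Load a saved image archive into the local Docker image store
$ docker load -i calico-node-v3.0.4.tar
$ docker images | grep -E 'calico|pause'
```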
If you have no way around the GFW, I have also uploaded kubectl and kubelet to my Qiniu Cloud; download them as follows:

```bash
$ wget http://ols7lqkih.bkt.clouddn.com/kubelet -O /usr/local/bin/kubelet
$ chmod +x /usr/local/bin/kubelet

# node machines can skip downloading kubectl
$ wget http://ols7lqkih.bkt.clouddn.com/kubectl -O /usr/local/bin/kubectl
$ chmod +x /usr/local/bin/kubectl

# the md5 sums are below; compare them to check the files are not corrupted
[root@k8s-m1 ~]# md5sum /usr/local/bin/kubelet
a3ced404a71f94d2fa9230635ed4e407  kubelet
[root@k8s-m1 ~]# md5sum /usr/local/bin/kubectl
e1f801301614463e1f13cf28b4443608  kubectl
```
If you do have a proxy, use the original URLs below:

```bash
$ export KUBE_URL="https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64"
$ wget "${KUBE_URL}/kubelet" -O /usr/local/bin/kubelet
$ chmod +x /usr/local/bin/kubelet

# node machines can skip downloading kubectl
$ wget "${KUBE_URL}/kubectl" -O /usr/local/bin/kubectl
$ chmod +x /usr/local/bin/kubectl
```
Creating the cluster CA keys and certificates
In this part we generate certificates for several components, including Etcd and the Kubernetes components. Each cluster has a Root Certificate Authority that is used to authenticate the certificates on both the API Server and kubelet sides.
- Note that the `CN` (Common Name) and `O` (Organization) fields in the CA JSON files affect how Kubernetes authenticates the components.
Etcd
First, on `k8s-m1`, create the `/etc/etcd/ssl` directory, then change into it and carry out the following steps:

```bash
$ mkdir -p /etc/etcd/ssl && cd /etc/etcd/ssl
$ export PKI_URL="https://kairen.github.io/files/manual-v1.10/pki"
```
Download the `ca-config.json` and `etcd-ca-csr.json` files, then generate the CA key and certificate from the CSR JSON:

```bash
$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/etcd-ca-csr.json"
$ cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
```
Download `etcd-csr.json` and generate the Etcd certificate:

```bash
$ wget "${PKI_URL}/etcd-csr.json"
$ cfssl gencert \
    -ca=etcd-ca.pem \
    -ca-key=etcd-ca-key.pem \
    -config=ca-config.json \
    -hostname=127.0.0.1,192.16.35.11,192.16.35.12,192.16.35.13 \
    -profile=kubernetes \
    etcd-csr.json | cfssljson -bare etcd
```
- `-hostname` must list all master nodes.
Once done, delete the files that are no longer needed:
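The cleanup command itself did not survive extraction; judging from the `ls` output that follows, a sketch of it would be:

```bash
# Remove the CSR and JSON intermediates, keeping only the generated .pem files
$ rm -rf *.json *.csr
```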
Confirm that `/etc/etcd/ssl` contains the following files:

```bash
$ ls /etc/etcd/ssl
etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem
```
Copy the relevant files to the other Etcd nodes, which here means all `master` nodes:

```bash
$ for NODE in k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    ssh ${NODE} "mkdir -p /etc/etcd/ssl"
    for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
      scp /etc/etcd/ssl/${FILE} ${NODE}:/etc/etcd/ssl/${FILE}
    done
  done
```
Kubernetes
On `k8s-m1`, create the `pki` directory, then change into it for the rest of this section:

```bash
$ mkdir -p /etc/kubernetes/pki && cd /etc/kubernetes/pki
$ export PKI_URL="https://kairen.github.io/files/manual-v1.10/pki"
$ export KUBE_APISERVER="https://192.16.35.10:6443"
```
Download `ca-config.json` and `ca-csr.json`, then generate the CA certificate:

```bash
$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/ca-csr.json"
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*.pem
ca-key.pem  ca.pem
```
API Server Certificate
Download `apiserver-csr.json` and generate the kube-apiserver certificate:

```bash
$ wget "${PKI_URL}/apiserver-csr.json"
$ cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -hostname=10.96.0.1,192.16.35.10,127.0.0.1,kubernetes.default \
    -profile=kubernetes \
    apiserver-csr.json | cfssljson -bare apiserver

$ ls apiserver*.pem
apiserver-key.pem  apiserver.pem
```
- In `-hostname`, `10.96.0.1` is the cluster IP of the Kubernetes API endpoint, `192.16.35.10` is the virtual IP address (VIP), and `kubernetes.default` is the Kubernetes DNS name.
Front Proxy Certificate
Download `front-proxy-ca-csr.json` and generate the Front Proxy CA key; the Front Proxy is mainly used by the API aggregator:

```bash
$ wget "${PKI_URL}/front-proxy-ca-csr.json"
$ cfssl gencert \
    -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca

$ ls front-proxy-ca*.pem
front-proxy-ca-key.pem  front-proxy-ca.pem
```
Download `front-proxy-client-csr.json` and generate the front-proxy-client certificate:

```bash
$ wget "${PKI_URL}/front-proxy-client-csr.json"
$ cfssl gencert \
    -ca=front-proxy-ca.pem \
    -ca-key=front-proxy-ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes \
    front-proxy-client-csr.json | cfssljson -bare front-proxy-client

$ ls front-proxy-client*.pem
front-proxy-client-key.pem  front-proxy-client.pem
```
Admin Certificate
Download `admin-csr.json` and generate the admin certificate:

```bash
$ wget "${PKI_URL}/admin-csr.json"
$ cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes \
    admin-csr.json | cfssljson -bare admin

$ ls admin*.pem
admin-key.pem  admin.pem
```
Then generate a kubeconfig file named `admin.conf`:
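The original commands did not survive extraction; the following is a reconstruction, assuming the same set-cluster/set-credentials/set-context/use-context sequence used for `kubelet.conf` later in this guide (the user and context names are assumptions, kubeadm-style):

```bash
# Reconstructed sketch: build admin.conf from the admin certificate
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../admin.conf
$ kubectl config set-credentials kubernetes-admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=../admin.conf
$ kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=../admin.conf
$ kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=../admin.conf
```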
Controller Manager Certificate
Download `manager-csr.json` and generate the kube-controller-manager certificate:

```bash
$ wget "${PKI_URL}/manager-csr.json"
$ cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes \
    manager-csr.json | cfssljson -bare controller-manager

$ ls controller-manager*.pem
controller-manager-key.pem  controller-manager.pem
```
Then generate a kubeconfig file named `controller-manager.conf`:
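Again the original commands were lost; a reconstruction along the same lines, this time using the controller-manager certificate and the `system:kube-controller-manager` user (names assumed):

```bash
# Reconstructed sketch: build controller-manager.conf
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../controller-manager.conf
$ kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=controller-manager.pem \
    --client-key=controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=../controller-manager.conf
$ kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=../controller-manager.conf
$ kubectl config use-context system:kube-controller-manager@kubernetes \
    --kubeconfig=../controller-manager.conf
```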
Scheduler Certificate
Download `scheduler-csr.json` and generate the kube-scheduler certificate:

```bash
$ wget "${PKI_URL}/scheduler-csr.json"
$ cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes \
    scheduler-csr.json | cfssljson -bare scheduler

$ ls scheduler*.pem
scheduler-key.pem  scheduler.pem
```
Then generate a kubeconfig file named `scheduler.conf`:
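The commands were likewise lost; a reconstruction using the scheduler certificate and the `system:kube-scheduler` user (names assumed):

```bash
# Reconstructed sketch: build scheduler.conf
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../scheduler.conf
$ kubectl config set-credentials system:kube-scheduler \
    --client-certificate=scheduler.pem \
    --client-key=scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=../scheduler.conf
$ kubectl config set-context system:kube-scheduler@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=../scheduler.conf
$ kubectl config use-context system:kube-scheduler@kubernetes \
    --kubeconfig=../scheduler.conf
```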
Master Kubelet Certificate
Next, on `k8s-m1`, download `kubelet-csr.json` and generate the kubelet certificates:

```bash
$ wget "${PKI_URL}/kubelet-csr.json"
$ for NODE in k8s-m1 k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    cp kubelet-csr.json kubelet-$NODE-csr.json;
    sed -i "s/\$NODE/$NODE/g" kubelet-$NODE-csr.json;
    cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -hostname=$NODE \
      -profile=kubernetes \
      kubelet-$NODE-csr.json | cfssljson -bare kubelet-$NODE
  done

$ ls kubelet*.pem
kubelet-k8s-m1-key.pem  kubelet-k8s-m1.pem  kubelet-k8s-m2-key.pem  kubelet-k8s-m2.pem  kubelet-k8s-m3-key.pem  kubelet-k8s-m3.pem
```

- `-hostname` and `$NODE` must be adjusted for each node.
Afterwards, copy the kubelet certificates to the other `master` nodes:

```bash
$ for NODE in k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    ssh ${NODE} "mkdir -p /etc/kubernetes/pki"
    for FILE in kubelet-$NODE-key.pem kubelet-$NODE.pem ca.pem; do
      scp /etc/kubernetes/pki/${FILE} ${NODE}:/etc/kubernetes/pki/${FILE}
    done
  done
```
Then, on `k8s-m1`, run the following to generate a kubeconfig file named `kubelet.conf` on each master:

```bash
$ for NODE in k8s-m1 k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    ssh ${NODE} "cd /etc/kubernetes/pki && \
      kubectl config set-cluster kubernetes \
        --certificate-authority=ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=../kubelet.conf && \
      kubectl config set-credentials system:node:${NODE} \
        --client-certificate=kubelet-${NODE}.pem \
        --client-key=kubelet-${NODE}-key.pem \
        --embed-certs=true \
        --kubeconfig=../kubelet.conf && \
      kubectl config set-context system:node:${NODE}@kubernetes \
        --cluster=kubernetes \
        --user=system:node:${NODE} \
        --kubeconfig=../kubelet.conf && \
      kubectl config use-context system:node:${NODE}@kubernetes \
        --kubeconfig=../kubelet.conf && \
      rm kubelet-${NODE}.pem kubelet-${NODE}-key.pem"
  done
```
Service Account Key
Service accounts are not authenticated through the CA, so instead of CA-based checks we create a private/public key pair used to sign service account tokens.
Run the following on `k8s-m1`:

```bash
$ openssl genrsa -out sa.key 2048
$ openssl rsa -in sa.key -pubout -out sa.pub
$ ls sa.*
sa.key  sa.pub
```
Deleting unneeded files
With all the material prepared, the files that are no longer needed can be removed:

```bash
$ rm -rf *.json *.csr scheduler*.pem controller-manager*.pem admin*.pem kubelet*.pem
```
Copying files to other nodes
Copy the certificate files to the other `master` nodes:

```bash
$ for NODE in k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    for FILE in $(ls /etc/kubernetes/pki/); do
      scp /etc/kubernetes/pki/${FILE} ${NODE}:/etc/kubernetes/pki/${FILE}
    done
  done
```

Copy the Kubernetes kubeconfig files to the other `master` nodes:

```bash
$ for NODE in k8s-m2 k8s-m3; do
    echo "--- $NODE ---"
    for FILE in admin.conf controller-manager.conf scheduler.conf; do
      scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
    done
  done
```
Kubernetes Masters
This part explains how to set up and configure the Kubernetes master role. The following components are deployed:
- kube-apiserver: provides the REST APIs, including authentication, authorization, and state storage.
- kube-controller-manager: maintains cluster state, e.g. auto-scaling and rolling updates.
- kube-scheduler: handles resource scheduling, assigning Pods to nodes according to the configured scheduling policies.
- Etcd: the key/value store that holds all cluster state.
- HAProxy: provides load balancing.
- Keepalived: provides the virtual IP address (VIP).
Deployment and configuration
First, on every `master` node, download the deployment YAML files for these components. Instead of managing them as binaries under systemd, everything here runs as static Pods. Download the files into the `/etc/kubernetes/manifests` directory.
(Friendly reminder: pulling these images requires a proxy. Without one, replace the gcr.io/google_containers and k8s.gcr.io parts of the image names with mirrorgooglecontainers, e.g. change gcr.io/google_containers/kube-apiserver-amd64 to mirrorgooglecontainers/kube-apiserver-amd64. Also change the `interface` in the keepalived manifest to your host's NIC name. The same applies to the images in all later files if you have no proxy; a sed sketch follows the notes below.)

```bash
$ export CORE_URL="https://kairen.github.io/files/manual-v1.10/master"
$ mkdir -p /etc/kubernetes/manifests && cd /etc/kubernetes/manifests
$ for FILE in kube-apiserver kube-controller-manager kube-scheduler haproxy keepalived etcd etcd.config; do
    wget "${CORE_URL}/${FILE}.yml.conf" -O ${FILE}.yml
    if [ ${FILE} == "etcd.config" ]; then
      mv etcd.config.yml /etc/etcd/etcd.config.yml
      sed -i "s/\${HOSTNAME}/${HOSTNAME}/g" /etc/etcd/etcd.config.yml
      sed -i "s/\${PUBLIC_IP}/$(hostname -i)/g" /etc/etcd/etcd.config.yml
    fi
  done

$ ls /etc/kubernetes/manifests
etcd.yml  haproxy.yml  keepalived.yml  kube-apiserver.yml  kube-controller-manager.yml  kube-scheduler.yml
```
- If your IPs differ from this tutorial, remember to edit the YAML files; in keepalived.yml, change `interface` to your host's NIC name.
- For `NodeRestriction` in kube-apiserver, refer to Using Node Authorization.
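If you have no proxy, a quick way to apply the registry substitution mentioned above is a sed pass over the downloaded manifests; this sketch assumes mirrorgooglecontainers carries the same image tags:

```bash
# Rewrite the image registries in every downloaded manifest (skip if you can pull from gcr.io)
$ cd /etc/kubernetes/manifests
$ sed -i -e 's#gcr.io/google_containers#mirrorgooglecontainers#g' \
         -e 's#k8s.gcr.io#mirrorgooglecontainers#g' *.yml
```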
Generate a key used to encrypt data stored in Etcd:

```bash
$ head -c 32 /dev/urandom | base64
SUpbL4juUYyvxj3/gonV5xVEx8j769/99TSAf8YT/sQ=
```
Then, on every `master` machine, use the key above with the command below to create the encryption configuration file `encryption.yml` under `/etc/kubernetes/`:

```bash
$ cat <<EOF > /etc/kubernetes/encryption.yml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
    - secrets
    providers:
    - aescbc:
        keys:
        - name: key1
          secret: SUpbL4juUYyvxj3/gonV5xVEx8j769/99TSAf8YT/sQ=
    - identity: {}
EOF
```
Also on every `master` machine, create the advanced audit policy file `audit-policy.yml` under `/etc/kubernetes/`:

```bash
$ cat <<EOF > /etc/kubernetes/audit-policy.yml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF
```
On every `master` machine, download the `haproxy.cfg` file for the HAProxy container to use:

```bash
$ mkdir -p /etc/haproxy/
$ wget "${CORE_URL}/haproxy.cfg" -O /etc/haproxy/haproxy.cfg
```
On every `master` machine, download the `kubelet.service` related files to manage the kubelet:

```bash
$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${CORE_URL}/kubelet.service" -O /lib/systemd/system/kubelet.service
$ wget "${CORE_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf
```

- If the cluster DNS or domain differs, edit 10-kubelet.conf accordingly.
Finally, on every `master` machine, create the var directories for persisted data and start the kubelet service:

```bash
$ mkdir -p /var/lib/kubelet /var/log/kubernetes /var/lib/etcd
$ systemctl enable kubelet.service && systemctl start kubelet.service
```
Pulling the images and starting the components takes a while; you can monitor progress with:

```bash
$ watch netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      10344/kubelet
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      11324/kube-schedule
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      11416/haproxy
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      11235/kube-controll
tcp        0      0 0.0.0.0:9090            0.0.0.0:*               LISTEN      11416/haproxy
tcp6       0      0 :::2379                 :::*                    LISTEN      10479/etcd
tcp6       0      0 :::2380                 :::*                    LISTEN      10479/etcd
tcp6       0      0 :::10255                :::*                    LISTEN      10344/kubelet
tcp6       0      0 :::5443                 :::*                    LISTEN      11295/kube-apiserve
```
- Pulling the images takes time here; be patient.
- Output like the above means the services started correctly; if something goes wrong, investigate with `docker` commands.
- If any of the key control-plane containers keep exiting, something in the steps above went wrong.
Because the images are still being pulled, the checks below tell you whether everything really came up in the expected state.
Verifying the cluster
Once done, on any one `master` node, copy the admin kubeconfig file and verify with a few simple commands:

```bash
$ cp /etc/kubernetes/admin.conf ~/.kube/config
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

$ kubectl get node
NAME      STATUS     ROLES     AGE       VERSION
k8s-m1    NotReady   master    52s       v1.10.0
k8s-m2    NotReady   master    51s       v1.10.0
k8s-m3    NotReady   master    50s       v1.10.0

$ kubectl -n kube-system get po
NAME                             READY     STATUS    RESTARTS   AGE
etcd-k8s-m1                      1/1       Running   0          7m
etcd-k8s-m2                      1/1       Running   0          8m
etcd-k8s-m3                      1/1       Running   0          7m
haproxy-k8s-m1                   1/1       Running   0          7m
haproxy-k8s-m2                   1/1       Running   0          8m
haproxy-k8s-m3                   1/1       Running   0          8m
keepalived-k8s-m1                1/1       Running   0          8m
keepalived-k8s-m2                1/1       Running   0          7m
keepalived-k8s-m3                1/1       Running   0          7m
kube-apiserver-k8s-m1            1/1       Running   0          7m
kube-apiserver-k8s-m2            1/1       Running   0          6m
kube-apiserver-k8s-m3            1/1       Running   0          7m
kube-controller-manager-k8s-m1   1/1       Running   0          8m
kube-controller-manager-k8s-m2   1/1       Running   0          8m
kube-controller-manager-k8s-m3   1/1       Running   0          8m
kube-scheduler-k8s-m1            1/1       Running   0          8m
kube-scheduler-k8s-m2            1/1       Running   0          8m
kube-scheduler-k8s-m3            1/1       Running   0          8m
```
Next, check whether commands such as `logs` work against the services:

```bash
$ kubectl -n kube-system logs -f kube-scheduler-k8s-m2
Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log kube-scheduler-k8s-m2)
```

- A 403 Forbidden error appears here; this is expected, because the `kube-apiserver` user does not yet have access rights to the nodes resource.
From here on, kubectl commands no longer need to be run on every master; running them on any one master is enough.
kubectl can create the resource objects described in a manifest read either from a URL or from a local file.
For the YAML files used by the kubectl commands that follow, download them first and edit the image registry portions (replace gcr.io/google_containers and k8s.gcr.io with mirrorgooglecontainers), as well as details such as the apiserver IP, then point `-f` at the local file path.
The same advice applies to all later kubectl commands, so it will not be repeated; a sketch of the workflow follows.
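For example, the download-edit-apply workflow for the RBAC manifest used next could look like this (that particular file contains no image references, so the sed step only matters for manifests that do):

```bash
# Download the manifest, patch registries if needed, then apply from the local copy
$ wget "${CORE_URL}/apiserver-to-kubelet-rbac.yml.conf" -O apiserver-to-kubelet-rbac.yml
$ sed -i -e 's#gcr.io/google_containers#mirrorgooglecontainers#g' \
         -e 's#k8s.gcr.io#mirrorgooglecontainers#g' apiserver-to-kubelet-rbac.yml
$ kubectl apply -f apiserver-to-kubelet-rbac.yml
```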
Grant the kube-apiserver user access to the kubelet API (this clears the 403 above) by applying the following RBAC manifest:

```bash
$ kubectl apply -f "${CORE_URL}/apiserver-to-kubelet-rbac.yml.conf"
clusterrole.rbac.authorization.k8s.io "system:kube-apiserver-to-kubelet" configured
clusterrolebinding.rbac.authorization.k8s.io "system:kube-apiserver" configured
```
Taint the `master` nodes so that ordinary workloads are not scheduled onto them:

```bash
$ kubectl taint nodes node-role.kubernetes.io/master="":NoSchedule --all
node "k8s-m1" tainted
node "k8s-m2" tainted
node "k8s-m3" tainted
```
Creating the TLS Bootstrapping RBAC and Secret
Because this installation enables TLS authentication, each node's kubelet must present a certificate signed by the kube-apiserver's CA before it can talk to the kube-apiserver. Manually signing a certificate for every node is tedious and becomes hard to manage as nodes are added. TLS bootstrapping solves this: the kubelet first connects to the kube-apiserver as a predefined low-privilege user and then requests a certificate; when the authorization token matches, the kube-apiserver dynamically signs and issues the node's kubelet certificate. For details see TLS Bootstrapping and Authenticating with Bootstrap Tokens.
First, on `k8s-m1`, set variables to generate `BOOTSTRAP_TOKEN`, and build the `bootstrap-kubelet.conf` kubeconfig file (the kubectl config steps are reconstructed after the block below):
```bash
$ cd /etc/kubernetes/pki
$ export TOKEN_ID=$(openssl rand 3 -hex)
$ export TOKEN_SECRET=$(openssl rand 8 -hex)
$ export BOOTSTRAP_TOKEN=${TOKEN_ID}.${TOKEN_SECRET}
$ export KUBE_APISERVER="https://192.16.35.10:6443"
```
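The rest of this block was truncated; a reconstruction of the bootstrap kubeconfig creation, assuming a token credential named tls-bootstrap-token-user (the name is an assumption):

```bash
# Reconstructed sketch: bootstrap-kubelet.conf authenticates with the bootstrap token
$ kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=../bootstrap-kubelet.conf
$ kubectl config set-credentials tls-bootstrap-token-user \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=../bootstrap-kubelet.conf
$ kubectl config set-context tls-bootstrap-token-user@kubernetes \
    --cluster=kubernetes \
    --user=tls-bootstrap-token-user \
    --kubeconfig=../bootstrap-kubelet.conf
$ kubectl config use-context tls-bootstrap-token-user@kubernetes \
    --kubeconfig=../bootstrap-kubelet.conf
```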
Next, on `k8s-m1`, create the TLS bootstrap secret used for automatic certificate signing:

```bash
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: ${TOKEN_ID}
  token-secret: ${TOKEN_SECRET}
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token
EOF

secret "bootstrap-token-65a3a9" created
```
On `k8s-m1`, create the TLS Bootstrap auto-approve RBAC rules:

```bash
$ kubectl apply -f "${CORE_URL}/kubelet-bootstrap-rbac.yml.conf"
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created
clusterrolebinding.rbac.authorization.k8s.io "node-autoapprove-bootstrap" created
clusterrolebinding.rbac.authorization.k8s.io "node-autoapprove-certificate-rotation" created
```
Kubernetes Nodes
This part explains how to set up and configure the Kubernetes node role; nodes are the worker machines that actually run container instances (Pods).
Before starting the deployment, on `k8s-m1`, copy the required files to all `node` machines. The tail of the original command block was lost, so the copy loop below is partly reconstructed:

```bash
$ cd /etc/kubernetes/pki
$ for NODE in k8s-n1 k8s-n2 k8s-n3; do
    echo "--- $NODE ---"
    ssh ${NODE} "mkdir -p /etc/kubernetes/pki/"
    ssh ${NODE} "mkdir -p /etc/etcd/ssl"
    # The remainder of this loop is a reconstruction; the exact file list is an
    # assumption based on what the node kubelet and CNI need.
    for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
      scp /etc/etcd/ssl/${FILE} ${NODE}:/etc/etcd/ssl/${FILE}
    done
    for FILE in pki/ca.pem bootstrap-kubelet.conf; do
      scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
    done
  done
```
Deployment and configuration
On every `node` machine, download the `kubelet.service` related files to manage the kubelet:

```bash
$ export CORE_URL="https://kairen.github.io/files/manual-v1.10/node"
$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${CORE_URL}/kubelet.service" -O /lib/systemd/system/kubelet.service
$ wget "${CORE_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf
```
- If the `cluster dns` or `domain` differs, edit `10-kubelet.conf` accordingly.
Finally, on every `node` machine, create the var directories and start the kubelet service:

```bash
$ mkdir -p /var/lib/kubelet /var/log/kubernetes
$ systemctl enable kubelet.service && systemctl start kubelet.service
```
Verifying the cluster
Once done, verify from any one `master` node with a few simple commands:

```bash
$ kubectl get csr
NAME                                                   AGE       REQUESTOR                 CONDITION
csr-bvz9l                                              11m       system:node:k8s-m1        Approved,Issued
csr-jwr8k                                              11m       system:node:k8s-m2        Approved,Issued
csr-q867w                                              11m       system:node:k8s-m3        Approved,Issued
node-csr-Y-FGvxZWJqI-8RIK_IrpgdsvjGQVGW0E4UJOuaU8ogk   17s       system:bootstrap:dca3e1   Approved,Issued
node-csr-cnX9T1xp1LdxVDc9QW43W0pYkhEigjwgceRshKuI82c   19s       system:bootstrap:dca3e1   Approved,Issued
node-csr-m7SBA9RAGCnsgYWJB-u2HoB2qLSfiQZeAxWFI2WYN7Y   18s       system:bootstrap:dca3e1   Approved,Issued

$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
k8s-m1    NotReady   master    12m       v1.10.0
k8s-m2    NotReady   master    11m       v1.10.0
k8s-m3    NotReady   master    11m       v1.10.0
k8s-n1    NotReady   node      32s       v1.10.0
k8s-n2    NotReady   node      31s       v1.10.0
k8s-n3    NotReady   node      29s       v1.10.0
```
Deploying the Kubernetes core addons
With all the steps above completed, the next task is deploying a few addons; ones such as Kubernetes DNS and Kubernetes Proxy are essential.
Kubernetes Proxy
kube-proxy is the key component that implements Services. It runs on every node, watches the API Server for changes to Service and Endpoint objects, and programs iptables accordingly to forward traffic. Here we create a DaemonSet to run it, along with the certificates it needs.
On `k8s-m1`, download `kube-proxy.yml` to create the Kubernetes Proxy addon:

```bash
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-proxy.yml.conf"
serviceaccount "kube-proxy" created
clusterrolebinding.rbac.authorization.k8s.io "system:kube-proxy" created
configmap "kube-proxy" created
daemonset.apps "kube-proxy" created

$ kubectl -n kube-system get po -o wide -l k8s-app=kube-proxy
NAME               READY     STATUS    RESTARTS   AGE       IP             NODE
kube-proxy-8j5w8   1/1       Running   0          29s       192.16.35.16   k8s-n3
kube-proxy-c4zvt   1/1       Running   0          29s       192.16.35.11   k8s-m1
kube-proxy-clpl6   1/1       Running   0          29s       192.16.35.12   k8s-m2
...
```
Kubernetes DNS
KubeDNS is the important addon that lets Pods inside the cluster talk to each other by connecting to Services via domain names. It is composed of KubeDNS and SkyDNS: KubeDNS watches for Service and Endpoint changes and feeds that information to SkyDNS, which updates the DNS records.
On `k8s-m1`, download `kube-dns.yml` to create the Kubernetes DNS addon:

```bash
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-dns.yml.conf"
serviceaccount "kube-dns" created
service "kube-dns" created
deployment.extensions "kube-dns" created

$ kubectl -n kube-system get po -l k8s-app=kube-dns
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-654684d656-zq5t8   0/3       Pending   0          1m
```
The Pod is stuck in `Pending` because the Kubernetes Pod network is not up yet, so every node is still `NotReady` and the Pod cannot be scheduled onto a node. The next section explains how to set up the Pod network to resolve this.
Installing and configuring the Calico network
Calico is a pure layer-3 data center networking solution (no overlay network required). It integrates with a variety of cloud-native platforms, and on every node it uses the Linux kernel to implement an efficient vRouter that handles forwarding; as data center complexity grows, BGP route reflectors can be used to scale.
On `k8s-m1`, download `calico.yaml` to create the Calico network (remember to change the `interface` NIC name in the YAML to match your host's NIC):
```bash
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/network/calico.yml.conf"
configmap "calico-config" created
daemonset "calico-node" created
deployment "calico-kube-controllers" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-kube-controllers" created
clusterrole "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created

$ kubectl -n kube-system get po -l k8s-app=calico-node -o wide
NAME                READY     STATUS    RESTARTS   AGE       IP             NODE
calico-node-22mbb   2/2       Running   0          1m        192.16.35.12   k8s-m2
calico-node-2qwf5   2/2       Running   0          1m        192.16.35.11   k8s-m1
calico-node-g2sp8   2/2       Running   0          1m        192.16.35.13   k8s-m3
calico-node-hghp4   2/2       Running   0          1m        192.16.35.14   k8s-n1
calico-node-qp6gf   2/2       Running   0          1m        192.16.35.15   k8s-n2
calico-node-zfx4n   2/2       Running   0          1m        192.16.35.16   k8s-n3
```

- If your node IPs or NIC names differ, edit the calico.yml file.
On `k8s-m1`, download the Calico CLI to inspect the Calico nodes:

```bash
$ wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.0/calicoctl -O /usr/local/bin/calicoctl
$ chmod u+x /usr/local/bin/calicoctl
$ cat <<EOF > ~/calico-rc
export ETCD_ENDPOINTS="https://192.16.35.11:2379,https://192.16.35.12:2379,https://192.16.35.13:2379"
export ETCD_CA_CERT_FILE="/etc/etcd/ssl/etcd-ca.pem"
export ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
export ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
EOF

$ . ~/calico-rc
$ calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 192.16.35.12 | node-to-node mesh | up    | 04:42:37 | Established |
| 192.16.35.13 | node-to-node mesh | up    | 04:42:42 | Established |
| 192.16.35.14 | node-to-node mesh | up    | 04:42:37 | Established |
| 192.16.35.15 | node-to-node mesh | up    | 04:42:41 | Established |
| 192.16.35.16 | node-to-node mesh | up    | 04:42:36 | Established |
+--------------+-------------------+-------+----------+-------------+
...
```
Check whether the previously pending Pod is now running:

```bash
$ kubectl -n kube-system get po -l k8s-app=kube-dns
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-654684d656-j8xzx   3/3       Running   0          10m
```
Deploying Kubernetes extra addons
This section covers deploying some commonly used official addons, such as Dashboard and Heapster.
Dashboard
Dashboard is the web UI developed by the Kubernetes community. With it, administrators can manage the cluster through a web-based interface, which is not only more convenient but also visualizes resources and system information.
On `k8s-m1`, simply create the Kubernetes Dashboard with kubectl:

```bash
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl -n kube-system get po,svc -l k8s-app=kubernetes-dashboard
NAME                                    READY     STATUS    RESTARTS   AGE
kubernetes-dashboard-7d5dcdb6d9-j492l   1/1       Running   0          12s

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.111.22.111   <none>        443/TCP   12s
```
We also create a ClusterRoleBinding named `open-api`. This is only for testing convenience and should not normally be enabled, otherwise all APIs become directly accessible anonymously:

```bash
$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: open-api
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:anonymous
EOF
```
- Note: administrators can grant API access to specific users instead, but for convenience we bind directly to the cluster-admin ClusterRole here.
Once done, you can access the Dashboard in a browser at https://192.16.35.10:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.
Since version 1.7, the Dashboard no longer ships with full privileges, so we create a service account and bind it to the cluster-admin role:
```bash
$ kubectl -n kube-system create sa dashboard
$ kubectl create clusterrolebinding dashboard --clusterrole cluster-admin --serviceaccount=kube-system:dashboard
$ kubectl -n kube-system describe secrets | sed -rn '/\sdashboard-token-/,/^token/{/^token/s#\S+\s+##p}'
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tdzVocmgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYWJmMTFjYzMtZjRlYi0xMWU3LTgzYWUtMDgwMDI3NjdkOWI5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZCJ9.Xuyq34ci7Mk8bI97o4IldDyKySOOqRXRsxVWIJkPNiVUxKT4wpQZtikNJe2mfUBBD-JvoXTzwqyeSSTsAy2CiKQhekW8QgPLYelkBPBibySjBhJpiCD38J1u7yru4P0Pww2ZQJDjIxY4vqT46ywBklReGVqY3ogtUQg-eXueBmz-o7lJYMjw8L14692OJuhBjzTRSaKW8U2MPluBVnD7M2SOekDff7KpSxgOwXHsLVQoMrVNbspUCvtIiEI1EiXkyCNRGwfnd2my3uzUABIHFhm0_RZSmGwExPbxflr8Fc6bxmuz-_jSdOtUidYkFIzvEWw2vRovPgs3MXTv59RwUw
```
- Copy the `token` and paste it into the Kubernetes Dashboard login. Note that in general you should grant each user only the specific access they need.
Heapster
Heapster is the container cluster monitoring and performance analysis tool maintained by the Kubernetes community. Heapster gets the list of nodes from the Kubernetes apiserver, collects metrics from the kubelet on each node, and stores everything in its InfluxDB backend; Grafana then reads from InfluxDB to visualize the data.
On `k8s-m1`, simply create the Kubernetes monitoring stack with kubectl:

```bash
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-monitor.yml.conf"
$ kubectl -n kube-system get po,svc
NAME                                  READY     STATUS    RESTARTS   AGE
...
po/heapster-74fb5c8cdc-62xzc          4/4       Running   0          7m
po/influxdb-grafana-55bd7df44-nw4nc   2/2       Running   0          7m

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
...
svc/heapster              ClusterIP   10.100.242.225   <none>        80/TCP              7m
svc/monitoring-grafana    ClusterIP   10.101.106.180   <none>        80/TCP              7m
svc/monitoring-influxdb   ClusterIP   10.109.245.142   <none>        8083/TCP,8086/TCP   7m
...
```
Once done, you can access the Grafana dashboard in a browser at https://192.16.35.10:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/.
Ingress
Ingress exposes in-cluster services through a load balancer such as Nginx or HAProxy. An Ingress resource maps domain names to internal Kubernetes Services, which avoids using up too many NodePorts.
On `k8s-m1`, simply create the Ingress controller with kubectl:

```bash
$ kubectl create ns ingress-nginx
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/ingress-controller.yml.conf"
$ kubectl -n ingress-nginx get po
NAME                                       READY     STATUS    RESTARTS   AGE
default-http-backend-5c6d95c48-rzxfb       1/1       Running   0          7m
nginx-ingress-controller-699cdf846-982n4   1/1       Running   0          7m
```
- You could also choose the Traefik Ingress controller instead.
Testing the Ingress
First create an Nginx HTTP server Deployment and Service:

```bash
$ kubectl run nginx-dp --image nginx --port 80
$ kubectl expose deploy nginx-dp --port 80
$ kubectl get po,svc
$ cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-nginx-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.nginx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-dp
          servicePort: 80
EOF
```
Test it with curl:

```bash
$ curl 192.16.35.10 -H 'Host: test.nginx.com'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

# check that other domain names return 404
$ curl 192.16.35.10 -H 'Host: test.nginx.com1'
default backend - 404
```
Helm Tiller Server
Helm is the management tool for Kubernetes Charts, which are pre-configured packages of Kubernetes resources. The Tiller Server receives instructions from the Helm client and talks to the kube-apiserver; based on a Chart's contents, it generates and manages the corresponding Kubernetes API objects (collectively called a Release).
First install the Helm CLI on `k8s-m1`:

```bash
$ wget -qO- https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz | tar -zx
$ sudo mv linux-amd64/helm /usr/local/bin/
```
Also install socat on all `node` machines:

```bash
$ sudo apt-get install -y socat
```
Then initialize Helm (this installs the Tiller Server). The tail of the original command block was truncated, so the cluster-admin binding flags below are reconstructed under the standard Helm v2 setup:

```bash
$ kubectl -n kube-system create sa tiller
$ kubectl create clusterrolebinding tiller \
    --clusterrole cluster-admin \
    --serviceaccount=kube-system:tiller
```
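Assuming the standard Helm v2 flow, initialization then continues with:

```bash
# Install Tiller into kube-system using the service account created above
$ helm init --service-account tiller

# Confirm that both the client and the Tiller server respond
$ helm version
```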
Testing Helm
Deploy a simple Jenkins release as a functional test:

```bash
$ helm install --name demo --set Persistence.Enabled=false stable/jenkins
$ kubectl get po,svc -l app=demo-jenkins
NAME                           READY     STATUS    RESTARTS   AGE
demo-jenkins-7bf4bfcff-q74nt   1/1       Running   0          2m

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
demo-jenkins         LoadBalancer   10.103.15.129    <pending>     8080:31161/TCP   2m
demo-jenkins-agent   ClusterIP      10.103.160.126   <none>        50000/TCP        2m
```
Once done, you can access the Jenkins web UI at http://192.16.35.10:31161.
When you are finished testing, delete the release:

```bash
$ helm ls
NAME    REVISION        UPDATED                         STATUS          CHART           NAMESPACE
demo    1               Tue Apr 10 07:29:51 2018        DEPLOYED        jenkins-0.14.4  default

$ helm delete demo --purge
release "demo" deleted
```
More Helm apps can be found on Kubeapps Hub.
Testing the cluster
SSH into the `k8s-m1` node and shut it down:
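The shutdown command itself was lost; something along these lines (running as root, as elsewhere in this guide) does it:

```bash
# Power off k8s-m1 to simulate losing one master
$ poweroff
```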
Then go to the `k8s-m2` node and use kubectl to check that the cluster still works:
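The verification commands were cut off at the end of the section; checks along these lines (a sketch) would show whether the control plane survives losing one master:

```bash
# Run on k8s-m2: the VIP should fail over and the API should keep answering
$ kubectl get nodes
$ kubectl -n kube-system get po
$ kubectl get cs
```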