[TOC]
This cluster contains three nodes. After startup, a leader node is chosen through a competitive election, and the remaining nodes stay in a blocked (standby) state. When the leader node becomes unavailable, the blocked nodes elect a new leader again, which keeps the service highly available.
To secure communication, an x509 certificate and private key are created here. kube-controller-manager uses them as the client certificate when talking to the apiserver over its secure (https) port, and as the server certificate when serving its own https /metrics port.
Create the certificate signing request
cd /opt/k8s/work
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.0.20.11",
    "10.0.20.12",
    "10.0.20.13",
    "node01.k8s.com",
    "node02.k8s.com",
    "node03.k8s.com"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*pem
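Optionally, the generated certificate can be inspected to confirm that the subject matches what the apiserver's RBAC expects (CN and O both set to system:kube-controller-manager). A quick check with openssl, which is assumed to be installed on the node:
# Print the subject and validity period of the freshly generated certificate.
openssl x509 -noout -subject -in /opt/k8s/work/kube-controller-manager.pem
openssl x509 -noout -dates -in /opt/k8s/work/kube-controller-manager.pem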
>> ${node_ip}" scp kube-controller-manager*.pem">
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager*.pem root@${node_ip}:/etc/kubernetes/cert/
done
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
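Before distributing it, the generated kubeconfig can be sanity-checked; kubectl config view prints the cluster, user and context entries (embedded certificate data is shown as redacted):
cd /opt/k8s/work
kubectl config view --kubeconfig=kube-controller-manager.kubeconfig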
>> ${node_ip}" scp kube-controller-manager.kubeconfig">
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager.kubeconfig root@${node_ip}:/etc/kubernetes/
done
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials \\
  --concurrent-service-syncs=2 \\
  --bind-address=0.0.0.0 \\
  #--secure-port=10252 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  #--port=0 \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=876000h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Parameter explanations
--port=0: closes the insecure http port; the --address parameter is then ignored, while --bind-address takes effect;
--secure-port=10252, --bind-address=0.0.0.0: listen for https /metrics requests on port 10252 on all network interfaces;
--kubeconfig: path to the kubeconfig file that kube-controller-manager uses to connect to and authenticate against kube-apiserver;
--authentication-kubeconfig and --authorization-kubeconfig: kube-controller-manager uses them to connect to the apiserver in order to authenticate and authorize client requests. kube-controller-manager no longer uses --tls-ca-file to verify the client certificates of https /metrics requests. If these two kubeconfig parameters are not configured, client requests to kube-controller-manager's https port are rejected (with an insufficient-permissions message);
--cluster-signing-*-file: used to sign the certificates created during TLS Bootstrap;
--experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates;
--root-ca-file: the CA certificate placed into each container's ServiceAccount, used to verify kube-apiserver's certificate;
--service-cluster-ip-range: the Service Cluster IP range; it must match the parameter of the same name in kube-apiserver;
--leader-elect=true: clustered operation with leader election enabled; the node elected as leader does the work while the other nodes stay in a blocked (standby) state;
--controllers=*,bootstrapsigner,tokencleaner: list of controllers to enable; tokencleaner automatically cleans up expired Bootstrap tokens;
--horizontal-pod-autoscaler-*: custom-metrics-related parameters, supporting autoscaling/v2alpha1;
--tls-cert-file, --tls-private-key-file: server certificate and key used when serving metrics over https;
--use-service-account-credentials=true: each controller inside kube-controller-manager uses a ServiceAccount to access kube-apiserver;

Render a unit file for each master node from the template:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
  sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${MASTER_IPS[i]}/" kube-controller-manager.service.template > kube-controller-manager-${MASTER_IPS[i]}.service
done
ls kube-controller-manager*.service
Distribute to all master nodes
>> ${node_ip}" scp kube-controller-manager-${node_ip}.service">
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-controller-manager.service
done
>> ${node_ip}" ssh root@${node_ip} "mkdir">
Start the kube-controller-manager service:

source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done
Check the running status
>> ${node_ip}" ssh root@${node_ip} "systemctl status">
source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-controller-manager|grep Active"
done
>> ${node_ip}" ssh root@${node_ip} "netstat">
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "netstat -lnpt | grep kube-controlle"
done
Output:
[root@node01 work]# for node_ip in ${MASTER_IPS[@]}
> do
>   echo ">>> ${node_ip}"
>   ssh root@${node_ip} "netstat -lnpt | grep kube-controlle"
> done
>>> 10.0.20.11
tcp6       0      0 :::10252      :::*      LISTEN      6127/kube-controlle
tcp6       0      0 :::10257      :::*      LISTEN      6127/kube-controlle
>>> 10.0.20.12
tcp6       0      0 :::10252      :::*      LISTEN      2914/kube-controlle
tcp6       0      0 :::10257      :::*      LISTEN      2914/kube-controlle
>>> 10.0.20.13
tcp6       0      0 :::10252      :::*      LISTEN      2952/kube-controlle
tcp6       0      0 :::10257      :::*      LISTEN      2952/kube-controlle
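Because --secure-port is left commented out in the unit file, the https port is the default 10257 (shown in the netstat output above), and requests to it must present a client certificate that RBAC authorizes, as explained for --authentication-kubeconfig and --authorization-kubeconfig. A minimal sketch of such a check from a master node, assuming the admin certificate created earlier in this series still sits under /opt/k8s/work:
# Query the https /metrics endpoint with a client certificate that has
# sufficient RBAC permissions (the admin cert is assumed here).
curl -s --cacert /opt/k8s/work/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://10.0.20.11:10257/metrics | head -n 5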
The ClusterRole system:kube-controller-manager has very limited permissions: it can only create resources such as secrets and serviceaccounts. The permissions of the individual controllers are split out into the ClusterRoles system:controller:xxx.
[root@node01 work]# kubectl describe clusterrole system:kube-controller-manager
Name:         system:kube-controller-manager
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                   Non-Resource URLs  Resource Names  Verbs
  ---------                                   -----------------  --------------  -----
  secrets                                     []                 []              [create delete get update]
  endpoints                                   []                 []              [create get update]
  serviceaccounts                             []                 []              [create get update]
  events                                      []                 []              [create patch update]
  serviceaccounts/token                       []                 []              [create]
  tokenreviews.authentication.k8s.io          []                 []              [create]
  subjectaccessreviews.authorization.k8s.io   []                 []              [create]
  configmaps                                  []                 []              [get]
  namespaces                                  []                 []              [get]
  *.*                                         []                 []              [list watch]
The --use-service-account-credentials=true parameter needs to be added to kube-controller-manager's startup parameters, so that the main controller creates a ServiceAccount XXX-controller for each controller. The built-in ClusterRoleBinding system:controller:XXX then grants each XXX-controller ServiceAccount the permissions of the corresponding ClusterRole system:controller:XXX.
[root@node01 work]# kubectl get clusterrole|grep controller
system:controller:attachdetach-controller                              122m
system:controller:certificate-controller                               122m
system:controller:clusterrole-aggregation-controller                   122m
system:controller:cronjob-controller                                   122m
system:controller:daemon-set-controller                                122m
system:controller:deployment-controller                                122m
system:controller:disruption-controller                                122m
system:controller:endpoint-controller                                  122m
system:controller:expand-controller                                    122m
system:controller:generic-garbage-collector                            122m
system:controller:horizontal-pod-autoscaler                            122m
system:controller:job-controller                                       122m
system:controller:namespace-controller                                 122m
system:controller:node-controller                                      122m
system:controller:persistent-volume-binder                             122m
system:controller:pod-garbage-collector                                122m
system:controller:pv-protection-controller                             122m
system:controller:pvc-protection-controller                            122m
system:controller:replicaset-controller                                122m
system:controller:replication-controller                               122m
system:controller:resourcequota-controller                             122m
system:controller:route-controller                                     122m
system:controller:service-account-controller                           122m
system:controller:service-controller                                   122m
system:controller:statefulset-controller                               122m
system:controller:ttl-controller                                       122m
system:kube-controller-manager                                         122m
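The ServiceAccounts those ClusterRoles are bound to live in the kube-system namespace and are created automatically once --use-service-account-credentials is enabled. A quick way to confirm they exist:
# Each controller should have its own XXX-controller ServiceAccount in kube-system.
kubectl get serviceaccounts -n kube-system | grep -- -controller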
Take the deployment controller as an example:
[root@node01 work]# kubectl describe clusterrole system:controller:deployment-controller
Name:         system:controller:deployment-controller
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                            Non-Resource URLs  Resource Names  Verbs
  ---------                            -----------------  --------------  -----
  replicasets.apps                     []                 []              [create delete get list patch update watch]
  replicasets.extensions               []                 []              [create delete get list patch update watch]
  events                               []                 []              [create patch update]
  pods                                 []                 []              [get list update watch]
  deployments.apps                     []                 []              [get list update watch]
  deployments.extensions               []                 []              [get list update watch]
  deployments.apps/finalizers          []                 []              [update]
  deployments.apps/status              []                 []              [update]
  deployments.extensions/finalizers    []                 []              [update]
  deployments.extensions/status        []                 []              [update]
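The matching built-in ClusterRoleBinding shows which ServiceAccount this role is granted to; it can be inspected the same way:
# The binding should reference ServiceAccount deployment-controller in kube-system.
kubectl describe clusterrolebinding system:controller:deployment-controller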
Finally, check the component status:

[root@node01 work]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Healthy     ok
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
etcd-1               Healthy     {"health":"true"}
Here the controller-manager status is already ok; the scheduler result is still the same as what was seen earlier when testing access to the apiserver status, because kube-scheduler has not been deployed yet.
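The Healthy/ok result for controller-manager comes from its healthz endpoint. Since --port=0 is left commented out in the unit file above, the insecure port 10252 is still open, so the same check can be run by hand on a master node (a simple sketch):
# Should print "ok", matching the controller-manager line in kubectl get cs.
curl http://127.0.0.1:10252/healthz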