1 Deploying kubelet
The kubelet runs on every worker node. It receives requests from kube-apiserver, manages the containers of Pods, and executes interactive commands such as exec, run, and logs.
On startup, the kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage.
For security, this deployment disables the kubelet's insecure HTTP port and authenticates and authorizes every request, rejecting unauthorized access (such as requests from apiserver or heapster).
1.1 Installing kubelet
Note: the required binaries were already downloaded on the k8smaster01 node and can be distributed directly to the node machines.
1.2 Distributing kubelet
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_ip in ${ALL_IPS[@]}
do
  echo ">>> ${all_ip}"
  scp kubernetes/server/bin/kubelet root@${all_ip}:/opt/k8s/bin/
  ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
done
1.3 Creating the bootstrap kubeconfig
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_name in ${ALL_NAMES[@]}
do
  echo ">>> ${all_name}"

  # create a token
  export BOOTSTRAP_TOKEN=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${all_name} \
    --kubeconfig ~/.kube/config)

  # set cluster parameters
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

  # set client authentication parameters
  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

  # set context parameters
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

  # set the default context
  kubectl config use-context default --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
done
Explanation:
- Only a token is written into the kubeconfig; after bootstrapping finishes, kube-controller-manager issues the kubelet's client and server certificates.
- A token is valid for one day; once expired it can no longer be used to bootstrap a kubelet, and it is cleaned up by kube-controller-manager's tokencleaner.
- When kube-apiserver receives a kubelet's bootstrap token, it sets the requesting user to system:bootstrap:<Token ID> and the group to system:bootstrappers; a ClusterRoleBinding will be created for this group later.
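kubeadm token create above does more than mint a random string (it also stores a matching Secret in kube-system), but the token format itself is simple: <id>.<secret>, a 6-character id and a 16-character secret, both drawn from [a-z0-9]. It can be sketched without a cluster:

```shell
# Generate a string in the bootstrap-token format <id>.<secret> by hand.
TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
BOOTSTRAP_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
echo "${BOOTSTRAP_TOKEN}"
# Bootstrap Token Auth maps such a token to user system:bootstrap:<id>
```

The <id> part is public (it ends up in the user name); only the <secret> part must stay confidential.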
1.4 Viewing tokens
[root@k8smaster01 work]# kubeadm token list --kubeconfig ~/.kube/config    # list the tokens kubeadm created for each node
[root@k8smaster01 work]# kubectl get secrets -n kube-system | grep bootstrap-token    # list the Secret associated with each token
1.5 Distributing the bootstrap kubeconfig
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_name in ${ALL_NAMES[@]}
do
  echo ">>> ${all_name}"
  scp kubelet-bootstrap-${all_name}.kubeconfig root@${all_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done
1.6 Creating the kubelet configuration file
Starting with v1.10, some kubelet parameters must be set in a configuration file, so creating a kubelet configuration file is recommended.
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##ALL_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##ALL_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
- "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
1.7 Distributing the kubelet configuration file
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_ip in ${ALL_IPS[@]}
do
  echo ">>> ${all_ip}"
  sed -e "s/##ALL_IP##/${all_ip}/" kubelet-config.yaml.template > kubelet-config-${all_ip}.yaml.template
  scp kubelet-config-${all_ip}.yaml.template root@${all_ip}:/etc/kubernetes/kubelet-config.yaml
done
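Note that without the /g flag, sed replaces only the first match on each line, which suffices here because the placeholder appears at most once per line. The substitution can be tried stand-alone with a hypothetical node IP:

```shell
# Minimal stand-in for the real template (only the two ##ALL_IP## lines).
cat > /tmp/kubelet-config.yaml.template <<'EOF'
address: "##ALL_IP##"
healthzBindAddress: "##ALL_IP##"
EOF
all_ip=172.24.8.74   # hypothetical node IP
sed -e "s/##ALL_IP##/${all_ip}/" /tmp/kubelet-config.yaml.template \
  > /tmp/kubelet-config-${all_ip}.yaml
cat /tmp/kubelet-config-${all_ip}.yaml
```

Each node thus receives a config file bound to its own IP address.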
1.8 Creating the kubelet systemd unit
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##ALL_NAME## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
Explanation:
- If --hostname-override is set, kube-proxy must be started with the same option, otherwise the Node will not be found;
- --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; the kubelet uses the username and token in this file to send a TLS bootstrapping request to kube-apiserver;
- after Kubernetes approves the kubelet's CSR, the certificate and private key files are created in the --cert-dir directory, and the --kubeconfig file is then written;
- --pod-infra-container-image avoids Red Hat's pod-infrastructure:latest image, which cannot reap zombie processes inside containers.
1.9 Distributing the kubelet systemd unit
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_name in ${ALL_NAMES[@]}
do
  echo ">>> ${all_name}"
  sed -e "s/##ALL_NAME##/${all_name}/" kubelet.service.template > kubelet-${all_name}.service
  scp kubelet-${all_name}.service root@${all_name}:/etc/systemd/system/kubelet.service
done
2 Starting and verifying
2.1 Authorization
On startup, the kubelet checks whether the file given by --kubeconfig exists; if it does not, the kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
When kube-apiserver receives the CSR, it authenticates the embedded token; on success it sets the requesting user to system:bootstrap:<Token ID> and the group to system:bootstrappers. This process is called Bootstrap Token Auth.
By default, this user and group have no permission to create CSRs, so the kubelet fails to start. Fix this by creating a ClusterRoleBinding that binds the group system:bootstrappers to the ClusterRole system:node-bootstrapper:
[root@k8smaster01 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
2.2 Starting kubelet
[root@k8smaster01 ~]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 ~]# for all_name in ${ALL_NAMES[@]}
do
  echo ">>> ${all_name}"
  ssh root@${all_name} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
  ssh root@${all_name} "/usr/sbin/swapoff -a"
  ssh root@${all_name} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
After startup, the kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the kubelet's TLS client certificate and private key, and the file given by --kubeconfig is generated.
Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file flags, otherwise it will not issue certificates and private keys for TLS bootstrapping.
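For reference, the signer is normally pointed at the same CA used throughout this deployment; a sketch of the relevant kube-controller-manager flags (the ca-key.pem path is an assumption — adjust it to wherever the CA private key actually lives):

```
--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem
```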
Tips:
- the working directory must be created before starting the service;
- swap must be disabled, otherwise the kubelet fails to start.
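swapoff -a only disables swap until the next reboot; the usual companion step is to comment out swap entries in /etc/fstab. A sketch on a sample file (hypothetical UUIDs) rather than the real /etc/fstab:

```shell
# Build a sample fstab with one swap entry.
printf '%s\n' \
  'UUID=1111-2222 /    ext4 defaults 0 1' \
  'UUID=3333-4444 swap swap defaults 0 0' > /tmp/fstab.demo
# Comment out every not-yet-commented swap entry.
sed -i -e '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

After the edit, the root filesystem line is untouched and the swap line is commented out, so the node stays swap-free across reboots.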
2.3 Checking the kubelet service
[root@k8smaster01 ~]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 ~]# for all_name in ${ALL_NAMES[@]}
do
  echo ">>> ${all_name}"
  ssh root@${all_name} "systemctl status kubelet"
done
[root@k8snode01 ~]# kubectl get csr
[root@k8snode01 ~]# kubectl get nodes
3 Approving CSR requests
3.1 Automatically approving CSRs
Create three ClusterRoleBindings, used respectively to automatically approve client certificates and to renew client and server certificates:
[root@k8snode01 ~]# cd /opt/k8s/work
[root@k8snode01 work]# cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
[root@k8snode01 work]# kubectl apply -f csr-crb.yaml
Explanation:
- auto-approve-csrs-for-group: automatically approves a node's first CSR; note that at this point the requesting group is system:bootstrappers;
- node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the group in the issued certificates is system:nodes;
- node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the group in the issued certificates is system:nodes.
3.2 Checking kubelet status
[root@k8snode01 ~]# kubectl get csr | grep boot    # after a while (1-10 minutes), the CSRs of all three nodes are automatically approved
[root@k8snode01 ~]# kubectl get nodes    # all nodes are Ready
[root@k8snode01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
[root@k8snode01 ~]# ls -l /etc/kubernetes/cert/ | grep kubelet
3.3 Manually approving server cert CSRs
For security reasons, the CSR-approving controllers do not automatically approve kubelet server certificate signing requests; these must be approved manually.
[root@k8smaster01 ~]# kubectl get csr
[root@k8smaster01 ~]# kubectl certificate approve csr-2kmtj

[root@k8smaster01 ~]# ls -l /etc/kubernetes/cert/kubelet-*
4 The kubelet API
4.1 API endpoints exposed by the kubelet
[root@k8smaster01 ~]# sudo netstat -lnpt | grep kubelet    # check the ports the kubelet listens on
Explanation:
- 10248: the healthz HTTP endpoint;
- 10250: the HTTPS API; requests to this port require authentication and authorization (even for /healthz);
- the read-only port 10255 is not enabled;
- since Kubernetes v1.10 the --cadvisor-port flag (default port 4194) has been removed, so the cAdvisor UI & API are no longer served.
4.2 kubelet API authentication and authorization
The kubelet is configured with the following authentication parameters:
- authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
- authentication.x509.clientCAFile: the CA certificate that signs client certificates, enabling HTTPS client certificate authentication;
- authentication.webhook.enabled=true: enables HTTPS bearer token authentication.
It is also configured with the following authorization parameter:
- authorization.mode=Webhook: enables RBAC authorization.
When the kubelet receives a request, it authenticates the client certificate signature against clientCAFile, or checks whether the bearer token is valid. If neither succeeds, the request is rejected with Unauthorized:
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://172.24.8.71:10250/metrics
Unauthorized
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.24.8.71:10250/metrics
Unauthorized
If authentication succeeds, the kubelet sends a SubjectAccessReview to kube-apiserver to check whether the user and group behind the certificate or token have RBAC permission to operate on the requested resource.
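Illustratively, a metrics GET made with the kube-controller-manager client certificate translates into roughly this review object (field values mirror the Forbidden message shown in 4.3):

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:kube-controller-manager
  resourceAttributes:
    verb: get
    resource: nodes
    subresource: metrics
```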
4.3 Certificate authentication and authorization
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.24.8.71:10250/metrics    # insufficient permission by default
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.24.8.71:10250/metrics | head    # use the admin certificate, which has the highest privileges
Explanation:
The values of --cacert, --cert, and --key must be file paths; for a relative path such as ./admin.pem the ./ must not be omitted, otherwise the request returns 401 Unauthorized.
4.4 Bearer token authentication and authorization
[root@k8smaster01 ~]# kubectl create sa kubelet-api-test
[root@k8smaster01 ~]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
[root@k8smaster01 ~]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8smaster01 ~]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8smaster01 ~]# echo ${TOKEN}
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://172.24.8.71:10250/metrics | head
4.5 cAdvisor and metrics
cAdvisor is embedded in the kubelet binary; it collects resource usage (CPU, memory, disk, network) for all containers on its node.
Visiting https://172.24.8.71:10250/metrics and https://172.24.8.71:10250/metrics/cadvisor in a browser returns the kubelet and cAdvisor metrics respectively.
Note:
- kubelet-config.yaml sets authentication.anonymous.enabled to false, so anonymous access to the HTTPS service on port 10250 is not allowed;
- see https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/A.%E6%B5%8F%E8%A7%88%E5%99%A8%E8%AE%BF%E9%97%AEkube-apiserver%E5%AE%89%E5%85%A8%E7%AB%AF%E5%8F%A3.md to create and import the required certificates into the browser, then access port 10250 as above.
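Both endpoints serve the Prometheus text exposition format, so ordinary text tools are enough for a first look. A stand-alone sketch that lists the distinct metric names in a (sample) cAdvisor response:

```shell
# Sample lines standing in for a real /metrics/cadvisor response;
# both metric names are genuine cAdvisor series.
cat > /tmp/cadvisor-metrics.sample <<'EOF'
# HELP container_cpu_usage_seconds_total Cumulative cpu time consumed
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="POD"} 1.23
container_memory_usage_bytes{container="POD"} 456789
EOF
# Drop comment lines, strip the label set, deduplicate.
grep -v '^#' /tmp/cadvisor-metrics.sample | cut -d'{' -f1 | sort -u
```

Against a live node, the same pipeline can be fed from curl with the admin certificate or a bearer token, as shown in 4.3 and 4.4.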