kubelet runs on every worker node. It receives requests from kube-apiserver, manages the Pod containers, and executes interactive commands such as exec, run, and logs.
On startup, kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage.
For security, this deployment disables kubelet's insecure HTTP port and authenticates and authorizes every request, rejecting unauthorized access (for example from apiserver or heapster).
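Once kubelet is running, the effect of this setting can be spot-checked (a minimal sketch: the node IP 192.168.0.114 matches this deployment, while admin.pem/admin-key.pem are assumed to be a CA-signed client certificate with sufficient privileges; -k skips serving-certificate verification because the kubelet may still be using a self-signed serving cert):

# Anonymous request: expect "Unauthorized" because anonymous auth is disabled
curl -sk https://192.168.0.114:10250/metrics | head -n 1

# Request with a trusted client certificate: expect metrics output
curl -sk --cert /opt/k8s/work/admin.pem \
     --key /opt/k8s/work/admin-key.pem \
     https://192.168.0.114:10250/metrics | head -n 5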
Create the kubelet bootstrap kubeconfig file
cd /opt/k8s/work
export KUBE_APISERVER=https://192.168.0.107:6443
export node_name=slave
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:${node_name} \
  --kubeconfig ~/.kube/config)

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
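To confirm the token was created with the expected group and TTL, it can be listed afterwards (a quick check, assuming the same ~/.kube/config used above):

# The new token should show the system:bootstrappers:slave group
# and a limited TTL (24h by default)
kubeadm token list --kubeconfig ~/.kube/config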
Distribute the bootstrap kubeconfig file to all worker nodes
cd /opt/k8s/work
export node_ip=192.168.0.114
scp kubelet-bootstrap.kubeconfig root@${node_ip}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
Create and distribute the kubelet configuration file
Starting with v1.10, some kubelet parameters must be set in a configuration file rather than on the command line; kubelet --help indicates which ones.
cd /opt/k8s/work
export CLUSTER_CIDR="172.30.0.0/16"
export NODE_IP=192.168.0.114
export CLUSTER_DNS_SVC_IP="10.254.0.2"
cat > kubelet-config.yaml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_IP}
staticPodPath: "/etc/kubernetes/manifests"
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: ${NODE_IP}
clusterDomain: "cluster.local"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /run/systemd/resolve/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
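One point worth verifying before starting kubelet: cgroupDriver above is set to cgroupfs, and it must match the cgroup driver Docker uses, otherwise kubelet will fail to start. A quick check on the node (assuming Docker is the container runtime, as in this deployment):

export node_ip=192.168.0.114
# Should print "cgroupfs" to match cgroupDriver above; if it prints "systemd",
# either change cgroupDriver in kubelet-config.yaml or reconfigure Docker
ssh root@${node_ip} "docker info 2>/dev/null | grep -i 'cgroup driver'"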
Create and distribute the kubelet configuration file to each node
cd /opt/k8s/work
export node_ip=192.168.0.114
scp kubelet-config.yaml root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
Create and distribute the kubelet systemd service file
cd /opt/k8s/work
export K8S_DIR=/data/k8s/k8s
export NODE_NAME=slave
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=${NODE_NAME} \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
Install and distribute the kubelet service file
cd /opt/k8s/work
export node_ip=192.168.0.114
scp kubelet.service root@${node_ip}:/etc/systemd/system/kubelet.service
Grant kube-apiserver permission to access the kubelet API
When executing kubectl exec, run, logs, and similar commands, the apiserver forwards the request to the kubelet's HTTPS port. The RBAC rule below grants the user behind the apiserver's client certificate (kubernetes.pem, CN: kubernetes-api) permission to access the kubelet API; see kubelet-auth for details:
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-api
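A quick way to sanity-check the binding is impersonation (a sketch; the built-in system:kubelet-api-admin clusterrole covers the nodes/proxy, nodes/stats, nodes/log and nodes/metrics subresources):

# Impersonate the apiserver's client-certificate user; both should print "yes"
kubectl auth can-i get nodes/proxy --as kubernetes-api
kubectl auth can-i get nodes/stats --as kubernetes-api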
Bootstrap Token Auth and granting permissions
On startup, kubelet checks whether the file referenced by --kubeconfig exists; if it does not, kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
When kube-apiserver receives the CSR, it authenticates the embedded token; if authentication succeeds, it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers.
By default, this user and group have no permission to create CSRs, so a clusterrolebinding is needed that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
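The binding can be verified with impersonation as well (a sketch; 5t989l stands for the token id created earlier, which also shows up later in the CSR requestor names, so substitute your own):

# Expect "yes" once the clusterrolebinding exists
kubectl auth can-i create certificatesigningrequests \
  --as system:bootstrap:5t989l --as-group system:bootstrappers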
Start the kubelet service
export K8S_DIR=/data/k8s/k8s
export node_ip=192.168.0.114
ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
After startup, kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a CSR to kube-apiserver. Once the CSR is approved, kube-controller-manager creates a TLS client certificate and private key for the kubelet and the kubeconfig file referenced by --kubeconfig is written.
Note: kube-controller-manager only creates certificates and private keys for TLS Bootstrap if it is configured with the --cluster-signing-cert-file and --cluster-signing-key-file flags.
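Once the CSR has been approved (see below), the generated artifacts can be checked on the worker node (a minimal sketch; the paths follow the --cert-dir and --kubeconfig settings used in this deployment):

export node_ip=192.168.0.114
# The bootstrap process should have produced a client certificate/key pair
# and the kubelet.kubeconfig that points at it
ssh root@${node_ip} "ls -l /etc/kubernetes/cert/kubelet-client-* /etc/kubernetes/kubelet.kubeconfig"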
Problems encountered
After starting kubelet, kubectl get csr returned nothing, and the kubelet log showed the following error
journalctl -u kubelet -a | grep -A 2 'certificate_manager.go'

Failed while requesting a signed certificate from the master: cannot create certificate signing request: Unauthorized
Checking the kube-apiserver service log:
root@master:/opt/k8s/work# journalctl -eu kube-apiserver

Unable to authenticate the request due to an error: invalid bearer token
Cause: the following flag had been left out of the kube-apiserver service startup file
--enable-bootstrap-token-auth \\
Appending it and restarting kube-apiserver fixed the problem.
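A quick way to confirm the flag is actually present in the running unit (a sketch; it assumes kube-apiserver is installed as a systemd service on the master):

# Expect the flag to appear in the rendered ExecStart line
systemctl cat kube-apiserver | grep enable-bootstrap-token-auth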
Another issue: after startup, kubelet kept generating CSRs, and new ones kept appearing even after manual approval.
The cause was that the kube-controller-manager service had stopped; restarting it resolved the issue.
Check the kubelet's CSRs
root@master:/opt/k8s/work# kubectl get csr
NAME        AGE    REQUESTOR                 CONDITION
csr-kl5mg   49s    system:bootstrap:5t989l   Pending
csr-mrmkf   2m1s   system:bootstrap:5t989l   Pending
csr-ql68g   13s    system:bootstrap:5t989l   Pending
csr-rvl2v   84s    system:bootstrap:5t989l   Pending
Manually approve the CSRs
root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io/csr-kl5mg approved
certificatesigningrequest.certificates.k8s.io/csr-mrmkf approved
certificatesigningrequest.certificates.k8s.io/csr-ql68g approved
certificatesigningrequest.certificates.k8s.io/csr-rvl2v approved
root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io/csr-f4smx approved
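Manual approval of the client CSRs can be avoided by binding the bootstrap and node groups to the built-in auto-approval clusterroles (an optional extra step, shown as a sketch; note that with serverTLSBootstrap: true the kubelet's serving-certificate CSRs still have to be approved by hand or by a custom approver):

# Auto-approve the initial client CSRs created by bootstrap tokens
kubectl create clusterrolebinding auto-approve-csrs-for-group \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --group=system:bootstrappers

# Auto-approve client-certificate renewal CSRs from existing nodes
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes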
Check node information
root@master:/opt/k8s/work# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
slave   Ready    <none>   10m   v1.17.2
Check the kubelet service status
export node_ip=192.168.0.114
root@master:/opt/k8s/work# ssh root@${node_ip} "systemctl status kubelet.service"
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-02-10 22:48:41 CST; 12min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 15529 (kubelet)
    Tasks: 19 (limit: 4541)
   CGroup: /system.slice/kubelet.service
           └─15529 /opt/k8s/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig --cert-dir=/etc/kubernetes/cert --root-dir=/data/k8s/k8s/kubelet --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/kubelet-config.yaml --hostname-override=slave --image-pull-progress-deadline=15m --volume-plugin-dir=/data/k8s/k8s/kubelet/kubelet-plugins/volume/exec/ --logtostderr=true --v=2

2月 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.846285 15529 kubelet_node_status.go:73] Successfully registered node slave
2月 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.930745 15529 certificate_manager.go:402] Rotating certificates
2月 10 22:49:14 slave kubelet[15529]: I0210 22:49:14.966351 15529 kubelet_node_status.go:486] Recording NodeReady event message for node slave
2月 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580410 15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2029-01-21 13:08:18.850930128 +0000 UTC
2月 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580484 15529 certificate_manager.go:281] Waiting 78430h18m49.270459727s for next certificate rotation
2月 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.580981 15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2027-07-14 16:09:26.990162158 +0000 UTC
2月 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.581096 15529 certificate_manager.go:281] Waiting 65065h19m56.409078053s for next certificate rotation
2月 10 22:53:44 slave kubelet[15529]: I0210 22:53:44.911705 15529 kubelet.go:1312] Image garbage collection succeeded
2月 10 22:53:45 slave kubelet[15529]: I0210 22:53:45.053792 15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
2月 10 22:58:45 slave kubelet[15529]: I0210 22:58:45.054225 15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.servic
Create the kube-proxy certificate and private key
Create the certificate signing request file
cd /opt/k8s/work
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "NanJing",
      "L": "NanJing",
      "O": "system:kube-proxy",
      "OU": "system"
    }
  ]
}
EOF
Generate the certificate and private key
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*pem
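If you want to confirm the certificate carries the expected identity before distributing it, the subject can be inspected (a sketch; cfssl-certinfo ships alongside cfssl):

cd /opt/k8s/work
# CN should be system:kube-proxy, which the cluster's built-in
# system:node-proxier clusterrolebinding grants permissions to
cfssl-certinfo -cert kube-proxy.pem | grep -A 8 '"subject"'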
Install the certificate
cd /opt/k8s/work
export node_ip=192.168.0.114
scp kube-proxy*.pem root@${node_ip}:/etc/kubernetes/cert/
Create the kubeconfig file
cd /opt/k8s/work
export KUBE_APISERVER=https://192.168.0.107:6443
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Distribute the kubeconfig
cd /opt/k8s/work
export node_ip=192.168.0.114
scp kube-proxy.kubeconfig root@${node_ip}:/etc/kubernetes/kube-proxy.kubeconfig
Create the kube-proxy configuration file
cd /opt/k8s/work
export CLUSTER_CIDR="172.30.0.0/16"
export NODE_IP=192.168.0.114
export NODE_NAME=slave
cat > kube-proxy-config.yaml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ${NODE_IP}
healthzBindAddress: ${NODE_IP}:10256
metricsBindAddress: ${NODE_IP}:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ${NODE_NAME}
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF
Distribute the kube-proxy configuration file
cd /opt/k8s/work
export node_ip=192.168.0.114
scp kube-proxy-config.yaml root@${node_ip}:/etc/kubernetes/kube-proxy-config.yaml
Create the kube-proxy systemd service file
cd /opt/k8s/work
export K8S_DIR=/data/k8s/k8s
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Distribute the kube-proxy service file:
export node_ip=192.168.0.114
scp kube-proxy.service root@${node_ip}:/etc/systemd/system/
Start the kube-proxy service
export node_ip=192.168.0.114 export K8S_DIR=/data/k8s/k8s ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy" ssh root@${node_ip} "modprobe ip_vs_rr" ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
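modprobe only loads the module for the current boot. To keep IPVS mode working across reboots, the required modules can be loaded automatically at startup (a sketch assuming a systemd distribution with modules-load.d; ip_vs_rr matches the rr scheduler configured above):

export node_ip=192.168.0.114
# Load the IPVS core, the round-robin scheduler and conntrack at every boot
ssh root@${node_ip} "cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
nf_conntrack
EOF"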
Check the startup result
export node_ip=192.168.0.114 ssh root@${node_ip} "systemctl status kube-proxy |grep Active"
Make sure the state is active (running); otherwise check the logs to find the cause.
If there is a problem, inspect the logs with:
journalctl -u kube-proxy
Check the listening ports and IPVS rules
root@slave:~# netstat -lnpt|grep kube-prox
tcp        0      0 192.168.0.114:10256     0.0.0.0:*               LISTEN      23078/kube-proxy
tcp        0      0 192.168.0.114:10249     0.0.0.0:*               LISTEN      23078/kube-proxy
root@slave:~# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.0.107:6443           Masq    1      0          0
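The single IPVS virtual server above is the kubernetes service itself; it can be cross-checked against the API on the master (a minimal check): 10.254.0.1:443 should match the service's ClusterIP and 192.168.0.107:6443 the apiserver endpoint behind it.

# The ClusterIP should match the IPVS virtual server address
kubectl get svc kubernetes
# The endpoint should match the real server behind it
kubectl get endpoints kubernetes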