Deploying the kubelet component

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and handles interactive commands such as exec, run and logs.

At startup, the kubelet automatically registers its node information with kube-apiserver; its built-in cAdvisor collects and reports the node's resource usage.

For security, this document only opens the secure port that accepts HTTPS requests; requests (e.g. from kube-apiserver or heapster) are authenticated and authorized, and unauthorized access is rejected.

 

Create the kubelet bootstrap kubeconfig files

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in k8s-node1 k8s-node2 k8s-node3
   do
    echo ">>> ${node_name}"

    # Create a bootstrap token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)

    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

    # Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done
  • The kubeconfig files embed a token rather than a certificate; the client certificates are issued later by kube-controller-manager.
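To confirm that only a token (and no client certificate) was written into a bootstrap kubeconfig, you can grep one of the generated files; a minimal check, using k8s-node1 as an example:

cd /opt/k8s/work
# expect a token: field and no client-certificate-data entry
grep -E 'token|client-certificate-data' kubelet-bootstrap-k8s-node1.kubeconfig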

List the tokens that kubeadm created for each node:

[root@k8s-master1 work]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
5bmk75.3kmttw7ff5ppxli8   23h       2019-05-16T16:15:31+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node3
d9d9gq.1ty3yoyc9czgvhbo   23h       2019-05-16T16:15:31+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node2
j9ae3c.ngrn78qr8cw8eeo6   23h       2019-05-16T16:15:30+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node1
  • The tokens are valid for 1 day; once expired they can no longer be used and are cleaned up by kube-controller-manager's tokencleaner (if that controller is enabled);
  • after kube-apiserver accepts a kubelet's bootstrap token, it sets the request's user to system:bootstrap:<token id> and the group to system:bootstrappers;

List the Secrets associated with each token:

[root@k8s-master1 work]# kubectl get secrets  -n kube-system|grep bootstrap-token
bootstrap-token-5bmk75                           bootstrap.kubernetes.io/token         7      77s
bootstrap-token-d9d9gq                           bootstrap.kubernetes.io/token         7      77s
bootstrap-token-j9ae3c                           bootstrap.kubernetes.io/token         7      78s

Distribute the bootstrap kubeconfig files to all worker nodes

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in k8s-node1 k8s-node2 k8s-node3
  do
    echo ">>> ${node_name}"
    scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done
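A quick, optional check that the files landed on each node (assuming passwordless root SSH is already configured, as used throughout this document):

for node_name in k8s-node1 k8s-node2 k8s-node3
  do
    echo ">>> ${node_name}"
    ssh root@${node_name} "ls -l /etc/kubernetes/kubelet-bootstrap.kubeconfig"
  done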

 

Create and distribute the kubelet configuration file

Starting with v1.10, some kubelet parameters must be set in a configuration file; kubelet --help warns:

DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag

Create the kubelet configuration template file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat <<EOF | tee kubelet-config.yaml.template
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
podCIDR: "${POD_CIDR}"
maxPods: 220
serializeImagePulls: false
hairpinMode: promiscuous-bridge
cgroupDriver: cgroupfs
runtimeRequestTimeout: "15m"
rotateCertificates: true
serverTLSBootstrap: true
readOnlyPort: 0
port: 10250
address: "##NODE_IP##"
EOF
  • address: the kubelet API listen address; it must not be 127.0.0.1, otherwise kube-apiserver, heapster and other components cannot call the kubelet API;
  • readOnlyPort=0: disables the read-only port (default 10255);
  • authentication.anonymous.enabled: set to false to disallow anonymous access to port 10250;
  • authentication.x509.clientCAFile: the CA certificate that signs client certificates, enabling x509 client-certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
  • requests that pass neither x509 certificate nor webhook authentication (from kube-apiserver or any other client) are rejected with Unauthorized;
  • authorization.mode=Webhook: the kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group is authorized (RBAC) to act on a resource;
  • featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: enable automatic certificate rotation; certificate lifetime is determined by kube-controller-manager's --experimental-cluster-signing-duration flag;
  • the kubelet must be run as root;

Create and distribute the kubelet configuration file to each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.170 192.168.161.171 192.168.161.172
  do 
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
    scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
  done
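Optionally, verify on one node that the per-node value was substituted correctly (a minimal check; the IP is this document's first worker node):

ssh root@192.168.161.170 "grep -E 'address|podCIDR' /etc/kubernetes/kubelet-config.yaml"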

The rendered kubelet-config.yaml file: kubelet-config.yaml

Create and distribute the kubelet systemd unit file

Create the kubelet systemd unit template file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --root-dir=${K8S_DIR}/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --allow-privileged=true \\
  --event-qps=0 \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --registry-qps=0 \\
  --image-pull-progress-deadline=30m \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
  • If --hostname-override is set here, the same option must also be set for kube-proxy, otherwise the Node may not be found;
  • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; the kubelet uses the username and token in it to send the TLS bootstrapping request to kube-apiserver;
  • after K8S approves the kubelet's CSR, the certificate and private key are written into the --cert-dir directory and then referenced from the --kubeconfig file;
  • --pod-infra-container-image: do not use Red Hat's pod-infrastructure:latest image, as it cannot reap zombie processes inside containers;

The rendered unit file: kubelet.service

Render and distribute the kubelet systemd unit file to each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in k8s-node1 k8s-node2 k8s-node3
  do 
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
    scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
  done

Bootstrap Token Auth and granting permissions

When the kubelet starts, it checks whether the file specified by --kubeconfig exists; if it does not, the kubelet uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

On receiving the CSR, kube-apiserver authenticates the token it carries (the token created earlier with kubeadm); if authentication succeeds, the request's user is set to system:bootstrap:<token id> and its group to system:bootstrappers. This process is called Bootstrap Token Auth.

By default this user and group have no permission to create CSRs, so the kubelet fails to start with errors like:

$ sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 06 06:42:36 m7-autocv-gpu01 kubelet[26986]: F0506 06:42:36.314378   26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 m7-autocv-gpu01 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 m7-autocv-gpu01 systemd[1]: kubelet.service: Failed with result 'exit-code'.

The fix is to create a ClusterRoleBinding that binds the group system:bootstrappers to the ClusterRole system:node-bootstrapper:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
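An optional check that the binding is in place:

kubectl describe clusterrolebinding kubelet-bootstrap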

Start the kubelet service

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.170 192.168.161.171 192.168.161.172
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done
  • The working directory must be created in advance;
  • swap must be turned off, otherwise the kubelet fails to start;
$ journalctl -u kubelet |tail
8月 15 12:16:49 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:49.578598    7807 feature_gate.go:230] feature gates: &{map[RotateKubeletClientCertificate:true RotateKubeletServerCertificate:true]}
8月 15 12:16:49 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:49.578698    7807 feature_gate.go:230] feature gates: &{map[RotateKubeletClientCertificate:true RotateKubeletServerCertificate:true]}
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.205871    7807 mount_linux.go:214] Detected OS with systemd
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.205939    7807 server.go:408] Version: v1.11.2
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206013    7807 feature_gate.go:230] feature gates: &{map[RotateKubeletClientCertificate:true RotateKubeletServerCertificate:true]}
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206101    7807 feature_gate.go:230] feature gates: &{map[RotateKubeletServerCertificate:true RotateKubeletClientCertificate:true]}
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206217    7807 plugins.go:97] No cloud provider specified.
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206237    7807 server.go:524] No cloud provider specified: "" from the config file: ""
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.206264    7807 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
8月 15 12:16:50 m7-autocv-gpu01 kubelet[7807]: I0815 12:16:50.208628    7807 bootstrap.go:86] No valid private key and/or certificate found, reusing existing private key or creating a new one

After startup, the kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver. Once the CSR is approved, kube-controller-manager issues a TLS client certificate and private key for the kubelet, which are then referenced from the file given by --kubeconfig.

Note: kube-controller-manager must be started with the --cluster-signing-cert-file and --cluster-signing-key-file flags, otherwise it will not issue certificates and keys for TLS bootstrapping.
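For reference, a minimal sketch of the relevant kube-controller-manager flags, assuming the same /etc/kubernetes/cert layout used elsewhere in this document (the signing duration value is only an example):

  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h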

 

[root@k8s-master1 work]# kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-6jnhf   13s   system:bootstrap:3up3ds   Pending
csr-kdqt9   14s   system:bootstrap:xoaem5   Pending
csr-kmjtf   13s   system:bootstrap:qb0a7h   Pending
[root@k8s-master1 work]# kubectl get nodes
No resources found.
  • The CSRs from the three worker nodes are all in Pending state;
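If you do not want to rely on automatic approval (set up in the next section), the pending client CSRs can also be approved by hand; a minimal sketch using the CSR names from the output above:

kubectl certificate approve csr-6jnhf csr-kdqt9 csr-kmjtf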

Automatically approve CSR requests

Create three ClusterRoleBindings that automatically approve client-certificate CSRs and client/server certificate renewal CSRs respectively:

cd /opt/k8s/work
cat > csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF
  • auto-approve-csrs-for-group: automatically approves a node's first CSR; note that the first CSR is requested with group system:bootstrappers;
  • node-client-cert-renewal: automatically approves the node's subsequent client-certificate renewal CSRs; the issued certificates carry the group system:nodes;
  • node-server-cert-renewal: automatically approves the node's subsequent server-certificate renewal CSRs; the issued certificates carry the group system:nodes;

Apply the configuration:

kubectl apply -f csr-crb.yaml
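An optional check that the bindings and the custom ClusterRole were created:

kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
kubectl get clusterrole approve-node-server-renewal-csr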

Check the kubelet status

After a while (1-10 minutes), the CSRs from all three nodes have been approved automatically:

[root@k8s-master1 work]# kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-6jnhf   26m   system:bootstrap:3up3ds   Approved,Issued
csr-85ncq   19m   system:node:k8s-node2     Approved,Issued
csr-jf7m6   15m   system:node:k8s-node1     Approved,Issued
csr-kdqt9   26m   system:bootstrap:xoaem5   Approved,Issued
csr-kmjtf   26m   system:bootstrap:qb0a7h   Approved,Issued
csr-l4n7g   19m   system:node:k8s-node3     Approved,Issued
csr-rg7sg   19m   system:node:k8s-node1     Approved,Issued
csr-rmq8r   15m   system:node:k8s-node2     Approved,Issued
csr-rsg8j   15m   system:node:k8s-node3     Approved,Issued

 

All nodes are Ready:

[root@k8s-master1 work]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    <none>   19m   v1.14.1
k8s-node2   Ready    <none>   19m   v1.14.1
k8s-node3   Ready    <none>   19m   v1.14.1

kube-controller-manager has generated the kubeconfig file and public/private key pair for each node:

[root@k8s-node2 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2312 5月  15 17:16 /etc/kubernetes/kubelet.kubeconfig
  • No kubelet server certificate was generated automatically; see the sketch below for approving it by hand.
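If the serving-certificate CSRs (requestor system:node:<node name>) remain Pending, they can be approved manually; a minimal sketch (the CSR name is a placeholder):

kubectl get csr | grep Pending
kubectl certificate approve <csr-name>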

APIs exposed by the kubelet

After startup, the kubelet listens on several ports to serve requests from kube-apiserver and other components:

[root@k8s-node2 ~]# sudo netstat -lnpt|grep kubelet
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      2071/kubelet        
tcp        0      0 192.168.161.171:10250   0.0.0.0:*               LISTEN      2071/kubelet        
tcp        0      0 127.0.0.1:39792         0.0.0.0:*               LISTEN      2071/kubelet        
  • 10248: the healthz HTTP endpoint;
  • 10250: the HTTPS API endpoint; note that the read-only port 10255 is not opened;
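A quick liveness check against the healthz endpoint, run on the node itself since it only listens on 127.0.0.1; it should simply return ok:

curl http://127.0.0.1:10248/healthz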

For example, when kubectl exec -it nginx-ds-5rmws -- sh is executed, kube-apiserver sends the kubelet a request like:

POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1

The kubelet serves HTTPS requests on port 10250 for endpoints such as:

  • /pods、/runningpods
  • /metrics、/metrics/cadvisor、/metrics/probes
  • /spec
  • /stats、/stats/container
  • /logs
  • /run/, /exec/, /attach/, /portForward/, /containerLogs/ and other management endpoints;

For details see: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L434:3

Because anonymous authentication is disabled and webhook authorization is enabled, every request to the HTTPS API on port 10250 must be authenticated and authorized.

The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs (the User in the kubernetes certificate used by kube-apiserver is bound to this role):

[root@k8s-master1 work]# kubectl describe clusterrole system:kubelet-api-admin
Name:         system:kubelet-api-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/log      []                 []              [*]
  nodes/metrics  []                 []              [*]
  nodes/proxy    []                 []              [*]
  nodes/spec     []                 []              [*]
  nodes/stats    []                 []              [*]
  nodes          []                 []              [get list watch proxy]

kubelet API authentication and authorization

The kubelet is configured with the following authentication parameters:

  • authentication.anonymous.enabled: set to false to disallow anonymous access to port 10250;
  • authentication.x509.clientCAFile: the CA certificate that signs client certificates, enabling x509 client-certificate authentication over HTTPS;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;

and with the following authorization parameter:

  • authorization.mode=Webhook: enables RBAC authorization via webhook;

On receiving a request, the kubelet validates the client certificate against clientCAFile, or checks with kube-apiserver whether the bearer token is valid. If neither check passes, the request is rejected with Unauthorized:

[root@k8s-master1 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.161.170:10250/metrics
Unauthorized
[root@k8s-master1 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.161.170:10250/metrics
Unauthorized

After authentication succeeds, the kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has RBAC permission to act on the requested resource.
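To illustrate what such an authorization check looks like, here is a hand-crafted SubjectAccessReview roughly matching the Forbidden example in the next section; this is only a sketch for inspection, the kubelet constructs these requests itself:

cat <<EOF | kubectl create -f - -o yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  # the same user/resource combination that is rejected below
  user: system:kube-controller-manager
  resourceAttributes:
    verb: get
    resource: nodes
    subresource: metrics
EOF
# status.allowed in the returned object shows whether the request would be authorized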

 

Certificate authentication and authorization

# a certificate that lacks the required permissions;
sudo curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.161.170:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
# the admin certificate with full privileges, created when deploying the kubectl command-line tool;
[root@k8s-master1 work]# sudo curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.161.170:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
  • the values of --cacert, --cert and --key must be file paths; with a relative path such as ./admin.pem, the leading ./ must not be omitted, otherwise the request returns 401 Unauthorized

Bearer token authentication and authorization

Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it is allowed to call the kubelet API:

[root@k8s-master1 work]# kubectl create sa kubelet-api-test
serviceaccount/kubelet-api-test created
[root@k8s-master1 work]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
clusterrolebinding.rbac.authorization.k8s.io/kubelet-api-test created
[root@k8s-master1 work]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8s-master1 work]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8s-master1 work]# echo ${TOKEN}
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmVsZXQtYXBpLXRlc3QtdG9rZW4tMmZ0ZnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZWxldC1hcGktdGVzdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjIzYzg4NTI5LTc2ZjctMTFlOS1iMDc3LTAwMGMyOWFhMGE2OCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVsZXQtYXBpLXRlc3QifQ.qCs7cgQYsvkxr2QsOklzFlsf14_0vpEfKX3NSf5sOuwHBixMq4HGBoLRCLRnGiJ35W7zKIGVMwpIXFDkOc7TDFFjUe8rBY6uCcZKBmchIyaqsu-vi4T-U2kRd3ST011eQdYJZHcbOHYM1dYt_5pOfDlA-wS-DobfpFpTt3BkOtrdoO1taRLr-u5I3Xun4wzKS5a9GN8S-Ap0Xn9UEGuSMqB6CplomPmi-P8WOTnz3D-D1WLtJRKkJt8RvIFDwRDGPf6ytStDhaQYtOQ8nbdfuzOLuiIApzwvrK-bCInM70dfu-iHGrLZQW2zNwPU1Y9-f6MFw4hoQ13AeZRFS5FI-Q
[root@k8s-master1 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://172.27.128.149:10250/metrics|head
^C
[root@k8s-master1 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.161.170:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

 

cadvisor and metrics

cadvisor collects resource usage (CPU, memory, disk, network) of all containers on its node and exposes it on its own HTTP web page (port 4194, if enabled) and, in Prometheus metrics format, on port 10250.
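For example, the cadvisor metrics can be scraped through the kubelet's authenticated HTTPS port with the admin certificate used earlier (a sketch; certificate paths and the node IP are the ones used in this document):

sudo curl -s --cacert /etc/kubernetes/cert/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://192.168.161.170:10250/metrics/cadvisor | head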
