Kubernetes (7) Binary Installation - Worker Node Setup

Configuring kubelet

kubelet runs on every worker node. It receives requests from kube-apiserver, manages the Pods and containers on the node, and handles interactive commands such as exec, run, and logs.

On startup, kubelet automatically registers node information with kube-apiserver; its built-in cAdvisor collects and monitors the node's resource usage.

For security, this deployment disables kubelet's insecure HTTP port and authenticates and authorizes every request, rejecting unauthorized access (for example, requests from apiserver or heapster).

  1. Create the kubelet bootstrap kubeconfig file

    cd /opt/k8s/work
    
    export KUBE_APISERVER=https://192.168.0.107:6443
    export node_name=slave
    
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)
    
    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap.kubeconfig
    
    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap.kubeconfig
    
    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap.kubeconfig
    
    # Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
    • Only the token is written into this kubeconfig; after bootstrapping finishes, kube-controller-manager issues the client and server certificates for the kubelet
    • After kube-apiserver accepts the kubelet's bootstrap token, it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers; a ClusterRoleBinding for this group is created in a later step
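
    A quick sanity check (not part of the original flow) is to list the bootstrap token just created; the description and extra group set above should appear in the output:

    kubeadm token list --kubeconfig ~/.kube/config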
  2. Distribute the bootstrap kubeconfig file to all worker nodes

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kubelet-bootstrap.kubeconfig root@${node_ip}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  3. Create and distribute the kubelet parameter configuration file

    Starting with v1.10, some kubelet parameters must be set in a configuration file; kubelet --help indicates which ones.

    cd /opt/k8s/work
    
    export CLUSTER_CIDR="172.30.0.0/16"
    export NODE_IP=192.168.0.114
    export CLUSTER_DNS_SVC_IP="10.254.0.2"
    
    
    cat > kubelet-config.yaml <<EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: ${NODE_IP}
    staticPodPath: "/etc/kubernetes/manifests"
    syncFrequency: 1m
    fileCheckFrequency: 20s
    httpCheckFrequency: 20s
    staticPodURL: ""
    port: 10250
    readOnlyPort: 0
    rotateCertificates: true
    serverTLSBootstrap: true
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
      x509:
        clientCAFile: "/etc/kubernetes/cert/ca.pem"
    authorization:
      mode: Webhook
    registryPullQPS: 0
    registryBurst: 20
    eventRecordQPS: 0
    eventBurst: 20
    enableDebuggingHandlers: true
    enableContentionProfiling: true
    healthzPort: 10248
    healthzBindAddress: ${NODE_IP}
    clusterDomain: "cluster.local"
    clusterDNS:
      - "${CLUSTER_DNS_SVC_IP}"
    nodeStatusUpdateFrequency: 10s
    nodeStatusReportFrequency: 1m
    imageMinimumGCAge: 2m
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    volumeStatsAggPeriod: 1m
    kubeletCgroups: ""
    systemCgroups: ""
    cgroupRoot: ""
    cgroupsPerQOS: true
    cgroupDriver: cgroupfs
    runtimeRequestTimeout: 10m
    hairpinMode: promiscuous-bridge
    maxPods: 220
    podCIDR: "${CLUSTER_CIDR}"
    podPidsLimit: -1
    resolvConf: /run/systemd/resolve/resolv.conf
    maxOpenFiles: 1000000
    kubeAPIQPS: 1000
    kubeAPIBurst: 2000
    serializeImagePulls: false
    evictionHard:
      memory.available:  "100Mi"
      nodefs.available:  "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
    evictionSoft: {}
    enableControllerAttachDetach: true
    failSwapOn: true
    containerLogMaxSize: 20Mi
    containerLogMaxFiles: 10
    systemReserved: {}
    kubeReserved: {}
    systemReservedCgroup: ""
    kubeReservedCgroup: ""
    enforceNodeAllocatable: ["pods"]
    EOF
    • address: the address the kubelet secure port (https, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, and other callers cannot reach the kubelet API;
    • readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset;
    • authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
    • authentication.x509.clientCAFile: the CA certificate that signs client certificates, enabling x509 client-certificate authentication;
    • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
      requests that pass neither x509 certificate nor webhook authentication (whether from kube-apiserver or any other client) are rejected with Unauthorized; see the check after this list;
    • authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group is allowed to operate on a resource (RBAC);
    • featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; certificate lifetime is controlled by kube-controller-manager's --experimental-cluster-signing-duration flag
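
    Once kubelet is running (step 9 below), these settings can be spot-checked from the master. This is a hedged sanity check, assuming the worker IP used throughout (192.168.0.114):

    # anonymous request to the secure port: rejected with 401 Unauthorized
    curl -sk https://192.168.0.114:10250/metrics
    # read-only port is disabled (readOnlyPort: 0): connection refused
    curl -s http://192.168.0.114:10255/metrics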
  4. Create and distribute the kubelet configuration file for each node

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kubelet-config.yaml root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
  5. Create and distribute the kubelet service unit file

    cd /opt/k8s/work
    export K8S_DIR=/data/k8s/k8s
    export NODE_NAME=slave
    cat > kubelet.service <<EOF
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=${K8S_DIR}/kubelet
    ExecStart=/opt/k8s/bin/kubelet \\
      --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
      --cert-dir=/etc/kubernetes/cert \\
      --root-dir=${K8S_DIR}/kubelet \\
      --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
      --config=/etc/kubernetes/kubelet-config.yaml \\
      --hostname-override=${NODE_NAME} \\
      --image-pull-progress-deadline=15m \\
      --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
      --logtostderr=true \\
      --v=2
    Restart=always
    RestartSec=5
    StartLimitInterval=0
    
    [Install]
    WantedBy=multi-user.target
    EOF
    • If --hostname-override is set, kube-proxy must be given the same value, otherwise the Node will not be found;
    • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the username and token in that file to send a TLS Bootstrapping request to kube-apiserver;
    • After Kubernetes approves the kubelet's CSR, it writes the certificate and private key into the --cert-dir directory and then generates the --kubeconfig file; see the check after this list
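
    After the CSR is approved (step 12 below), the files mentioned above can be verified on the worker node; a small check assuming the paths configured here:

    export node_ip=192.168.0.114
    ssh root@${node_ip} "ls -l /etc/kubernetes/kubelet.kubeconfig /etc/kubernetes/cert/kubelet-client-*"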
  6. Install and distribute the kubelet service file

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kubelet.service root@${node_ip}:/etc/systemd/system/kubelet.service
  7. Grant kube-apiserver access to the kubelet API

    When commands such as kubectl exec, run, and logs are executed, the apiserver forwards the request to the kubelet's https port. Here we define an RBAC rule that grants the user behind the apiserver's certificate (kubernetes.pem, CN: kubernetes-api) access to the kubelet API; see kubelet-auth for details.

    kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-api
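
    An optional way to confirm the binding took effect is to impersonate the apiserver's user and query a kubelet-API subresource covered by system:kubelet-api-admin (nodes/proxy is one of them):

    kubectl auth can-i get nodes/proxy --as kubernetes-api
    # expected output: yes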
  8. Bootstrap Token Auth and granting permissions

    On startup, kubelet checks whether the file referenced by --kubeconfig exists; if it does not, kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

    When kube-apiserver receives the CSR, it authenticates the embedded token; once authenticated, it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers. This process is called Bootstrap Token Auth.

    By default this user and group have no permission to create CSRs, so we create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

    kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
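
    To double-check the binding, impersonate a bootstrap identity and ask whether it may create CSRs; the token id 5t989l below is simply the one that shows up in the CSR listing later, and any member of the group behaves the same:

    kubectl auth can-i create certificatesigningrequests \
      --as system:bootstrap:5t989l --as-group system:bootstrappers
    # expected output: yes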
  9. Start the kubelet service

    export K8S_DIR=/data/k8s/k8s
    
    export node_ip=192.168.0.114
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
    • After starting, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once that CSR is approved, kube-controller-manager creates the TLS client certificate and private key for the kubelet and generates the --kubeconfig file.

    • Note: kube-controller-manager only issues certificates and private keys for TLS Bootstrap if its --cluster-signing-cert-file and --cluster-signing-key-file flags are configured.
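
    A quick way to confirm those flags are set on the master; this assumes the kube-controller-manager unit file lives at /etc/systemd/system/kube-controller-manager.service, which may differ in your environment:

    grep -- '--cluster-signing' /etc/systemd/system/kube-controller-manager.service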

  10. Troubleshooting

    1. After starting kubelet, kubectl get csr returns nothing, and the kubelet log shows an error

      journalctl -u kubelet -a |grep -A 2 'certificate_manager.go' 
      
      Failed while requesting a signed certificate from the master: cannot create certificate signing request: Unauthorized

      Check the kube-apiserver service log

      root@master:/opt/k8s/work# journalctl -eu kube-apiserver
      
      Unable to authenticate the request due to an error: invalid bearer token

      Cause: the following flag was missing from the kube-apiserver service unit file

      --enable-bootstrap-token-auth \\

      Adding it back and restarting kube-apiserver resolved the issue

    2. kubelet keeps generating new CSRs after startup, even after they are approved manually
      Cause: the kube-controller-manager service had stopped; restarting it resolved the issue

      • If the kubelet service gets into a bad state, delete /etc/kubernetes/kubelet.kubeconfig and /etc/kubernetes/cert/kubelet-client-current*.pem on the affected node, then restart kubelet; see the sketch below
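
      A minimal cleanup sketch based on the note above, run from the master against the affected node:

      export node_ip=192.168.0.114
      ssh root@${node_ip} "rm -f /etc/kubernetes/kubelet.kubeconfig /etc/kubernetes/cert/kubelet-client-current*.pem"
      ssh root@${node_ip} "systemctl restart kubelet"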
  11. Check the CSRs generated by kubelet

    root@master:/opt/k8s/work# kubectl get csr
    NAME        AGE   REQUESTOR                 CONDITION
    csr-kl5mg   49s   system:bootstrap:5t989l   Pending
    csr-mrmkf   2m1s  system:bootstrap:5t989l   Pending
    csr-ql68g   13s   system:bootstrap:5t989l   Pending
    csr-rvl2v   84s   system:bootstrap:5t989l   Pending
    • While kubelet is running, it keeps appending new CSRs until they are manually approved
  12. Manually approve the CSRs

    root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
    certificatesigningrequest.certificates.k8s.io/csr-kl5mg approved
    certificatesigningrequest.certificates.k8s.io/csr-mrmkf approved
    certificatesigningrequest.certificates.k8s.io/csr-ql68g approved
    certificatesigningrequest.certificates.k8s.io/csr-rvl2v approved
    
    root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
    certificatesigningrequest.certificates.k8s.io/csr-f4smx approved
  13. Check node information

    root@master:/opt/k8s/work# kubectl get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    slave   Ready    <none>   10m   v1.17.2
  14. Check the kubelet service status

    export node_ip=192.168.0.114
    root@master:/opt/k8s/work# ssh root@${node_ip} "systemctl status kubelet.service"
    ● kubelet.service - Kubernetes Kubelet
       Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2020-02-10 22:48:41 CST; 12min ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
     Main PID: 15529 (kubelet)
        Tasks: 19 (limit: 4541)
       CGroup: /system.slice/kubelet.service
               └─15529 /opt/k8s/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig --cert-dir=/etc/kubernetes/cert --root-dir=/data/k8s/k8s/kubelet --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/kubelet-config.yaml --hostname-override=slave --image-pull-progress-deadline=15m --volume-plugin-dir=/data/k8s/k8s/kubelet/kubelet-plugins/volume/exec/ --logtostderr=true --v=2
    
    2月 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.846285   15529 kubelet_node_status.go:73] Successfully registered node slave
    2月 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.930745   15529 certificate_manager.go:402] Rotating certificates
    2月 10 22:49:14 slave kubelet[15529]: I0210 22:49:14.966351   15529 kubelet_node_status.go:486] Recording NodeReady event message for node slave
    2月 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580410   15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2029-01-21 13:08:18.850930128 +0000 UTC
    2月 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580484   15529 certificate_manager.go:281] Waiting 78430h18m49.270459727s for next certificate rotation
    2月 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.580981   15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2027-07-14 16:09:26.990162158 +0000 UTC
    2月 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.581096   15529 certificate_manager.go:281] Waiting 65065h19m56.409078053s for next certificate rotation
    2月 10 22:53:44 slave kubelet[15529]: I0210 22:53:44.911705   15529 kubelet.go:1312] Image garbage collection succeeded
    2月 10 22:53:45 slave kubelet[15529]: I0210 22:53:45.053792   15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
    2月 10 22:58:45 slave kubelet[15529]: I0210 22:58:45.054225   15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service

Configuring the kube-proxy component

  1. Create the kube-proxy certificate and private key

    1. Create the certificate signing request file

      cd /opt/k8s/work
      cat > kube-proxy-csr.json <<EOF
      {
          "CN": "system:kube-proxy",
          "key": {
              "algo": "rsa",
              "size": 2048
          },
          "names": [
            {
              "C": "CN",
              "ST": "NanJing",
              "L": "NanJing",
              "O": "system:kube-proxy",
              "OU": "system"
            }
          ]
      }
      EOF
      • CN: sets the certificate's User to system:kube-proxy;
      • The predefined ClusterRoleBinding system:node-proxier binds the User system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call the kube-apiserver APIs that kube-proxy relies on (see the optional check below).
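
      An optional check that the built-in binding exists and already references the system:kube-proxy user:

      kubectl describe clusterrolebinding system:node-proxier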
    2. Generate the certificate and private key

      cd /opt/k8s/work
      cfssl gencert -ca=/opt/k8s/work/ca.pem \
        -ca-key=/opt/k8s/work/ca-key.pem \
        -config=/opt/k8s/work/ca-config.json \
        -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
      
      ls kube-proxy*pem
    3. Install the certificates

      cd /opt/k8s/work
      export node_ip=192.168.0.114
      scp kube-proxy*.pem root@${node_ip}:/etc/kubernetes/cert/
  2. Create the kubeconfig file

    • kube-proxy uses this file to access the apiserver; it carries the apiserver address, the embedded CA certificate, the kube-proxy client certificate, and related information
    cd /opt/k8s/work
    
    export KUBE_APISERVER=https://192.168.0.107:6443
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/k8s/work/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER}  \
      --kubeconfig=kube-proxy.kubeconfig
      
    kubectl config set-credentials kube-proxy \
      --client-certificate=kube-proxy.pem \
      --client-key=kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  3. Distribute the kubeconfig

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kube-proxy.kubeconfig root@${node_ip}:/etc/kubernetes/kube-proxy.kubeconfig
  4. Create the kube-proxy configuration file

    cd /opt/k8s/work
    
    export CLUSTER_CIDR="172.30.0.0/16"
    
    export NODE_IP=192.168.0.114
    
    export NODE_NAME=slave
    
    cat > kube-proxy-config.yaml <<EOF
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      burst: 200
      kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
      qps: 100
    bindAddress: ${NODE_IP}
    healthzBindAddress: ${NODE_IP}:10256
    metricsBindAddress: ${NODE_IP}:10249
    enableProfiling: true
    clusterCIDR: ${CLUSTER_CIDR}
    hostnameOverride: ${NODE_NAME}
    mode: "ipvs"
    portRange: ""
    iptables:
      masqueradeAll: false
    ipvs:
      scheduler: rr
      excludeCIDRs: []
    EOF
    • bindAddress: the listening address;
    • clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
    • clusterCIDR: kube-proxy uses this to distinguish traffic inside the cluster from traffic outside it; kube-proxy only SNATs requests to Service IPs when --cluster-cidr or --masquerade-all is set;
    • hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules;
    • mode: use ipvs mode; see the check after this list;
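
    After kube-proxy is started (step 8 below), the active mode can be confirmed via the metrics endpoint configured above; a hedged check assuming the worker IP 192.168.0.114:

    curl http://192.168.0.114:10249/proxyMode
    # expected output: ipvs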
  5. Distribute the kube-proxy configuration file

    cd /opt/k8s/work
    export node_ip=192.168.0.114
    scp kube-proxy-config.yaml root@${node_ip}:/etc/kubernetes/kube-proxy-config.yaml
  6. Create the kube-proxy service unit file

    cd /opt/k8s/work
    export K8S_DIR=/data/k8s/k8s
    
    cat > kube-proxy.service <<EOF
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    WorkingDirectory=${K8S_DIR}/kube-proxy
    ExecStart=/opt/k8s/bin/kube-proxy \\
      --config=/etc/kubernetes/kube-proxy-config.yaml \\
      --logtostderr=true \\
      --v=2
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
  7. Distribute the kube-proxy service unit file:

    export node_ip=192.168.0.114
    scp kube-proxy.service root@${node_ip}:/etc/systemd/system/
  8. Start the kube-proxy service

    export node_ip=192.168.0.114
    export K8S_DIR=/data/k8s/k8s
    
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_ip} "modprobe ip_vs_rr"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  9. Check the startup result

    export node_ip=192.168.0.114
    ssh root@${node_ip} "systemctl status kube-proxy  |grep Active"
    • Make sure the status is active (running); otherwise check the logs to find the cause

    • If anything looks wrong, inspect it with the following command

      journalctl -u kube-proxy
  10. Check the runtime state

    root@slave:~# netstat -lnpt|grep kube-prox
    tcp        0      0 192.168.0.114:10256     0.0.0.0:*               LISTEN      23078/kube-proxy
    tcp        0      0 192.168.0.114:10249     0.0.0.0:*               LISTEN      23078/kube-proxy
    root@slave:~# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr
      -> 192.168.0.107:6443           Masq    1      0          0
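
    The single ipvs entry above is the kubernetes Service. As a rough end-to-end check from the worker node, the VIP can be curled directly; any HTTP response from the apiserver (typically 401 Unauthorized, since the request carries no credentials) shows that the ipvs rule forwards traffic to 192.168.0.107:6443:

    curl -k https://10.254.0.1:443/healthz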