Deploying the kube-proxy component

Note: this deployment guide follows https://github.com/opsnull/follow-me-install-kubernetes-cluster; please give the author a star.

kube-proxy runs on every worker node. It watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance traffic to Services.

This document describes deploying kube-proxy in ipvs mode.

Note: unless stated otherwise, all operations in this document are performed on the k8s-master1 node, which then distributes files to the other nodes and runs commands on them remotely.

Creating the kube-proxy certificate

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
  • CN: sets the User of this certificate to system:kube-proxy;
  • the predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs (see the check after this list);
  • the certificate is used by kube-proxy only as a client certificate, so the hosts field is empty.
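
A quick way to confirm the predefined binding exists in the cluster (assumes kubectl on this node is already configured with admin credentials):

# Show the binding that grants system:kube-proxy the node-proxier role
kubectl get clusterrolebinding system:node-proxier -o yaml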

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*
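
Optionally, inspect the generated certificate to confirm the CN and validity period (assumes cfssl's certinfo subcommand is available, as used elsewhere in this series):

# Print the parsed certificate fields; CN should be system:kube-proxy
cfssl certinfo -cert kube-proxy.pem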

Creating and distributing the kubeconfig file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • --embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without it, only the paths of the certificate files are written); a quick verification follows.
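
To confirm the certificates were embedded rather than referenced by path (nothing beyond standard kubectl is needed):

# Embedded material shows up as certificate-authority-data / client-certificate-data
# fields instead of file paths (kubectl redacts the actual contents in this output)
kubectl config view --kubeconfig=kube-proxy.kubeconfig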

Distribute the kubeconfig file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in k8s-node1 k8s-node2 k8s-node3
  do
    echo ">>> ${node_name}"
    scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
  done

Creating the kube-proxy configuration file

Starting with v1.10, some kube-proxy parameters can be set in a configuration file. You can generate such a file with the --write-config-to option, or consult the kubeproxyconfig type definitions in the source code; a sketch of the former follows.
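
A minimal sketch of dumping the defaults for reference, assuming the kube-proxy binary has already been installed to /opt/k8s/bin as in the earlier worker-node steps:

# Write the built-in default configuration values to a file and exit
/opt/k8s/bin/kube-proxy --write-config-to=/tmp/kube-proxy-defaults.yaml
cat /tmp/kube-proxy-defaults.yaml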

Create the kube-proxy config file template:

cd /opt/k8s/work
cat <<EOF | tee kube-proxy-config.yaml.template
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
bindAddress: ##NODE_IP##
clusterCIDR: ${CLUSTER_CIDR}
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
EOF
  • bindAddress: the address to listen on;
  • clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to tell cluster-internal traffic apart from external traffic; only when --cluster-cidr or --masquerade-all is specified will kube-proxy SNAT requests to Service IPs;
  • hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find its Node after startup and will not create any ipvs rules;
  • mode: use ipvs mode (see the pre-check sketch after this list).
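
ipvs mode relies on the ip_vs kernel modules and the ipset package being present on each worker node. A hedged pre-check sketch, run on each node (module names below are the common ones; on kernels older than 4.19 use nf_conntrack_ipv4 instead of nf_conntrack):

# Load the ipvs-related kernel modules and confirm they are present
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
  do
    modprobe -- ${mod}
  done
lsmod | grep -E 'ip_vs|nf_conntrack'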

Create and distribute the kube-proxy configuration file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=3; i < 6; i++ ))
  do 
    echo ">>> ${NODE_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
    scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
  done

The rendered file is installed on each node as /etc/kubernetes/kube-proxy-config.yaml.
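
To review a rendered copy on one of the nodes (the node name is just this guide's example):

# Print the configuration that was distributed in the previous step
ssh root@k8s-node1 "cat /etc/kubernetes/kube-proxy-config.yaml"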

Creating and distributing the kube-proxy systemd unit file

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

The rendered unit file: kube-proxy.service

Distribute the kube-proxy systemd unit file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in k8s-node1 k8s-node2 k8s-node3
  do 
    echo ">>> ${node_name}"
    scp kube-proxy.service root@${node_name}:/etc/systemd/system/
  done

Starting the kube-proxy service

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.170 192.168.161.171 192.168.161.172
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done
  • the working directory must be created before starting the service;

Checking the startup result

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.170 192.168.161.171 192.168.161.172
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
  done

Make sure the status is active (running); otherwise, check the logs to find the cause:

journalctl -u kube-proxy

 

Checking the listening ports and metrics

[root@k8s-node2 work]# sudo netstat -lnpt|grep kube-prox
tcp        0      0 192.168.161.171:10256   0.0.0.0:*               LISTEN      3295/kube-proxy     
tcp        0      0 192.168.161.171:10249   0.0.0.0:*               LISTEN      3295/kube-proxy     
  • 10249: HTTP Prometheus metrics port;
  • 10256: HTTP healthz port (a quick curl check follows).
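
Both endpoints can be probed directly; the IP below is this example's k8s-node2 address, adjust to your own:

# Sample the Prometheus metrics and hit the health endpoint
curl -s http://192.168.161.171:10249/metrics | head
curl -s http://192.168.161.171:10256/healthz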

Checking the ipvs routing rules

source /opt/k8s/bin/environment.sh
for node_ip in 192.168.161.170 192.168.161.171 192.168.161.172
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
  done

Expected output:

[root@k8s-master1 work]# for node_ip in 192.168.161.170 192.168.161.171 192.168.161.172
>   do
>     echo ">>> ${node_ip}"
>     ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
>   done
>>> 192.168.161.170
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.161.150:6443         Masq    1      0          0         
  -> 192.168.161.151:6443         Masq    1      0          0         
  -> 192.168.161.152:6443         Masq    1      0          0         
>>> 192.168.161.171
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.161.150:6443         Masq    1      0          0         
  -> 192.168.161.151:6443         Masq    1      0          0         
  -> 192.168.161.152:6443         Masq    1      0          0         
>>> 192.168.161.172
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.161.150:6443         Masq    1      0          0         
  -> 192.168.161.151:6443         Masq    1      0          0         
  -> 192.168.161.152:6443         Masq    1      0          0         

As shown, all requests to port 443 of the kubernetes Service cluster IP are forwarded to port 6443 of the kube-apiserver instances.
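
This matches the default kubernetes Service, which can be cross-checked with kubectl wherever an admin kubeconfig is configured:

# The Service cluster IP should be 10.254.0.1:443, with the apiservers on 6443 as endpoints
kubectl get svc kubernetes
kubectl get endpoints kubernetes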
