2018/1/4
### Requesting the SLB resource

SLB instance id: lb-xxx, vip: 10.10.9.76

### Connectivity test

```
[root@tvm-04 ~]# for i in $(seq 1 10);do echo "------------$i";curl -k -If -m 3 https://10.10.9.76:6443;done
------------1
curl: (22) NSS: client certificate not found (nickname not specified)
```

This is expected. The error above is a certificate problem: the SLB L4 proxy carries no certificate, so we work around it by resolving a domain name to the vip instead. Pick one of the DNS names that were included in the subject alternative names when the per-node certificates were created, for example kubernetes.default.svc.cluster.local. From now on, access to the apiserver from outside the cluster goes through that domain, which sidesteps the missing-certificate issue on the SLB L4 proxy. Write the domain into /etc/hosts on the target test node:

```
[root@tvm-04 ~]# vim /etc/hosts
(snip)
### k8s apiserver SLB
10.10.9.76 kubernetes.default.svc.cluster.local
```

The same domain will be used again later when configuring the worker nodes.
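Before committing to a domain, it helps to confirm that the name actually appears in the certificate's SAN list. A minimal sketch below generates a throwaway self-signed certificate and lists its SANs; the file paths, the extra DNS:kubernetes.default entry, and the use of `-addext` (OpenSSL 1.1.1+) are illustrative assumptions — substitute your real apiserver certificate path, which is deployment-specific:

```shell
# Create a throwaway cert carrying the same kinds of SANs the apiserver
# certs were issued with (stand-in for the real cert in this walkthrough).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/k8s-test.key -out /tmp/k8s-test.crt -days 1 \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=DNS:kubernetes.default.svc.cluster.local,DNS:kubernetes.default,IP:10.10.9.76"

# List the SANs; any DNS entry printed here is usable as the bypass domain
# that you point at the SLB vip in /etc/hosts.
openssl x509 -in /tmp/k8s-test.crt -noout -ext subjectAltName
```

Any `DNS:` entry in the output can be mapped to the vip; a name missing from the SANs will fail TLS verification even though the vip itself is reachable.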
### Single-node nginx L4 proxy for testing

Here we first configure a single-node nginx L4 proxy to test with (an L7 proxy would additionally require the corresponding certificates).

```
[root@tvm-04 ~]# vim /etc/hosts
(snip)
### k8s apiserver SLB
10.10.9.74 kubernetes.default.svc.cluster.local
```

Configure the nginx L4 stream proxy on this machine:

```
[root@tvm-03 ~]# yum -y install nginx
[root@tvm-03 ~]# nginx -v
nginx version: nginx/1.12.2
[root@tvm-03 ~]# mkdir -p /etc/nginx/stream.conf.d
[root@tvm-03 ~]# cat <<'_EOF' >>/etc/nginx/nginx.conf
stream {
    log_format proxy '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
    access_log /var/log/nginx/stream.access.log proxy;
    include /etc/nginx/stream.conf.d/*.conf;
}
_EOF
[root@tvm-03 ~]# cat /etc/nginx/stream.conf.d/slb.test.apiserver.local.conf
#tcp: kubernetes.default.svc.cluster.local
upstream slb_test_apiserver_local {
    server 10.10.9.67:6443 weight=5 max_fails=3 fail_timeout=30s;
    server 10.10.9.68:6443 weight=5 max_fails=3 fail_timeout=30s;
    server 10.10.9.69:6443 weight=5 max_fails=3 fail_timeout=30s;
}
server {
    listen 7443;
    proxy_pass slb_test_apiserver_local;
    proxy_connect_timeout 1s;
    proxy_timeout 3s;
    access_log /var/log/nginx/slb_test_apiserver_local.log proxy;
}
```

Start the proxy:

```
[root@tvm-03 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
[root@tvm-03 ~]# systemctl start nginx.service
[root@tvm-03 ~]# systemctl enable nginx.service
```

Switch the apiserver endpoint in the kubeconfig and test:

```
[root@tvm-02 ~]# sed -i 's#10.10.9.69:6443#kubernetes.default.svc.cluster.local:7443#' ~/.kube/config
[root@tvm-02 ~]# kubectl get nodes
[root@tvm-02 ~]# grep kube /etc/hosts
10.10.9.74 kubernetes.default.svc.cluster.local
[root@tvm-02 ~]# kubectl cluster-info
Kubernetes master is running at https://kubernetes.default.svc.cluster.local:7443
KubeDNS is running at https://kubernetes.default.svc.cluster.local:7443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use
'kubectl cluster-info dump'.
```

### Check the logs on the proxy

```
[root@tvm-03 ~]# tail /var/log/nginx/slb_test_apiserver_local.log
10.10.9.69 [03/Jan/2018:12:34:13 +0800] TCP 200 26209 1947 0.217 "10.10.9.68:6443" "1947" "26209" "0.000"
10.10.9.69 [03/Jan/2018:12:34:15 +0800] TCP 200 26209 1947 0.284 "10.10.9.69:6443" "1947" "26209" "0.000"
```

This works as expected: requests are distributed across the upstream apiservers. The next step is to make this LB itself highly available — scale it out to 2 nodes and provide a vip through a service such as keepalived.
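The two-node HA setup suggested above can be sketched with keepalived. The fragment below is a minimal illustration for the MASTER proxy node only; the interface name (eth0), the vip (10.10.9.100), virtual_router_id, and auth_pass are all assumed values, not taken from this environment — the BACKUP node would use `state BACKUP` and a lower priority (e.g. 90):

```
# /etc/keepalived/keepalived.conf on the MASTER proxy node (sketch; all
# values below are placeholders to adapt to your network)
vrrp_script chk_nginx {
    script "/usr/bin/killall -0 nginx"   # exits 0 while nginx is running
    interval 2
    weight -20                           # drop priority if nginx dies
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-slb
    }
    virtual_ipaddress {
        10.10.9.100                      # the vip clients would target
    }
    track_script {
        chk_nginx
    }
}
```

Clients (and the /etc/hosts entries above) would then point kubernetes.default.svc.cluster.local at the vip instead of a single proxy node, so a proxy failure only triggers a VRRP failover rather than an outage.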