The previous two articles configured etcd and the flannel network; now we move on to configuring the k8s master cluster.
For the etcd cluster setup, see: Building a multi-master Kubernetes cluster from binaries [Part 1: Setting up an etcd cluster with TLS certificates]
For the flannel network setup, see: Building a multi-master Kubernetes cluster from binaries [Part 2: Configuring the flannel network]
This article deploys the k8s cluster on the following hosts:
k8s-master1: 192.168.80.7
k8s-master2: 192.168.80.8
k8s-master3: 192.168.80.9
Configuring the Kubernetes master cluster
A Kubernetes master node runs the following components:
kube-apiserver
kube-scheduler
kube-controller-manager

For now, these three components need to be deployed on the same machines:
the functions of kube-scheduler, kube-controller-manager, and kube-apiserver are tightly coupled;
only one kube-scheduler process and one kube-controller-manager process may be active at a time, so when multiple instances run, a leader must be chosen by election.

Part 1: Deploy the kubectl command-line tool
kubectl is the command-line management tool for a Kubernetes cluster; this section covers installing and configuring it.
By default, kubectl reads the kube-apiserver address, certificate, user name, and other settings from the ~/.kube/config file; if that file is not configured, kubectl commands may fail.
~/.kube/config only needs to be generated once; it can then be copied to the other masters.
Step 1: Download kubectl
wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler /usr/local/bin
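As a quick sanity check that the binary landed on the PATH, kubectl can print its own version (it should report the v1.12.3 client downloaded above):

kubectl version --client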
Step 2: Create the admin certificate signing request
[root@k8s-master1 ssl]# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF
The O field is system:masters; when kube-apiserver receives this certificate, it sets the request's Group to system:masters.
The predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants access to all APIs.

Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
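Optionally, openssl can confirm that the issued certificate carries the expected subject (CN admin, O system:masters):

openssl x509 -in admin.pem -noout -subject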
Step 3: Create the ~/.kube/config file
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kubectl.kubeconfig
# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig
# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
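To inspect the file before distributing it, kubectl can print the merged view (embedded certificate data is redacted in the output):

kubectl config view --kubeconfig=kubectl.kubeconfig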
Step 4: Distribute the ~/.kube/config file
[root@k8s-master1 temp]# cp kubectl.kubeconfig ~/.kube/config
[root@k8s-master1 temp]# scp kubectl.kubeconfig k8s-master2:~/.kube/config
kubectl.kubeconfig                        100% 6285     2.2MB/s   00:00
[root@k8s-master1 temp]# scp kubectl.kubeconfig k8s-master3:~/.kube/config
kubectl.kubeconfig
Part 2: Deploy kube-apiserver
Step 1: Create the kube-apiserver certificate signing request:
[root@k8s-master1 ssl]# cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.80.7",
    "192.168.80.8",
    "192.168.80.9",
    "192.168.80.13",
    "114.67.81.105",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
The hosts field lists the IPs and domain names authorized to use this certificate; here it contains the VIP, the master node IPs, and the kubernetes service IP and domain names.
A domain name in the list must not end with a dot (e.g., it cannot be kubernetes.default.svc.cluster.local.), otherwise parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.".
If you use a service domain other than cluster.local, such as bqding.com, change the last two names in the list to kubernetes.default.svc.bqding and kubernetes.default.svc.bqding.com.
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
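To double-check that every entry in the hosts list made it into the certificate, openssl can print the subject alternative names:

openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'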
Step 2: Copy the generated certificate and private key to the master nodes:
[root@k8s-master1 ssl]# cp kubernetes*.pem /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kubernetes*.pem k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kubernetes*.pem k8s-master3:/etc/kubernetes/cert/
Step 3: Create the encryption config file
[root@k8s-master1 ssl]# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF
Step 4: Distribute the encryption config file to the master nodes
[root@k8s-master1 ssl]# cp encryption-config.yaml /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp encryption-config.yaml k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp encryption-config.yaml k8s-master3:/etc/kubernetes/cert/
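This file makes kube-apiserver encrypt Secret objects before writing them to etcd. A minimal sketch of how to verify that, once the apiserver is running (step 8 below): read a secret's raw value from etcd and confirm it begins with k8s:enc:aescbc:v1: rather than plaintext. The etcd certificate paths below are assumptions; adjust them to match your etcd setup from Part 1 of this series.

kubectl create secret generic test-enc --from-literal=foo=bar
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=https://192.168.80.4:2379 \
  get /registry/secrets/default/test-enc | hexdump -C | head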
Step 5: Create the kube-apiserver systemd unit file
[root@k8s-master1 ssl]# cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/etc/kubernetes/cert/encryption-config.yaml \
  --advertise-address=192.168.80.7 \
  --bind-address=192.168.80.7 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32700 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://192.168.80.4:2379,https://192.168.80.5:2379,https://192.168.80.6:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Notes on the flags:

--experimental-encryption-provider-config: enables encryption of Secrets at rest;
--authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes, rejecting unauthorized requests;
--enable-admission-plugins: the admission plugins to enable, including ServiceAccount and NodeRestriction;
--service-account-key-file: public key file used to verify ServiceAccount tokens; it must be paired with the private key given to kube-controller-manager via --service-account-private-key-file;
--tls-*-file: the certificate, private key, and CA files used by the apiserver; --client-ca-file is used to verify the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
--kubelet-client-certificate, --kubelet-client-key: if specified, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the user in that certificate (the kubernetes*.pem certificate above uses the user kubernetes), otherwise calls to the kubelet API fail as unauthorized;
--bind-address: must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
--insecure-port=0: disables the insecure port (8080);
--service-cluster-ip-range: the Service cluster IP address range;
--service-node-port-range: the port range for NodePort services;
--runtime-config=api/all=true: enables APIs of all versions, e.g. autoscaling/v2alpha1;
--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
--apiserver-count=3: the number of apiserver instances the cluster runs. Note that, unlike kube-scheduler and kube-controller-manager, all apiserver instances serve requests concurrently; there is no leader election among them.

Step 6: Distribute kube-apiserver.service to the other masters
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-apiserver.service k8s-master2:/etc/systemd/system/kube-apiserver.service
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-apiserver.service k8s-master3:/etc/systemd/system/kube-apiserver.service
Step 7: Create the log directory
mkdir -p /var/log/kubernetes
Step 8: Start the kube-apiserver service
[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl enable kube-apiserver
[root@k8s-master1 ssl]# systemctl start kube-apiserver
Step 9: Check kube-apiserver and the cluster status
[root@k8s-master1 ssl]# netstat -ptln | grep kube-apiserve
tcp        0      0 192.168.80.9:6443      0.0.0.0:*      LISTEN      22348/kube-apiserve
[root@k8s-master1 ssl]# kubectl cluster-info
Kubernetes master is running at https://114.67.81.105:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
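Each apiserver's secure port can also be probed directly with the admin certificate generated in Part 1. A sketch, assuming you run it from the directory that still holds admin.pem; the expected response body is ok:

curl --cacert /etc/kubernetes/cert/ca.pem \
  --cert admin.pem --key admin-key.pem \
  https://192.168.80.7:6443/healthz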
Step 10: Grant the kubernetes certificate access to the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
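This binds the predefined system:kubelet-api-admin ClusterRole to the kubernetes user carried in the apiserver's client certificate. To confirm the binding exists:

kubectl get clusterrolebinding kube-apiserver:kubelet-apis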
Part 3: Deploy kube-controller-manager
To secure communications, this document first generates an x509 certificate and private key. kube-controller-manager uses the certificate in two situations: when connecting to kube-apiserver's secure port, and when serving /metrics over https on its own secure port.
Step 1: Create the kube-controller-manager certificate signing request:
[root@k8s-master1 ssl]# cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.80.7",
    "192.168.80.8",
    "192.168.80.9"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "4Paradigm"
    }
  ]
}
EOF
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Step 2: Distribute the generated certificate and private key to all master nodes
[root@k8s-master1 ssl]# cp kube-controller-manager*.pem /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager*.pem k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager*.pem k8s-master3:/etc/kubernetes/cert/
Step 3: Create and distribute the kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Distribute kube-controller-manager.kubeconfig to all master nodes:
[root@k8s-master1 ssl]# cp kube-controller-manager.kubeconfig /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager.kubeconfig k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager.kubeconfig k8s-master3:/etc/kubernetes/cert/
Step 4: Create and distribute the kube-controller-manager systemd unit file
[root@k8s-master1 ssl]# cat > /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Notes on the flags:

--port=0: disables the http /metrics listener; when set, the --address parameter has no effect and --bind-address takes effect;
--secure-port=10252, --bind-address=0.0.0.0: serve https /metrics requests on port 10252 on all interfaces;
--kubeconfig: path to the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
--cluster-signing-*-file: used to sign the certificates created during TLS bootstrap;
--experimental-cluster-signing-duration: validity period of the certificates issued through TLS bootstrap;
--root-ca-file: the CA certificate placed into each ServiceAccount, used inside containers to verify kube-apiserver's certificate;
--service-account-private-key-file: private key used to sign ServiceAccount tokens; it must be paired with the public key given to kube-apiserver via --service-account-key-file;
--service-cluster-ip-range: the Service cluster IP range; must match the parameter of the same name on kube-apiserver;
--leader-elect=true: enables leader election; the elected leader does the work while the other instances stay blocked;
--feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
--controllers=*,bootstrapsigner,tokencleaner: the controllers to enable; tokencleaner automatically cleans up expired bootstrap tokens;
--horizontal-pod-autoscaler-*: custom-metrics-related parameters, supporting autoscaling/v2alpha1;
--tls-cert-file, --tls-private-key-file: server certificate and key used when serving metrics over https;
--use-service-account-credentials=true: each controller uses a separate ServiceAccount credential when calling kube-apiserver.

Distribute the kube-controller-manager systemd unit file:
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-controller-manager.service k8s-master2:/etc/systemd/system/kube-controller-manager.service
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-controller-manager.service k8s-master3:/etc/systemd/system/kube-controller-manager.service
Step 5: Start the kube-controller-manager service
[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl enable kube-controller-manager
[root@k8s-master1 ssl]# systemctl start kube-controller-manager
Step 6: Check the kube-controller-manager service
[root@k8s-master1 ssl]# netstat -lnpt|grep kube-controll
tcp        0      0 127.0.0.1:10252      0.0.0.0:*      LISTEN      17906/kube-controll
tcp6       0      0 :::10257             :::*           LISTEN      17906/kube-controll
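The http listener on 127.0.0.1:10252 also exposes a health endpoint, which gives a quick local check:

curl http://127.0.0.1:10252/healthz
# expected output: ok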
Step 7: Check the current kube-controller-manager leader
[root@k8s-master1 ssl]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master3_d19698f1-0379-11e9-9c06-fa163e0a2feb","leaseDurationSeconds":15,"acquireTime":"2018-12-19T10:40:15Z","renewTime":"2018-12-19T11:12:43Z","leaderTransitions":5}'
  creationTimestamp: 2018-12-19T08:53:45Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "9860"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 97ef4bad-036b-11e9-90aa-fa163e5caede
As shown, the current leader is the k8s-master3 node.
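If you prefer not to scan the full YAML, a shorter way to pull out just the leader annotation:

kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'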
Part 4: Deploy kube-scheduler
The cluster runs 3 scheduler instances; once started, they elect a leader, and the other instances stay blocked. When the leader becomes unavailable, the remaining instances hold another election to produce a new leader, keeping the service available.
To secure communications, this document first generates an x509 certificate and private key; kube-scheduler uses the certificate when connecting to kube-apiserver's secure port.
Step 1: Create the kube-scheduler certificate signing request
[root@k8s-master1 ssl]# cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.80.7",
    "192.168.80.8",
    "192.168.80.9"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "4Paradigm"
    }
  ]
}
EOF
Generate the certificate and private key:
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Step 2: Create and distribute the kube-scheduler.kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Distribute the kubeconfig to all master nodes:
[root@k8s-master1 ssl]# cp kube-scheduler.kubeconfig /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-scheduler.kubeconfig k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-scheduler.kubeconfig k8s-master3:/etc/kubernetes/cert/
Step 3: Create and distribute the kube-scheduler systemd unit file
[root@k8s-master1 ssl]# cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Notes on the flags:

--address: listen for http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https;
--kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: enables leader election; the elected leader does the work while the other instances stay blocked.

Distribute the systemd unit file to all master nodes:
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-scheduler.service k8s-master2:/etc/systemd/system/kube-scheduler.service
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-scheduler.service k8s-master3:/etc/systemd/system/kube-scheduler.service
Step 4: Start the kube-scheduler service
[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl enable kube-scheduler
[root@k8s-master1 ssl]# systemctl start kube-scheduler
Step 5: Check the kube-scheduler listening port
[root@k8s-master1 ssl]# netstat -lnpt|grep kube-sche
tcp        0      0 127.0.0.1:10251      0.0.0.0:*      LISTEN      17921/kube-schedule
Step 6: Check the current kube-scheduler leader
[root@k8s-master1 ssl]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master1_d41f4473-0379-11e9-a19b-fa163e0a2feb","leaseDurationSeconds":15,"acquireTime":"2018-12-19T10:38:27Z","renewTime":"2018-12-19T11:14:06Z","leaderTransitions":2}'
  creationTimestamp: 2018-12-19T09:10:56Z
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "9961"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: fe267870-036d-11e9-90aa-fa163e5caede
As shown, the current leader is the k8s-master1 node.
Part 5: Verify that everything works on all master nodes
[root@k8s-master1 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
Part 6: Configure k8s master high availability with haproxy + keepalived (perform on every master, substituting each host's own addresses for the host-specific values)
Nodes that run keepalived and haproxy are called LB nodes. Since keepalived runs in a one-master/multiple-backup model, at least two LB nodes are required.
This document reuses the three master machines. The port haproxy listens on (8443) must differ from kube-apiserver's port 6443 to avoid a conflict.
While running, keepalived periodically checks the local haproxy process; if the process is detected as abnormal, it triggers a new master election and the VIP floats to the newly elected node, keeping the VIP highly available.
All components (kubectl, apiserver, controller-manager, scheduler, and so on) reach the kube-apiserver service through the VIP and haproxy's listening port 8443.
Step 1: Install haproxy and keepalived
yum install -y keepalived haproxy
Step 2: On all three masters, configure haproxy to proxy the apiserver service
[root@k8s-master1 ~]# cat /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m

listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance roundrobin
    server 192.168.80.7 192.168.80.7:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.80.8 192.168.80.8:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.80.9 192.168.80.9:6443 check inter 2000 fall 2 rise 2 weight 1
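The admin_stats listener above exposes haproxy's status page on port 10080 at /status. Once haproxy is running (step 4), fetching it locally with the credentials from the config is a quick way to confirm the kube-master backends are up:

curl -u admin:123456 http://127.0.0.1:10080/status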
Step 3: Configure keepalived on all three masters
[root@k8s-master1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id lb-master-105
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 3
}

vrrp_instance VI-kube-master {
    state BACKUP
    nopreempt    # non-preemptive mode; must be set on the BACKUP node with the highest priority
    priority 120
    dont_track_primary
    interface ens192
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        114.67.81.105    # the VIP; kube-apiserver is reached through this IP
    }
}
The killall -0 haproxy command checks whether the haproxy process on the node is alive (signal 0 tests for the process's existence without actually signalling it).

Step 4: Start the haproxy and keepalived services
#haproxy
systemctl enable haproxy
systemctl start haproxy
#keepalived
systemctl enable keepalived
systemctl start keepalived
Step 5: Check the haproxy and keepalived service status
systemctl status haproxy|grep Active
systemctl status keepalived|grep Active
If the output shows Active: active (running), the service is healthy.
Step 6: Check which node holds the VIP
ip addr show | grep 114.67.81.105
Here, the VIP is on 192.168.80.7.
To verify that high availability works, stop the haproxy service on 192.168.80.7; the VIP then floats to 192.168.80.8. When 192.168.80.7 comes back after the problem is fixed, it does not reclaim the VIP, because it is configured with nopreempt.
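A sketch of that failover test as commands (run the first and last on the node currently holding the VIP, and the check on the other masters):

systemctl stop haproxy                 # simulate a haproxy failure on the VIP holder
ip addr show | grep 114.67.81.105      # on another master: the VIP should now appear here
systemctl start haproxy                # recovery; nopreempt keeps the VIP where it moved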
Note: if the cluster is built on a public cloud, use the cloud provider's SLB service for high availability instead; haproxy + keepalived may not work there, for reasons you can probably guess (the cloud's underlying network blocks it).
The next article covers deploying the node components; see: Building a multi-master Kubernetes cluster from binaries [Part 4: Configuring the k8s nodes]