Deployment Architecture
A Kubernetes master node mainly runs three components: kube-apiserver, kube-scheduler, and kube-controller-manager.
kube-scheduler and kube-controller-manager can run in cluster mode: leader election produces one working process, while the other processes stand by.
kube-apiserver can run as multiple instances, but the other components need a single access address for it, and that address must be highly available. This article focuses on solving the high-availability problem for kube-apiserver.
This article uses HAProxy + Keepalived to provide a highly available, load-balanced VIP for kube-apiserver. Keepalived keeps the externally facing VIP highly available, while HAProxy listens on the VIP, connects to all kube-apiserver instances on the back end, and provides health checking and load balancing. kube-apiserver's default port is 6443, so to avoid a conflict HAProxy must listen on a different port; in this setup it is 16443.
Keepalived periodically checks the state of the local HAProxy process and triggers a VIP failover if the process is found to be abnormal. Per the deployment plan, all components access the kube-apiserver service through port 16443 on the VIP.
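Concretely, every kubeconfig and component flag that would otherwise point at a single apiserver points at the VIP and the HAProxy port instead. A minimal sketch of a kubeconfig cluster entry (the CA path is a placeholder, not taken from this deployment):

```yaml
# Sketch: cluster entry targeting the VIP:16443 instead of any single apiserver.
clusters:
- name: kubernetes
  cluster:
    server: https://192.168.122.40:16443
    certificate-authority: /etc/kubernetes/pki/ca.crt   # placeholder path
```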
Configuring the LB Nodes
On the LB nodes, perform these four steps from the deployment preparation: set the hostname, configure the firewall, upgrade the kernel to 4.19.x, and set up time synchronization.
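As a reminder, those steps look roughly like the following on each LB node (values are illustrative; the firewall configuration and kernel upgrade follow the deployment-preparation chapter):

```
hostnamectl set-hostname K8S-PROD-LB1   # set the hostname
systemctl enable --now chronyd          # time synchronization
uname -r                                # confirm the kernel is 4.19.x after the upgrade
```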
Installing the HA Components
Install HAProxy and Keepalived on all LB nodes:
[root@K8S-PROD-LB1 ~]# yum install haproxy keepalived -y
[root@K8S-PROD-LB2 ~]# yum install haproxy keepalived -y
Configuring HAProxy
Configure HAProxy on the LB nodes by following the steps below, shown here on K8S-PROD-LB1.
Back up haproxy.cfg
[root@K8S-PROD-LB1 haproxy]# cd /etc/haproxy/
[root@K8S-PROD-LB1 haproxy]# cp haproxy.cfg{,.bak}
[root@K8S-PROD-LB1 haproxy]# > haproxy.cfg
Edit haproxy.cfg
[root@K8S-PROD-LB1 haproxy]# vi haproxy.cfg
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode tcp
log global
option tcplog
option httpclose
option dontlognull
option abortonclose
option redispatch
retries 3
maxconn 32000
timeout connect 5000ms
timeout client 2h
timeout server 2h
listen stats
mode http
bind :10086
stats enable
stats uri /admin?stats
stats auth admin:admin
stats admin if TRUE
frontend k8s_apiserver
bind *:16443
mode tcp
default_backend https_sri
backend https_sri
balance roundrobin
server apiserver1_192_168_122_11 192.168.122.11:6443 check inter 2000 fall 2 rise 2 weight 100
server apiserver2_192_168_122_12 192.168.122.12:6443 check inter 2000 fall 2 rise 2 weight 100
server apiserver3_192_168_122_13 192.168.122.13:6443 check inter 2000 fall 2 rise 2 weight 100
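Before restarting the service, the file can be syntax-checked with HAProxy's built-in config-test mode:

```
# Exit code 0 (with "Configuration file is valid") means the file parses cleanly.
haproxy -c -f /etc/haproxy/haproxy.cfg
```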
Configure K8S-PROD-LB2
[root@K8S-PROD-LB1 haproxy]# scp haproxy.cfg root@192.168.122.32:/etc/haproxy/
Configuring Keepalived
Configure Keepalived on the LB nodes by following the steps below, shown here on K8S-PROD-LB1.
Back up keepalived.conf
[root@K8S-PROD-LB1 haproxy]# cd /etc/keepalived/
[root@K8S-PROD-LB1 keepalived]# cp keepalived.conf{,.bak}
[root@K8S-PROD-LB1 keepalived]# > keepalived.conf
Edit keepalived.conf
[root@K8S-PROD-LB1 keepalived]# vi keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
dcfenga@sina.com
}
notification_email_from dcfenga@sina.com
smtp_server mail.cluster.com
smtp_connect_timeout 30
router_id LB_KUBE_APISERVER
}
vrrp_script check_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 3
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 60
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
unicast_peer {
192.168.122.31
192.168.122.32
}
virtual_ipaddress {
192.168.122.40/24 label eth0:0
}
track_script {
check_haproxy
}
}
Configure K8S-PROD-LB2
On the other LB nodes, change state MASTER to state BACKUP and priority 100 to 90. If one more LB node is added, set its state to BACKUP and its priority to 80, and so on.
[root@K8S-PROD-LB1 keepalived]# scp keepalived.conf root@192.168.122.32:/etc/keepalived/
[root@K8S-PROD-LB2 keepalived]# vi keepalived.conf
...
state BACKUP
priority 90
...
Configuring the HAProxy liveness-check script
All LB nodes need the check script (check_haproxy.sh), which stops Keepalived automatically when HAProxy dies:
[root@K8S-PROD-LB1 keepalived]# vi /etc/keepalived/check_haproxy.sh
#!/bin/bash
# A non-zero exit status from `systemctl status haproxy` means HAProxy is not running.
flag=$(systemctl status haproxy &> /dev/null; echo $?)
if [[ $flag != 0 ]]; then
    echo "haproxy is down, close the keepalived"
    systemctl stop keepalived
fi
[root@K8S-PROD-LB1 keepalived]# chmod +x check_haproxy.sh
[root@K8S-PROD-LB1 keepalived]# scp /etc/keepalived/check_haproxy.sh root@192.168.122.32:/etc/keepalived/
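The script's branch can be exercised without touching a real HAProxy by substituting any failing command for the systemctl call; a minimal sketch (check_cmd is a stand-in, not part of the real script):

```shell
#!/bin/bash
# `false` stands in for a failing `systemctl status haproxy`.
check_cmd=false
if ! $check_cmd &> /dev/null; then
    echo "haproxy is down, close the keepalived"
    # systemctl stop keepalived   # the real action on an LB node
fi
```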
Writing Keepalived logs to a local file
vi /etc/sysconfig/keepalived
...
KEEPALIVED_OPTIONS="-D -S 0"
...
vi /etc/rsyslog.conf
local0.* /var/log/keepalived.log
Then reboot the node.
Modifying the keepalived systemd service
[root@K8S-PROD-LB1 keepalived]# vi /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target
# Add the following line so keepalived runs only together with HAProxy:
Requires=haproxy.service
[Service]
Type=forking
PIDFile=/var/run/keepalived.pid
KillMode=process
EnvironmentFile=-/etc/sysconfig/keepalived
ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target
[root@K8S-PROD-LB1 keepalived]# scp /usr/lib/systemd/system/keepalived.service root@192.168.122.32:/usr/lib/systemd/system/
Allowing the VRRP protocol through firewalld (optional)
If firewalld is running:
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface eth0 --destination 192.168.122.40 --protocol vrrp -j ACCEPT
firewall-cmd --reload
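The rule can be confirmed afterwards:

```
# Lists all permanent direct rules; the VRRP ACCEPT rule should appear here.
firewall-cmd --direct --get-all-rules
```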
Starting the LB Components
Start haproxy
[root@K8S-PROD-LB1 keepalived]# systemctl enable haproxy && systemctl start haproxy && systemctl status haproxy
[root@PROD-K8S-LB2 ~]# systemctl enable haproxy && systemctl start haproxy && systemctl status haproxy
Start keepalived
[root@K8S-PROD-LB1 ~]# systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived
[root@PROD-K8S-LB2 ~]# systemctl start keepalived && systemctl enable keepalived && systemctl status keepalived
Testing the LB
VIP failover test
Run ip a to view the floating IP:
[root@K8S-PROD-LB1 keepalived]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.122.31/24 brd 192.168.122.255 scope global eth0
inet 192.168.122.40/24 scope global secondary eth0:0
After the MASTER node goes down, the VIP 192.168.122.40 fails over to one of the BACKUP nodes; once the MASTER recovers, the VIP fails back.
Check the HAProxy listening ports
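A simple drill (sketch): stop HAProxy on the MASTER, which makes the check script stop Keepalived and release the VIP, then confirm the VIP on the BACKUP node:

```
[root@K8S-PROD-LB1 ~]# systemctl stop haproxy
[root@K8S-PROD-LB2 ~]# ip a | grep 192.168.122.40
    inet 192.168.122.40/24 scope global secondary eth0:0
[root@K8S-PROD-LB1 ~]# systemctl start haproxy && systemctl start keepalived
```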
[root@K8S-PROD-LB1 ~]# netstat -ntlp | grep haproxy
tcp        0      0 0.0.0.0:10086      0.0.0.0:*      LISTEN      21301/haproxy
tcp        0      0 0.0.0.0:16443      0.0.0.0:*      LISTEN      21301/haproxy
Check the HAProxy service status
The following DNAT rules forward an externally reachable address (192.168.191.32) to the VIP, so the stats page and the apiserver port can be reached from outside the 192.168.122.x network:
iptables -t nat -A PREROUTING -m tcp -p tcp -d 192.168.191.32 --dport 10086 -j DNAT --to-destination 192.168.122.40:10086
iptables -t nat -A PREROUTING -m tcp -p tcp -d 192.168.191.32 --dport 16443 -j DNAT --to-destination 192.168.122.40:16443
Open http://192.168.191.32:10086/admin?stats and log in to HAProxy to verify the service is healthy. The credentials are the stats auth value under listen stats in haproxy.cfg (admin:admin). The page shows the status of each backend.
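The same check can be scripted; curl can fetch the stats page with the credentials from haproxy.cfg:

```
# HTTP 200 with basic auth means HAProxy and its stats page are up.
curl -su admin:admin -o /dev/null -w '%{http_code}\n' "http://192.168.191.32:10086/admin?stats"
```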