k8s Cluster Deployment v1.15, Part 6: Highly Available kube-apiserver with haproxy + keepalived

Configuring a highly available kube-apiserver with haproxy + keepalived

Note 1:

The kubernetes master nodes run the following components:
kube-apiserver
kube-scheduler
kube-controller-manager

kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process, while the other processes stay blocked.

kube-apiserver can run as multiple instances (three in this document), but the other components need a single, highly available address to reach it. This document uses keepalived and haproxy to give kube-apiserver a VIP with both high availability and load balancing.

What is the main difference between plain cluster mode and haproxy + keepalived? With haproxy + keepalived, a VIP is configured, giving the API a single access address plus load balancing; plain cluster mode configures no VIP.

Note 2:

keepalived provides the VIP through which kube-apiserver is reached.

haproxy listens on the VIP and connects to all kube-apiserver instances behind it, providing health checks and load balancing.

Nodes running keepalived and haproxy are called LB nodes. Because keepalived runs in a one-master, multiple-backup mode, at least two LB nodes are required.

This document reuses the three master machines. The port haproxy listens on (8443) must differ from kube-apiserver's port (6443) to avoid a conflict.

While running, keepalived periodically checks the local haproxy process. If the haproxy process is detected as abnormal, a new master election is triggered and the VIP floats to the newly elected master node, keeping the VIP highly available.

All components (kubectl, apiserver, controller-manager, scheduler, etc.) access the kube-apiserver service through the VIP on haproxy's port 8443.
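In practice that means clients point at the VIP and haproxy's port rather than at any single master. A minimal sketch, assuming the VIP 192.168.174.127 configured later in this document (substitute your own):

```shell
# Point clients at the VIP and haproxy port (8443), not at any single
# kube-apiserver instance on 6443. The VIP is this document's value.
export KUBE_APISERVER="https://192.168.174.127:8443"
echo "${KUBE_APISERVER}"
```

The port in ${KUBE_APISERVER} must match the port haproxy binds in its frontend, configured below.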

1. Install haproxy and keepalived (all three nodes)

yum -y install keepalived haproxy

2. Configure haproxy

Edit the configuration file:

/etc/haproxy/haproxy.cfg

Back up the original configuration file; do this on all three nodes.

[root@k8s-node1 haproxy]# mv haproxy.cfg haproxy.cfg.bk
[root@k8s-node1 haproxy]# ls
haproxy.cfg.bk

Create a new haproxy.cfg; a reference version follows.

haproxy exposes status information on port 1080.

haproxy listens on port 8443 on all interfaces; this port must match the one specified in the ${KUBE_APISERVER} environment variable.

The server lines list the IPs and ports of all kube-apiserver instances.

[root@k8s-node1 haproxy]# cat haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                   tcp
    log                     global
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s
    maxconn                 3000
frontend kube-api
    bind             0.0.0.0:8443
    mode             tcp
    log              global
    default_backend  kube-client

backend         kube-client
    balance     source
    server  k8s-node1 192.168.174.128:6443 check inter 2000 fall 2
    server  k8s-node2 192.168.174.129:6443 check inter 2000 fall 2
    server  k8s-node3 192.168.174.130:6443 check inter 2000 fall 2
listen stats
    mode http
    bind 0.0.0.0:1080
    stats enable
    stats hide-version
    stats uri /haproxyadmin?stats
    stats realm Haproxy\ Statistics
    stats auth admin:admin
    stats admin if TRUE
[root@k8s-node1 haproxy]#

Distribute the configuration file to the other nodes:

[root@k8s-node1 haproxy]# scp haproxy.cfg root@k8s-node2:/etc/haproxy/
haproxy.cfg                                                                                           100% 1025   942.0KB/s   00:00    
[root@k8s-node1 haproxy]# scp haproxy.cfg root@k8s-node3:/etc/haproxy/
haproxy.cfg                                                                                           100% 1025   820.4KB/s   00:00    
[root@k8s-node1 haproxy]#

Start the haproxy service:

systemctl enable haproxy && systemctl start haproxy

Check its status:

[root@k8s-node1 haproxy]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-11-04 02:29:27 EST; 7s ago
 Main PID: 3827 (haproxy-systemd)
    Tasks: 3
   Memory: 1.7M
   CGroup: /system.slice/haproxy.service
           ├─3827 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─3828 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─3829 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Nov 04 02:29:27 k8s-node1 systemd[1]: Started HAProxy Load Balancer.
Nov 04 02:29:27 k8s-node1 haproxy-systemd-wrapper[3827]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/ha...d -Ds
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-node1 haproxy]#

Verify the listening ports:

[root@k8s-node1 haproxy]# netstat -tlnp |grep haproxy
tcp        0      0 0.0.0.0:1080            0.0.0.0:*               LISTEN      3829/haproxy        
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      3829/haproxy

3. Configure keepalived

keepalived runs in a one-master (MASTER), multiple-backup (BACKUP) mode, so there are two kinds of configuration files.

There is a single master configuration; the number of backup configurations depends on the node count. This document plans as follows:

master: 192.168.174.128
backup: 192.168.174.129, 192.168.174.130

Back up the original configuration file; do this on all three nodes.

[root@k8s-node1 keepalived]# pwd
/etc/keepalived
[root@k8s-node1 keepalived]# mv keepalived.conf keepalived.conf.bk

Reference configuration for the master node:

[root@k8s-node1 keepalived]# cat keepalived.conf
global_defs { 
    router_id NodeA 
} 

vrrp_script chk_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 5
        weight 2
}

vrrp_instance VI_1 {
    state MASTER            # this node is the master
    interface ens33         # network interface to monitor
    virtual_router_id 51    # must be identical on master and backups
    priority 100            # master and backups use different priorities; higher value wins
    advert_int 1            # VRRP multicast advertisement interval, in seconds
    authentication {
        auth_type PASS      # VRRP authentication type; must match on all nodes
        auth_pass 1111      # password
    }
    virtual_ipaddress {
        192.168.174.127/24  # VRRP HA virtual address (the VIP)
    }
    track_script {
        chk_haproxy
    }
}

Reference configuration for the backup nodes:

[root@k8s-node2 keepalived]# cat keepalived.conf
global_defs { 
    router_id NodeA 
} 

vrrp_script chk_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 5
        weight 2
}
vrrp_instance VI_1 {
    state BACKUP            # this node is a backup
    interface ens33         # network interface to monitor
    virtual_router_id 51    # must be identical on master and backups
    priority 90             # lower than the master's 100
    advert_int 1            # VRRP multicast advertisement interval, in seconds
    authentication {
        auth_type PASS      # VRRP authentication type; must match on all nodes
        auth_pass 1111      # password
    }
    virtual_ipaddress {
        192.168.174.127/24  # VRRP HA virtual address (the VIP)
    }
    track_script {
        chk_haproxy
    }
}
[root@k8s-node3 keepalived]# cat keepalived.conf
global_defs { 
    router_id NodeA 
} 

vrrp_script chk_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 5
        weight 2
}
vrrp_instance VI_1 {
    state BACKUP            # this node is a backup
    interface ens33         # network interface to monitor
    virtual_router_id 51    # must be identical on master and backups
    priority 80             # lower than node2's 90
    advert_int 1            # VRRP multicast advertisement interval, in seconds
    authentication {
        auth_type PASS      # VRRP authentication type; must match on all nodes
        auth_pass 1111      # password
    }
    virtual_ipaddress {
        192.168.174.127/24  # VRRP HA virtual address (the VIP)
    }
    track_script {
        chk_haproxy
    }
}

The check script. Note that the script's name and location must exactly match what the keepalived configuration specifies.

The script must have execute (x) permission.

[root@k8s-node1 keepalived]# cat check_haproxy.sh 
#!/bin/bash
# If haproxy is down, try to restart it; if it is still down 3 seconds
# later, kill keepalived so the VIP fails over to another node.
A=$(ps -C haproxy --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    systemctl start haproxy.service
fi
sleep 3
if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
    pkill keepalived
fi
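The core of the script is the `ps -C <name> --no-header | wc -l` probe: it counts processes with exactly that command name. A self-contained sketch of the same probe, factored into a function and exercised against a process we start ourselves (the function name and the `sleep` demo are illustrative, not part of the real script):

```shell
#!/bin/bash
# proc_alive NAME: succeed if at least one process with exactly this
# command name is running (the same probe check_haproxy.sh uses for haproxy).
proc_alive() {
    [ "$(ps -C "$1" --no-header | wc -l)" -gt 0 ]
}

# Demo against a process we control instead of haproxy.
sleep 30 &
demo_pid=$!
sleep 1    # give the background process a moment to start
proc_alive sleep && echo "sleep: alive"
kill "$demo_pid"
proc_alive no_such_daemon || echo "no_such_daemon: down"
```

`ps -C` matches on the command name only, so a renamed or wrapped haproxy binary would not be counted; the probe assumes haproxy runs under its usual name.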

Copy the script to the other nodes:

[root@k8s-node1 keepalived]# scp check_haproxy.sh root@k8s-node2:/etc/keepalived/
check_haproxy.sh                                                                                      100%  186   152.3KB/s   00:00    
[root@k8s-node1 keepalived]# scp check_haproxy.sh root@k8s-node3:/etc/keepalived/
check_haproxy.sh

Start keepalived:

[root@k8s-node1 keepalived]# systemctl enable keepalived && systemctl start keepalived
[root@k8s-node1 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-11-04 02:57:19 EST; 19s ago
  Process: 4720 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
    Tasks: 3
   Memory: 1.6M
   CGroup: /system.slice/keepalived.service
           ├─4721 /usr/sbin/keepalived -D
           ├─4722 /usr/sbin/keepalived -D
           └─4723 /usr/sbin/keepalived -D

Nov 04 02:57:21 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:21 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:21 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:21 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127
Nov 04 02:57:26 k8s-node1 Keepalived_vrrp[4723]: Sending gratuitous ARP on ens33 for 192.168.174.127

The VIP is now on the master node:

[root@k8s-node1 keepalived]# ip a |grep -A 3 ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:08:66:a8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.174.128/24 brd 192.168.174.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.174.127/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::7c9d:8cfc:d487:6a38/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Problems encountered

The check script did not execute. Cause:

Be careful: the track_script block must come after the virtual_ipaddress block, or the script will not be executed.
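A minimal skeleton showing the ordering that worked in this setup (values taken from the configurations above; only the block ordering is the point):

```
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.174.127/24
    }
    # track_script placed AFTER virtual_ipaddress -- with it earlier in the
    # instance block, the check script was not executed in this environment.
    track_script {
        chk_haproxy
    }
}
```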
