LVS by itself is only a load balancer: clients reach the back-end web servers through the VIP, so if the LVS director goes down the whole site becomes unreachable. That is a single point of failure, which is why LVS is combined with a high-availability tool such as keepalived. This article shows how to use keepalived to make LVS highly available, using LVS DR mode as the example. In production the content on the back-end real servers is identical; here the two servers serve different pages so the scheduling effect is visible.
Experiment topology:
1. Configure the web service on both Real Servers
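The post does not show the web-server setup commands. A minimal sketch, assuming CentOS with httpd and using the page contents "RS1"/"RS2" only so the two servers can be told apart:

```shell
# On real server 1 (10.1.68.5); use "RS2" on 10.1.68.6.
yum -y install httpd
echo "RS1" > /var/www/html/index.html
systemctl start httpd
curl -s http://127.0.0.1/    # should print RS1
```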
2. Configure the two Real Servers
The configuration script is as follows:
#!/bin/bash
#
VIP='10.1.88.88'
NETMASK='255.255.255.255'
IFACE='lo:0'

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $IFACE $VIP broadcast $VIP netmask $NETMASK up
    route add -host $VIP $IFACE
    ;;
stop)
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $IFACE down
    ;;
*)
    echo "Usage: $(basename $0) {start|stop}"
    exit 1
esac
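To apply the script, run it with `start` on each real server and check the result. The save path below is an assumption; use wherever you stored the script:

```shell
bash /root/lvs_rs.sh start                   # hypothetical save location
ip addr show lo                              # expect 10.1.88.88/32 labeled lo:0
cat /proc/sys/net/ipv4/conf/all/arp_ignore   # expect 1
cat /proc/sys/net/ipv4/conf/all/arp_announce # expect 2
```

With arp_ignore=1 the real server answers ARP requests for the VIP only on the interface that actually holds it (lo), and with arp_announce=2 it never advertises the VIP as a source address, so only the director responds to ARP for the VIP on the LAN.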
3. Configure keepalived
keepalived master configuration:
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1
}

vrrp_script chk_down {            # tracked script definition
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 2                    # run the script every 2 seconds
    weight -6                     # subtract 6 from priority when it fails
}

vrrp_instance VI_1 {              # VRRP instance
    state MASTER                  # initial state
    interface eno16777736         # interface keepalived is bound to
    virtual_router_id 88          # VRID
    priority 100                  # priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.1.88.88/16 dev eno16777736 label eno16777736:0    # VIP
    }
    track_script {
        chk_down
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 10.1.88.88 80 {    # virtual server definition
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80     # used once every real server is down
    real_server 10.1.68.5 80 {    # real server definition
        weight 1
        HTTP_GET {                # real-server health check
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.1.68.6 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
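The `chk_down` script is a common manual-failover switch: it exits non-zero as soon as /etc/keepalived/down exists, so keepalived subtracts the weight (6) from the master's priority (100 becomes 94), dropping it below the backup's 98 and forcing the VIP over. The file-test logic can be verified on its own with any nonexistent path:

```shell
f=$(mktemp -u)                            # a path that does not exist yet
[[ -f "$f" ]] && echo DOWN || echo UP     # prints UP
touch "$f"
[[ -f "$f" ]] && echo DOWN || echo UP     # prints DOWN
rm -f "$f"
```

On a live cluster, `touch /etc/keepalived/down` on the master yields mastership, and `rm /etc/keepalived/down` takes it back (preempt is the default).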
keepalived backup configuration:
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1
}

vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 2
    weight -6
}

vrrp_instance VI_1 {
    state BACKUP                  # the backup director starts as BACKUP
    interface eno16777736
    virtual_router_id 88
    priority 98                   # priority 98, lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.1.88.88/16 dev eno16777736 label eno16777736:0
    }
    track_script {
        chk_down
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 10.1.88.88 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 10.1.68.5 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.1.68.6 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
Configure httpd on both directors to act as the sorry_server
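A minimal sorry page on each director could look like this (the page text is an assumption; anything that distinguishes it from the real servers' pages will do):

```shell
yum -y install httpd
echo "Sorry, the site is under maintenance" > /var/www/html/index.html
systemctl start httpd
```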
Start keepalived on both front-end directors at the same time
service keepalived start    # on the master
service keepalived start    # on the backup
At this point both directors have generated the same ipvs rules.
The VIP is now bound to the master's network interface.
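Both claims can be checked from the command line on the directors:

```shell
ipvsadm -Ln                # both directors list the 10.1.88.88:80 DR service
                           # with real servers 10.1.68.5 and 10.1.68.6
ip addr show eno16777736   # only the master carries 10.1.88.88 (eno16777736:0)
```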
4. Testing
Simulate a failure of one director: the VIP immediately floats to the backup director, and scheduling is not interrupted.
If one real server fails, requests are still scheduled normally to the other real server.
If all back-end real servers fail, the sorry_server answers instead.
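The three scenarios above can be driven with commands like the following sketch (host roles as in the configs above):

```shell
# 1) Director failover: stop keepalived on the master (or touch
#    /etc/keepalived/down); the VIP moves to the backup within seconds.
systemctl stop keepalived            # on the master
ip addr show eno16777736             # on the backup: VIP now present

# 2) One real server down: the health check removes it, and requests
#    go to the surviving server only.
systemctl stop httpd                 # on 10.1.68.5
curl http://10.1.88.88/              # always answered by 10.1.68.6

# 3) All real servers down: the sorry_server takes over.
systemctl stop httpd                 # on 10.1.68.6 as well
curl http://10.1.88.88/              # the director's local sorry page replies
```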