Prerequisites for configuring an HA cluster:
(1) Time must be synchronized across all nodes (ntp or chrony);
(2) Make sure iptables and SELinux do not get in the way;
(3) Nodes can reach each other by hostname (not strictly required for Keepalived);
    using the /etc/hosts file is recommended;
(4) The interfaces used for the cluster service on each node must support MULTICAST;
    class D addresses: 224-239; ip link set dev eth0 multicast on
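The class D range mentioned above is what the `vrrp_mcast_group4 224.0.111.111` setting later in the keepalived.conf relies on. The range check can be sketched as a small shell helper (hypothetical, for illustration only, not part of the original setup):

```shell
# Hypothetical helper: succeeds if an IPv4 address falls in the
# class D multicast range 224.0.0.0 - 239.255.255.255.
is_multicast() {
    local first="${1%%.*}"              # first octet of the dotted quad
    [ "$first" -ge 224 ] && [ "$first" -le 239 ]
}

is_multicast 224.0.111.111 && echo multicast || echo unicast   # -> multicast
is_multicast 192.168.30.7  && echo multicast || echo unicast   # -> unicast
```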
Environment:
WEB1: 192.168.30.17
WEB2: 192.168.30.27
LVS1 + Keepalived: 192.168.30.7, VIP 10.0.0.100
LVS2 + Keepalived: 192.168.30.37, VIP 10.0.0.101
DNS: 172.20.42.27
Router: 192.168.30.208, 10.0.0.200, 172.20.42.200
Client: Windows, IP 172.20.42.222
LVS1 + Keepalived:
1. Network, ifcfg-eth0:
DEVICE=eth0
BOOTPROTO=none
IPADDR=192.168.30.7
PREFIX=24
GATEWAY=192.168.30.208
2. Install keepalived: yum install keepalived -y
3. Prepare the notify script /etc/keepalived/notify.sh:
#!/bin/bash
contact='root@localhost'
notify() {
local mailsubject="$(hostname) to be $1, vip floating"
local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
echo "$mailbody" | mail -s "$mailsubject" $contact
}
case $1 in
master)
notify master
;;
backup)
notify backup
;;
fault)
notify fault
;;
*)
echo "Usage: $(basename $0) {master|backup|fault}"
exit 1
;;
esac
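The script's logic can be smoke-tested locally without a working MTA by stubbing `mail` with a shell function (the stub is an illustration only; the real script sends mail through the local SMTP server):

```shell
# Stub 'mail' so no mail server is needed for the test; it swallows the
# body from stdin and prints the arguments it would have been given.
mail() { cat >/dev/null; echo "would mail: $*"; }

contact='root@localhost'
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}

notify master    # prints: would mail: -s <hostname> to be master, vip floating root@localhost
```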
4. Configure keepalived (/etc/keepalived/keepalived.conf):
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalive@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 224.0.111.111
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass OB0Q67DM
}
virtual_ipaddress {
10.0.0.100/8
}
track_interface {
eth0
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
state BACKUP
interface eth0
virtual_router_id 61
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass 2f118245
}
virtual_ipaddress {
10.0.0.101/8
}
track_interface {
eth0
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 10.0.0.101 80 {
delay_loop 2
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.30.17 80 {
    weight 1
    HTTP_GET {
        url {
            path /
            status_code 200
        }
        connect_timeout 2
        nb_get_retry 3
        delay_before_retry 1
    }
}
real_server 192.168.30.27 80 {
    weight 1
    HTTP_GET {
        url {
            path /
            status_code 200
        }
        connect_timeout 2
        nb_get_retry 3
        delay_before_retry 1
    }
}
}
virtual_server 10.0.0.100 80 {
delay_loop 2
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.30.17 80 {
    weight 1
    HTTP_GET {
        url {
            path /
            status_code 200
        }
        connect_timeout 2
        nb_get_retry 3
        delay_before_retry 1
    }
}
real_server 192.168.30.27 80 {
    weight 1
    HTTP_GET {
        url {
            path /
            status_code 200
        }
        connect_timeout 2
        nb_get_retry 3
        delay_before_retry 1
    }
}
}
5. Start keepalived: systemctl start keepalived
The VIP 10.0.0.100 is now active on this server.
LVS2 + Keepalived:
The configuration steps are the same as on LVS1 + Keepalived; only the master/backup
state and priority of the VRRP instances in the same sync group need to be swapped
accordingly:
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 98
    ...
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 61
    priority 100
    ...
}
Start keepalived: systemctl start keepalived
The VIP 10.0.0.101 is now active on this server.
DNS server:
vim /var/named/blog.com.zone
$TTL 1D
@ IN SOA master.blog.com. admin.blog.com. (
0 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS master
master A 172.20.42.27
www A 10.0.0.100
www A 10.0.0.101
Router:
1. Enable IP forwarding: echo 1 > /proc/sys/net/ipv4/ip_forward
2. Network configuration:
ifcfg-eth0:
DEVICE=eth0
BOOTPROTO=none
IPADDR=192.168.30.208
PREFIX=24
ifcfg-eth0:1:
DEVICE=eth0:1
BOOTPROTO=none
IPADDR=10.0.0.200
PREFIX=8
ifcfg-eth1:
DEVICE=eth1
BOOTPROTO=none
IPADDR=172.20.42.200
PREFIX=16
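Note that the echo into /proc only enables forwarding for the running kernel. The usual companion step (not shown in the original) is to persist the setting across reboots with a sysctl entry:

```
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
net.ipv4.ip_forward = 1
# apply without rebooting: sysctl -p
```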
WEB1 (real server):
1. Prepare and run the DR-mode setup script setpara.sh:
#!/bin/bash
vip1="10.0.0.100"
vip2="10.0.0.101"
mask="255.255.255.255"
iface1="lo:0"
iface2="lo:1"
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig $iface1 $vip1 netmask $mask broadcast $vip1 up
ifconfig $iface2 $vip2 netmask $mask broadcast $vip2 up
;;
stop)
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig $iface1 down
ifconfig $iface2 down
;;
esac
2. Install Apache: yum install httpd -y
3. Generate the home page: echo web1 > /var/www/html/index.html
WEB2: same steps as WEB1, except: echo web2 > /var/www/html/index.html
Testing:
Install ipvsadm on both LVS1+Keepalived and LVS2+Keepalived: yum install ipvsadm -y
1. Run ipvsadm -Ln on both servers:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 rr persistent 50
  -> 192.168.30.17:80             Route   1      0          0
  -> 192.168.30.27:80             Route   1      0          0
TCP  10.0.0.101:80 rr persistent 50
  -> 192.168.30.17:80             Route   1      0          0
  -> 192.168.30.27:80             Route   1      0          0
2. No matter which node's keepalived is stopped, the VIP floats to the other
server and the service keeps working normally.
Keepalived + Nginx:
Keepalived has no direct configuration support for Nginx (the way virtual_server
supports LVS). Instead, vrrp_script{} probes the state of the key process and,
depending on the result, dynamically adjusts the instance's priority, which in
turn decides whether the VIP floats. notify.sh can also be used to restart Nginx,
e.g. in the backup branch: backup) systemctl restart nginx; notify backup ;;
Two steps are needed: 1. define a script; 2. call that script from the vrrp instance.
yum install nginx -y
vim /etc/nginx/conf.d/www.conf
upstream websrvs {
    server 192.168.30.17:80;
    server 192.168.30.27:80;
}
server {
    listen 80 default_server;
    server_name 192.168.30.7;
    root /usr/share/nginx/html;
    location / {
        proxy_pass http://websrvs;
    }
}
On the second server:
upstream websrvs {
    server 192.168.30.17:80;
    server 192.168.30.27:80;
}
server {
    listen 80 default_server;
    server_name 192.168.30.37;
    root /usr/share/nginx/html;
    location / {
        proxy_pass http://websrvs;
    }
}
vrrp_script ngxhealth {
    script "killall -0 nginx && exit 0 || exit 1"
    interval 1
    weight -5
}
Call the script from both vrrp_instance VI_1 and vrrp_instance VI_2:
track_script {
    ngxhealth
}
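The weight -5 is what makes the failover arithmetic work: when the check fails, keepalived lowers the instance's priority by 5, so the MASTER (priority 100) drops below the BACKUP (priority 98) and the VIP moves. Also note that signal 0 (as in killall -0) sends no signal at all; it only tests that the process exists. A sketch of both points (illustrative only, not keepalived itself):

```shell
master_prio=100   # priority of the MASTER instance
backup_prio=98    # priority of the BACKUP instance on the peer
weight=-5         # applied by keepalived when the vrrp_script check fails

effective=$((master_prio + weight))   # 100 + (-5) = 95
if [ "$effective" -lt "$backup_prio" ]; then
    echo "check failed: effective priority $effective < $backup_prio, VIP floats to the peer"
fi

# Signal 0 only tests for process existence, it delivers nothing;
# here we probe the current shell by PID instead of nginx by name.
kill -0 $$ && echo "process $$ is alive"
```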
If keepalived is stopped on one server, the VIP moves to the other server and the web page remains accessible.