Because MFS has a single point of failure and relies on manual backups, we can combine it with Keepalived to improve availability. Starting from the environment described in the earlier post "MooseFS (MFS) distributed shared-storage deployment on CentOS", only the following changes are needed:
1) Use the master-server as Keepalived_MASTER (running mfsmaster and mfscgiserv).
2) Use the metalogger machine as Keepalived_BACKUP (also running mfsmaster and mfscgiserv).
3) Change the MASTER_HOST parameter configured on the ChunkServer machines to the VIP address.
4) Change the master IP that clients mount to the VIP address.
After these adjustments, the hosts bindings on Keepalived_MASTER and Keepalived_BACKUP also need to be updated accordingly.
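The hosts bindings mentioned above might look like the sketch below. The hostnames are illustrative assumptions (the original post does not list them); only the IP/VIP values come from this deployment.

```shell
# /etc/hosts entries on both Keepalived nodes (hostnames are assumptions)
182.48.115.233  mfs-master-node     # Keepalived_MASTER
182.48.115.235  mfs-backup-node     # Keepalived_BACKUP
182.48.115.239  mfsmaster           # VIP: chunkservers/clients can resolve the master via this name
```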
Design principles and approach
1) Since version 1.6.5, an mfsmaster failure can be recovered from the changelog_ml.*.mfs logs and the metadata.mfs.back file produced by mfsmetalogger, using the mfsmetarestore command.
2) Periodically fetch metadata.mfs.back from mfsmaster so it is available for master recovery.
3) When Keepalived MASTER detects that the mfsmaster process has died, it runs the monitor script, which automatically restarts mfsmaster; if the restart fails, it force-kills the keepalived and mfscgiserv processes, which moves the VIP over to the BACKUP.
4) When Keepalived MASTER recovers, it preempts the VIP back from the BACKUP and resumes serving.
5) The whole switchover completes within 2-5 seconds, depending on the check interval.
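The recovery path in step 1 can be sketched as a single restore command. This is a hedged sketch: the /usr/local/mfs prefix matches the paths used later in this post, but verify the exact option syntax against your mfsmetarestore version.

```shell
# Rebuild metadata.mfs on the standby from the last metadata.mfs.back plus
# the metalogger's changelogs (classic mfsmetarestore usage; paths assumed).
cd /usr/local/mfs/var/mfs
/usr/local/mfs/sbin/mfsmetarestore -m metadata.mfs.back -o metadata.mfs changelog_ml.*.mfs
```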
The architecture topology diagram is as follows (image not reproduced in this text version):
1) Steps on the Keepalived_MASTER (mfs master) machine
The installation and configuration of the MFS master and metalogger servers is already documented in detail in the other post, so it is not repeated here. Below is the Keepalived installation and configuration:
-----------------------------------------------------------------------------------------------------------------------
Install Keepalived
[root@Keepalived_MASTER ~]# yum install -y openssl-devel popt-devel
[root@Keepalived_MASTER ~]# cd /usr/local/src/
[root@Keepalived_MASTER src]# wget http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
[root@Keepalived_MASTER src]# tar -zvxf keepalived-1.3.5.tar.gz
[root@Keepalived_MASTER src]# cd keepalived-1.3.5
[root@Keepalived_MASTER keepalived-1.3.5]# ./configure --prefix=/usr/local/keepalived
[root@Keepalived_MASTER keepalived-1.3.5]# make && make install
[root@Keepalived_MASTER keepalived-1.3.5]# cp /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@Keepalived_MASTER keepalived-1.3.5]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@Keepalived_MASTER keepalived-1.3.5]# mkdir /etc/keepalived/
[root@Keepalived_MASTER keepalived-1.3.5]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@Keepalived_MASTER keepalived-1.3.5]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@Keepalived_MASTER keepalived-1.3.5]# echo "/etc/init.d/keepalived start" >> /etc/rc.local
[root@Keepalived_MASTER keepalived-1.3.5]# chmod +x /etc/rc.d/init.d/keepalived      # make the init script executable
[root@Keepalived_MASTER keepalived-1.3.5]# chkconfig keepalived on                   # start on boot
[root@Keepalived_MASTER keepalived-1.3.5]# service keepalived start                  # start
[root@Keepalived_MASTER keepalived-1.3.5]# service keepalived stop                   # stop
[root@Keepalived_MASTER keepalived-1.3.5]# service keepalived restart                # restart

Configure Keepalived
[root@Keepalived_MASTER ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-bak
[root@Keepalived_MASTER ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MFS_HA_MASTER
}

vrrp_script chk_mfs {
   script "/usr/local/mfs/keepalived_check_mfsmaster.sh"
   interval 2
   weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_mfs
    }
    virtual_ipaddress {
        182.48.115.239
    }
    notify_master "/etc/keepalived/clean_arp.sh 182.48.115.239"
}

Next, write the monitor script:
[root@Keepalived_MASTER ~]# vim /usr/local/mfs/keepalived_check_mfsmaster.sh
#!/bin/bash
A=`ps -C mfsmaster --no-header | wc -l`
if [ $A -eq 0 ];then
    /etc/init.d/mfsmaster start
    sleep 3
    if [ `ps -C mfsmaster --no-header | wc -l` -eq 0 ];then
        /usr/bin/killall -9 mfscgiserv
        /usr/bin/killall -9 keepalived
    fi
fi

[root@Keepalived_MASTER ~]# chmod 755 /usr/local/mfs/keepalived_check_mfsmaster.sh

Script that refreshes the ARP record for the virtual server (VIP) address at the gateway:
[root@Keepalived_MASTER ~]# vim /etc/keepalived/clean_arp.sh
#!/bin/sh
VIP=$1
GATEWAY=182.48.115.254      # gateway address
/sbin/arping -I eth0 -c 5 -s $VIP $GATEWAY &>/dev/null

[root@Keepalived_MASTER ~]# chmod 755 /etc/keepalived/clean_arp.sh

Start keepalived (make sure both the mfs master service and the keepalived service are running on the Keepalived_MASTER machine):
[root@Keepalived_MASTER ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@Keepalived_MASTER ~]# ps -ef|grep keepalived
root  28718      1  0 13:09 ?      00:00:00 keepalived -D
root  28720  28718  0 13:09 ?      00:00:00 keepalived -D
root  28721  28718  0 13:09 ?      00:00:00 keepalived -D
root  28763  27466  0 13:09 pts/0  00:00:00 grep keepalived

Check the VIP:
[root@Keepalived_MASTER ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:09:21:60 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.233/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.239/32 scope global eth0
    inet6 fe80::5054:ff:fe09:2160/64 scope link
       valid_lft forever preferred_lft forever
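The monitor script's decision flow can be dry-run on any machine by factoring out the process probe. This is a sketch of the same logic (the `decide` function name and the string results are mine, not from the original script):

```shell
#!/bin/sh
# Same decision flow as keepalived_check_mfsmaster.sh, with the two
# "ps -C mfsmaster --no-header | wc -l" probe results passed in as arguments
# so the branching can be exercised without a live mfsmaster.
decide() {
  before=$1   # mfsmaster process count before the restart attempt
  after=$2    # process count ~3s after "/etc/init.d/mfsmaster start"
  if [ "$before" -eq 0 ]; then
    if [ "$after" -eq 0 ]; then
      echo "failover"    # restart failed: kill keepalived/mfscgiserv, release the VIP
    else
      echo "recovered"   # restart succeeded: keep the VIP here
    fi
  else
    echo "healthy"       # mfsmaster was running all along
  fi
}
```

Keepalived runs the real script every `interval` seconds (2s here), so a failed master is detected and either restarted or failed over within a few seconds.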
2) Steps on the Keepalived_BACKUP (mfs master) machine
In the other post this machine served as the metalogger (metadata log server); in this high-availability setup it is repurposed as Keepalived_BACKUP. That is, drop the metalogger deployment and deploy mfs master on it directly (see the other post for the procedure). The Keepalived configuration on Keepalived_BACKUP follows:

Install Keepalived
[root@Keepalived_BACKUP ~]# yum install -y openssl-devel popt-devel
[root@Keepalived_BACKUP ~]# cd /usr/local/src/
[root@Keepalived_BACKUP src]# wget http://www.keepalived.org/software/keepalived-1.3.5.tar.gz
[root@Keepalived_BACKUP src]# tar -zvxf keepalived-1.3.5.tar.gz
[root@Keepalived_BACKUP src]# cd keepalived-1.3.5
[root@Keepalived_BACKUP keepalived-1.3.5]# ./configure --prefix=/usr/local/keepalived
[root@Keepalived_BACKUP keepalived-1.3.5]# make && make install
[root@Keepalived_BACKUP keepalived-1.3.5]# cp /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@Keepalived_BACKUP keepalived-1.3.5]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@Keepalived_BACKUP keepalived-1.3.5]# mkdir /etc/keepalived/
[root@Keepalived_BACKUP keepalived-1.3.5]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@Keepalived_BACKUP keepalived-1.3.5]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@Keepalived_BACKUP keepalived-1.3.5]# echo "/etc/init.d/keepalived start" >> /etc/rc.local
[root@Keepalived_BACKUP keepalived-1.3.5]# chmod +x /etc/rc.d/init.d/keepalived      # make the init script executable
[root@Keepalived_BACKUP keepalived-1.3.5]# chkconfig keepalived on                   # start on boot
[root@Keepalived_BACKUP keepalived-1.3.5]# service keepalived start                  # start
[root@Keepalived_BACKUP keepalived-1.3.5]# service keepalived stop                   # stop
[root@Keepalived_BACKUP keepalived-1.3.5]# service keepalived restart                # restart

Configure Keepalived
[root@Keepalived_BACKUP ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-bak
[root@Keepalived_BACKUP ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MFS_HA_BACKUP
}

vrrp_script chk_mfs {
   script "/usr/local/mfs/keepalived_check_mfsmaster.sh"
   interval 2
   weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_mfs
    }
    virtual_ipaddress {
        182.48.115.239
    }
    notify_master "/etc/keepalived/clean_arp.sh 182.48.115.239"
}

Next, write the monitor script. It must be saved under the path the vrrp_script block references, /usr/local/mfs/keepalived_check_mfsmaster.sh:
[root@Keepalived_BACKUP ~]# vim /usr/local/mfs/keepalived_check_mfsmaster.sh
#!/bin/bash
A=`ps -C mfsmaster --no-header | wc -l`
if [ $A -eq 0 ];then
    /etc/init.d/mfsmaster start
    sleep 3
    if [ `ps -C mfsmaster --no-header | wc -l` -eq 0 ];then
        /usr/bin/killall -9 mfscgiserv
        /usr/bin/killall -9 keepalived
    fi
fi

[root@Keepalived_BACKUP ~]# chmod 755 /usr/local/mfs/keepalived_check_mfsmaster.sh

Script that refreshes the ARP record for the virtual server (VIP) address at the gateway (both Keepalived machines need this):
[root@Keepalived_BACKUP ~]# vim /etc/keepalived/clean_arp.sh
#!/bin/sh
VIP=$1
GATEWAY=182.48.115.254
/sbin/arping -I eth0 -c 5 -s $VIP $GATEWAY &>/dev/null

Start keepalived
[root@Keepalived_BACKUP ~]# /etc/init.d/keepalived start
Starting keepalived:
[root@Keepalived_BACKUP ~]# ps -ef|grep keepalived
root  17565      1  0 11:06 ?      00:00:00 keepalived -D
root  17567  17565  0 11:06 ?      00:00:00 keepalived -D
root  17568  17565  0 11:06 ?      00:00:00 keepalived -D
root  17778  17718  0 13:47 pts/1  00:00:00 grep keepalived

Make sure both the mfs master service and the keepalived service are running on the Keepalived_BACKUP machine!
3) ChunkServer configuration
Only the MASTER_HOST parameter in the mfschunkserver.cfg file needs to be changed to 182.48.115.239, i.e. the VIP address. No other settings need to change. Then restart the mfschunkserver service.
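With many ChunkServers, this one-line change is easy to script. A minimal sketch (the function name is mine, and the config path varies by build, so pass your own):

```shell
#!/bin/sh
# Point a chunkserver config at a new master address. Handles both an
# active "MASTER_HOST = x" line and a commented-out "# MASTER_HOST = x" default.
set_master_host() {
  cfg=$1; vip=$2
  sed -i "s/^#\{0,1\}[[:space:]]*MASTER_HOST[[:space:]]*=.*/MASTER_HOST = $vip/" "$cfg"
}
```

Run it as e.g. `set_master_host /path/to/mfschunkserver.cfg 182.48.115.239`, then restart the mfschunkserver service.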
4) Client configuration
Only the metadata server IP in the mount command needs to be changed to 182.48.115.239, i.e. the VIP address:
[root@clinet-server ~]# mkdir /mnt/mfs
[root@clinet-server ~]# mkdir /mnt/mfsmeta
[root@clinet-server ~]# /usr/local/mfs/bin/mfsmount /mnt/mfs -H 182.48.115.239
mfsmaster accepted connection with parameters: read-write,restricted_ip,admin ; root mapped to root:root
[root@clinet-server ~]# /usr/local/mfs/bin/mfsmount -m /mnt/mfsmeta -H 182.48.115.239
mfsmaster accepted connection with parameters: read-write,restricted_ip
[root@clinet-server ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  8.3G  3.8G  4.1G  49% /
tmpfs                         499M  228K  498M   1% /dev/shm
/dev/vda1                     477M   35M  418M   8% /boot
/dev/sr0                      3.7G  3.7G     0 100% /media/CentOS_6.8_Final
182.48.115.239:9421           107G   42G   66G  39% /mnt/mfs
[root@clinet-server ~]# mount
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
/dev/sr0 on /media/CentOS_6.8_Final type iso9660 (ro,nosuid,nodev,uhelper=udisks,uid=0,gid=0,iocharset=utf8,mode=0400,dmode=0500)
182.48.115.239:9421 on /mnt/mfs type fuse.mfs (rw,nosuid,nodev,allow_other)
182.48.115.239:9421 on /mnt/mfsmeta type fuse.mfsmeta (rw,nosuid,nodev,allow_other)

Verify that reads and writes work on the mounted MFS filesystem (the delete/undelete round-trip also exercises the trash mechanism exposed through the mfsmeta mount):
[root@clinet-server ~]# cd /mnt/mfs
[root@clinet-server mfs]# echo "12312313" > test.txt
[root@clinet-server mfs]# cat test.txt
12312313
[root@clinet-server mfs]# rm -f test.txt
[root@clinet-server mfs]# cd ../mfsmeta/trash/
[root@clinet-server trash]# find . -name "*test*"
./003/00000003|test.txt
[root@clinet-server trash]# cd ./003/
[root@clinet-server 003]# ls
00000003|test.txt  undel
[root@clinet-server 003]# mv 00000003\|test.txt undel/
[root@clinet-server 003]# ls /mnt/mfs
test.txt
[root@clinet-server 003]# cat /mnt/mfs/test.txt
12312313

This confirms that data access through the mounted MFS share works correctly.
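If the client should remount via the VIP at boot, one simple approach, consistent with the rc.local trick used for keepalived above, is an rc.local entry (a sketch; it assumes the mount points already exist and the network is up by the time rc.local runs):

```shell
# /etc/rc.local additions on the client (sketch)
/usr/local/mfs/bin/mfsmount /mnt/mfs -H 182.48.115.239
/usr/local/mfs/bin/mfsmount -m /mnt/mfsmeta -H 182.48.115.239
```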
5) iptables firewall settings on Keepalived_MASTER and Keepalived_BACKUP
In this experiment the iptables firewall was disabled on Keepalived_MASTER and Keepalived_BACKUP. If iptables is enabled, both machines need rules like the following. Use the "ss -l" command to see which ports the host is listening on:
[root@Keepalived_MASTER ~]# ss -l
State      Recv-Q Send-Q      Local Address:Port     Peer Address:Port
LISTEN     0      100                     *:9419                *:*
LISTEN     0      100                     *:9420                *:*
LISTEN     0      100                     *:9421                *:*
LISTEN     0      50                      *:9425                *:*
LISTEN     0      128                    :::ssh                :::*
LISTEN     0      128                     *:ssh                 *:*
LISTEN     0      100                   ::1:smtp               :::*
LISTEN     0      100             127.0.0.1:smtp                *:*

[root@Keepalived_MASTER ~]# vim /etc/sysconfig/iptables
........
-A INPUT -s 182.48.115.0/24 -d 224.0.0.18 -j ACCEPT       # allow multicast to the VRRP group address (adjust the source subnet to your own). Without these two rules, the VIP will not move back to Keepalived_MASTER after it recovers
-A INPUT -s 182.48.115.0/24 -p vrrp -j ACCEPT             # allow VRRP (Virtual Router Redundancy Protocol) traffic
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9419 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9420 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9421 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9425 -j ACCEPT

[root@Keepalived_MASTER ~]# /etc/init.d/iptables start
6) Data synchronization script for after a failover
The configuration above ensures that when Keepalived_MASTER fails (its keepalived service stops), the VIP moves to Keepalived_BACKUP, and when Keepalived_MASTER recovers (keepalived starts again) it preempts the VIP back. But that only moves the VIP: how is the MFS metadata synchronized? A sync script is placed on each of the two machines (Keepalived_MASTER and Keepalived_BACKUP must first establish mutual passwordless ssh trust).

On Keepalived_MASTER:
[root@Keepalived_MASTER ~]# vim /usr/local/mfs/MFS_DATA_Sync.sh
#!/bin/bash
A=`ip addr|grep 182.48.115.239|awk -F" " '{print $2}'|cut -d"/" -f1`
if [ "$A" == "182.48.115.239" ];then
   /etc/init.d/mfsmaster stop
   /bin/rm -f /usr/local/mfs/var/mfs/*
   /usr/bin/rsync -e "ssh -p22" -avpgolr 182.48.115.235:/usr/local/mfs/var/mfs/* /usr/local/mfs/var/mfs/
   /usr/local/mfs/sbin/mfsmetarestore -m
   /etc/init.d/mfsmaster -a
   sleep 3
   echo "this server has become the master of MFS"
else
   echo "this server is still MFS's slave"
fi

On Keepalived_BACKUP (identical except it pulls from the other node):
[root@Keepalived_BACKUP ~]# vim /usr/local/mfs/MFS_DATA_Sync.sh
#!/bin/bash
A=`ip addr|grep 182.48.115.239|awk -F" " '{print $2}'|cut -d"/" -f1`
if [ "$A" == "182.48.115.239" ];then
   /etc/init.d/mfsmaster stop
   /bin/rm -f /usr/local/mfs/var/mfs/*
   /usr/bin/rsync -e "ssh -p22" -avpgolr 182.48.115.233:/usr/local/mfs/var/mfs/* /usr/local/mfs/var/mfs/
   /usr/local/mfs/sbin/mfsmetarestore -m
   /etc/init.d/mfsmaster -a
   sleep 3
   echo "this server has become the master of MFS"
else
   echo "this server is still MFS's slave"
fi

In other words, whenever the VIP lands on this node, run the script to pull the other side's metadata over.
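One caveat in the script above: grep-ing `ip addr` for the bare address would also match a longer address such as 182.48.115.2391. A stricter VIP check can be written so it is testable on canned `ip addr` output (the `holds_vip` function name is mine):

```shell
#!/bin/sh
# Succeed iff the given VIP appears as an interface address in
# "ip addr"-style text read from stdin (anchored on "inet <vip>/").
holds_vip() {
  grep -q "inet $1/"
}
```

On a live node the sync script could then branch on `ip addr | holds_vip 182.48.115.239` instead of the grep/awk/cut pipeline.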
7) Failover testing
1) Stop the mfsmaster service on Keepalived_MASTER.
Because of the monitor script defined in keepalived.conf, keepalived restarts mfsmaster automatically whenever the process is missing; only if that restart fails does it killall the keepalived and mfscgiserv processes.
[root@Keepalived_MASTER ~]# /etc/init.d/mfsmaster stop
sending SIGTERM to lock owner (pid:29266)
waiting for termination ... terminated

After mfsmaster is stopped, it is restarted automatically:
[root@Keepalived_MASTER ~]# ps -ef|grep mfs
root  26579      1   0 16:00 ?  00:00:00 /usr/bin/python /usr/local/mfs/sbin/mfscgiserv start
root  30389  30388   0 17:18 ?  00:00:00 /bin/bash /usr/local/mfs/keepalived_check_mfsmaster.sh
mfs   30395      1  71 17:18 ?  00:00:00 /etc/init.d/mfsmaster start

By default the VIP sits on Keepalived_MASTER:
[root@Keepalived_MASTER ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:09:21:60 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.233/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.239/32 scope global eth0
    inet6 fe80::5054:ff:fe09:2160/64 scope link
       valid_lft forever preferred_lft forever

Mount on the client via the VIP address and check the data:
[root@clinet-server ~]# cd /mnt/mfs
[root@clinet-server mfs]# ll
total 4
-rw-r--r-- 1 root root 9 May 24 17:11 grace
-rw-r--r-- 1 root root 9 May 24 17:11 grace1
-rw-r--r-- 1 root root 9 May 24 17:11 grace2
-rw-r--r-- 1 root root 9 May 24 17:11 grace3
-rw-r--r-- 1 root root 9 May 24 17:10 kevin
-rw-r--r-- 1 root root 9 May 24 17:10 kevin1
-rw-r--r-- 1 root root 9 May 24 17:10 kevin2
-rw-r--r-- 1 root root 9 May 24 17:10 kevin3

Now stop keepalived itself (at this point mfsmaster will no longer be restarted automatically after stopping, because keepalived is down and the monitor script no longer runs):
[root@Keepalived_MASTER ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@Keepalived_MASTER ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:09:21:60 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.233/27 brd 182.48.115.255 scope global eth0
    inet6 fe80::5054:ff:fe09:2160/64 scope link
       valid_lft forever preferred_lft forever

Once keepalived on Keepalived_MASTER is stopped, the VIP is no longer on it. The system log shows the VIP being transferred:
[root@Keepalived_MASTER ~]# tail -1000 /var/log/messages
.......
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 182.48.115.239
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: Sending gratuitous ARP on eth0 for 182.48.115.239
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: Sending gratuitous ARP on eth0 for 182.48.115.239
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: Sending gratuitous ARP on eth0 for 182.48.115.239
May 24 17:11:19 centos6-node1 Keepalived_vrrp[29184]: Sending gratuitous ARP on eth0 for 182.48.115.239

On Keepalived_BACKUP, the VIP has arrived:
[root@Keepalived_BACKUP ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:82:69:69 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.235/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.239/32 scope global eth0
    inet6 fe80::5054:ff:fe82:6969/64 scope link
       valid_lft forever preferred_lft forever

Its log also shows the VIP transfer:
[root@Keepalived_BACKUP ~]# tail -1000 /var/log/messages
.......
May 24 17:27:57 centos6-node2 Keepalived_vrrp[5254]: VRRP_Instance(VI_1) Received advert with higher priority 100, ours 99
May 24 17:27:57 centos6-node2 Keepalived_vrrp[5254]: VRRP_Instance(VI_1) Entering BACKUP STATE
May 24 17:27:57 centos6-node2 Keepalived_vrrp[5254]: VRRP_Instance(VI_1) removing protocol VIPs.

Mounting again on the client, the data is missing, so the sync script above needs to run:
[root@clinet-server mfs]# ll
[root@clinet-server mfs]#

[root@Keepalived_BACKUP ~]# sh -x /usr/local/mfs/MFS_DATA_Sync.sh

Mount again on the client and check the data:
[root@clinet-server mfs]# ll
total 4
-rw-r--r--. 1 root root 9 May 24 17:11 grace
-rw-r--r--. 1 root root 9 May 24 17:11 grace1
-rw-r--r--. 1 root root 9 May 24 17:11 grace2
-rw-r--r--. 1 root root 9 May 24 17:11 grace3
-rw-r--r--. 1 root root 9 May 24 17:10 kevin
-rw-r--r--. 1 root root 9 May 24 17:10 kevin1
-rw-r--r--. 1 root root 9 May 24 17:10 kevin2
-rw-r--r--. 1 root root 9 May 24 17:10 kevin3

The data has been synchronized over. Now update the data:
[root@clinet-server mfs]# rm -f ./*
[root@clinet-server mfs]# echo "123123" > wangshibo
[root@clinet-server mfs]# echo "123123" > wangshibo1
[root@clinet-server mfs]# echo "123123" > wangshibo2
[root@clinet-server mfs]# echo "123123" > wangshibo3
[root@clinet-server mfs]# echo "123123" > wangshibo4
[root@clinet-server mfs]# ll
total 3
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo1
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo2
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo3
-rw-r--r--. 1 root root 7 May 24 17:26 wangshibo4

2) Restore the keepalived process on Keepalived_MASTER:
[root@Keepalived_MASTER ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@Keepalived_MASTER ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:09:21:60 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.233/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.239/32 scope global eth0
    inet6 fe80::5054:ff:fe09:2160/64 scope link
       valid_lft forever preferred_lft forever

Once keepalived is running again on Keepalived_MASTER, it preempts the VIP back; /var/log/messages shows the transfer. Checking Keepalived_BACKUP again, the VIP is gone:
[root@Keepalived_BACKUP ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:82:69:69 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.235/27 brd 182.48.115.255 scope global eth0
    inet6 fe80::5054:ff:fe82:6969/64 scope link
       valid_lft forever preferred_lft forever

Mounting on the client (via the VIP address), the data is stale again:
[root@clinet-server mfs]# ll
total 4
-rw-r--r--. 1 root root 9 May 24 17:11 grace
-rw-r--r--. 1 root root 9 May 24 17:11 grace1
-rw-r--r--. 1 root root 9 May 24 17:11 grace2
-rw-r--r--. 1 root root 9 May 24 17:11 grace3
-rw-r--r--. 1 root root 9 May 24 17:10 kevin
-rw-r--r--. 1 root root 9 May 24 17:10 kevin1
-rw-r--r--. 1 root root 9 May 24 17:10 kevin2
-rw-r--r--. 1 root root 9 May 24 17:10 kevin3

So the sync must now be run on Keepalived_MASTER:
[root@Keepalived_MASTER ~]# sh -x /usr/local/mfs/MFS_DATA_Sync.sh

Mounting again on the client, the data is now in sync:
[root@xqsj_web3 mfs]# ll
total 3
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo1
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo2
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo3
-rw-r--r-- 1 root root 7 May 24 17:26 wangshibo4
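The manual write/read check repeated throughout this test can be wrapped in a small probe. This is a sketch (the `probe_mount` name is mine); point it at the MFS mount point after a failover, or at any plain directory to dry-run the logic:

```shell
#!/bin/sh
# Write a probe file, read it back, clean up. Succeeds only if the full
# round-trip works -- handy right after a failover to confirm the
# VIP-mounted share is writable and readable again.
probe_mount() {
  dir=$1
  f="$dir/failover_probe.$$"
  echo "probe" > "$f" || return 1
  [ "$(cat "$f")" = "probe" ] || { rm -f "$f"; return 1; }
  rm -f "$f"
}
```

For example: `probe_mount /mnt/mfs && echo "MFS share OK"`.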