1. Introduction
MHA is an excellent piece of high-availability software for failover and master-slave promotion in MySQL environments. During a MySQL failover, MHA can complete the switchover automatically within 0-30 seconds, and while doing so it preserves data consistency as far as possible, achieving high availability in the true sense.
The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node):
MHA Manager can be deployed on a dedicated machine to manage several master-slave clusters, or it can run on one of the slaves. MHA Manager probes the nodes of the cluster; when it detects that the master has failed, it automatically promotes the slave holding the most recent data to be the new master and re-points all other slaves to it. The whole failover is transparent to the application.
MHA Node runs on every MySQL server (master/slave/manager); it speeds up failover with scripts that parse and purge logs.
MHA currently targets a one-master, multi-slave topology. A replication cluster for MHA needs at least three database servers, one master and two slaves: one acting as the master, one as the candidate (standby) master, and one as an ordinary slave.
GTID-based replication has been supported since MHA 0.56.
Reminders:
Be sure to disable the firewall or open the required ports.
Sync the system time (optional):
Check whether the timezone is CST (Asia/Shanghai); if not, change it: ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Sync with an NTP server: ntpdate -u asia.pool.ntp.org
Note: before building MHA, make sure the master and slave data are consistent, i.e. both sides are at the same binlog execution state. Otherwise the slave cannot be re-pointed to the new master automatically during a failover, and the records pointing at the old master have to be cleaned up by hand. To avoid this, always verify that the slave's binlog execution state matches the master's.
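As a quick sanity check before starting (a minimal sketch; the root/123456 credentials match the MHA configuration used later in this article):

# On the master: record the current binlog file and position
mysql -uroot -p123456 -e "SHOW MASTER STATUS\G"
# On each slave: both replication threads should be running, with no errors and no lag
mysql -uroot -p123456 -e "SHOW SLAVE STATUS\G" | egrep "Master_Log_File|Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error"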
Environment       | Version / IP | Role
CentOS            | 7            |
MHA               | 0.57         |
Master            | 50.116       | write
Candidate master  | 50.115       | read
slave_Manager     | 50.28        | read
Note: the setup below follows these two blog posts (some of the log output was copied from them rather than reproduced here):
http://www.cnblogs.com/gomysql/p/3675429.html
http://www.cnblogs.com/xuanzhi201111/p/4231412.html?spm=5176.100239.blogcont52048.7.HR7na7
2. Installing MHA
1.) Install the Perl module that MHA Node depends on (DBD::mysql):
rpm -ivh http://dl.fedoraproject.org/pub/epel/7Server/x86_64/e/epel-release-7-10.noarch.rpm
yum install perl-DBD-MySQL -y
2.) Install the dependencies needed by MHA Node (on all nodes):
yum install -y perl-devel
yum install -y perl-CPAN
3.) The MHA page maintained by Google has not been updated since 2012; for a newer version you have to find the packages yourself (links are given above).
Before installing the Manager, install these dependencies:
yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y
Download the Node and Manager packages, extract them, enter each directory and install as follows:
perl Makefile.PL
make && make install
After the Node package is installed, the following files appear under /usr/local/bin/:
-r-xr-xr-x 1 root root 15498 Jan 18 11:02 apply_diff_relay_logs
-r-xr-xr-x 1 root root  4807 Jan 18 11:02 filter_mysqlbinlog
-r-xr-xr-x 1 root root  7401 Jan 18 11:02 purge_relay_logs
-r-xr-xr-x 1 root root  7263 Jan 18 11:02 save_binary_logs
Node script descriptions (these tools are normally invoked by MHA Manager scripts; no manual action is needed):
save_binary_logs       // save and copy the master's binary logs
apply_diff_relay_logs  // identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog     // strip unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs       // purge relay logs (does not block the SQL thread)
After the Manager package is installed, the following files appear under /usr/local/bin:
-r-xr-xr-x. 1 root root 15498 Jan 11 22:55 apply_diff_relay_logs
-r-xr-xr-x. 1 root root  4807 Jan 11 22:55 filter_mysqlbinlog
-r-xr-xr-x. 1 root root  1995 Jan 11 22:55 masterha_check_repl
-r-xr-xr-x. 1 root root  1779 Jan 11 22:55 masterha_check_ssh
-r-xr-xr-x. 1 root root  1865 Jan 11 22:55 masterha_check_status
-r-xr-xr-x. 1 root root  3201 Jan 11 22:55 masterha_conf_host
-r-xr-xr-x. 1 root root  2517 Jan 11 22:55 masterha_manager
-r-xr-xr-x. 1 root root  2165 Jan 11 22:55 masterha_master_monitor
-r-xr-xr-x. 1 root root  2373 Jan 11 22:55 masterha_master_switch
-r-xr-xr-x. 1 root root  3749 Jan 11 22:55 masterha_secondary_check
-r-xr-xr-x. 1 root root  1739 Jan 11 22:55 masterha_stop
-r-xr-xr-x. 1 root root  7401 Jan 11 22:55 purge_relay_logs
-r-xr-xr-x. 1 root root  7263 Jan 11 22:55 save_binary_logs
Copy the scripts in the mha4mysql-manager-0.53/samples/scripts/ directory to /usr/local/bin:
-rwxr-xr-x. 1 root root  3443 Jan 8 2012 master_ip_failover       // manages the VIP during automatic failover; not required — with keepalived you can write your own VIP-management script, e.g. monitor MySQL and stop keepalived when MySQL fails so the VIP floats away automatically
-rwxr-xr-x. 1 root root  9186 Jan 8 2012 master_ip_online_change  // manages the VIP during an online switchover; not required, a simple shell script can do the same
-rwxr-xr-x. 1 root root 11867 Jan 8 2012 power_manager            // powers off the host after a failure; not required
-rwxr-xr-x. 1 root root  1360 Jan 8 2012 send_report              // sends an alert after a failover; not required, a simple shell script can do the same
3. Configuring MHA
1.) Configure passwordless SSH login (key-based login is standard in production; it is best not to disable password login entirely, because that can cause problems)
manager_slave (since the management node runs on this host, it also needs passwordless SSH to itself):
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.115
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.116
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.28
slave (50.115):
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.116
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.28
master (50.116):
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.115
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.28
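To confirm the key exchange works in every direction, a quick check can be run on each node (a minimal sketch; IPs taken from the environment table above):

for host in 192.168.50.115 192.168.50.116 192.168.50.28; do
    ssh -o BatchMode=yes root@$host hostname || echo "passwordless SSH to $host failed"
done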
2.) Create the MHA working directory and the configuration file:
mkdir -p /etc/masterha
cp mha4mysql-manager-0.56/samples/conf/app1.cnf /etc/masterha/
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_binlog_dir=/opt/mysql/log
master_ip_failover_script= /usr/local/bin/master_ip_failover
master_ip_online_change_script= /usr/local/bin/master_ip_online_change
password=123456
user=root
ping_interval=2
remote_workdir=/tmp
repl_password=123456
repl_user=root
report_script=/usr/local/bin/send_report
secondary_check_script= /usr/local/bin/masterha_secondary_check -s server03 -s server02
shutdown_script=""
ssh_user=root

[server1]
hostname=192.168.50.115
port=3306

[server2]
hostname=192.168.50.116
candidate_master=1
check_repl_delay=0

[server3]
hostname=192.168.50.28
port=3306
3.) Configure relay log purging (on every slave node)
On 50.28 and 50.116 run: mysql -uroot -p123456 -e "set global relay_log_purge=0"
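A quick way to verify the change on each slave (same credentials as above; note that SET GLOBAL does not survive a restart, so add relay_log_purge=0 to my.cnf if it needs to persist):

mysql -uroot -p123456 -e "SHOW VARIABLES LIKE 'relay_log_purge'"
# expected: relay_log_purge | OFF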
Note:
During a failover, slave recovery relies on information in the relay logs, so automatic relay log purging must be turned OFF and relay logs purged manually instead. By default a slave deletes its relay logs automatically once the SQL thread has replayed them, but in an MHA setup those relay logs may still be needed to recover the other slaves, so automatic deletion has to be disabled. Periodic purging also has to take replication lag into account: on an ext3 filesystem, deleting a large file takes a while and can cause serious replication lag. To avoid this, a hard link to the relay log is created first, because on Linux removing a large file through an extra hard link is fast. (The same hard-link trick is commonly used when dropping large tables in MySQL.)
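The hard-link trick mentioned above works roughly like this (an illustrative sketch using the file names from the purge log further below; purge_relay_logs performs these steps for you, so there is normally no need to do it by hand):

ln /data/mysql/localhost-relay-bin.000001 /data/localhost-relay-bin.000001   # hard link on the same filesystem
rm /data/mysql/localhost-relay-bin.000001                                    # fast: only the directory entry is removed, the data blocks stay referenced
rm /data/localhost-relay-bin.000001                                          # the slow release of the data blocks happens here, outside the replication path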
Set up a scheduled relay log purge script (on both slave servers):
[root@bogon ~]# vim purge_relay_log.sh
#!/bin/bash
user=root
passwd=123456
port=3306
log_dir='/data/masterha/log'
work_dir='/data'
purge='/usr/local/bin/purge_relay_logs'

if [ ! -d $log_dir ]
then
    mkdir $log_dir -p
fi

$purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1
Parameter description:
--user                       // MySQL user name
--password                   // MySQL password
--port                       // port number
--workdir                    // where the hard links of the relay logs are created; defaults to /var/tmp. Creating a hard link across partitions fails, so point this at a directory on the same filesystem as the relay logs. After the script finishes successfully, the hard-linked relay log files are removed.
--disable_relay_log_purge    // by default, if relay_log_purge=1 the script does nothing and exits; with this option, when relay_log_purge=1 it is set to 0, the relay logs are purged, and the variable is left OFF at the end.
[root@bogon ~]# crontab -l
0 6 * * * /bin/bash /root/purge_relay_log.sh    # do not schedule the purge at the same time on both slaves, or it will be awkward when you need the logs for recovery.
purge_relay_logs removes relay logs without blocking the SQL thread. Let's run it manually and see what happens:
[root@bogon ~]# purge_relay_logs --user=root --password=123456 --port=3306 --disable_relay_log_purge --workdir=/data/
2015-01-18 12:30:51: purge_relay_logs script started.
 Found relay_log.info: /data/mysql/relay-log.info
 Removing hard linked relay log files localhost-relay-bin* under /data/.. done.
 Current relay log file: /data/mysql/localhost-relay-bin.000002
 Archiving unused relay log files (up to /data/mysql/localhost-relay-bin.000001) ...
 Creating hard link for /data/mysql/localhost-relay-bin.000001 under /data//localhost-relay-bin.000001 .. ok.
 Creating hard links for unused relay log files completed.
 Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
 Removing hard linked relay log files localhost-relay-bin* under /data/.. done.
2015-01-18 12:30:54: All relay log purging operations succeeded.
4.) Check the SSH configuration (run on the Monitor node, 192.168.50.28), as follows:
[root@bogon ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Fri Nov 3 15:29:01 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Nov 3 15:29:01 2017 - [info] Reading application default configurations from /etc/masterha/app1.cnf..
Fri Nov 3 15:29:01 2017 - [info] Reading server configurations from /etc/masterha/app1.cnf..
Fri Nov 3 15:29:01 2017 - [info] Starting SSH connection tests..
Fri Nov 3 15:29:03 2017 - [debug]
Fri Nov 3 15:29:02 2017 - [debug] Connecting via SSH from root@192.168.50.28(192.168.50.28:22) to root@192.168.50.116(192.168.50.116:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [debug] Connecting via SSH from root@192.168.50.28(192.168.50.28:22) to root@192.168.50.115(192.168.50.115:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [debug]
Fri Nov 3 15:29:01 2017 - [debug] Connecting via SSH from root@192.168.50.115(192.168.50.115:22) to root@192.168.50.116(192.168.50.116:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [debug] Connecting via SSH from root@192.168.50.115(192.168.50.115:22) to root@192.168.50.28(192.168.50.28:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [debug]
Fri Nov 3 15:29:01 2017 - [debug] Connecting via SSH from root@192.168.50.116(192.168.50.116:22) to root@192.168.50.115(192.168.50.115:22)..
Fri Nov 3 15:29:02 2017 - [debug] ok.
Fri Nov 3 15:29:02 2017 - [debug] Connecting via SSH from root@192.168.50.116(192.168.50.116:22) to root@192.168.50.28(192.168.50.28:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [info] All SSH connection tests passed successfully.
5.) Check the health of the whole replication setup (run on the Monitor node, 192.168.50.28), as follows:
[root@bogon ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf (the failure below was not reproduced here but copied from the blogs above, so ignore the IPs — only the procedure and the error fixes matter.)
Sun Jan 18 13:08:11 2015 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000004
Sun Jan 18 13:08:11 2015 - [info] Connecting to root@192.168.2.128(192.168.2.128)..
 Creating /tmp if not exists.. ok.
 Checking output directory is accessible or not.. ok.
 Binlog found at /data/mysql, up to mysql-bin.000004
Sun Jan 18 13:08:11 2015 - [info] Master setting check done.
Sun Jan 18 13:08:11 2015 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Sun Jan 18 13:08:11 2015 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user=root --slave_host=192.168.2.129 --slave_ip=192.168.2.129 --slave_port=3306 --workdir=/tmp --target_version=5.5.60-log --manager_version=0.56 --relay_log_info=/data/mysql/relay-log.info --relay_dir=/data/mysql/ --slave_pass=xxx
Sun Jan 18 13:08:11 2015 - [info] Connecting to root@192.168.2.129(192.168.2.129:22)..
Can't exec "mysqlbinlog": No such file or directory at /usr/local/share/perl5/MHA/BinlogManager.pm line 99.
mysqlbinlog version not found! at /usr/local/bin/apply_diff_relay_logs line 463
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln193] Slaves settings check failed!
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln372] Slave configuration failed.
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln383] Error happend on checking configurations. at /usr/local/bin/masterha_check_repl line 48
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln478] Error happened on monitoring servers.
Sun Jan 18 13:08:12 2015 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!
If you see errors like the following:
Can't exec "mysqlbinlog": No such file or directory at /usr/local/share/perl5/MHA/BinlogManager.pm line 99. mysqlbinlog version not found!
Testing mysql connection and privileges..sh: mysql: command not found mysql command failed with rc 127:0!
they can be fixed as follows (run on all nodes):
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/local/bin/mysqlbinlog
ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
If the check still fails, the cause (after much head-scratching) turns out to be this: MHA supports two failover mechanisms, a virtual IP address or a global configuration file, and it does not force you to pick either one. The virtual IP approach involves additional software, such as keepalived, and also requires modifying the master_ip_failover script.
So comment out the master_ip_failover_script= /usr/local/bin/master_ip_failover option for now. It will be re-enabled later, after keepalived has been introduced and the script has been modified:
192.168.2.131 [root ~]$ grep master_ip_failover /etc/masterha/app1.cnf
#master_ip_failover_script= /usr/local/bin/master_ip_failover
192.168.2.131 [root ~]$ masterha_check_repl --conf=/etc/masterha/app1.cnf
Sun Jan 18 13:23:57 2015 - [info] Slaves settings check done.
Sun Jan 18 13:23:57 2015 - [info]
192.168.2.128 (current master)
 +--192.168.2.129
 +--192.168.2.130
Sun Jan 18 13:23:57 2015 - [info] Checking replication health on 192.168.2.129..
Sun Jan 18 13:23:57 2015 - [info] ok.
Sun Jan 18 13:23:57 2015 - [info] Checking replication health on 192.168.2.130..
Sun Jan 18 13:23:57 2015 - [info] ok.
Sun Jan 18 13:23:57 2015 - [warning] master_ip_failover_script is not defined.
Sun Jan 18 13:23:57 2015 - [warning] shutdown_script is not defined.
Sun Jan 18 13:23:57 2015 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
6.) Controlling the MHA Manager process
Check status: masterha_check_status --conf=/etc/masterha/app1.cnf
Start: nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
Stop: masterha_stop --conf=/etc/masterha/app1.cnf
Watch the log:
tail -f /var/log/masterha/app1/manager.log
Sun Jan 18 13:27:22 2015 - [warning] master_ip_failover_script is not defined.
Sun Jan 18 13:27:22 2015 - [warning] shutdown_script is not defined.
Sun Jan 18 13:27:22 2015 - [info] Set master ping interval 1 seconds.
Sun Jan 18 13:27:22 2015 - [info] Set secondary check script: /usr/local/bin/masterha_secondary_check -s server03 -s server02
Sun Jan 18 13:27:22 2015 - [info] Starting ping health check on 192.168.50.116(192.168.50.116:3306)..
Sun Jan 18 13:27:22 2015 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
Startup option description:
--remove_dead_master_conf    // after a master failover, the old master's entry is removed from the configuration file.
--manager_log                // log file location.
--ignore_last_failover       // by default, if MHA detects two failures less than 8 hours apart it refuses to fail over again, to avoid a ping-pong effect. After a failover MHA writes an app1.failover.complete file into the manager working directory (manager_workdir); as long as that file exists, the next failover is refused unless the file is deleted first. --ignore_last_failover skips this check, which is convenient here.
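If --ignore_last_failover is not used, the lock file left behind by the previous failover has to be removed before the manager will fail over again (the path follows the manager_workdir set in app1.cnf above):

rm -f /var/log/masterha/app1/app1.failover.complete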
7.) Configuring the VIP
The VIP can be managed in two ways: with keepalived, which floats the virtual IP between hosts, or with a script that brings the virtual IP up and down itself (no keepalived/heartbeat-style software required).
First, managing the floating VIP with keepalived:
(1) Download and install the software (on both masters — strictly speaking one master and one candidate master, which is a slave until a switchover happens)
http://www.keepalived.org/software/keepalived-1.3.8.tar.gz
Enter the extracted directory and run:
./configure --prefix=/usr/local/keepalived ;make && make install
Install the init scripts and configuration:
cp keepalived/etc/init.d/keepalived /etc/init.d/
cp keepalived/etc/sysconfig/keepalived /etc/sysconfig
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
Configuration on the primary master:
! Configuration File for keepalived
global_defs {
   notification_email {
     1*******@qq.com
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.123
    }
}
Configuration on the candidate master:
! Configuration File for keepalived
global_defs {
   notification_email {
     11*******@qq.com
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 33
    priority 120
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.123
    }
}
router_id MySQL-HA names the keepalived group; the virtual IP 192.168.50.123 is bound to the host's NIC and the state is set to BACKUP. priority 150 sets that node's priority to 150. nopreempt lets a node with a lower priority stay master even when a higher-priority node comes back up; nopreempt only takes effect on a node whose state is BACKUP. (One more detail: double-check which NIC should carry the VIP — e.g. eth0 vs eth1, or here ens192 vs ens160.)
Start keepalived on the master and then on the candidate master:
/etc/init.d/keepalived start
Run ip a to check (note that ifconfig does not show a virtual IP added this way).
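For example (interface names as configured in keepalived above):

ip a | grep 192.168.50.123      # prints a line only on the host that currently holds the VIP
ip addr show ens192             # primary master NIC; the candidate master uses ens160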
Note:
Both keepalived instances above are configured with state BACKUP. keepalived has two operating modes, master->backup and backup->backup, and they behave very differently. In master->backup mode, once the primary goes down the VIP floats to the standby automatically; when the primary recovers and keepalived starts again, it takes the VIP back, and this preemption happens even if nopreempt is set. In backup->backup mode, the VIP also floats to the standby when the primary goes down, but when the original primary recovers and keepalived starts, it does not take the VIP back from the new primary, even if its priority is higher. To reduce the number of VIP moves, the repaired primary is usually turned into the new standby.
8.) Integrating keepalived into MHA (MHA stops keepalived when the MySQL process dies):
To bring the keepalived service under MHA's control, only the script triggered on switchover, master_ip_failover, needs to be modified: add the handling of keepalived for when the master goes down.
1. Edit /usr/local/bin/master_ip_failover; after the change it looks like this:
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '192.168.0.88';
my $ssh_start_vip = "/etc/init.d/keepalived start";
my $ssh_stop_vip  = "/etc/init.d/keepalived stop";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
Then uncomment master_ip_failover_script= /usr/local/bin/master_ip_failover:
[root ~]$ grep 'master_ip_failover_script' /etc/masterha/app1.cnf
master_ip_failover_script= /usr/local/bin/master_ip_failover
and run the check again:
masterha_check_repl --conf=/etc/masterha/app1.cnf
IN SCRIPT TEST====/etc/init.d/keepalived stop==/etc/init.d/keepalived start===
Checking the Status of the script.. OK
Tue Nov 7 13:48:16 2017 - [info] OK.
Tue Nov 7 13:48:16 2017 - [warning] shutdown_script is not defined.
Tue Nov 7 13:48:16 2017 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
When tail -f /var/log/masterha/app1/manager.log shows a line like the one below, monitoring is active:
Tue Nov 7 12:46:18 2017 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
9.) Update /etc/hosts on the management node
[root@bogon ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.50.115 server01
192.168.50.116 server02
192.168.50.28 server03
10.) Stop the MySQL service on the Master to simulate a crash, then check the log on the management node:
tail -f /var/log/masterha/app1/manager.log (only the last part is shown)
----- Failover Report -----
app1: MySQL Master failover 192.168.50.116(192.168.50.116:3306) to 192.168.50.115(192.168.50.115:3306) succeeded
Master 192.168.50.116(192.168.50.116:3306) is down!
Check MHA Manager logs at bogon:/var/log/masterha/app1/manager.log for details.
Started automated(non-interactive) failover.
Invalidated master IP address on 192.168.50.116(192.168.50.116:3306)
Selected 192.168.50.115(192.168.50.115:3306) as a new master.
192.168.50.115(192.168.50.115:3306): OK: Applying all logs succeeded.
192.168.50.115(192.168.50.115:3306): OK: Activated master IP address.
192.168.50.28(192.168.50.28:3306): OK: Slave started, replicating from 192.168.50.115(192.168.50.115:3306)
192.168.50.115(192.168.50.115:3306): Resetting slave info succeeded.
Master failover to 192.168.50.115(192.168.50.115:3306) completed successfully.
Tue Nov 21 16:32:05 2017 - [info] Sending mail..
Unknown option: conf
11.) Check the VIP on the former Master (192.168.50.116): it is gone. Check the candidate master: the VIP is there. The switchover is complete.
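A quick way to confirm the switchover from the shell (a sketch using the credentials from app1.cnf):

# former master (50.116): the VIP should be gone
ip a | grep 192.168.50.123 || echo "VIP released"
# new master (50.115): the VIP should now be bound here
ip a | grep 192.168.50.123
# remaining slave (50.28): replication should now point at 192.168.50.115
mysql -uroot -p123456 -e "SHOW SLAVE STATUS\G" | egrep "Master_Host|Slave_IO_Running|Slave_SQL_Running"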
4. Error Cases
1.
Error during the replication check: ERROR 1142 (42000) at line 1: CREATE command denied to user 'root'@'192.168.50.28' for table 'apply_diff_relay_logs_test'
Fix: grant all privileges on *.* to 'root'@'192.168.50.%' identified by '12345678';
2. MHA 0.53 bugs
1.) When simulating a failure and switching over, the log reports: Got ERROR: Use of uninitialized value $msg in scalar chomp at /usr/local/share/perl5/MHA/ManagerConst.pm line 90.
There are two ways to fix it:
(1.1) This is reportedly a bug in 0.53; upgrading to 0.56 is recommended.
(1.2) In /usr/local/share/perl5/MHA/ManagerConst.pm, add the line marked with + to this block:
our $log_fmt = sub {
my %args = @_;
my $msg = $args{message};
+ $msg = "" unless($msg);
chomp $msg;
if ( $args{level} eq "error" ) {
my ( $ln, $script ) = ( caller(4) )[ 2, 1 ];
2.) With GTID-based replication, after a simulated failure the switchover succeeds but the slave cannot be re-pointed to the new master. This is a bug in MHA 0.53; upgrade to MHA 0.56 or later.
[error][/usr/local/share/perl5/MHA/Server.pm, ln714] Checking slave status failed on 192.168.50.28(192.168.50.28:3306).
[error][/usr/local/share/perl5/MHA/Server.pm, ln817] Starting slave IO/SQL thread on 192.168.50.28(192.168.50.28:3306) failed!
Mon Nov 20 10:37:30 2017 - [info] End of log messages from 192.168.50.28.
Mon Nov 20 10:37:30 2017 - [error][/usr/local/share/perl5/MHA/MasterFailover.pm, ln1537] Master failover to 192.168.50.115(192.168.50.115:3306) done, but recovery on slave partially failed.

----- Failover Report -----

app1: MySQL Master failover 192.168.50.116 to 192.168.50.115

Master 192.168.50.116 is down!

Check MHA Manager logs at bogon:/var/log/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.
Invalidated master IP address on 192.168.50.116.
The latest slave 192.168.50.115(192.168.50.115:3306) has all relay logs for recovery.
Selected 192.168.50.115 as a new master.
192.168.50.115: OK: Applying all logs succeeded.
192.168.50.115: OK: Activated master IP address.
192.168.50.28: This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.50.28: WARN: Applying all logs succeeded. But starting slave failed.
Master failover to 192.168.50.115(192.168.50.115:3306) done, but recovery on slave partially failed.

Mon Nov 20 10:37:30 2017 - [info] Sending mail..
Option new_slave_hosts requires an argument
Unknown option: conf
3. After a successful failover, the log when restarting the monitor shows:
[warning] SQL Thread is stopped(no error) on 192.168.50.115(192.168.50.115:3306)
[error][/usr/local/share/perl5/MHA/ServerManager.pm, ln732] Multi-master configuration is detected, but two or more masters are either writable (read-only is not set) or dead! Check configurations for details. Master configurations are as below:
Master 192.168.50.115(192.168.50.115:3306), replicating from 192.168.50.116(192.168.50.116:3306)
Master 192.168.50.116(192.168.50.116:3306), dead
Fix:
Start the manager with the option that automatically removes the dead master from the configuration (--remove_dead_master_conf). This only works when the failover after the master crash completed cleanly with no errors; otherwise the option has no effect and the entry has to be removed from the configuration by hand.
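Removing it by hand simply means deleting the dead master's block from /etc/masterha/app1.cnf. After dropping the old master 192.168.50.116, the host sections would look like this (a sketch based on the configuration above):

[server1]
hostname=192.168.50.115
port=3306

[server3]
hostname=192.168.50.28
port=3306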
4. After a successful failover, the log when restarting the monitor shows:
[warning] SQL Thread is stopped(no error) on 192.168.50.115(192.168.50.115:3306)
[error][/usr/local/share/perl5/MHA/ServerManager.pm, ln622] Master 192.168.50.116:3306 from which slave 192.168.50.115(192.168.50.115:3306) replicates is not defined in the configuration file!
The new master still carries slave status pointing at the old master, so MHA treats the new master as a slave whose master (the old, dead one) is not defined in the configuration file, hence the error. The fix is to clear the slave information on the new master:
stop slave;
reset slave all;
Normally MHA clears the new master's slave status automatically, so this is probably caused either by an inconsistent binlog execution state between master and slave or by the MHA 0.53 bug.
5. Running MHA with keepalived in master->backup mode
After the old master recovers it preempts the VIP and becomes primary again. In that case, stop the manager, add the old master back into a [server] section of the configuration file, clear any slave information on the old master and CHANGE MASTER the new master so it replicates from the old master, then start the manager again and stop MySQL on the new master; MHA will then perform the failover. (keepalived must be configured in master->backup mode so that the VIP automatically floats to the higher-priority node.)
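A sketch of the re-pointing step (replication credentials taken from app1.cnf; the binlog file and position are placeholders that must be read from SHOW MASTER STATUS on the recovered old master, or replaced with MASTER_AUTO_POSITION=1 if GTID replication is enabled):

# on the recovered old master (192.168.50.116): note the current binlog coordinates
mysql -uroot -p123456 -e "SHOW MASTER STATUS\G"
# on the current master (192.168.50.115): point it back at the old master
mysql -uroot -p123456 -e "STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='192.168.50.116', MASTER_PORT=3306,
  MASTER_USER='root', MASTER_PASSWORD='123456',
  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;"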