The architecture runs MySQL 5.7 with GTID-based, enhanced semi-synchronous, parallel replication in a one-master, two-slave topology, managed by the MHA toolkit so that a master failure triggers an automatic switchover for high availability.
Advantages:
1. Enhanced semi-sync with the AFTER_SYNC wait point improves data safety and master/slave consistency.
2. MHA reconciles slave data after a failure, automatically promotes a new master and reconfigures replication, so writes can resume normally after the switchover.
1. About MHA
1) Overview
MHA (Master High Availability) is an open-source suite of Perl programs that adds automated master failover to a MySQL replication architecture. When MHA detects that the master has failed, it promotes the slave holding the most recent data to be the new master; during the switchover it pulls extra information from the other slaves to avoid consistency problems. MHA also supports online master switchover, i.e. swapping the master/slave roles on demand.
Compared with other HA software, MHA focuses on keeping the master of a MySQL replication setup highly available. Its distinguishing feature is that it can reconcile the differential logs among the slaves so that all slaves end up with consistent data, then pick one as the new master and point the remaining slaves at it.
2) Roles
An MHA deployment has two roles, MHA Manager (management node) and MHA Node (data node):
MHA Manager: usually deployed on a dedicated machine, or directly on one of the slaves (not recommended); one manager can manage several master/slave clusters. It:
(1) runs the automatic master switchover and failover commands;
(2) runs the helper scripts: manual master switchover, master/slave status checks.
MHA Node: runs on every MySQL server (master and slaves, and on the manager host if it also runs MySQL); it speeds up failover through scripts that can parse and purge logs. Its functions:
(1) save the master's binlog events;
(2) identify the differences against the slaves' relay logs;
(3) purge relay logs on a schedule, without stopping the slave SQL thread.
Note: in an MHA cluster, MySQL's automatic relay-log purging is turned off (relay-log-purge = 0), so relay logs have to be removed explicitly when needed, by one of:
(1) toggling the global parameter temporarily:
SET GLOBAL relay_log_purge=1; FLUSH LOGS; SET GLOBAL relay_log_purge=0;
(2) deleting the relay-log files by hand;
(3) using the MHA script purge_relay_logs (see purge_relay_logs --help).
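Option (3) is typically put on a schedule on each slave. A sketch of a cron entry, assuming the credentials, port and log path used elsewhere in this setup:

```shell
# purge relay logs nightly; --disable_relay_log_purge sets relay_log_purge
# back to 0 after each run, as required in an MHA cluster
0 4 * * * /usr/local/bin/purge_relay_logs --user=root --password=mysqladmin \
    --port=7066 --disable_relay_log_purge --workdir=/tmp \
    >> /tmp/purge_relay_logs.log 2>&1
```

Staggering the minute field across the slaves avoids purging on all of them at the same moment.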
3) Architecture: failover workflow
(1) Save the binlog events from the crashed master;
(2) identify the slave with the latest updates;
(3) apply the differential relay logs to the other slaves;
(4) apply the binlog events saved from the master;
(5) promote one slave to be the new master;
(6) point the other slaves at the new master and resume replication.
Environment

| Host          | Role             | Service                       | Port | MHA role     |
| ------------- | ---------------- | ----------------------------- | ---- | ------------ |
| 172.16.40.201 | slave            | mysql-5.7.25 (Percona Server) | 7066 | node         |
| 172.16.40.202 | slave (master-b) | mysql-5.7.25 (Percona Server) | 7066 | manager/node |
| 172.16.40.203 | master           | mysql-5.7.25 (Percona Server) | 7066 | node         |
2. Install the MySQL service on the 3 servers
1) Version: Percona-Server-5.7.25-28-Linux.x86_64.ssl101.tar.gz
MySQL installation: omitted here.
2) Configure replication
[172.16.40.203 (master)]:
mysql> grant replication slave on *.* to 'repl'@'172.16.40.%' identified by 'replpasswod';
mysql> flush privileges;
[172.16.40.202 (slave, master-b)]:
mysql> CHANGE MASTER TO MASTER_HOST='172.16.40.203',MASTER_PORT=7066,MASTER_USER='repl',MASTER_PASSWORD='replpasswod',MASTER_AUTO_POSITION=1;
mysql> start slave;
mysql> show slave status\G
[172.16.40.201 (slave)]:
mysql> CHANGE MASTER TO MASTER_HOST='172.16.40.203',MASTER_PORT=7066,MASTER_USER='repl',MASTER_PASSWORD='replpasswod',MASTER_AUTO_POSITION=1;
mysql> start slave;
mysql> show slave status\G
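After starting replication it helps to confirm that both replication threads are running and GTIDs are flowing; a quick check on each slave (the socket path follows this setup):

```shell
mysql -S /tmp/7066.sock -p -e 'SHOW SLAVE STATUS\G' \
    | egrep 'Slave_IO_Running|Slave_SQL_Running|Retrieved_Gtid_Set|Executed_Gtid_Set'
```

Both thread columns should report Yes before moving on to MHA.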
3) Enable MySQL semi-synchronous replication
# mysql -S /tmp/7066.sock -p
mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
mysql> SET GLOBAL rpl_semi_sync_master_enabled=1;
mysql> SET GLOBAL rpl_semi_sync_slave_enabled=1;
# restart mysql
# /etc/init.d/mysqld-7066 restart
# confirm semi-sync is enabled
mysql> show global variables like '%semi%'; -- or: show global status like '%semi%';
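SET GLOBAL settings do not survive a restart, so the semi-sync switches are normally persisted in my.cnf as well; a sketch (AFTER_SYNC matches the enhanced semi-sync mentioned at the top and is the 5.7 default; the timeout value is an assumption):

```
[mysqld]
rpl_semi_sync_master_enabled    = 1
rpl_semi_sync_slave_enabled     = 1
rpl_semi_sync_master_wait_point = AFTER_SYNC
rpl_semi_sync_master_timeout    = 1000   # ms; fall back to async replication after 1s
```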
3. Set up MHA
1) Download the MHA packages
2) Configure passwordless SSH between the servers
(Note: if the Manager is installed on one of the MySQL hosts, that host must also be able to SSH to itself without a password; a dedicated manager server does not need this.)
[172.16.40.202 (manager)]:
# ssh-keygen -t rsa
# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# ssh-copy-id root@172.16.40.203
# ssh-copy-id root@172.16.40.201
[172.16.40.203 (node)]:
# ssh-keygen -t rsa
# ssh-copy-id root@172.16.40.202
# ssh-copy-id root@172.16.40.201
[172.16.40.201 (node)]:
# ssh-keygen -t rsa
# ssh-copy-id root@172.16.40.202
# ssh-copy-id root@172.16.40.203
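Before running MHA's own SSH check, the full mesh can be verified with a small loop run on each host (a sketch; BatchMode makes ssh fail fast instead of prompting when a key is missing):

```shell
for h in 172.16.40.201 172.16.40.202 172.16.40.203; do
    # BatchMode=yes: fail instead of asking for a password
    ssh -o BatchMode=yes root@$h hostname || echo "passwordless ssh to $h NOT working"
done
```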
3) Install MHA
(Note: if the manager is not on a dedicated server, every host, including the manager's, needs the node package.)
(1) Upload mha4mysql-node-0.58.tar.gz to all servers and install it
# requires the perl, perl-DBD-MySQL and perl-devel dependencies; install them with yum:
# yum install perl perl-DBD-MySQL perl-devel -y
# tar zxvf mha4mysql-node-0.58.tar.gz
# cd mha4mysql-node-0.58
# perl Makefile.PL
# make && make install
(2) Upload mha4mysql-manager-0.58.tar.gz to the manager server and install it
# install the dependencies
# yum install perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y
# tar zxvf mha4mysql-manager-0.58.tar.gz
# cd mha4mysql-manager-0.58
# perl Makefile.PL
# make && make install
Included tools:
Manager tools:
- masterha_check_ssh : check MHA's SSH configuration.
- masterha_check_repl : check MySQL replication.
- masterha_manager : start MHA.
- masterha_check_status : check the current MHA run status.
- masterha_master_monitor : monitor whether the master is down.
- masterha_master_switch : control failover (automatic or manual).
- masterha_conf_host : add or remove a configured server entry.
Node tools:
- save_binary_logs : save and copy the master's binary logs.
- apply_diff_relay_logs : identify differential relay-log events and apply them to the other slaves.
- filter_mysqlbinlog : strip unnecessary ROLLBACK events (no longer used by MHA).
- purge_relay_logs : purge relay logs (without blocking the SQL thread).
Troubleshooting:
[root@localhost authors]# masterha_check_ssh
"NI_NUMERICHOST" is not exported by the Socket module
"getaddrinfo" is not exported by the Socket module
"getnameinfo" is not exported by the Socket module
Can't continue after import errors at /usr/local/share/perl5/MHA/NodeUtil.pm line 29
BEGIN failed--compilation aborted at /usr/local/share/perl5/MHA/NodeUtil.pm line 29.
Compilation failed in require at /usr/local/share/perl5/MHA/SlaveUtil.pm line 28.
BEGIN failed--compilation aborted at /usr/local/share/perl5/MHA/SlaveUtil.pm line 28.
Compilation failed in require at /usr/local/share/perl5/MHA/DBHelper.pm line 26.
BEGIN failed--compilation aborted at /usr/local/share/perl5/MHA/DBHelper.pm line 26.
Compilation failed in require at /usr/local/share/perl5/MHA/HealthCheck.pm line 30.
BEGIN failed--compilation aborted at /usr/local/share/perl5/MHA/HealthCheck.pm line 30.
Compilation failed in require at /usr/local/share/perl5/MHA/Server.pm line 28.
BEGIN failed--compilation aborted at /usr/local/share/perl5/MHA/Server.pm line 28.
Compilation failed in require at /usr/local/share/perl5/MHA/Config.pm line 29.
BEGIN failed--compilation aborted at /usr/local/share/perl5/MHA/Config.pm line 29.
Compilation failed in require at /usr/local/share/perl5/MHA/SSHCheck.pm line 32.
BEGIN failed--compilation aborted at /usr/local/share/perl5/MHA/SSHCheck.pm line 32.
Compilation failed in require at /usr/local/bin/masterha_check_ssh line 25.
BEGIN failed--compilation aborted at /usr/local/bin/masterha_check_ssh line 25.
Install the missing dependencies with cpan:
cpan[1]> install ExtUtils::Constant
cpan[1]> install Socket
Tip: if the server has no Internet access, download the dependency tarballs by hand from the URLs cpan reports, place them in the corresponding directory, and run the install commands again.
After that, the tool works:
[root@localhost authors]# masterha_check_ssh --help
Usage:
masterha_check_ssh --global_conf=/etc/masterha_default.cnf
--conf=/etc/conf/masterha/app1.cnf
See online reference
(http://code.google.com/p/mysql-master-ha/wiki/Requirements#SSH_public_k
ey_authentication) for details.
4) Configure MHA
(1) Create the working directory on [172.16.40.202 (manager)]
# mkdir -p /home/mysql/app/mha/masterha
(2) Copy the sample configuration file and edit it
[server default]
manager_workdir=/home/mysql/app/mha/masterha
manager_log=/home/mysql/app/mha/masterha/logs/manager.log
master_binlog_dir=/home/mysql/app/mha/7066/logs/binlog
user=root                # monitoring user
password=romysqladmint   # monitoring user's password
ping_interval=1
remote_workdir=/opt/TMHA2/mha4mysql-node-master
repl_password=replpasswod
repl_user=repl
ssh_user=root
shutdown_script=""
log_level=debug
#master node
[server1]
hostname=172.16.40.203
port=7066
ssh_port=22
#slave node
[server2]
hostname=172.16.40.202
port=7066
ssh_port=22
#candidate_master=1      # prefer this slave as the new master on failover, even if it is not the slave with the newest events
#slave node
[server3]
hostname=172.16.40.201
port=7066
ssh_port=22
# grant the monitoring user in the database
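The [server default] section references a monitoring account (user/password) that the manager uses to log in to every MySQL node, so it must exist on all nodes with enough privileges for MHA's checks and failover actions. A sketch of the grant, assuming the host mask and password conventions of this environment:

```
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'172.16.40.%' IDENTIFIED BY 'romysqladmint';
mysql> FLUSH PRIVILEGES;
```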
(3) Manager status checks:
[root@fuzhou202 conf]# masterha_check_ssh --conf=/home/mysql/app/mha/masterha/conf/app1.cnf
Sun Mar 24 19:30:26 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Mar 24 19:30:26 2019 - [info] Reading application default configuration from /home/mysql/app/mha/masterha/conf/app1.cnf..
Sun Mar 24 19:30:26 2019 - [info] Reading server configuration from /home/mysql/app/mha/masterha/conf/app1.cnf..
Sun Mar 24 19:30:26 2019 - [info] Starting SSH connection tests..
Sun Mar 24 19:30:26 2019 - [debug]
Sun Mar 24 19:30:26 2019 - [debug] Connecting via SSH from root@172.16.40.202(172.16.40.202:22) to root@172.16.40.203(172.16.40.203:22)..
Sun Mar 24 19:30:26 2019 - [debug] ok.
Sun Mar 24 19:30:26 2019 - [debug] Connecting via SSH from root@172.16.40.202(172.16.40.202:22) to root@172.16.40.201(172.16.40.201:22)..
Sun Mar 24 19:30:26 2019 - [debug] ok.
Sun Mar 24 19:30:27 2019 - [debug]
Sun Mar 24 19:30:26 2019 - [debug] Connecting via SSH from root@172.16.40.203(172.16.40.203:22) to root@172.16.40.202(172.16.40.202:22)..
Sun Mar 24 19:30:26 2019 - [debug] ok.
Sun Mar 24 19:30:26 2019 - [debug] Connecting via SSH from root@172.16.40.203(172.16.40.203:22) to root@172.16.40.201(172.16.40.201:22)..
Sun Mar 24 19:30:26 2019 - [debug] ok.
Sun Mar 24 19:30:27 2019 - [debug]
Sun Mar 24 19:30:27 2019 - [debug] Connecting via SSH from root@172.16.40.201(172.16.40.201:22) to root@172.16.40.202(172.16.40.202:22)..
Sun Mar 24 19:30:27 2019 - [debug] ok.
Sun Mar 24 19:30:27 2019 - [debug] Connecting via SSH from root@172.16.40.201(172.16.40.201:22) to root@172.16.40.203(172.16.40.203:22)..
Sun Mar 24 19:30:27 2019 - [debug] ok.
Sun Mar 24 19:30:27 2019 - [info] All SSH connection tests passed successfully.
---------
# masterha_check_repl --conf=/home/mysql/app/mha/masterha/conf/app1.cnf
# masterha_check_status --conf=/home/mysql/app/mha/masterha/conf/app1.cnf
Once all of the checks above pass, the manager monitor can be started.
(4) Start the manager monitor
# start the manager
# nohup masterha_manager --conf=/home/mysql/app/mha/masterha/conf/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /home/mysql/app/mha/masterha/logs/manager.log 2>&1 &
# check the status
[root@fuzhou202 logs]# masterha_check_status --conf=/home/mysql/app/mha/masterha/conf/app1.cnf
app1 (pid:9163) is running(0:PING_OK), master:172.16.40.203
# stop the manager
# masterha_stop --conf=/home/mysql/app/mha/masterha/conf/app1.cnf
(5) Manage the VIP with a failover script
# bind the VIP manually on the current master [172.16.40.203 (master)]
# ifconfig eth0:1 172.16.40.99/24
# register the Perl failover script in the configuration file
# vim /home/mysql/app/mha/masterha/conf/app1.cnf   # add:
master_ip_failover_script= /usr/local/bin/master_ip_failover
# create the script with the following content
# vim /usr/local/bin/master_ip_failover
---------------------------------------------------
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

# the interface must match the NIC the VIP was bound on above (eth0:1)
my $vip = '172.16.40.99/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
--------------------------------------
# chmod +x /usr/local/bin/master_ip_failover
Verify the master_ip_failover script:
# masterha_check_repl --conf=/home/mysql/app/mha/masterha/conf/app1.cnf
...
Sun Mar 24 20:17:11 2019 - [info] Checking master_ip_failover_script status:
Sun Mar 24 20:17:11 2019 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=172.16.40.203 --orig_master_ip=172.16.40.203 --orig_master_port=7066
IN SCRIPT TEST====/sbin/ifconfig eth0:1 down==/sbin/ifconfig eth0:1 172.16.40.99/24===
4. Testing
1) Automatic master failover
Generate test data with sysbench:
# generate data on the master
# sysbench --test=oltp --oltp-table-size=1000000 --oltp-read-only=off --init-rng=on --num-threads=4 --max-requests=0 --oltp-dist-type=uniform --max-time=1800 --mysql-user=root --mysql-socket=/tmp/7066.sock --mysql-password=mysqladmin --db-driver=mysql --mysql-table-engine=innodb --oltp-test-mode=complex prepare
# stop the io_thread on one slave to simulate replication lag
mysql> stop slave io_thread;
# sysbench --test=oltp --oltp-table-size=1000000 --oltp-read-only=off --init-rng=on --num-threads=4 --max-requests=0 --oltp-dist-type=uniform --max-time=180 --mysql-user=root --mysql-socket=/tmp/7066.sock --mysql-password=mysqladmin --db-driver=mysql --mysql-table-engine=innodb --oltp-test-mode=complex run
# kill the master's mysqld
# pkill -9 mysqld
# watch the manager log
...
Mon Mar 25 11:07:42 2019 - [info] 172.16.40.201: Resetting slave info succeeded.
Mon Mar 25 11:07:42 2019 - [info] Master failover to 172.16.40.201(172.16.40.201:7066) completed successfully.
Mon Mar 25 11:07:42 2019 - [info] Deleted server1 entry from /home/mysql/app/mha/masterha/conf/app1.cnf .
Mon Mar 25 11:07:42 2019 - [debug] Disconnected from 172.16.40.202(172.16.40.202:7066)
Mon Mar 25 11:07:42 2019 - [debug] Disconnected from 172.16.40.201(172.16.40.201:7066)
Mon Mar 25 11:07:42 2019 - [info]
----- Failover Report -----
app1: MySQL Master failover 172.16.40.203(172.16.40.203:7066) to 172.16.40.201(172.16.40.201:7066) succeeded
Master 172.16.40.203(172.16.40.203:7066) is down!
Check MHA Manager logs at fuzhou202:/home/mysql/app/mha/masterha/logs/manager.log for details.
Started automated(non-interactive) failover.
Invalidated master IP address on 172.16.40.203(172.16.40.203:7066)
Selected 172.16.40.201(172.16.40.201:7066) as a new master.
172.16.40.201(172.16.40.201:7066): OK: Applying all logs succeeded.
172.16.40.201(172.16.40.201:7066): OK: Activated master IP address.
172.16.40.202(172.16.40.202:7066): OK: Slave started, replicating from 172.16.40.201(172.16.40.201:7066)
172.16.40.201(172.16.40.201:7066): Resetting slave info succeeded.
Master failover to 172.16.40.201(172.16.40.201:7066) completed successfully.
# MHA identified the slave with the newest data, promoted it to master, and updated the configuration
# the VIP has also moved to the new master:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:b7:29:df brd ff:ff:ff:ff:ff:ff
inet 172.16.40.201/24 brd 172.16.40.255 scope global eth0
inet 172.16.40.99/24 brd 172.16.40.255 scope global secondary eth0:1
inet6 fe80::250:56ff:feb7:29df/64 scope link
valid_lft forever preferred_lft forever
Note:
After MHA performs an automatic master failover, the manager monitor stops itself, and because it was started with --remove_dead_master_conf, the dead master's entry is removed from the configuration file.
To bring a recovered MySQL node back into MHA:
1) Add the node's section back to /home/mysql/app/mha/masterha/conf/app1.cnf
2) Reconfigure the recovered node as a slave of the new master, e.g.:
CHANGE MASTER TO MASTER_HOST='172.16.40.201', MASTER_PORT=7066, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='replpasswod';
where MASTER_HOST is the new master's IP after the switchover; then start slave; and check replication with show slave status\G
3) Restart the manager monitor
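Re-adding the dead master's section can be done by editing app1.cnf by hand or with MHA's helper tool; a sketch reusing the block name from the configuration above (treat the exact flags as an assumption to confirm with masterha_conf_host --help):

```
# masterha_conf_host --command=add --conf=/home/mysql/app/mha/masterha/conf/app1.cnf \
    --hostname=172.16.40.203 --block=server1 --params="port=7066;ssh_port=22"
```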
2) Manual master failover
A manual failover does not require the manager monitor to be running; when the master fails, MHA is invoked by hand to perform the switchover:
# masterha_master_switch --master_state=dead --conf=/home/mysql/app/mha/masterha/conf/app1.cnf --dead_master_host=172.16.40.202 --dead_master_port=7066 --new_master_host=172.16.40.203 --new_master_port=7066 --ignore_last_failover
# confirm the interactive prompts to proceed