MHA was developed by Yoshinori Matsunobu of Japan's DeNA (now at Facebook). It is an excellent, mature piece of high-availability software for failover and master promotion in MySQL high-availability environments.
MHA consists of two components: MHA Manager (the management node) and MHA Node (the data node).
During automatic failover, MHA tries to save the binary logs from the crashed master, maximizing the guarantee that no data is lost. Combined with MySQL 5.5 semi-synchronous replication, the risk of data loss can be reduced dramatically.
1. MHA architecture
(1) Install the databases (2) One master, two slaves (3) Build MHA
2. Fault simulation
(1) The master fails (2) The candidate master becomes the new master (3) slave2 points at the new master
1. Lab environment

| Server role | IP address | Software packages |
| --- | --- | --- |
| master | 192.168.142.130 | mha4mysql-node |
| slave1 | 192.168.142.131 | mha4mysql-node |
| slave2 | 192.168.142.132 | mha4mysql-node |
| manager | 192.168.142.133 | mha4mysql-manager, mha4mysql-node |
2. Lab requirements
This case study requires MHA to monitor the MySQL database and perform an automatic switchover on failure without affecting the business.
3. Implementation outline
(1) Install the MySQL databases (2) Configure MySQL as one master with two slaves (3) Install the MHA software (4) Configure passwordless authentication (5) Configure MySQL MHA high availability (6) Simulate a master failure and switchover
(Use MySQL version 5.6.36 and cmake version 2.8.6.)
1. Install the build dependencies
yum install -y ncurses-devel gcc gcc-c++ perl-Module-Install
2. Mount the remote share
mkdir /abc
mount.cifs //192.168.142.1/mha /abc/
3. Install the cmake build tool
cd /abc/mha/
tar zxvf cmake-2.8.6.tar.gz -C /opt/
cd /opt/cmake-2.8.6/
./configure
gmake && gmake install
4. Install the MySQL database
cd /abc/mha/
tar zxvf mysql-5.6.36.tar.gz -C /opt/
cd /opt/mysql-5.6.36/
cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_EXTRA_CHARSETS=all \
-DSYSCONFDIR=/etc
make && make install
cp support-files/my-default.cnf /etc/my.cnf
cp support-files/mysql.server /etc/rc.d/init.d/mysqld
chmod +x /etc/rc.d/init.d/mysqld
chkconfig --add mysqld
echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
source /etc/profile
useradd -M -s /sbin/nologin mysql
chown -R mysql.mysql /usr/local/mysql
/usr/local/mysql/scripts/mysql_install_db \
--basedir=/usr/local/mysql \
--datadir=/usr/local/mysql/data \
--user=mysql
5. Edit the master's main configuration file /etc/my.cnf. The server-id must be different on each of the three servers.
vim /etc/my.cnf
[mysqld]
server-id = 1
log_bin = master-bin
log-slave-updates = true
Edit the MySQL main configuration file on slave1
#Modify or add the following in /etc/my.cnf.
[mysqld]
server-id = 2
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
1. Edit the MySQL main configuration file on slave2: /etc/my.cnf
vim /etc/my.cnf
[mysqld]
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
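Since the three my.cnf files differ mainly in server-id, it is worth confirming that no two nodes share an id before starting replication. A minimal sketch: the three config fragments are inlined for illustration, whereas in practice you would collect each node's /etc/my.cnf (e.g. over ssh).

```shell
# Each node's /etc/my.cnf must carry a distinct server-id; duplicates
# silently break replication. Fragments are inlined for illustration.
ids=$(cat <<'EOF'
server-id = 1
server-id = 2
server-id = 3
EOF
)
total=$(echo "$ids" | grep -c 'server-id')
unique=$(echo "$ids" | awk -F'= *' '{print $2}' | sort -u | wc -l)
if [ "$total" -eq "$unique" ]; then
    echo "server-ids OK"
else
    echo "duplicate server-id detected"
fi
```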
2. On master, slave1, and slave2, create two soft links each
ln -s /usr/local/mysql/bin/mysql /usr/sbin/
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/
3. Start MySQL on master, slave1, and slave2, and check that it is running
#Start mysql
/usr/local/mysql/bin/mysqld_safe --user=mysql &
#Check the service port status
netstat -ntap | grep 3306
#Stop the firewall and disable SELinux enforcement
systemctl stop firewalld.service
setenforce 0
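The netstat check above can also be automated into a poll loop, so scripts wait until mysqld actually accepts connections rather than checking once by eye. A sketch assuming bash (for the /dev/tcp pseudo-device); the host, port, and retry count are illustrative:

```shell
# Poll until a TCP port accepts connections, instead of eyeballing netstat.
wait_for_port() {
    host=$1; port=$2; tries=$3
    i=0
    while [ "$i" -lt "$tries" ]; do
        # bash opens a TCP connection via the /dev/tcp pseudo-device
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0    # something is listening
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1            # gave up
}
wait_for_port 127.0.0.1 3306 2 && echo "mysqld is up" || echo "mysqld not reachable"
```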
1. MySQL master-slave configuration is relatively simple. What needs attention is authorization: on all database nodes, grant two users — the user myslave for slave replication, and the monitoring user mha for the manager.
grant replication slave on *.* to 'myslave'@'192.168.142.%' identified by '123';
grant all privileges on *.* to 'mha'@'192.168.142.%' identified by 'manager';
flush privileges;
2. In theory the following three grants should not be needed. However, when running this lab, the MHA replication check reported that the two slaves could not connect to the master by hostname, so add the grants below on all database nodes.
grant all privileges on *.* to 'mha'@'master' identified by 'manager';
grant all privileges on *.* to 'mha'@'slave1' identified by 'manager';
grant all privileges on *.* to 'mha'@'slave2' identified by 'manager';
#Refresh privileges
flush privileges;
3. On the master host, view the binary log file and sync position
mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |     1292 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
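The File and Position values from this output feed directly into the CHANGE MASTER statement in the next step, and that hand-off can be scripted. A small sketch: the status line is pasted in here for illustration, whereas normally it would come from `mysql -e 'show master status'`; the host and credentials mirror the values used in this lab.

```shell
# Turn the File/Position pair from `show master status` into the
# CHANGE MASTER statement for the slaves.
status='master-bin.000001 1292'
file=$(echo "$status" | awk '{print $1}')
pos=$(echo "$status" | awk '{print $2}')
echo "change master to master_host='192.168.142.130',master_user='myslave',master_password='123',master_log_file='$file',master_log_pos=$pos;"
```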
4. Run the synchronization on slave1 and slave2
change master to master_host='192.168.142.130',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1292;
start slave;  #start the slave
5. Check that both the IO and SQL threads show Yes, which indicates replication is healthy
show slave status\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
#Both slaves must be set to read-only mode
set global read_only=1;
#Refresh privileges
flush privileges;
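Rather than reading the \G output by eye, the two thread states can be checked in a script. A minimal sketch; the sample output is inlined here, whereas on a real slave it would come from `mysql -e 'show slave status\G'`:

```shell
# Verify both replication threads report Yes.
out='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
io=$(echo "$out" | awk '/Slave_IO_Running/ {print $2}')
sql=$(echo "$out" | awk '/Slave_SQL_Running/ {print $2}')
if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    echo "replication healthy"
else
    echo "replication broken: IO=$io SQL=$sql"
fi
```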
#Stop the firewall and disable SELinux enforcement
systemctl stop firewalld.service
setenforce 0
#Install the environment MHA depends on
yum install epel-release --nogpgcheck -y
yum install -y perl-DBD-MySQL \
perl-Config-Tiny \
perl-Log-Dispatch \
perl-Parallel-ForkManager \
perl-ExtUtils-CBuilder \
perl-ExtUtils-MakeMaker \
perl-CPAN
#Install the node package (on all servers)
tar zxvf /abc/rpm/MHA/mha4mysql-node-0.57.tar.gz
cd mha4mysql-node-0.57/
perl Makefile.PL
make && make install
tar zxvf /abc/rpm/MHA/mha4mysql-manager-0.57.tar.gz
cd mha4mysql-manager-0.57/
perl Makefile.PL
make
make install

After installation, the manager generates several tools under /usr/local/bin:

masterha_conf_host        #add or remove configured server entries
masterha_stop             #stop the manager
masterha_manager          #script that starts the manager
masterha_check_repl       #check the MySQL replication status
masterha_master_monitor   #check whether the master is down
masterha_check_ssh        #check MHA's SSH configuration
masterha_master_switch    #control failover (automatic or manual)
masterha_check_status     #check the current MHA running status

Installing the node package also generates several scripts under /usr/local/bin (these are normally triggered by MHA Manager scripts and need no manual operation):

apply_diff_relay_logs     #identify differential relay-log events and apply them to the other slaves
filter_mysqlbinlog        #strip unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs          #purge relay logs (does not block the SQL thread)
save_binary_logs          #save and copy the master's binary logs
(1) On manager, configure passwordless authentication to all database nodes
#Because this is passwordless authentication, press Enter at every prompt
ssh-keygen -t rsa
ssh-copy-id 192.168.142.130
ssh-copy-id 192.168.142.131
ssh-copy-id 192.168.142.132
(2) On master, configure passwordless authentication to the database nodes slave1 and slave2
ssh-keygen -t rsa
ssh-copy-id 192.168.142.131
ssh-copy-id 192.168.142.132
(3) On slave1, configure passwordless authentication to the database nodes master and slave2
ssh-keygen -t rsa
ssh-copy-id 192.168.142.130
ssh-copy-id 192.168.142.132
(4) On slave2, configure passwordless authentication to the database nodes master and slave1
ssh-keygen -t rsa
ssh-copy-id 192.168.142.130
ssh-copy-id 192.168.142.131
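All four rounds above follow one pattern: each node pushes its key to every other database node. A sketch that generates the ssh-copy-id commands for a given node; the `self` IP is illustrative (here the manager), and the loop only prints the commands rather than running them:

```shell
# Generate the ssh-copy-id commands this node would run against the
# database nodes, skipping itself. self is illustrative.
self=192.168.142.133
cmds=$(for host in 192.168.142.130 192.168.142.131 192.168.142.132; do
    [ "$host" = "$self" ] && continue
    echo "ssh-copy-id $host"
done)
echo "$cmds"
```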
1. On the manager node, copy the sample scripts to the /usr/local/bin directory
cp -ra /root/mha4mysql-manager-0.57/samples/scripts /usr/local/bin
#Four executable files are copied over
#Check the directory permissions
ll /usr/local/bin/scripts/
-rwxr-xr-x. 1 1001 1001  3648 May 31 2015 master_ip_failover       #VIP management script for automatic failover
-rwxr-xr-x. 1 1001 1001  9870 May 31 2015 master_ip_online_change  #VIP management for online switchover
-rwxr-xr-x. 1 1001 1001 11867 May 31 2015 power_manager            #script to power off the host after a failure
-rwxr-xr-x. 1 1001 1001  1360 May 31 2015 send_report              #script to send an alert after failover
2. Copy the automatic-failover VIP management script above into /usr/local/bin; here the VIP is managed with a script
cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin
3. Rewrite the master_ip_failover script (delete the original content and write in the following):
vim /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

#Added section
my $vip = '192.168.142.200';
my $brdc = '192.168.142.255';
my $ifdev = 'ens33';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
4. Create the MHA software directory and copy in the configuration file
mkdir /etc/masterha
cp /root/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha
vim /etc/masterha/app1.cnf
[server default]
#manager log
manager_log=/var/log/masterha/app1/manager.log
#manager working directory
manager_workdir=/var/log/masterha/app1
#location where master keeps its binlogs; this path must match the binlog path configured on master
master_binlog_dir=/usr/local/mysql/data
#switchover script used during automatic failover, i.e. the script above
master_ip_failover_script=/usr/local/bin/master_ip_failover
#switchover script used during manual switching
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
#this is the password of the monitoring user created earlier
password=manager
remote_workdir=/tmp
#password of the replication user
repl_password=123
#the replication user
repl_user=myslave
#script that sends an alert after a switchover occurs
report_script=/usr/local/bin/scripts/send_report
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.142.131 -s 192.168.142.132
#script to shut down the failed host after a failure (left empty here)
shutdown_script=""
#ssh login user
ssh_user=root
#monitoring user
user=mha

[server1]
hostname=192.168.142.
port=3306

[server2]
candidate_master=1
check_repl_delay=0
hostname=192.168.142.
port=3306

[server3]
hostname=192.168.142.
port=3306
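A typo in app1.cnf (such as a misspelled report_script key) tends to surface only during a real failover. A rough sanity check can grep for the keys the [server default] section must define; the config fragment below is inlined for illustration rather than read from the real file:

```shell
# Check a pasted-in fragment of app1.cnf for required keys.
cnf='manager_workdir=/var/log/masterha/app1
master_ip_failover_script=/usr/local/bin/master_ip_failover
repl_user=myslave
ssh_user=root
user=mha'
missing=0
for key in manager_workdir master_ip_failover_script repl_user ssh_user user; do
    echo "$cnf" | grep -q "^$key=" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "config keys OK"
```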
5. Test passwordless SSH authentication and the replication status
masterha_check_ssh -conf=/etc/masterha/app1.cnf
masterha_check_repl -conf=/etc/masterha/app1.cnf
#Note: on first configuration, bring up the virtual IP manually on master
/sbin/ifconfig ens33:1 192.168.142.200/24
6. Start MHA
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
7. Check the MHA status; you can see that the current master is the master node (192.168.142.130)
masterha_check_status --conf=/etc/masterha/app1.cnf
8. Check the MHA log; it also shows that the current master is 192.168.142.130
cat /var/log/masterha/app1/manager.log
1. Start monitoring and watch the log output
tailf /var/log/masterha/app1/manager.log
2. Watch the address change
pkill -9 mysql  #kill the mysql service on master
#The VIP does not disappear when the manager node stops the MHA service; it moves to slave1
#Check the VIP migration on the slave
ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.142.131  netmask 255.255.255.0  broadcast 192.168.142.255
        inet6 fe80::b81a:9df:a960:45ac  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:97:8e:66  txqueuelen 1000  (Ethernet)
        RX packets 1687418  bytes 1157627305 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1376468  bytes 170996461 (163.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.142.200  netmask 255.255.255.0  broadcast 192.168.142.255
        ether 00:0c:29:97:8e:66  txqueuelen 1000  (Ethernet)
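Which interface currently carries the VIP can also be extracted from the ifconfig output instead of scanning it by eye. A sketch, run here against a pasted-in sample of the output above rather than a live `ifconfig` call:

```shell
# Find the interface whose inet address matches the VIP.
vip=192.168.142.200
sample='ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.142.131  netmask 255.255.255.0
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.142.200  netmask 255.255.255.0'
iface=$(echo "$sample" | awk -v vip="$vip" '
    /^[a-z]/ { dev = $1 }                                    # interface header line
    $1 == "inet" && $2 == vip { sub(":$", "", dev); print dev }')
echo "VIP $vip is on ${iface:-no-interface}"
```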
3. Open another new terminal on mha-manager and install a mysql client directly with yum
yum install mysql -y
#Grant access on slave1 (the new master); otherwise mha-manager cannot get into the database:
grant all on *.* to 'root'@'%' identified by 'abc123';
#Log in from mha-manager:
mysql -h 192.168.142.200 -uroot -p
Enter password:   #enter the password
(1) Create a database school, create a table info, and write in some simple content
MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

MySQL [(none)]> create database school;
Query OK, 1 row affected (0.00 sec)

MySQL [(none)]> use school;
Database changed
MySQL [school]> create table info (id int);
Query OK, 0 rows affected (0.01 sec)
(2) After creating them, check the database on slave1; the data will have been synchronized
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
(3) Since slave2 now replicates from the new master (slave1), the data should also be synchronized on slave2
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+