Environment: MySQL
Role | IP | Server ID | Write VIP | Read VIP |
---|---|---|---|---|
master1 | 192.168.1.7 | 1 | 192.168.1.100 | |
master2 | 192.168.1.8 | 2 | | 192.168.1.101 |
slave1 | 192.168.1.10 | 3 | | 192.168.1.102 |
slave2 | 192.168.1.12 | 4 | | 192.168.1.103 |
monitor | 192.168.1.13 | none | | |
① Deploy the Perl environment (all hosts)
[root@192 ~]# yum -y install perl-* libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64
The MMM architecture is implemented in Perl, so be sure to install the complete set of Perl packages. If yum reports errors, investigate them carefully; a common fix is to run yum remove -y libvirt-client and then rerun the yum install.
② Install the required Perl module libraries (all hosts)
[root@192 ~]# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP
Press Enter at the prompts to continue.
③ Stop the firewall (or open the required ports), disable SELinux, and set each host's hostname
mmm_agent: the agent listens on port 9989
mmm_monitor: the monitor listens on port 9988
[root@192 ~]# hostnamectl set-hostname master1
[root@192 ~]# hostnamectl set-hostname master2
[root@192 ~]# hostnamectl set-hostname slave1
[root@192 ~]# hostnamectl set-hostname slave2
Master/slave topology:
master2, slave1, and slave2 all replicate from master1
master1 replicates from master2
①master1:
[root@master1 ~]# cat /etc/my.cnf
[mysqld]
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
port = 3306
server_id = 1
socket = /usr/local/mysql/mysql.sock
log-error = /usr/local/mysql/data/mysqld.err
log-bin = mysql-bin
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 1
binlog_format = mixed

[client]
host = 127.0.0.1
user = root
password = 123.com
②master2:
[root@master2 ~]# cat /etc/my.cnf
[mysqld]
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
port = 3306
server_id = 2
socket = /usr/local/mysql/mysql.sock
log-error = /usr/local/mysql/data/mysqld.err
log-bin = mysql-bin
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 2
binlog_format = mixed

[client]
host = 127.0.0.1
user = root
password = 123.com
③slave1:
[root@slave1 ~]# cat /etc/my.cnf
[mysqld]
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
port = 3306
server_id = 3
socket = /usr/local/mysql/mysql.sock
log-error = /usr/local/mysql/data/mysqld.err
relay-log = relay-bin
relay-log-index = slave-relay-bin.index

[client]
host = 127.0.0.1
user = root
password = 123.com
④slave2:
[root@slave2 ~]# cat /etc/my.cnf
[mysqld]
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
port = 3306
server_id = 4
socket = /usr/local/mysql/mysql.sock
log-error = /usr/local/mysql/data/mysqld.err
relay-log = relay-bin
relay-log-index = slave-relay-bin.index

[client]
host = 127.0.0.1
user = root
password = 123.com
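The four configs differ mainly in server_id (and the slaves omit the binlog settings); server_id must be unique across all four DB hosts. A small helper (a sketch, not part of MMM) can extract it for a quick audit:

```shell
# Sketch: pull server_id out of a my.cnf; run it on each host and compare --
# if any two nodes share a value, replication will break.
get_server_id() {  # reads a my.cnf on stdin
  awk -F= '$1 ~ /^server_id[[:space:]]*$/ { gsub(/[[:space:]]/, "", $2); print $2; exit }'
}

# Example against the master1 config above:
printf '[mysqld]\nserver_id = 1\nport = 3306\n' | get_server_id    # → 1
```

On each host you would run `get_server_id < /etc/my.cnf` and check that the four results all differ.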
⑤ Grant replication privileges and run CHANGE MASTER
master1:
mysql> grant replication slave on *.* to myslave@'%' identified by '123.com';
mysql> show master status \G
*************************** 1. row ***************************
File: mysql-bin.000001
Position: 154
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
master2:
mysql> grant replication slave on *.* to myslave@'%' identified by '123.com';
mysql> show master status \G
*************************** 1. row ***************************
File: mysql-bin.000001
Position: 154
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
mysql> change master to master_host='192.168.1.7',master_user='myslave',master_password='123.com',master_log_file='mysql-bin.000001',master_log_pos=154;
mysql> start slave;
slave1:
mysql> change master to master_host='192.168.1.7',master_user='myslave',master_password='123.com',master_log_file='mysql-bin.000001',master_log_pos=154;
mysql> start slave;
slave2:
mysql> change master to master_host='192.168.1.7',master_user='myslave',master_password='123.com',master_log_file='mysql-bin.000001',master_log_pos=154;
mysql> start slave;
master1:
mysql> change master to master_host='192.168.1.8',master_user='myslave',master_password='123.com',master_log_file='mysql-bin.000001',master_log_pos=154;
mysql> start slave;
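The file and position from SHOW MASTER STATUS must be copied into CHANGE MASTER exactly. A small helper (a sketch, not part of MMM; the myslave/123.com credentials match the grant created above) can generate the statement so nothing is transcribed by hand:

```shell
# Sketch: build the CHANGE MASTER statement from `show master status\G` output.
build_change_master() {  # $1 = master ip; reads the \G output on stdin
  awk -v host="$1" '
    $1 == "File:"     { file = $2 }
    $1 == "Position:" { pos = $2 }
    END { printf "change master to master_host='\''%s'\'',master_user='\''myslave'\'',master_password='\''123.com'\'',master_log_file='\''%s'\'',master_log_pos=%s;\n", host, file, pos }
  '
}

# Example with the output captured above (live: mysql -e 'show master status\G' | ...):
printf 'File: mysql-bin.000001\nPosition: 154\n' | build_change_master 192.168.1.7
```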
① Create the monitoring and agent users (with replication running, executing this on master1 alone is enough)
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'%' identified by '123.com';
mysql> grant replication client on *.* to 'mmm_monitor'@'%' identified by '123.com';
mysql> select user,host from mysql.user where user in ('mmm_monitor','mmm_agent');
user | host |
---|---|
mmm_agent | % |
mmm_monitor | % |
② Install MMM (required on the monitor host and all four DB hosts)
[root@master1 ~]# wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.2.1.tar.gz
[root@master1 ~]# tar zxf :mmm2:mysql-mmm-2.2.1.tar.gz
[root@master1 ~]# cd mysql-mmm-2.2.1/
[root@master1 mysql-mmm-2.2.1]# make && make install
③ Configure mmm_common.conf (must be identical on the monitor host and all four DB hosts)
[root@master1 mysql-mmm]# vim mmm_common.conf
active_master_role  writer

<host default>
    cluster_interface    eno16777736
    pid_path             /var/run/mmm_agentd.pid
    bin_path             /usr/lib/mysql-mmm/
    replication_user     myslave
    replication_password 123.com
    agent_user           mmm_agent
    agent_password       123.com
</host>

<host master1>
    ip      192.168.1.7
    mode    master
    peer    master2
</host>

<host master2>
    ip      192.168.1.8
    mode    master
    peer    master1
</host>

<host slave1>
    ip      192.168.1.10
    mode    slave
</host>

<host slave2>
    ip      192.168.1.12
    mode    slave
</host>

<role writer>
    hosts   master1,master2
    ips     192.168.1.100
    mode    exclusive
</role>

<role reader>
    hosts   master2,slave1,slave2
    ips     192.168.1.101, 192.168.1.102, 192.168.1.103
    mode    balanced
</role>
④ Configure mmm_agent.conf (not needed on the monitor host; on each of the four DB hosts, set "this" to that host's own name)
[root@master1 ~]# cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this master1
[root@master2 ~]# cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this master2
[root@slave1 ~]# cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this slave1
[root@slave2 ~]# cat /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this slave2
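A common mistake is copying mmm_agent.conf between hosts and forgetting to change the "this" line, so the agent announces itself as the wrong node. A quick sanity check (a sketch):

```shell
# Sketch: read the "this" entry from an mmm_agent.conf; it must equal the
# local hostname set with hostnamectl earlier.
agent_identity() {  # $1 = path to mmm_agent.conf
  awk '$1 == "this" { print $2; exit }' "$1"
}

# On each DB host:
#   [ "$(agent_identity /etc/mysql-mmm/mmm_agent.conf)" = "$(hostname -s)" ] \
#     && echo "agent identity OK"
```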
⑤ Patch the mysql-mmm-agent init script (on all five hosts)
[root@master1 ~]# cat /etc/init.d/mysql-mmm-agent
#!/bin/sh
source /root/.bash_profile
# mysql-mmm-agent  This shell script takes care of starting and stopping
#                  the mmm agent daemon.
#
# chkconfig: - 64 36
.......
⑥ Start the mmm-agent daemon (on the four DB hosts)
[root@master1 ~]# chkconfig --add mysql-mmm-agent
[root@master1 ~]# chkconfig mysql-mmm-agent on
[root@master1 ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... running.
If startup fails, the cause is usually the Perl environment; install whatever module the error message says is missing.
[root@master1 ~]# ss -anpt | grep agentd
LISTEN 0 10 192.168.1.4:9989 *:* users:(("mmm_agentd",pid=26468,fd=3))
① Install MMM (same steps as above)
② Configure mmm_common.conf (identical to the DB hosts)
③ Configure mmm_mon.conf
[root@localhost ~]# cat /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip          127.0.0.1
    pid_path    /var/run/mmm_mond.pid
    bin_path    /usr/lib/mysql-mmm/
    status_path /var/lib/misc/mmm_mond.status
    ping_ips    192.168.1.7,192.168.1.8,192.168.1.10,192.168.1.12
    auto_set_online 0
</monitor>

<host default>
    monitor_user     mmm_monitor
    monitor_password 123.com
</host>

debug 0
④ Start the mmm-monitor daemon
[root@localhost ~]# chkconfig --add mysql-mmm-monitor
[root@localhost ~]# chkconfig mysql-mmm-monitor on
[root@localhost ~]# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: running
[root@localhost ~]# ss -anpt | grep mond
LISTEN 0 10 127.0.0.1:9988 *:* users:(("mmm_mond",pid=27378,fd=9))
Steps:
① When the monitor is first started, check each node's state
[root@localhost ~]# mmm_control show
master1(192.168.1.7) master/AWAITING_RECOVERY. Roles:
master2(192.168.1.8) master/AWAITING_RECOVERY. Roles:
slave1(192.168.1.10) slave/AWAITING_RECOVERY. Roles:
slave2(192.168.1.12) slave/AWAITING_RECOVERY. Roles:
② Bring each node online
[root@localhost ~]# mmm_control set_online slave1
OK: State of 'slave1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost ~]# mmm_control set_online master1
OK: State of 'master1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost ~]# mmm_control set_online master2
OK: State of 'master2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost ~]# mmm_control set_online slave2
OK: State of 'slave2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost init.d]# mmm_control show
master1(192.168.1.4) master/ONLINE. Roles: writer(192.168.1.100)
master2(192.168.1.8) master/ONLINE. Roles: reader(192.168.1.103)
slave1(192.168.1.10) slave/ONLINE. Roles: reader(192.168.1.101)
slave2(192.168.1.12) slave/ONLINE. Roles: reader(192.168.1.102)
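For unattended checks (e.g. a cron job that alerts when fewer than four hosts are ONLINE), the mmm_control show output is easy to parse. A hedged helper sketch:

```shell
# Sketch: count how many hosts report a given state in `mmm_control show`
# output (the state appears as e.g. "master/ONLINE." on each line).
count_state() {  # $1 = state name; reads the show output on stdin
  grep -c "/$1\."
}

# Example with captured output (live: mmm_control show | count_state ONLINE):
printf '%s\n' \
  'master1(192.168.1.4) master/ONLINE. Roles: writer(192.168.1.100)' \
  'slave1(192.168.1.10) slave/ONLINE. Roles: reader(192.168.1.101)' \
  'slave2(192.168.1.12) slave/AWAITING_RECOVERY. Roles:' \
  | count_state ONLINE    # → 2
```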
③ Check the VIP state on each node
master1:
[root@master1 ~]# ip a
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:81:20:3c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic eno16777736
       valid_lft 78128sec preferred_lft 78128sec
    inet 192.168.1.100/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe81:203c/64 scope link
       valid_lft forever preferred_lft forever
master2:
[root@master2 ~]# ip a
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f6:7f:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.8/24 brd 192.168.1.255 scope global dynamic eno16777736
       valid_lft 78143sec preferred_lft 78143sec
    inet 192.168.1.103/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef6:7f57/64 scope link
       valid_lft forever preferred_lft forever
slave1:
[root@slave1 ~]# ip a
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4b:6a:1e brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.10/24 brd 192.168.1.255 scope global dynamic eno16777736
       valid_lft 78172sec preferred_lft 78172sec
    inet 192.168.1.101/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe4b:6a1e/64 scope link
       valid_lft forever preferred_lft forever
slave2:
[root@slave2 ~]# ip a
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:db:f7:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.12/24 brd 192.168.1.255 scope global dynamic eno16777736
       valid_lft 80764sec preferred_lft 80764sec
    inet 192.168.1.102/32 scope global eno16777736
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fedb:f7b8/64 scope link
       valid_lft forever preferred_lft forever
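MMM adds each VIP as a /32 secondary address, as the ip a output above shows, so testing whether a host currently holds a given VIP is a one-line match. A sketch:

```shell
# Sketch: check whether a VIP is currently bound on this host.
# MMM adds VIPs as /32 secondary addresses, so a fixed-string match is enough.
has_vip() {  # $1 = vip address; reads `ip a` output on stdin
  grep -qF "inet $1/32 "
}

# On master1 (live: ip a | has_vip 192.168.1.100 && echo "writer VIP here"):
printf 'inet 192.168.1.4/24 brd 192.168.1.255 scope global\ninet 192.168.1.100/32 scope global eno16777736\n' \
  | has_vip 192.168.1.100 && echo "writer VIP here"
```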
④ Simulate a master1 failure and watch the VIP state and the replication state
master1:
[root@master1 ~]# systemctl stop mysqld
monitor:
[root@localhost ~]# tailf /var/log/mysql-mmm/mmm_mond.log
2018/04/22 15:14:05  WARN Check 'rep_backlog' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.1.7:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.1.7' (111)
2018/04/22 15:14:16 FATAL State of host 'master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2018/04/22 15:14:16  INFO Removing all roles from host 'master1':
2018/04/22 15:14:16  INFO Removed role 'writer(192.168.1.100)' from host 'master1'
2018/04/22 15:14:16  INFO Orphaned role 'writer(192.168.1.100)' has been assigned to 'master2'
[root@localhost ~]# mmm_control show
master1(192.168.1.7) master/HARD_OFFLINE. Roles:
master2(192.168.1.8) master/ONLINE. Roles: reader(192.168.1.101), writer(192.168.1.100)
slave1(192.168.1.10) slave/ONLINE. Roles: reader(192.168.1.102)
slave2(192.168.1.12) slave/ONLINE. Roles: reader(192.168.1.103)
[root@localhost ~]# mmm_control checks all
master1  ping   [last change: 2018/04/22 15:42:02]  OK
master1  mysql  [last change: 2018/04/22 15:47:57]  ERROR: Connect error (host = 192.168.1.7:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.1.7' (111)
master2:
[root@master2 ~]# ip a
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:f6:7f:57 brd ff:ff:ff:ff:ff:ff inet 192.168.1.8/24 brd 192.168.1.255 scope global dynamic eno16777736 valid_lft 77901sec preferred_lft 77901sec inet 192.168.1.103/32 scope global eno16777736 valid_lft forever preferred_lft forever inet 192.168.1.100/32 scope global eno16777736 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fef6:7f57/64 scope link valid_lft forever preferred_lft forever
slave1:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.8
Master_User: myslave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
......
slave2:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.8
Master_User: myslave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
.......
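Both slaves now point at 192.168.1.8 with both replication threads running. That check can be done mechanically; a sketch that succeeds only when both threads report Yes:

```shell
# Sketch: succeed only when both Slave_IO_Running and Slave_SQL_Running are Yes.
slave_ok() {  # reads `show slave status\G` output on stdin
  out=$(cat)
  printf '%s\n' "$out" | grep -q 'Slave_IO_Running: Yes' &&
    printf '%s\n' "$out" | grep -q 'Slave_SQL_Running: Yes'
}

# On slave1/slave2 (assuming the root credentials from the my.cnf [client] section):
#   mysql -uroot -p123.com -e 'show slave status\G' | slave_ok && echo "replication OK"
```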
⑤ Manually restart master1 and observe the states again
master1:
[root@master1 ~]# systemctl restart mysqld
monitor:
[root@localhost ~]# tailf /var/log/mysql-mmm/mmm_mond.log
2018/04/22 15:53:27  INFO Check 'mysql' on 'master1' is ok!
2018/04/22 15:53:28 FATAL State of host 'master1' changed from HARD_OFFLINE to AWAITING_RECOVERY
2018/04/22 15:53:28  INFO Check 'rep_threads' on 'master1' is ok!
2018/04/22 15:53:28  INFO Check 'rep_backlog' on 'master1' is ok
[root@localhost ~]# mmm_control show
master1(192.168.1.7) master/AWAITING_RECOVERY. Roles:
master2(192.168.1.8) master/ONLINE. Roles: reader(192.168.1.101), writer(192.168.1.100)
slave1(192.168.1.10) slave/ONLINE. Roles: reader(192.168.1.102)
slave2(192.168.1.12) slave/ONLINE. Roles: reader(192.168.1.103)
master1 has moved from HARD_OFFLINE to AWAITING_RECOVERY; bring it back online:
[root@localhost ~]# mmm_control set_online master1
OK: State of 'master1' changed to ONLINE. Now you can wait some time and check its new roles!
Checking the latest cluster state again: after coming back up, master1 does not take back the writer role.
[root@localhost ~]# mmm_control show
master1(192.168.1.7) master/ONLINE. Roles:
master2(192.168.1.8) master/ONLINE. Roles: reader(192.168.1.101), writer(192.168.1.100)
slave1(192.168.1.10) slave/ONLINE. Roles: reader(192.168.1.102)
slave2(192.168.1.12) slave/ONLINE. Roles: reader(192.168.1.103)
① If the standby master (master2) goes down, cluster performance is unaffected; only master2's reader role is removed.
② If the primary (master1) goes down, master2 takes over the writer role, and slave1/slave2 are automatically re-pointed to the new primary (an automatic CHANGE MASTER).
③ If master1 goes down while master2's applied replication position still lags behind it, master2 becomes writable anyway, and the data can no longer be kept consistent.
④ If master2, slave1, and slave2 all lag behind master1 when it goes down, slave1/slave2 first finish applying what they already received from master1 and then re-point to master2; the data still cannot be kept consistent.
⑤ When using the MMM architecture, give the primary and standby masters identical hardware, and either enable semi-synchronous replication for better safety or use MariaDB/MySQL 5.7 multi-threaded replication to improve replication performance.
⑥ Depending on whether auto_set_online is enabled in mmm_mon.conf, the monitor checks host state every 60 s and moves hosts waiting in AWAITING_RECOVERY to ONLINE, provided they have already recovered from the HARD_OFFLINE failure state; the three monitored database states progress HARD_OFFLINE → AWAITING_RECOVERY → ONLINE.
⑦ The externally served VIPs are provided by the monitor program: if the monitor is not started, no VIP is served. If VIPs have already been assigned and the monitor is then shut down, the assigned VIPs are not released immediately as long as the network service is not restarted; this lowers the reliability demands on the monitor itself. But if a database server has actually crashed, its VIP must move, and the crashed server will not accept any connections.
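The state ladder in ⑥ can be sketched as a toy transition function (an illustration only, not MMM code):

```shell
# Toy model of the monitor's state ladder: HARD_OFFLINE → AWAITING_RECOVERY → ONLINE.
next_state() {
  case "$1" in
    HARD_OFFLINE)      echo AWAITING_RECOVERY ;;  # health checks pass again
    AWAITING_RECOVERY) echo ONLINE ;;             # auto_set_online, or a manual set_online
    *)                 echo "$1" ;;               # ONLINE stays ONLINE
  esac
}

next_state HARD_OFFLINE    # → AWAITING_RECOVERY
```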