MySQL High Availability Architecture: MMM Deployment Notes

 

Introduction to MMM
MMM (Master-Master replication manager for MySQL) is a set of scripts that supports failover between two masters and day-to-day management of a master-master setup. MMM is written in Perl and is used mainly to monitor and manage MySQL master-master (dual-master) replication; you can think of it as a MySQL master-master replication manager. Although the topology is called master-master, the application is only allowed to write to one master at any given moment, while the standby master serves part of the read traffic, which keeps it warm and speeds up the switchover between masters. In other words, MMM implements failover on one hand, and on the other hand its bundled tool scripts can load-balance reads across multiple slaves. It is a scalable script suite for monitoring, failover and management of MySQL master-master replication configurations (only one node is writable at any time). The suite can also load-balance reads across any number of slaves in a standard master-slave setup, so you can use it to bring up virtual IPs on a group of replicating servers; in addition, it ships with scripts for data backup and for resynchronizing data between nodes.

MMM provides both automatic and manual ways to remove the virtual IP from the server in a replication group whose replication lag is too high; it can also back up data and resynchronize data between two nodes. Because MMM cannot fully guarantee data consistency, it suits scenarios where consistency requirements are not very strict but business availability must be maximized. MySQL itself ships no replication-failover solution; MMM provides server failover and therefore MySQL high availability. For businesses with strict data-consistency requirements, an MMM-based HA architecture is strongly discouraged.

An internal MySQL-MMM architecture diagram, shared from the web, originally appeared here (image not reproduced).

Pros and cons of MySQL-MMM

Pros: high availability, good scalability, automatic failover on failure; for master-master replication, only one database accepts writes at any given time, which preserves data consistency.
Cons: the monitor node is a single point of failure; it can be combined with Keepalived for high availability.

How MySQL-MMM works

MMM (Master-Master replication manager for MySQL) is a flexible script suite implemented in Perl. It monitors MySQL replication, performs failover, and manages a MySQL master-master replication configuration (only one node is writable at any given time). It consists of three programs:
mmm_mond: the monitor process. It performs all the monitoring work and decides and handles all node-role activity. This script runs on the monitoring host.
mmm_agentd: the agent process running on every MySQL server (masters and slaves). It performs the monitoring probes and executes simple remote service changes. This script runs on the monitored hosts.
mmm_control: a simple script that provides commands for managing the mmm_mond process.

The mysql-mmm monitor manages several virtual IPs (VIPs): one writable VIP and several readable VIPs. Under the monitor's control these IPs are bound to available MySQL servers; when a MySQL server goes down, the monitor migrates its VIPs to another server. For the monitoring to work, the relevant users must be granted in MySQL so that the monitor can do its maintenance: an mmm_monitor user and an mmm_agent user, plus an mmm_tools user if you want to use MMM's backup tools.
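
To make day-to-day operation concrete, here are a few typical mmm_control invocations on the monitor host (an illustrative sketch using this deployment's hostnames; show, checks, set_online and set_offline are stock mmm_control subcommands):

[root@mmm-monit ~]# mmm_control show                     // list every host, its state, and the roles/VIPs it holds
[root@mmm-monit ~]# mmm_control checks all               // run the ping/mysql/rep_threads/rep_backlog checks on every host
[root@mmm-monit ~]# mmm_control set_offline db-slave     // administratively take a node out of the pool
[root@mmm-monit ~]# mmm_control set_online db-slave      // bring it back

Both show and checks appear again in the walkthrough below.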

MySQL-MMM HA deployment walkthrough (automatic failover, read/write splitting)

0) Host configuration

Role                 IP address             Hostname                 server-id
monitoring           182.48.115.233         mmm-monit                -
master1              182.48.115.236         db-master1               1
master2              182.48.115.237         db-master2               2
slave1               182.48.115.238         db-slave                 3
 
The service IPs (VIPs) used by the business are as follows:
IP address                Role            Description
182.48.115.234            write           the application connects to this IP for writes to the primary
182.48.115.235            read            the application connects to this IP for reads
182.48.115.239            read            the application connects to this IP for reads
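
Applications should connect only through these VIPs, never through the servers' real IPs, so that failover stays transparent to them. A minimal sketch, assuming a hypothetical application account app with password app123 and a hypothetical table testdb.t1:

// writes always go to the writer VIP
mysql -h 182.48.115.234 -u app -papp123 -e "INSERT INTO testdb.t1 VALUES (1);"
// reads can go to either reader VIP (spread them on the client or proxy side)
mysql -h 182.48.115.235 -u app -papp123 -e "SELECT * FROM testdb.t1;"
mysql -h 182.48.115.239 -u app -papp123 -e "SELECT * FROM testdb.t1;"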

1) Configure /etc/hosts (on all machines)

[root@mmm-monit ~]# cat /etc/hosts
.......
182.48.115.233   mmm-monit                
182.48.115.236   db-master1               
182.48.115.237   db-master2               
182.48.115.238   db-slave 

2) Install MySQL on the three database hosts and set up the replication topology

Here 182.48.115.236 and 182.48.115.237 are masters of each other, and 182.48.115.238 is a slave of 182.48.115.236.
........................................................................
MySQL installation reference: http://www.cnblogs.com/kevingrace/p/6109679.html
MySQL master-slave/master-master setup reference: http://www.cnblogs.com/kevingrace/p/6256603.html
........................................................................
--------- my.cnf additions on 182.48.115.236 ---------
server-id = 1
log-bin = mysql-bin
log_slave_updates = 1
auto-increment-increment = 2
auto-increment-offset = 1
--------- my.cnf additions on 182.48.115.237 ---------
server-id = 2
log-bin = mysql-bin
log_slave_updates = 1
auto-increment-increment = 2
auto-increment-offset = 2
--------- my.cnf additions on 182.48.115.238 ---------
server-id = 3
log-bin = mysql-bin
log_slave_updates = 1

Note:
The server-id values do not have to be sequential; they only have to be unique.
Next, grant replication access between 182.48.115.236 and 182.48.115.237 in both directions, and from 182.48.115.236 to 182.48.115.238.
Finally run the appropriate "change master ...." statements for the master-master and master-slave replication. The detailed steps are omitted here; see the documents referenced above, or the sketch below.
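
For reference, a minimal sketch of those "change master ...." statements, assuming the replication account slave / slave@123 that appears later in mmm_common.conf; the MASTER_LOG_FILE and MASTER_LOG_POS values are placeholders and must come from SHOW MASTER STATUS on the respective master:

// on 182.48.115.236 (db-master1), replicate from db-master2:
mysql> CHANGE MASTER TO MASTER_HOST='182.48.115.237', MASTER_USER='slave', MASTER_PASSWORD='slave@123',
    -> MASTER_LOG_FILE='<file from SHOW MASTER STATUS on db-master2>', MASTER_LOG_POS=<pos>;
mysql> START SLAVE;

// on 182.48.115.237 (db-master2) and on 182.48.115.238 (db-slave), replicate from db-master1:
mysql> CHANGE MASTER TO MASTER_HOST='182.48.115.236', MASTER_USER='slave', MASTER_PASSWORD='slave@123',
    -> MASTER_LOG_FILE='<file from SHOW MASTER STATUS on db-master1>', MASTER_LOG_POS=<pos>;
mysql> START SLAVE;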

3) Install MMM (run on all machines)

.......First install the Perl modules MMM needs.......
[root@db-master1 ~]# vim install.sh        // run the install script below on every machine
#!/bin/bash
wget http://xrl.us/cpanm --no-check-certificate
mv cpanm /usr/bin
chmod 755 /usr/bin/cpanm
cat > /root/list << EOF
Algorithm::Diff
Class::Singleton
DBI
DBD::mysql
File::Basename
File::stat
File::Temp
Log::Dispatch
Log::Log4perl
Mail::Send
Net::ARP
Net::Ping
Proc::Daemon
Thread::Queue
Time::HiRes
EOF

# one module name per line, so read line by line instead of word-splitting
while read -r package
do
    cpanm "$package"
done < /root/list
[root@db-master1 ~]# chmod 755 install.sh
[root@db-master1 ~]# ./install.sh 

.........Download the mysql-mmm software and install it on all servers............
[root@db-master1 ~]# wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.2.1.tar.gz
[root@db-master1 ~]# mv :mmm2:mysql-mmm-2.2.1.tar.gz mysql-mmm-2.2.1.tar.gz
[root@db-master1 ~]# tar -zvxf mysql-mmm-2.2.1.tar.gz
[root@db-master1 ~]# cd mysql-mmm-2.2.1
[root@db-master1 mysql-mmm-2.2.1]# make install

The main layout after mysql-mmm is installed is as follows (note: yum installs and source installs use slightly different paths):
/usr/lib/perl5/vendor_perl/5.8.8/MMM                    the main Perl modules MMM uses
/usr/lib/mysql-mmm                                      the main scripts MMM uses
/usr/sbin                                               the path of MMM's main commands
/etc/init.d/                                            init scripts for the MMM agent and monitor services
/etc/mysql-mmm                                          the MMM configuration path; all configuration files live here by default
/var/log/mysql-mmm                                      the default location of MMM's logs

That completes MMM's basic installation. Next come the actual configuration files: mmm_common.conf and mmm_agent.conf are the agent-side files, and mmm_mon.conf is the monitor-side file.

4) Configure the agent-side files on db-master1, db-master2 and db-slave (the mmm_common.conf content is identical everywhere)

First configure the agent's mmm_common.conf on db-master1 (this file must be configured on all machines, including the monitor):
[root@db-master1 ~]# cd /etc/mysql-mmm/
[root@db-master1 mysql-mmm]# cp mmm_common.conf  mmm_common.conf.bak
[root@db-master1 mysql-mmm]# vim mmm_common.conf
active_master_role      writer
<host default>
        cluster_interface               eth0
  
        pid_path                                /var/run/mmm_agentd.pid
        bin_path                                /usr/lib/mysql-mmm/
        replication_user                        slave                       // note: this account and the password on the next line are the replication account created earlier when setting up the master-master/master-slave replication
        replication_password                    slave@123
        agent_user                              mmm_agent
        agent_password                          mmm_agent
</host>
<host db-master1>
        ip                                              182.48.115.236
        mode                                            master
        peer                                            db-master2
</host>
<host db-master2>
        ip                                              182.48.115.237
        mode                                            master
        peer                                            db-master1
</host>
<host db-slave>
        ip                                              182.48.115.238
        mode                                            slave
</host>
<role writer>
        hosts                                           db-master1, db-master2
        ips                                             182.48.115.234
        mode                                            exclusive
</role>
 
<role reader>
        hosts                                           db-master2, db-slave
        ips                                             182.48.115.235, 182.48.115.239
        mode                                            balanced
</role>
 
Explanation of the configuration:
replication_user    the user used to check replication
agent_user          the user the agent connects as
mode                marks the host as a master, a standby master, or a slave.
mode exclusive means the role is exclusive: only one master can hold the writer role at a time.
In <role writer>, hosts lists the real host IPs/hostnames of the current master and the standby master, and ips is the virtual IP offered to the outside.
In <role reader>, hosts lists the real IPs/hostnames of the read servers, and ips are their virtual IPs.
  
You can simply copy mmm_common.conf from db-master1 to /etc/mysql-mmm on db-master2, db-slave and mmm-monit:
[root@db-master1 ~]# scp /etc/mysql-mmm/mmm_common.conf db-master2:/etc/mysql-mmm/
[root@db-master1 ~]# scp /etc/mysql-mmm/mmm_common.conf db-slave:/etc/mysql-mmm/
[root@db-master1 ~]# scp /etc/mysql-mmm/mmm_common.conf mmm-monit:/etc/mysql-mmm/

Then configure mmm_agent.conf under /etc/mysql-mmm on each of db-master1, db-master2 and db-slave, each with its own identifier. Note that the "this db1" line in this file must be changed
to the local hostname: in this environment db-master1 gets "this db-master1", db-master2 gets "this db-master2", and db-slave gets "this db-slave".
  
On db-master1 (182.48.115.236):
[root@db-master1 ~]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db-master1                                       
  
On db-master2 (182.48.115.237):
[root@db-master2 ~]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db-master2
  
On db-slave (182.48.115.238):
[root@db-slave ~]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db-slave
  
------------------------------------------------------------------------------------------------------
Next, configure the monitor's configuration file on mmm-monit (182.48.115.233):
[root@mmm-monit ~]# cp /etc/mysql-mmm/mmm_mon.conf  /etc/mysql-mmm/mmm_mon.conf.bak
[root@mmm-monit ~]# vim /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf
 
<monitor>
    ip                  182.48.115.233
    pid_path            /var/run/mysql-mmm/mmm_mond.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    ping_ips            182.48.115.238,182.48.115.237,182.48.115.236
    auto_set_online     10               // a recovered node waiting in AWAITING_RECOVERY is set ONLINE automatically after 10 seconds
</monitor>
 
<host default>
    monitor_user        mmm_monitor
    monitor_password    mmm_monitor
</host>
 
debug 0                                   
  
Compared with the stock file, only the IPs of all monitored hosts in the architecture were added to ping_ips, and the monitoring user was configured in <host default>.

5) Create the monitoring users; three users are needed

Specifically:
User              Description                                                         Privileges
monitor user      used by the MMM monitor to check the health of all MySQL servers    REPLICATION CLIENT
agent user        used by the MMM agent to change read_only etc. on the masters       SUPER, REPLICATION CLIENT, PROCESS
repl              used for replication                                                REPLICATION SLAVE

Grant these on the three servers (db-master1, db-master2, db-slave). Because the master-master and master-slave replication set up earlier is already working, running the grants on
one server is enough; the privileges replicate automatically to the other two machines. The replication account already exists, so only the remaining two accounts are granted here.

Run the grants on db-master1:
mysql> GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'182.48.115.%'   IDENTIFIED BY 'mmm_agent';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'182.48.115.%' IDENTIFIED BY 'mmm_monitor';
Query OK, 0 rows affected (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Then check on db-master2 and db-slave: the accounts granted on db-master1 have been replicated over.
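
A quick way to verify the replicated grants (run on db-master2 or db-slave; both rows should be present):

mysql> SELECT user, host FROM mysql.user WHERE user IN ('mmm_agent', 'mmm_monitor');
+-------------+--------------+
| user        | host         |
+-------------+--------------+
| mmm_agent   | 182.48.115.% |
| mmm_monitor | 182.48.115.% |
+-------------+--------------+
2 rows in set (0.00 sec)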

6) Start the agent and monitor services

Finally start the agent on db-master1, db-master2 and db-slave:
[root@db-master1 ~]# /etc/init.d/mysql-mmm-agent start     // replace start with status to check whether the agent process is running
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
 
[root@db-master2 ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
 
[root@db-slave ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
 
Then start the monitor program on mmm-monit:
[root@mmm-monit ~]# mkdir /var/run/mysql-mmm
[root@mmm-monit ~]# /etc/init.d/mysql-mmm-monitor start        
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
........................................................................................................
If the monitor fails to start with an error like the following:
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Base class package "Class::Singleton" is empty.
    (Perhaps you need to 'use' the module which defines that package first,
    or make that module available in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .).
 at /usr/share/perl5/vendor_perl/MMM/Monitor/Agents.pm line 2
BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Monitor/Agents.pm line 2.
Compilation failed in require at /usr/share/perl5/vendor_perl/MMM/Monitor/Monitor.pm line 15.
BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/MMM/Monitor/Monitor.pm line 15.
Compilation failed in require at /usr/sbin/mmm_mond line 28.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 28.
failed
 
Fix:
[root@mmm-monit ~]# perl -MCPAN -e shell
...............................................
If that command itself fails with:
Can't locate CPAN.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .).
BEGIN failed--compilation aborted.
 
Fix:
[root@mmm-monit ~]# rpm -q perl-CPAN
package perl-CPAN is not installed
[root@mmm-monit ~]# yum install perl-CPAN
...............................................
Once "perl -MCPAN -e shell" succeeds, enter the following install commands at the cpan prompt:
......
cpan[1]> install MIME::Entity          // enter these install commands one by one
cpan[2]> install MIME::Parser
cpan[3]> install Crypt::PasswdMD5
cpan[4]> install Term::ReadPassword
cpan[5]> install Crypt::CBC
cpan[6]> install Crypt::Blowfish
cpan[7]> install Daemon::Generic
cpan[8]> install DateTime
cpan[9]> install SOAP::Lite
 
Alternatively, run the installs directly from the shell:
[root@mmm-monit ~]# perl -MCPAN -e 'install HTML::Template'
[root@mmm-monit ~]# perl -MCPAN -e 'install MIME::Entity'
[root@mmm-monit ~]# perl -MCPAN -e 'install Crypt::PasswdMD5'
[root@mmm-monit ~]# perl -MCPAN -e 'install Term::ReadPassword'
[root@mmm-monit ~]# perl -MCPAN -e 'install Crypt::CBC'
[root@mmm-monit ~]# perl -MCPAN -e 'install Crypt::Blowfish'
[root@mmm-monit ~]# perl -MCPAN -e 'install Daemon::Generic'
[root@mmm-monit ~]# perl -MCPAN -e 'install DateTime'
[root@mmm-monit ~]# perl -MCPAN -e 'install SOAP::Lite'
............................................................................................................

After starting the monitor, checking its status showed the process was not actually running:
[root@mmm-monit ~]# /etc/init.d/mysql-mmm-monitor status
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Checking MMM Monitor process: not running.

Fix:
Set debug to 1 in mmm_mon.conf (i.e. enable debug mode), then run:
[root@mmm-monit ~]# /etc/init.d/mysql-mmm-monitor start
.......
open2: exec of /usr/libexec/mysql-mmm/monitor/checker  ping_ip failed at /usr/share/perl5/vendor_perl/MMM/Monitor/Checker.pm line 143.
2017/06/01 20:16:02  WARN Checker 'ping_ip' is dead!
2017/06/01 20:16:02  INFO Spawning checker 'ping_ip'...
2017/06/01 20:16:02 DEBUG Core: reaped child 17439 with exit 65280

The cause: the bin_path in mmm_mon.conf pointed to the wrong directory.
[root@mmm-monit ~]# cat /etc/mysql-mmm/mmm_mon.conf|grep bin_path
    bin_path            /usr/libexec/mysql-mmm
Changing bin_path to /usr/lib/mysql-mmm fixes it:
[root@mmm-monit ~]# cat /etc/mysql-mmm/mmm_mon.conf|grep bin_path
    bin_path            /usr/lib/mysql-mmm

Then start the monitor again:
[root@mmm-monit ~]# /etc/init.d/mysql-mmm-monitor start
.......
FATAL Couldn't open status file '/var/lib/mysql-mmm/mmm_mond.status': Starting up without status inf
.......
Error in tempfile() using template /var/lib/mysql-mmm/mmm_mond.statusXXXXXXXXXX: Parent directory (/var/lib/mysql-mmm/) does not exist at /usr/share/perl5/vendor_perl/MMM/Monitor/Agents.pm line 158.
Perl exited with active threads:
    6 running and unjoined
    0 finished and unjoined
    0 running and detached

The cause: the status_path in mmm_mon.conf pointed to a nonexistent directory.
[root@mmm-monit ~]# cat /etc/mysql-mmm/mmm_mon.conf |grep status_path
    status_path         /var/lib/mysql-mmm/mmm_mond.status
Changing status_path to /var/lib/misc/mmm_mond.status fixes it:
[root@mmm-monit ~]# cat /etc/mysql-mmm/mmm_mon.conf|grep status_path
    status_path         /var/lib/misc/mmm_mond.status

Then restart the monitor:
[root@mmm-monit ~]# /etc/init.d/mysql-mmm-monitor restart
........
2017/06/01 20:57:14 DEBUG Sending command 'SET_STATUS(ONLINE, reader(182.48.115.235), db-master1)' to db-master2 (182.48.115.237:9989)
2017/06/01 20:57:14 DEBUG Received Answer: OK: Status applied successfully!|UP:885492.82
2017/06/01 20:57:14 DEBUG Sending command 'SET_STATUS(ONLINE, writer(182.48.115.234), db-master1)' to db-master1 (182.48.115.236:9989)
2017/06/01 20:57:14 DEBUG Received Answer: OK: Status applied successfully!|UP:65356.14
2017/06/01 20:57:14 DEBUG Sending command 'SET_STATUS(ONLINE, reader(182.48.115.239), db-master1)' to db-slave (182.48.115.238:9989)
2017/06/01 20:57:14 DEBUG Received Answer: OK: Status applied successfully!|UP:945625.05
2017/06/01 20:57:15 DEBUG Listener: Waiting for connection...
2017/06/01 20:57:17 DEBUG Sending command 'SET_STATUS(ONLINE, reader(182.48.115.235), db-master1)' to db-master2 (182.48.115.237:9989)
2017/06/01 20:57:17 DEBUG Received Answer: OK: Status applied successfully!|UP:885495.95
2017/06/01 20:57:17 DEBUG Sending command 'SET_STATUS(ONLINE, writer(182.48.115.234), db-master1)' to db-master1 (182.48.115.236:9989)
2017/06/01 20:57:17 DEBUG Received Answer: OK: Status applied successfully!|UP:65359.27
2017/06/01 20:57:17 DEBUG Sending command 'SET_STATUS(ONLINE, reader(182.48.115.239), db-master1)' to db-slave (182.48.115.238:9989)
2017/06/01 20:57:17 DEBUG Received Answer: OK: Status applied successfully!|UP:945628.17
2017/06/01 20:57:18 DEBUG Listener: Waiting for connection...
.........

As long as the checks during startup report no errors and you see "successfully" messages, the monitor process is working properly.
[root@mmm-monit ~]# ps -ef|grep monitor
root     30651 30540  0 20:59 ?        00:00:00 perl /usr/lib/mysql-mmm/monitor/checker ping_ip
root     30654 30540  0 20:59 ?        00:00:00 perl /usr/lib/mysql-mmm/monitor/checker mysql
root     30656 30540  0 20:59 ?        00:00:00 perl /usr/lib/mysql-mmm/monitor/checker ping
root     30658 30540  0 20:59 ?        00:00:00 perl /usr/lib/mysql-mmm/monitor/checker rep_backlog
root     30660 30540  0 20:59 ?        00:00:00 perl /usr/lib/mysql-mmm/monitor/checker rep_threads

The final mmm_mon.conf therefore looks like this:
[root@mmm-monit ~]# cat /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip                  182.48.115.233
    pid_path            /var/run/mysql-mmm/mmm_mond.pid
    bin_path            /usr/lib/mysql-mmm
    status_path         /var/lib/misc/mmm_mond.status
    ping_ips            182.48.115.238,182.48.115.237,182.48.115.236
    auto_set_online     10
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    mmm_monitor
</host>

debug 1

[root@mmm-monit ~]# ll /var/lib/misc/mmm_mond.status
-rw-------. 1 root root 121 6月   1 21:06 /var/lib/misc/mmm_mond.status
[root@mmm-monit ~]# ll /var/run/mysql-mmm/mmm_mond.pid
-rw-r--r--. 1 root root 5 6月   1 20:59 /var/run/mysql-mmm/mmm_mond.pid

-----------------------------------------------------------
The agent log lives at /var/log/mysql-mmm/mmm_agentd.log and the monitor log at /var/log/mysql-mmm/mmm_mond.log.
Whatever goes wrong during startup, the logs usually record it in detail.

7) Check the status of the cluster hosts from the monitor host

[root@mmm-monit ~]# mmm_control checks all
db-master2  ping         [last change: 2017/06/01 20:59:39]  OK
db-master2  mysql        [last change: 2017/06/01 20:59:39]  OK
db-master2  rep_threads  [last change: 2017/06/01 20:59:39]  OK
db-master2  rep_backlog  [last change: 2017/06/01 20:59:39]  OK: Backlog is null
db-master1  ping         [last change: 2017/06/01 20:59:39]  OK
db-master1  mysql        [last change: 2017/06/01 20:59:39]  OK
db-master1  rep_threads  [last change: 2017/06/01 20:59:39]  OK
db-master1  rep_backlog  [last change: 2017/06/01 20:59:39]  OK: Backlog is null
db-slave    ping         [last change: 2017/06/01 20:59:39]  OK
db-slave    mysql        [last change: 2017/06/01 20:59:39]  OK
db-slave    rep_threads  [last change: 2017/06/01 20:59:39]  OK
db-slave    rep_backlog  [last change: 2017/06/01 20:59:39]  OK: Backlog is null

8) Check the online status of the cluster from the monitor host

[root@mmm-monit ~]# mmm_control show
  db-master1(182.48.115.236) master/ONLINE. Roles: writer(182.48.115.234)
  db-master2(182.48.115.237) master/ONLINE. Roles: reader(182.48.115.235)
  db-slave(182.48.115.238) slave/ONLINE. Roles: reader(182.48.115.239)

Then look on the MMM agent machines and you will see the VIPs are bound:
[root@db-master1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:5f:58:dc brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.236/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.234/32 scope global eth0
    inet6 fe80::5054:ff:fe5f:58dc/64 scope link 
       valid_lft forever preferred_lft forever

[root@db-master2 mysql-mmm]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:1b:6e:53 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.237/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.235/32 scope global eth0
    inet6 fe80::5054:ff:fe1b:6e53/64 scope link 
       valid_lft forever preferred_lft forever
 
[root@db-slave ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ca:d5:f8 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.238/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.239/27 brd 182.48.115.255 scope global secondary eth0:1
    inet6 fe80::5054:ff:feca:d5f8/64 scope link 
       valid_lft forever preferred_lft forever

The output above shows the virtual IPs are bound on the agents:
182.48.115.234 was added on 182.48.115.236, which serves writes as the master
182.48.115.235 was added on 182.48.115.237, which serves reads
182.48.115.239 was added on 182.48.115.238, which serves reads

9) Bring all hosts online

The hosts here are already online; if any were not, the following commands would bring them online:
[root@mmm-monit ~]# mmm_control set_online db-master1
OK: This host is already ONLINE. Skipping command.
[root@mmm-monit ~]# mmm_control set_online db-master2
OK: This host is already ONLINE. Skipping command.
[root@mmm-monit ~]# mmm_control set_online db-slave
OK: This host is already ONLINE. Skipping command.

The output says the hosts are already ONLINE and the commands were skipped. At this point the whole cluster is fully configured.

--------------------------------------------------MMM high-availability testing-------------------------------------------------------
With the HA environment built, we can now run the MMM HA tests.

First check the state of the whole cluster; everything is normal:
[root@mmm-monit ~]# mmm_control show
  db-master1(182.48.115.236) master/ONLINE. Roles: writer(182.48.115.234)
  db-master2(182.48.115.237) master/ONLINE. Roles: reader(182.48.115.235)
  db-slave(182.48.115.238) slave/ONLINE. Roles: reader(182.48.115.239)
 
1) Simulate a db-master2 (182.48.115.237) outage by stopping its MySQL service manually.
[root@db-master2 ~]# /etc/init.d/mysql stop
Shutting down MySQL.... SUCCESS!
 
Watch the monitor log on mmm-monit:
[root@mmm-monit ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
.........
2017/06/01 21:28:17 FATAL State of host 'db-master2' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
 
Check the latest cluster state:
[root@mmm-monit ~]# mmm_control show
  db-master1(182.48.115.236) master/ONLINE. Roles: writer(182.48.115.234)
  db-master2(182.48.115.237) master/HARD_OFFLINE. Roles:
  db-slave(182.48.115.238) slave/ONLINE. Roles: reader(182.48.115.235), reader(182.48.115.239)
 
The virtual IP that previously served reads on db-master2, 182.48.115.235, has floated to db-slave:
[root@db-slave ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ca:d5:f8 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.238/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.235/32 scope global eth0
    inet 182.48.115.239/27 brd 182.48.115.255 scope global secondary eth0:1
    inet6 fe80::5054:ff:feca:d5f8/64 scope link
       valid_lft forever preferred_lft forever
 
Testing MySQL data replication:
Although MySQL on db-master2 is stopped, its VIP has floated to db-slave, and db-master1 and db-slave are still in a master-slave replication relationship.
Updates made in the db-master1 database are automatically replicated to db-slave; a quick check through the VIPs is sketched below.
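
A minimal sketch of such a check through the VIPs (hypothetical database mmm_test and application account app/app123):

// write through the writer VIP, which is still on db-master1
[root@mmm-monit ~]# mysql -h 182.48.115.234 -u app -papp123 -e "CREATE DATABASE IF NOT EXISTS mmm_test; CREATE TABLE IF NOT EXISTS mmm_test.t1 (id INT PRIMARY KEY); INSERT INTO mmm_test.t1 VALUES (1);"
// read back through the reader VIP 182.48.115.235, which now sits on db-slave
[root@mmm-monit ~]# mysql -h 182.48.115.235 -u app -papp123 -e "SELECT id FROM mmm_test.t1;"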
------------------
Next restart MySQL on db-master2; db-master2 goes from HARD_OFFLINE to AWAITING_RECOVERY (and then ONLINE), after which it takes over read requests again.
[root@db-master2 ~]# /etc/init.d/mysql start
Starting MySQL.. SUCCESS!
 
Watch the monitor log on mmm-monit:
[root@mmm-monit ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
.........
2017/06/01 21:36:00 FATAL State of host 'db-master2' changed from HARD_OFFLINE to AWAITING_RECOVERY
2017/06/01 21:36:12 FATAL State of host 'db-master2' changed from AWAITING_RECOVERY to ONLINE because of auto_set_online(10 seconds). It was in state AWAITING_RECOVERY for 12 seconds
 
[root@mmm-monit ~]# mmm_control show
  db-master1(182.48.115.236) master/ONLINE. Roles: writer(182.48.115.234)
  db-master2(182.48.115.237) master/ONLINE. Roles: reader(182.48.115.235)
  db-slave(182.48.115.238) slave/ONLINE. Roles: reader(182.48.115.239)
 
[root@db-master2 mysql-mmm]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:1b:6e:53 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.237/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.235/32 scope global eth0
    inet6 fe80::5054:ff:fe1b:6e53/64 scope link
       valid_lft forever preferred_lft forever
 
The VIP has returned to db-master2, which has re-taken its service. And once db-master2 recovered, the data updated during the outage was automatically synchronized with the other two machines.
 
---------------------------------------------------------------------------------------------------
2) Simulate an outage of the primary db-master1 by stopping its MySQL service manually
[root@db-master1 ~]# /etc/init.d/mysql stop
Shutting down MySQL.... SUCCESS!
 
Watch the monitor log on mmm-monit:
[root@mmm-monit ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
.........
2017/06/01 21:43:36 FATAL State of host 'db-master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
 
Check the cluster state:
[root@mmm-monit ~]# mmm_control show
  db-master1(182.48.115.236) master/HARD_OFFLINE. Roles:
  db-master2(182.48.115.237) master/ONLINE. Roles: reader(182.48.115.235), writer(182.48.115.234)
  db-slave(182.48.115.238) slave/ONLINE. Roles: reader(182.48.115.239)
 
As shown above, db-master1 went from ONLINE to HARD_OFFLINE and lost the writer role. db-master2, being the standby master, took over the writer role, and db-slave was re-pointed at the new primary db-master2.
In effect, db-slave located db-master2's current binlog position (what SHOW MASTER STATUS on db-master2 returns) and then ran CHANGE MASTER TO against db-master2, roughly as sketched below.
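
Conceptually, what the agent does on db-slave during this failover is equivalent to the following (a sketch only; MMM computes the real binlog coordinates itself):

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='182.48.115.237', MASTER_USER='slave', MASTER_PASSWORD='slave@123',
    -> MASTER_LOG_FILE='<current file from SHOW MASTER STATUS on db-master2>', MASTER_LOG_POS=<current pos>;
mysql> START SLAVE;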
 
On db-master2 you can see that db-master1's write-service VIP has floated over:
[root@db-master2 mysql-mmm]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:1b:6e:53 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.237/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.235/32 scope global eth0
    inet 182.48.115.234/32 scope global eth0
    inet6 fe80::5054:ff:fe1b:6e53/64 scope link
       valid_lft forever preferred_lft forever
 
At this point, updates made in the db-master2 database replicate automatically to db-slave.
 
------------------------
Next restart MySQL on db-master1:
[root@db-master1 ~]# /etc/init.d/mysql start
Starting MySQL.. SUCCESS!
 
Watch the monitor log on mmm-monit:
[root@mmm-monit ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
.........
2017/06/01 21:52:14 FATAL State of host 'db-master1' changed from HARD_OFFLINE to AWAITING_RECOVERY
 
Check the cluster state again (the write-service VIP has stayed on db-master2):
[root@mmm-monit ~]# mmm_control show
  db-master1(182.48.115.236) master/ONLINE. Roles:
  db-master2(182.48.115.237) master/ONLINE. Roles: reader(182.48.115.235), writer(182.48.115.234)
  db-slave(182.48.115.238) slave/ONLINE. Roles: reader(182.48.115.239)
 
Although db-master1 has recovered and is back online in the cluster, the write-service VIP it used to hold has not moved back from db-master2; that is, db-master1 does not re-take the writer role after recovering.
Only when db-master2 fails will the write-service VIP 182.48.115.234 move back to db-master1, with the read-service VIP 182.48.115.235 moving to db-slave
(and once db-master2 recovers, the read VIP 182.48.115.235 is moved back from db-slave again). Alternatively, the role can be moved back by hand, as shown below.
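
The writer role can also be moved back manually from the monitor, using mmm_control's move_role subcommand (the writer VIP follows the role):

[root@mmm-monit ~]# mmm_control move_role writer db-master1
[root@mmm-monit ~]# mmm_control show               // db-master1 should now hold writer(182.48.115.234) again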

---------------------------------------------------------------------------------------------------
Next, simulate an outage of the slave db-slave by stopping its MySQL service manually
[root@db-slave ~]# /etc/init.d/mysql stop
Shutting down MySQL..                                      [OK]

Watch the monitor log on mmm-monit:
[root@mmm-monit ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
.........
2017/06/01 22:42:24 FATAL State of host 'db-slave' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)

Check the latest cluster state:
[root@mmm-monit ~]# mmm_control show
  db-master1(182.48.115.236) master/ONLINE. Roles: writer(182.48.115.234)
  db-master2(182.48.115.237) master/ONLINE. Roles: reader(182.48.115.235), reader(182.48.115.239)
  db-slave(182.48.115.238) slave/HARD_OFFLINE. Roles: 

After db-slave failed, its read-service VIP 182.48.115.239 moved to db-master2:
[root@db-master2 mysql-mmm]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:1b:6e:53 brd ff:ff:ff:ff:ff:ff
    inet 182.48.115.237/27 brd 182.48.115.255 scope global eth0
    inet 182.48.115.235/32 scope global eth0
    inet 182.48.115.239/32 scope global eth0
    inet6 fe80::5054:ff:fe1b:6e53/64 scope link 

When db-slave recovers, its read-service VIP moves back and it re-takes its service; the data updated during the outage is synchronized back automatically.

Note:
db-master1, db-master2 and db-slave form a one-master/two-slaves replication topology. If db-master2 and db-slave are lagging behind db-master1 at the moment db-master1's MySQL dies,
db-slave waits until it has caught up with db-master1 and only then points at the new primary db-master2 (CHANGE MASTER TO db-master2). But if db-master2 is still behind db-master1
when the switch happens, db-master2 becomes writable anyway, and data consistency can no longer be guaranteed.

Summary: MMM is not suitable for environments with strict data-consistency requirements, but it does deliver full high availability.
