Deploying a MySQL High-Availability Cluster

 

  1. Introduction

    This article describes how to build a highly available database architecture with mysql-mmm.

  2. Environment

          

Server   | Hostname | IP            | Server_id | MySQL version | OS
---------|----------|---------------|-----------|---------------|-----------
Master1  | master1  | 192.168.4.10  | 10        | 5.6.15        | CentOS 6.9
Master2  | master2  | 192.168.4.11  | 11        | 5.6.15        |
Slave1   | slave1   | 192.168.4.12  | 12        | 5.6.15        |
Slave2   | slave2   | 192.168.4.13  | 13        | 5.6.15        |
Monitor  | monitor  | 192.168.4.100 |           |               |
Client   | client   | 192.168.4.120 |           | 5.6.15        |

 

 

         Virtual IPs

      

Virtual IP     | Function | Description
---------------|----------|-------------------------------
192.168.4.200  | write    | Write VIP of the active master
192.168.4.201  | read     | Read VIP
192.168.4.202  | read     | Read VIP

 

                Cluster topology diagram

 

 

  3. MMM architecture

                Server roles

Type             | Daemon      | Main purpose
-----------------|-------------|------------------------------------------------------------
Management node  | mmm-monitor | Monitoring daemon responsible for all monitoring work; decides whether a failed node is removed or recovered.
Database node    | mmm-agent   | Agent daemon running on each MySQL server; offers a simple set of remote services to the monitoring node (e.g. switching read-only mode, changing the replication master).

 

    

                   Core software packages

Package                | Purpose
-----------------------|------------------------------------------------------------
Net-ARP-1.0.8.tgz      | Assigns the virtual IPs
mysql-mmm-2.2.1.tar.gz | Core MySQL-MMM programs; once installed, both the monitor process and the agent process can be started from it.

 

  4. Deploying the basic cluster structure

   The cluster deployment is split into two major parts; the first is building the basic cluster environment. Four RHEL 6 servers are used, as shown in the figure: 192.168.4.10 and 192.168.4.11 act as the MySQL dual masters, and 192.168.4.12 and 192.168.4.13 act as slaves of the masters.

  When installing the servers it is recommended to take care of the firewall and SELinux first; a sketch follows.
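A minimal sketch for a lab environment (run on every node; adapt to your own security policy rather than disabling protection blindly):

[root@master1 ~]# service iptables stop                                        # stop the firewall for the test setup
[root@master1 ~]# chkconfig iptables off                                       # keep it off across reboots
[root@master1 ~]# setenforce 0                                                 # put SELinux in permissive mode immediately
[root@master1 ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config   # make the SELinux change persistent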

 

 

4.1 Installing the MySQL servers

      

Below is the MySQL installation procedure. This article uses the 64-bit RHEL 6 operating system and MySQL 5.6.15.

Go to http://dev.mysql.com/downloads/mysql/, find the MySQL Community Server download page, choose the platform "Red Hat Enterprise Linux 6 / Oracle Linux 6", and download the 64-bit bundle package, as shown in the figure.

 

 

           Note: downloading MySQL requires logging in with an Oracle web account; if you do not have one, register first (it is free) as prompted on the page.

    

    4.1.1 Remove the distribution's own mysql-server and mysql packages (if present)

yum -y remove mysql-server mysql

 

    4.1.2 Extract the MySQL bundle package

[root@master1 ~]# tar xvf MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar
MySQL-shared-5.6.15-1.el6.x86_64.rpm              //shared libraries
MySQL-shared-compat-5.6.15-1.el6.x86_64.rpm       //compatibility package
MySQL-server-5.6.15-1.el6.x86_64.rpm              //server programs
MySQL-client-5.6.15-1.el6.x86_64.rpm              //client programs
MySQL-devel-5.6.15-1.el6.x86_64.rpm               //libraries and header files
MySQL-embedded-5.6.15-1.el6.x86_64.rpm            //embedded version
MySQL-test-5.6.15-1.el6.x86_64.rpm                //test package

    4.1.3 Install MySQL

[root@master1]# rpm -Uvh MySQL-*.rpm

    4.1.4 Start MySQL

[root@master1 ~]# service mysql start && chkconfig --list mysql
Starting MySQL SUCCESS! 
mysql              0:off    1:off    2:on    3:on    4:on    5:on    6:off

    4.1.5 The initial MySQL password

    After installation a random password is written to the .mysql_secret file in root's home directory; look it up and use it to log in to MySQL.

[root@master1 ~]# cat .mysql_secret 
# The random password set for the root user at Mon Jan  1 16:48:31 2001 (local time): kZ5j71cyZiKKhSeX          // password file

    4.1.6 Log in to MySQL with the password found above and change it

[root@master1 ~]# mysql -u root -p
Enter password:

mysql> SET PASSWORD FOR 'root'@'localhost'=PASSWORD('123456');    

        

              After the change, use the new password for subsequent logins.

              Install MySQL on all four database servers following the same procedure (a sketch of repeating it on the remaining nodes is shown below).
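A minimal sketch, assuming the hostnames resolve and passwordless root SSH access between the nodes (both are conveniences not part of the original steps):

[root@master1 ~]# for h in master2 slave1 slave2; do
>   scp MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar $h:/root/                                   # copy the bundle
>   ssh $h 'tar xvf MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar && rpm -Uvh MySQL-*.rpm && service mysql start'   # extract, install, start
> done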

 

    4.2 Deploying the dual-master, multi-slave structure

      1. Create the database accounts (run the following on all four database hosts: master1, master2, slave1, slave2)

  For plain master-slave replication only one replication user would be needed, but since we are deploying the MySQL-MMM architecture we grant the users MMM requires at the same time, plus one test user to be used once the architecture is complete.

 mysql> grant   replication  slave  on  *.*  to  slaveuser@"%" identified by  "pwd123";
Query OK, 0 rows affected (0.01 sec)         //replication user for master-slave sync
mysql> grant  replication  client  on *.*  to  monitor@"%" identified by "monitor";  
Query OK, 0 rows affected (0.00 sec)       //user required by the MMM monitor
mysql> grant  replication client,process,super   on *.*  to  agent@"%" identified by "agent"; 
Query OK, 0 rows affected (0.00 sec)         //user required by the MMM agent
mysql> grant  all  on *.*  to  root@"%" identified by "123456";
Query OK, 0 rows affected (0.00 sec)              //test user

      2. Enable the binlog and set server_id on the masters (master1, master2)

        master1 settings:

[root@master1 ~]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=10                          //specify the server ID
log_bin                               //enable the binlog
log_slave_updates=1                   //enable chained replication
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@master1 ~]#
[root@master1 ~]# service mysql restart                 //restart the MySQL service
Shutting down MySQL.. [OK]
Starting MySQL.. [OK]
[root@master1 ~]# ls /var/lib/mysql/master1-bin*        //confirm the binlog files were created
/var/lib/mysql/master1-bin.000001  /var/lib/mysql/master1-bin.index

          master2 settings:

[root@master2 mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=11
log_slave_updates=1
log-bin
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@master2 mysql]# /etc/init.d/mysql restart
Shutting down MySQL.. SUCCESS!
Starting MySQL. SUCCESS!
[root@master2 mysql]# ls /var/lib/mysql/master2-bin.*
/var/lib/mysql/master2-bin.000001  /var/lib/mysql/master2-bin.000002  /var/lib/mysql/master2-bin.index

 

      

Set server_id on the slaves

slave1 settings:

[root@slave1 mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=12


 

[root@slave1 ~]# service mysql restart

 

slave2 settings:

[root@slave2 mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=13
…

[root@slave2 ~]# service mysql restart

 

3. Configure the master-slave relationships

Configure master2, slave1 and slave2 as slaves of master1.

Check master1's master status:

mysql> show master status\G
*************************** 1. row ***************************
             File: master1-bin.000002
         Position: 120
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

Using the values above, configure master2 as a slave of master1:

mysql>    change  master  to                         
    ->      master_host="192.168.4.10",                
    ->      master_user="slaveuser",                
    ->      master_password="pwd123",               
    ->      master_log_file="master1-bin.000002",     
->      master_log_pos=120; 
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start slave;    
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G 
Slave_IO_Running: Yes                //IO thread running normally
Slave_SQL_Running: Yes               //SQL thread running normally

Set slave1 and slave2 as slaves of master1 in the same way (see the sketch below).
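For completeness, a sketch of the same statements run on slave1 and then on slave2; the log file name and position must match the current output of show master status on master1 (the values below are the ones from the transcript above):

mysql> change master to
    ->   master_host="192.168.4.10",
    ->   master_user="slaveuser",
    ->   master_password="pwd123",
    ->   master_log_file="master1-bin.000002",
    ->   master_log_pos=120;
mysql> start slave;
mysql> show slave status\G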

4. Configure the master-master relationship: make master1 a slave of master2

Check master2's master status:

mysql> show master status \G
*************************** 1. row ***************************
             File: master2-bin.000002
         Position: 120
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

Configure master1 as a slave of master2:

mysql>    change  master  to                         
    ->      master_host="192.168.4.11",                
    ->      master_user="slaveuser",                
    ->      master_password="pwd123",               
    ->      master_log_file="master2-bin.000002",     
    ->      master_log_pos=120; 
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave ;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status \G
*************************** 1. row ***************************
             Slave_IO_Running: Yes      //IO thread running normally
            Slave_SQL_Running: Yes      //SQL thread running normally

5. Test the replication setup

 Create a database on master1 and check the other hosts; if every host can see the newly created database db1 locally, replication is working (a cross-host check is sketched after the transcript below).

mysql> create database db1;
Query OK, 1 row affected (0.00 sec)
mysql>  show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema  |
| db1                |
| mysql               |
| performance_schema |
| test                 |
+--------------------+
5 rows in set (0.00 sec)
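A quick way to confirm from one shell, assuming the root/123456 test account granted in section 4.2 is reachable over the network (a convenience, not an original step):

[root@master1 ~]# for h in 192.168.4.11 192.168.4.12 192.168.4.13; do
>   mysql -h$h -uroot -p123456 -e "show databases like 'db1'"      # db1 should be listed on every node
> done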

 

    At this point the basic environment is complete.

 

5. Deploying the MySQL-MMM architecture

  5.1 MMM cluster plan

     Building on the setup from section 4, 192.168.4.10 and 192.168.4.11 serve as the MySQL dual masters and 192.168.4.12 and 192.168.4.13 as their slaves. We add 192.168.4.100 as the MySQL-MMM management/monitoring server, which monitors the working state of the MySQL masters and slaves and decides whether failed nodes are removed or recovered. Once the architecture is built, the client 192.168.4.120 is used for access testing; it needs the MySQL client package installed. The topology is shown in Figure 2.

 

5.2 Steps

 Carry out the following steps to implement this setup.

    Step 1: Install MySQL-MMM

1. Install the dependencies (on all five cluster servers: master1, master2, slave1, slave2, monitor)

[root@master2 mysql]# yum -y install gcc* perl-Date-Manip perl-XML-DOM-XPath perl-XML-Parser perl-XML-RegExp rrdtool perl-Class-Singleton perl perl-DBD-MySQL perl-Params-Validate perl-MailTools perl-Time-HiRes perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

 

 

2. Install the MySQL-MMM software dependency packages (on all five servers: master1, master2, slave1, slave2, monitor).

  1. Install the Log-Log4perl module

    [root@master1 mysql-mmm]# rpm -ivh perl-Log-Log4perl-1.26-1.el6.rf.noarch.rpm
    warning: perl-Log-Log4perl-1.26-1.el6.rf.noarch.rpm: Header V3 DSA/SHA1 Signature, key ID 6b8d79e6: NOKEY
    error: Failed dependencies:
    perl(Test::More) >= 0.45 is needed by perl-Log-Log4perl-1.26-1.el6.rf.noarch
    [root@master1 mysql-mmm]# rpm -ivh perl-Log-Log4perl-1.26-1.el6.rf.noarch.rpm           --force --nodeps

    During installation I got an error showing NOKEY, caused by an old GPG key, together with a failed dependency check; adding the options --force --nodeps forces the installation and skips both checks.

  2. Install the Algorithm-Diff module
[root@master1 mysql-mmm]#  tar -zxvf Algorithm-Diff-1.1902.tar.gz  
Algorithm-Diff-1.1902/
Algorithm-Diff-1.1902/diffnew.pl
Algorithm-Diff-1.1902/t/
Algorithm-Diff-1.1902/t/oo.t
Algorithm-Diff-1.1902/t/base.t
Algorithm-Diff-1.1902/htmldiff.pl
Algorithm-Diff-1.1902/lib/
Algorithm-Diff-1.1902/lib/Algorithm/
Algorithm-Diff-1.1902/lib/Algorithm/Diff.pm
Algorithm-Diff-1.1902/lib/Algorithm/DiffOld.pm
Algorithm-Diff-1.1902/META.yml
Algorithm-Diff-1.1902/Changes
Algorithm-Diff-1.1902/cdiff.pl
Algorithm-Diff-1.1902/MANIFEST
Algorithm-Diff-1.1902/diff.pl
Algorithm-Diff-1.1902/Makefile.PL
Algorithm-Diff-1.1902/README
[root@master1 mysql-mmm]# cd Algorithm-Diff-1.1902
[root@master1 Algorithm-Diff-1.1902]#  perl  Makefile.PL 
Checking if your kit is complete...
Looks good
Writing Makefile for Algorithm::Diff
[root@master1 Algorithm-Diff-1.1902]# make && make install

3. Install the Proc-Daemon module

[root@master1 mysql-mmm]# tar -zxvf Proc-Daemon-0.03.tar.gz
Proc-Daemon-0.03/
Proc-Daemon-0.03/t/
Proc-Daemon-0.03/t/00modload.t
Proc-Daemon-0.03/t/01filecreate.t
Proc-Daemon-0.03/README
Proc-Daemon-0.03/Makefile.PL
Proc-Daemon-0.03/Daemon.pm
Proc-Daemon-0.03/Changes
Proc-Daemon-0.03/MANIFEST
[root@master1 mysql-mmm]# cd Proc-Daemon-0.03                
[root@master1 Proc-Daemon-0.03]# perl    Makefile.PL 
Checking if your kit is complete...
Looks good
Writing Makefile for Proc::Daemon
[root@master1 Proc-Daemon-0.03]# make && make install
cp Daemon.pm blib/lib/Proc/Daemon.pm
Manifying blib/man3/Proc::Daemon.3pm
Installing /usr/local/share/perl5/Proc/Daemon.pm
Installing /usr/local/share/man/man3/Proc::Daemon.3pm
Appending installation info to /usr/lib64/perl5/perllocal.pod
[root@master1 Proc-Daemon-0.03]#

 4. Install the Net-ARP virtual IP assignment tool:

[root@mysql-master1 ~]# gunzip Net-ARP-1.0.8.tgz    
[root@mysql-master1 ~]# tar xvf Net-ARP-1.0.8.tar       
.. ..
[root@mysql-master1 ~]# cd Net-ARP-1.0.8                    
[root@mysql-master1 Net-ARP-1.0.8]# perl Makefile.PL        
Module Net::Pcap is required for make test!
Checking if your kit is complete...
Looks good
Writing Makefile for Net::ARP
[root@mysql-master1 Net-ARP-1.0.8]# make && make install    
.. ..
[root@mysql-master1 Net-ARP-1.0.8]# cd                        
[root@mysql-master1 ~]#

 5. Install the MySQL-MMM package itself:

[root@mysql-master1 ~]# tar xvf mysql-mmm-2.2.1.tar.gz       
.. ..
[root@mysql-master1 ~]# cd mysql-mmm-2.2.1                    
[root@mysql-master1 mysql-mmm-2.2.1]# make && make install    
.. ..
[root@mysql-master1 mysql-mmm-2.2.1]#



  

Step 2: Edit the configuration files

  1. Edit the common configuration file

    All five servers in this setup (master1, master2, slave1, slave2, monitor) need this file; you can configure it on one host and copy it to the others with scp (see the sketch after the listing below).

[root@master1 ~]# vim /etc/mysql-mmm/mmm_common.conf 
active_master_role    writer
<host default>
    cluster_interface        eth0                 //network interface the cluster VIPs are bound to
    pid_path                 /var/run/mmm_agentd.pid
    bin_path                 /usr/lib/mysql-mmm/
    replication_user         slaveuser            //replication user
    replication_password     pwd123               //replication user's password
    agent_user               agent                //database user for mmm-agent
    agent_password           agent                //password of the mmm-agent user
</host>
<host master1>                                    //first master
    ip                       192.168.4.10         //master1 IP address
    mode                     master
    peer                     master2              //the other master
</host>
<host master2>                                    //second master
    ip                       192.168.4.11
    mode                     master
    peer                     master1
</host>
<host slave1>                                     //first slave
    ip                       192.168.4.12         //slave1 IP address
    mode                     slave                //this block configures a slave server
</host>
<host slave2>
    ip                       192.168.4.13
    mode                     slave
</host>
<role writer>                                     //write role
    hosts                    master1,master2      //masters that can take writes
    ips                      192.168.4.200        //write VIP
    mode                     exclusive            //exclusive mode: only one host holds this role
</role>
<role reader>                                     //read role
    hosts                    slave1,slave2        //servers providing reads
    ips                      192.168.4.201,192.168.4.202    //multiple virtual IPs
    mode                     balanced             //balanced mode
</role>

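A minimal sketch of distributing the common file, assuming the hostnames resolve and root SSH access between the nodes (assumptions, not part of the original steps):

[root@master1 ~]# for h in master2 slave1 slave2 monitor; do
>   scp /etc/mysql-mmm/mmm_common.conf $h:/etc/mysql-mmm/      # the same file must be present on every node
> done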
 

  2. Edit the management host configuration file (on the monitor host)

[root@monitor ~]# vim /etc/mysql-mmm/mmm_mon.conf 
include mmm_common.conf
<monitor>
    ip                        192.168.4.100        //management host IP address
    pid_path                /var/run/mmm_mond.pid
    bin_path                /usr/lib/mysql-mmm/
    status_path                /var/lib/misc/mmm_mond.status
    ping_ips                192.168.4.10,192.168.4.11,192.168.4.12,192.168.4.13    //databases being monitored
</monitor>
<host default>
    monitor_user            monitor        //MySQL user for monitoring
    monitor_password        monitor        //password of the monitoring user
</host>
debug 0
[root@monitor ~]#

 

  3. Edit the agent configuration file on each database node

   master1, master2, slave1 and slave2 each need their own name configured.

[root@master1 /]# cat /etc/mysql-mmm/mmm_agent.conf 
include mmm_common.conf
this master1
[root@master2 /]# cat /etc/mysql-mmm/mmm_agent.conf 
include mmm_common.conf
this master2
[root@slave1 /]# cat /etc/mysql-mmm/mmm_agent.conf 
include mmm_common.conf
this slave1
[root@slave2 /]# cat /etc/mysql-mmm/mmm_agent.conf 
include mmm_common.conf
this slave2

 

6. Using the MySQL-MMM architecture

6.1 Starting the MySQL-MMM architecture

  1. Start mmm-agent

  Run the following on master1, master2, slave1 and slave2.

[root@master1 ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok

  2. Start mmm-monitor (on the monitor host)

[root@monitor ~]#  /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok

 

6.2 Setting the cluster servers to the ONLINE state

  The control commands can only be run on the monitor host. Use the command below to check the current state of each server. By default all servers are in the waiting (AWAITING_RECOVERY) state; if anything looks wrong, check SELinux and iptables on each server.

[root@localhost ~]# mmm_control show
  master1(192.168.4.10) master/AWAITING_RECOVERY. Roles: 
  master2(192.168.4.11) master/AWAITING_RECOVERY. Roles: 
  slave1(192.168.4.12) slave/AWAITING_RECOVERY. Roles: 
  slave2(192.168.4.13) slave/AWAITING_RECOVERY. Roles: 

Set the four database hosts online with the following commands:

[root@monitor ~]# mmm_control set_online master1
OK: State of 'master1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online master2
OK: State of 'master2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online slave1
OK: State of 'slave1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online slave2
OK: State of 'slave2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]#

Check the state of each server in the cluster again:

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/ONLINE. Roles: writer(192.168.4.200)
  master2(192.168.4.11) master/ONLINE. Roles: 
  slave1(192.168.4.12) slave/ONLINE. Roles: reader(192.168.4.201)
  slave2(192.168.4.13) slave/ONLINE. Roles: reader(192.168.4.202)

The output shows all four hosts are ONLINE: the write server is master1 with the virtual IP 192.168.4.200, and the read servers are slave1 and slave2.

6.3 Testing the MySQL-MMM architecture

  Install the MySQL client on the client host:

[root@client ~]# tar xvf MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar
.. ..
[root@client ~]# rpm -ivh MySQL-client-5.6.15-1.el6.x86_64.rpm

   Access test through the MySQL-MMM virtual IP; creating databases, inserting rows and querying can be tested the same way.

[root@client /]#  mysql -h192.168.4.200 -uroot -p123456 -e "show databases"
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| db2                |
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@client /]# 

6.4 Testing a crash of the primary database

  We can deliberately stop the primary database to test the cluster's failover.

    [root@master1 ~]# /etc/init.d/mysql stop
Shutting down MySQL.. SUCCESS! 
[root@master1 ~]# 

At this point the monitor log shows the detection and switchover process in detail:

2017/10/24 01:37:07  WARN Check 'rep_backlog' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.4.10:3306, user = monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2017/10/24 01:37:07  WARN Check 'rep_threads' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.4.10:3306, user = monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2017/10/24 01:37:15 ERROR Check 'mysql' on 'master1' has failed for 10 seconds! Message: ERROR: Connect error (host = 192.168.4.10:3306, user = monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2017/10/24 01:37:16 FATAL State of host 'master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2017/10/24 01:37:16  INFO Removing all roles from host 'master1':
2017/10/24 01:37:16  INFO     Removed role 'writer(192.168.4.200)' from host 'master1'
2017/10/24 01:37:16  INFO Orphaned role 'writer(192.168.4.200)' has been assigned to 'master2'

Checking the database server states on the monitor again, master1 is now offline, and the write role with the virtual IP 192.168.4.200 has moved to master2:

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/HARD_OFFLINE. Roles: 
  master2(192.168.4.11) master/ONLINE. Roles: writer(192.168.4.200)
  slave1(192.168.4.12) slave/ONLINE. Roles: reader(192.168.4.201)
  slave2(192.168.4.13) slave/ONLINE. Roles: reader(192.168.4.202)

 Check the replication settings on slave1 and slave2: their master has changed to master2.

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.4.11
                  Master_User: slaveuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master2-bin.000002
          Read_Master_Log_Pos: 211
               Relay_Log_File: slave1-relay-bin.000002
                Relay_Log_Pos: 285
        Relay_Master_Log_File: master2-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

  

Note that after a server recovers it moves from the offline state to the waiting (AWAITING_RECOVERY) state; it does not return to ONLINE automatically and must be brought online manually, as shown below.
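For example, once MySQL is running again on master1, bring it back from the monitor (the same command used in section 6.2):

[root@master1 ~]# /etc/init.d/mysql start
[root@monitor ~]# mmm_control set_online master1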

 

 At this point, our MySQL high-availability cluster is fully deployed.

 

7. A simplified cluster

    We have now deployed a MySQL cluster of five servers, but depending on a company's actual situation, the traffic may not justify that many machines, while a plain master-slave pair cannot provide hot standby between the primary and the backup. Below I modify the example above to build the cluster with only three servers: it needs far fewer machines yet still gives the database a hot standby.

 

 

What we need to change is the monitor configuration: first adjust the monitored server IPs in the MMM monitor configuration file.

 [root@monitor ~]# cat /etc/mysql-mmm/mmm_mon.conf 
include mmm_common.conf
<monitor>
    ip                        192.168.4.100
    pid_path                /var/run/mmm_mond.pid
    bin_path                /usr/lib/mysql-mmm/
    status_path                /var/lib/misc/mmm_mond.status
    ping_ips                192.168.4.10,192.168.4.11  //IPs of the monitored servers
</monitor>
    
<host default>
    monitor_user            monitor
    monitor_password        monitor
</host>

debug 0
[root@monitor ~]# 

Then edit the common configuration file; note that master1, master2 and monitor all need this change and the file must stay identical on all of them.

[root@monitor ~]# cat /etc/mysql-mmm/mmm_common.conf
active_master_role    writer
<host default>
    cluster_interface        eth0

    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/

    replication_user        slaveuser
    replication_password    pwd123

    agent_user                agent
    agent_password            agent
</host>

<host master1>
    ip                      192.168.4.10
    mode                    master
    peer                    master2
</host>

<host master2>
    ip                    192.168.4.11
    mode                    master
    peer                    master1
</host>
<role writer>
    hosts                    master1,master2
    ips                192.168.4.200
    mode                    exclusive
</role>

<role reader>
    hosts                    master1,master2
    ips                                   192.168.4.201,192.168.4.202
    mode                    balanced
</role>
[root@monitor ~]# 

 

 

   Once the configuration is done, the rest is similar to the five-server topology: start the mmm agent processes on master1 and master2, set them online from the monitor server, and then check the MMM status (the commands are sketched below).
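Repeating the commands from sections 6.1 and 6.2 for the three-server layout:

[root@master1 ~]# /etc/init.d/mysql-mmm-agent start
[root@master2 ~]# /etc/init.d/mysql-mmm-agent start
[root@monitor ~]# /etc/init.d/mysql-mmm-monitor start
[root@monitor ~]# mmm_control set_online master1
[root@monitor ~]# mmm_control set_online master2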

 [root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/ONLINE. Roles: reader(192.168.4.201), writer(192.168.4.200)
  master2(192.168.4.11) master/ONLINE. Roles: reader(192.168.4.202)

 

 As shown, master1 and master2 both handle reads, while master1 alone carries the write role. Next, shut down the database on master1 and watch the result:

 [root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/HARD_OFFLINE. Roles: 
  master2(192.168.4.11) master/ONLINE. Roles: reader(192.168.4.201), reader(192.168.4.202), writer(192.168.4.200)

As you can see, when master1 is shut down, master2 takes over both the read and the write work. Users can keep connecting through 192.168.4.200 at all times, giving us a two-node hot standby (a quick client-side check is sketched below).
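A quick check from the client that the write VIP still answers and now lands on master2, using the root/123456 test account from section 4.2:

[root@client /]# mysql -h192.168.4.200 -uroot -p123456 -e "select @@hostname"      # should now report master2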

 

8. Troubleshooting

Two problems came up during testing; they are listed here for reference.

Problem 1

mysql> show slave status \G
*************************** 1. row ***************************
…………..
             Slave_IO_Running: Connecting
            Slave_SQL_Running: Yes
 ………….
                Last_IO_Errno: 2003
                Last_IO_Error: error connecting to master 'slaveuser@192.168.4.10:3306' - retry-time: 60  retries: 2
        
1 row in set (0.00 sec)

Resolved by disabling SELinux on the master and clearing/stopping the firewall; after the firewall was stopped, restarting the slave fixed the connection.

 

 

Problem 2

During setup, after the configuration had been changed several times, the following error appeared:

mysql>  start slave;

ERROR 1872 (HY000): Slave failed to initialize relay log info structure from the repository

This was fixed by running reset slave all to clear all replication information, re-running change master to set the master info again, and then running start slave, after which replication was normal (see the sketch below).
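A minimal sketch of that recovery sequence on the affected slave; the change master values must match the current master, as in section 4.2:

mysql> stop slave;
mysql> reset slave all;
mysql> change master to master_host="192.168.4.10", master_user="slaveuser", master_password="pwd123", master_log_file="master1-bin.000002", master_log_pos=120;
mysql> start slave;
mysql> show slave status\G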
