Building a DRBD-Based MySQL High-Availability (HA) Cluster

1. DRBD

   Before building the MySQL high-availability cluster, it is worth introducing what DRBD is, how it works, and its replication modes.

1.1 What is DRBD

DRBD (Distributed Replicated Block Device) consists of a kernel module and associated userspace scripts, and is used to build high-availability clusters. It works by mirroring an entire block device over the network to another host, so DRBD can be thought of as a form of network RAID 1.

1.2 How DRBD works

Each DRBD device (DRBD can provide more than one) has a role: either Primary or Secondary. In ordinary use the device must never be mounted on both nodes at the same time. Once a host mounts a block device, it manages data and metadata in memory and only periodically flushes them to disk; since both mounts would work on in-memory state, neither node can see the other's changes. Mounting on both nodes therefore causes resource contention and a corrupted filesystem.

   Does that mean that whenever a node fails, the other node must be promoted to Primary by hand? To mount the device on both nodes simultaneously, DRBD has to run under a high-availability cluster, because the cluster provides distributed file locking: when node A holds a lock, node B can be notified. In other words, dual-Primary mode relies on the cluster's messaging layer, and that is exactly the approach used here. On the Primary node, applications run against the DRBD device (/dev/drbd*); every write goes both to the local disk and to the Secondary node, which simply writes the data to its own disk. Reads are normally served locally. If the Primary node fails, the heartbeat layer (heartbeat or corosync) promotes the Secondary to Primary and starts the applications on it.
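
   For example, once the mariadb resource defined later in this article is up, you can check which role a node currently holds with drbdadm; the output lists the local role first and the peer's role second (e.g. Primary/Secondary):

[root@node1 ~]# drbdadm role mariadb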

1.3 DRBD replication protocols

(Figure: the three DRBD replication protocols)

(1) Asynchronous (Protocol A)

   A write is reported complete as soon as the data has been written to the local disk and handed to the local TCP/IP send queue; it returns without waiting for the peer.

(2) Semi-synchronous (Protocol B)

   A write is reported complete once the data has reached the peer's TCP/IP stack, before it has been written to the remote disk.

(3) Synchronous (Protocol C)

   A write is reported complete only after the data has been written to the remote disk as well.
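
This article uses Protocol C. As the DRBD configuration in section 4 below shows, the protocol is selected in the common section of /etc/drbd.d/global_common.conf:

common {
        protocol C;
}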

2. Environment preparation (perform the same steps on both hosts; only node1 is shown)

一、操做系統及主機

   CentOS 6.5, x86_64 platform

   node1.shuishui.com    172.16.7.100

   node2.shuishui.com    172.16.7.200

2.2 Set the hostname on both hosts so that it matches the output of uname -n


[root@node1 ~]# uname -n
node1.shuishui.com
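
The commands for changing the hostname are not shown above; on CentOS 6 a typical way to do it (a sketch — run the equivalent commands on node2 with its own name) is:

[root@node1 ~]# sed -i 's/^HOSTNAME=.*/HOSTNAME=node1.shuishui.com/' /etc/sysconfig/network
[root@node1 ~]# hostname node1.shuishui.com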

2.3 Configure name resolution between the nodes


[root@node1 ~]# vim /etc/hosts
172.16.7.100 node1.shuishui.com node1
172.16.7.200 node2.shuishui.com node2

2.4 Time synchronization

[root@node1 ~]# ntpdate 172.16.0.1
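
   To keep the clocks from drifting later on, it may also help to resynchronize periodically; a minimal sketch (assuming the NTP server 172.16.0.1 stays reachable) is to add a cron job on both nodes:

[root@node1 ~]# echo '*/10 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null' >> /var/spool/cron/root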

2.5 Configure passwordless SSH trust between the two hosts

[root@node1 ~]# ssh-keygen -t rsa -P ''
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2
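
   You can verify that the trust works (and repeat the key exchange in the opposite direction on node2) with a quick remote command, for example:

[root@node1 ~]# ssh node2 'date'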

2.6 Required software

corosync      # installed directly with yum
pacemaker     # installed directly with yum
crmsh-1.2.6-4.el6.x86_64.rpm             # configuration front end for pacemaker
pssh-2.3.1-2.el6.x86_64.rpm              # dependency of crmsh
mariadb-10.0.10-linux-x86_64.tar.gz      # MariaDB, binary tarball
drbd-8.4.3-33.el6.x86_64.rpm        # DRBD userspace management tools
drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm     # DRBD kernel module

2.7 Disk preparation

   For DRBD, prepare one disk of the same size on node1 and node2, /dev/sdb in this article. If your /dev/sda has enough free space, creating a partition there instead also works.

3. Configuring corosync

3.1 Install the packages

All of the packages listed in step 2.6 above were placed in /root, so they can be installed directly with yum:

[root@node1 ~]# yum -y install corosync
[root@node1 ~]# yum -y install pacemaker
[root@node1 ~]# yum -y install *.rpm

DRBD consists of two parts: a kernel module and userspace management tools. The DRBD kernel module has been merged into the mainline Linux kernel since 2.6.33, so if your kernel is newer than that you only need to install the management tools; otherwise you must install both the kernel module package and the management tools, and their versions must match.

   For CentOS 5 the main DRBD versions are 8.0, 8.2 and 8.3, packaged as drbd, drbd82 and drbd83, with matching kernel-module packages kmod-drbd, kmod-drbd82 and kmod-drbd83. For CentOS 6 the available version is 8.4, packaged as drbd and drbd-kmdl. When choosing packages keep two things in mind: the drbd and drbd-kmdl versions must match each other, and the drbd-kmdl version must match the running kernel. The individual versions differ slightly in features and configuration. The platform used in this experiment is x86_64 running CentOS 6.5, so both the kernel module and the management tools must be installed. The latest 8.4 packages are used here (drbd-8.4.3-33.el6.x86_64.rpm and drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm), downloaded from ftp://rpmfind.net/linux/atrpms/
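
   Before installing, it is worth confirming that the running kernel matches the kernel version encoded in the drbd-kmdl package name:

[root@node1 ~]# uname -r     # should report 2.6.32-431.el6.x86_64, matching the "2.6.32-431.el6" part of the drbd-kmdl package name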

3.2 Configure corosync

[root@node1 ~]# cd /etc/corosync/
[root@node1 corosync]# cp corosync.conf.example corosync.conf

(1) Edit the corosync configuration file and add the service and aisexec sections

compatibility: whitetank
totem {
        version: 2
        secauth: off        # secure authentication
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 172.16.7.0        # network address to bind to
                mcastaddr: 230.100.100.7       # multicast address used for heartbeat messages
                mcastport: 5405                # multicast port
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes            # write to the log file
        to_syslog: no
        logfile: /var/log/cluster/corosync.log    # create the cluster directory manually if it does not exist
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}
service {
        ver:0
        name:pacemaker            # have corosync start pacemaker automatically at startup
}
aisexec {                         # run corosync's AIS services as the following user and group
        user:root
        group:root
}

(2) Generate the authentication key

Corosync requires authentication for communication between nodes, so an authentication key is needed. Once generated it is saved automatically in the current directory as authkey, with mode 400. In my earlier post "corosync+pacemaker實現web集羣高可用" (http://nmshuishui.blog.51cto.com/1850554/1399811) the key was generated from the random device; when the entropy pool runs low this can be extremely slow, so here the key is generated from the pseudo-random device instead. Note that this method is less secure, so use it with caution.

[root@node1 corosync]# mv /dev/random /dev/h
[root@node1 corosync]# ln /dev/urandom /dev/random
[root@node1 corosync]# corosync-keygen
[root@node1 corosync]# rm /dev/random
[root@node1 corosync]# mv /dev/h /dev/random

(3) Copy corosync.conf and the generated authkey to node2

[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/

3.3 Start corosync and check the configuration

   For the detailed checks, see "corosync+pacemaker實現web集羣高可用": http://nmshuishui.blog.51cto.com/1850554/1399811

[root@node1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@node1 ~]# ssh node2 "service corosync start"
Starting Corosync Cluster Engine (corosync): [  OK  ]
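
   As a quick sanity check (a sketch of the checks covered in the post linked above), confirm from the log that the cluster engine and TOTEM started without errors, and that the live configuration validates:

[root@node1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
[root@node1 ~]# grep TOTEM /var/log/cluster/corosync.log
[root@node1 ~]# grep -i error /var/log/cluster/corosync.log
[root@node1 ~]# crm_verify -L -V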

3.4 Check the cluster status

[root@node1 ~]# crm status
Last updated: Wed Apr 23 14:44:11 2014
Last change: Wed Apr 23 14:44:07 2014 via crmd on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1.shuishui.com node2.shuishui.com ]  # node1 and node2 are both online

4. Configuring DRBD

4.1 Configure /etc/drbd.d/global_common.conf

global {
        usage-count no;    # whether to let LINBIT collect DRBD usage statistics
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;

        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                #wfc-timeout 120;
                #degr-wfc-timeout 120;
        }

        disk {
                on-io-error detach;          # on an I/O error, detach the backing device
                #fencing resource-only;
        }

        net {
                cram-hmac-alg "sha1";         # authentication algorithm: sha1
                shared-secret "mydrbdlab";    # shared secret (authentication key)
        }

        syncer {
                rate 500M;                    # resynchronization rate
        }
}


4.2 Define a resource

[root@node1 drbd.d]# vim mariadb.res
resource mariadb {
        on node1.shuishui.com {
        device  /dev/drbd0;
        disk    /dev/sdb;
        address 172.16.7.100:7789;
        meta-disk internal;
        }
        on node2.shuishui.com {
        device  /dev/drbd0;
        disk    /dev/sdb;
        address 172.16.7.200:7789;
        meta-disk internal;
        }
}

4.3 Copy the configuration files to node2

   The resource file from step 4.2 must be identical on both nodes, so simply copy all of the files configured above to the other node over ssh:

[root@node1 drbd.d]# scp /etc/drbd.d/* node2:/etc/drbd.d/

4.4 Initialize the defined resource and start the service on both nodes (only node1 is shown)

(1) Initialize the resource (run on both nodes)

[root@node1 ~]# drbdadm create-md mariadb

This step prints the following error messages; they are harmless and can be ignored:

Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory

(2) Start the service (run on both nodes)

[root@node1 ~]# service drbd start

(3) Check the status

[root@node1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:20970844

   The drbd-overview command can also be used:

[root@node1 ~]# drbd-overview
  0:mariadb/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----

The output above shows that both nodes are currently in the Secondary role with Inconsistent data, so one node needs to be promoted to Primary.

(4) Promote node1 to Primary

[root@node1 ~]# drbdadm primary --force mariadb

   Checking the status again with drbd-overview shows that the initial data synchronization has started:

[root@node1 ~]# drbd-overview
  0:mariadb/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r---n-
    [================>...] sync'ed: 88.8% (2304/20476)M

(5) Wait for the synchronization to finish and check the status again

[root@node1 ~]# drbd-overview
  0:mariadb/0  Connected Primary/Secondary UpToDate/UpToDate C r-----

   Both nodes are now UpToDate and the Primary/Secondary roles are established.

4.5 Create a filesystem

   The filesystem can only be mounted on the Primary node, so the DRBD device can be formatted only after a Primary has been designated:

[root@node1 ~]# mke2fs -j -L DRBD /dev/drbd0
[root@node1 ~]# mkdir /mnt/drbd
[root@node1 ~]# mount /dev/drbd0 /mnt/drbd/
[root@node1 ~]# ls /mnt/drbd/
lost+found                     # mounted successfully

4.6 Switch the Primary and Secondary roles

   In a Primary/Secondary DRBD setup only one node can be Primary at any given time. To swap the roles of the two nodes, the current Primary must first be demoted to Secondary before the former Secondary can be promoted to Primary:

(1) Before switching, copy a file into /mnt/drbd

[root@node1 drbd]# cp /etc/fstab .
[root@node1 drbd]# ls
fstab  lost+found

(2) Demote node1

   Always unmount the device before demoting:

[root@node1 ~]# umount /mnt/drbd/             # unmount first
[root@node1 ~]# drbdadm secondary mariadb     # then demote
[root@node1 ~]# drbd-overview                 # demotion succeeded
  0:mariadb/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----

(3) Promote node2

[root@node2 ~]# drbdadm primary mariadb    # promote node2
[root@node2 ~]# drbd-overview              # node2 is now the Primary
  0:mariadb/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node2 ~]# mkdir /mnt/drbd            # create the mount point and mount
[root@node2 ~]# mount /dev/drbd0 /mnt/drbd

(4) Verify that the file copied to the device on the former Primary is present

[root@node2 ~]# ls /mnt/drbd
fstab  lost+found            # it is there, everything is fine

   This completes the DRBD configuration.

5. Installing and configuring MySQL (MariaDB)

Why go into the MySQL installation in such detail? Because the MySQL data directory must live on the DRBD device. During my own experiment I forgot to mount DRBD first, ended up with the data directory off the DRBD device, and caused myself quite a bit of trouble, so it is worth walking through again.

5.1 Create the mysql user and group (on both node1 and node2)

[root@node1 local]# groupadd -g 306 mysql
[root@node1 local]# useradd -u 306 -g mysql -s /sbin/nologin -M mysql

5.2 Unpack MariaDB (on both node1 and node2)

[root@node1 ~]# tar xf mariadb-10.0.10-linux-x86_64.tar.gz -C /usr/local/
[root@node1 ~]# cd /usr/local/
[root@node1 local]# ln -sv mariadb-10.0.10-linux-x86_64/ mysql
`mysql' -> `mariadb-10.0.10-linux-x86_64/'
[root@node1 local]# chown -R mysql.mysql mysql/*

5.3 Make node1 the DRBD Primary and mount the device

[root@node1 ~]# drbd-overview
  0:mariadb/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node1 ~]# mkdir /mydata
[root@node1 ~]# mount /dev/drbd0 /mydata/
[root@node1 ~]# cd /mydata/
[root@node1 mydata]# mkdir data
[root@node1 mydata]# chown -R  mysql.mysql /mydata/data/
[root@node1 mydata]# mkdir binlogs
[root@node1 mydata]# chown -R mysql.mysql binlogs/
[root@node1 mydata]# ll
total 24
drwxr-xr-x 2 mysql mysql  4096 Apr 23 21:37 binlogs
drwxr-xr-x 2 mysql mysql  4096 Apr 23 21:37 data
drwx------ 2 root  root  16384 Apr 23 16:26 lost+found


5.4 Provide the MySQL configuration file

[root@node1 ~]# cp /usr/local/mysql/support-files/my-large.cnf /etc/my.cnf
[root@node1 ~]# vim /etc/my.cnf
# add the following line
datadir = /mydata/data
# change the binary log path
log-bin=/mydata/binlogs/master-bin

5.5 Initialize MySQL

[root@node1 data]# /usr/local/mysql/scripts/mysql_install_db --datadir=/mydata/data/ --basedir=/usr/local/mysql --user=mysql

5.6 Provide the service script

[root@node1 ~]# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
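
   The copied script may not yet be executable or registered with the service manager; making sure of both (these two commands are my addition, but they are needed for the chkconfig calls used later) looks like this:

[root@node1 ~]# chmod +x /etc/init.d/mysqld
[root@node1 ~]# chkconfig --add mysqld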

5.7 Start and test MySQL

[root@node1 ~]# service mysqld start
Starting MySQL. SUCCESS!
[root@node1 ~]# /usr/local/mysql/bin/mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+

5.8 Copy the configuration file and service script to node2

[root@node1 ~]# scp /etc/my.cnf node2:/etc/
my.cnf                                                100% 4940     4.8KB/s   00:00
[root@node1 ~]# scp /etc/rc.d/init.d/mysqld node2:/etc/rc.d/init.d/
mysqld                                                100%   11KB  11.4KB/s   00:00

5.9 Stop MySQL and disable it at boot

[root@node1 ~]# service mysqld stop
Shutting down MySQL. SUCCESS!
[root@node1 data]# chkconfig mysqld off

5.10 Make node2 the Primary and test mounting

[root@node1 ~]# umount /mydata/
[root@node1 ~]# drbdadm secondary mariadb
[root@node1 ~]# drbd-overview
  0:mariadb/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
======================================================================
[root@node2 ~]# drbdadm primary mariadb
[root@node2 ~]# drbd-overview
  0:mariadb/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node2 ~]# mkdir /mydata/
[root@node2 ~]# mount /dev/drbd0 /mydata/
[root@node2 ~]# ll /mydata/
total 24
drwxr-xr-x 2 mysql mysql  4096 Apr 23 21:46 binlogs
drwxr-xr-x 5 mysql mysql  4096 Apr 23 21:46 data
drwx------ 2 root  root  16384 Apr 23 16:26 lost+found

5.11 Start and test MySQL on node2

[root@node2 mydata]# service mysqld start
Starting MySQL.. SUCCESS!
[root@node2 mydata]# /usr/local/mysql/bin/mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.01 sec)

5.12 Stop MySQL on node2 and disable it at boot

[root@node2 ~]# service mysqld stop
Shutting down MySQL. SUCCESS!
[root@node2 ~]# chkconfig mysqld off

6. Configuring the high-availability cluster resources

6.1 Stop the DRBD service and disable it at boot; it will be managed by the CRM from now on

[root@node1 ~]# service drbd stop
[root@node1 ~]# chkconfig drbd off
[root@node1 ~]# ssh node2 "service drbd stop"
[root@node1 ~]# ssh node2 "chkconfig drbd off"

6.2 Define global properties: set the no-quorum policy and disable STONITH

   For the reasoning behind these settings, see my previous HA post.

[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
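
   To confirm that both properties were committed, you can dump the current configuration from the same shell (output omitted here):

crm(live)configure# show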

6.3 Configure DRBD as a cluster resource

(1) Check the provider of the DRBD resource agent

   The DRBD resource agent is provided under the OCF class by the linbit provider, at /usr/lib/ocf/resource.d/linbit/drbd. The RA classes and the RA metadata can be listed with the following commands:

crm(live)ra# classes
lsb
ocf / heartbeat linbit pacemaker
service
stonith
crm(live)ra# list ocf linbit
drbd

(2) Configure the DRBD resource

DRBD must run on both nodes at the same time, but in the primary/secondary model only one node may be Master while the other is Slave. It is therefore a special kind of cluster resource: a multi-state clone, whose instances are divided into Master and Slave roles, and both instances must be in the Slave state when the service first starts.

crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mariadb op monitor role=Master interval=50s timeout=30s op monitor role=Slave interval=60s timeout=30s op start timeout=240s interval=0 op stop timeout=100s interval=0
crm(live)configure#
crm(live)configure# master MS_mysqldrbd mysqldrbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure#
crm(live)configure# show mysqldrbd
primitive mysqldrbd ocf:linbit:drbd \
    params drbd_resource="mariadb" \
    op monitor role="Master" interval="50s" timeout="30s" \
    op monitor role="Slave" interval="60s" timeout="30s" \
    op start timeout="240s" interval="0" \
    op stop timeout="100s" interval="0"
crm(live)configure# show MS_mysqldrbd
ms MS_mysqldrbd mysqldrbd \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit

   Check the current cluster status:

[root@node1 ~]# crm status
Last updated: Wed Apr 23 18:20:46 2014
Last change: Wed Apr 23 18:15:38 2014 via cibadmin on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ node1.shuishui.com node2.shuishui.com ]
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node1.shuishui.com ]
     Slaves: [ node2.shuishui.com ]


   The output above shows that the DRBD Primary is now node1.shuishui.com and the Secondary is node2.shuishui.com. You can also verify on node2 that it has become the Slave for the mariadb resource with the following command:

drbdadm role mariadb

6.4 Configure the filesystem resource

   Create a cluster resource that automatically mounts the mariadb DRBD device on the Primary node:

   The Master of MS_mysqldrbd is the Primary node of the DRBD mariadb resource; only on that node can /dev/drbd0 be mounted, and the cluster service must be able to mount it automatically. Here the mariadb resource provides the shared filesystem for the MySQL data directory, and it has to be mounted at /mydata (this directory must already exist on both nodes).


   In addition, this automatic mount must run on the DRBD Master node and may only start after DRBD has promoted that node to Primary, so a colocation constraint and an ordering constraint are needed between the two resources.

crm(live)configure# primitive mysqlstore ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype="ext4" op monitor interval=40s timeout=40s op start timeout=60s interval=0 op stop timeout=60s interval=0
crm(live)configure#
crm(live)configure# verify
crm(live)configure# colocation mysqlstore_with_MS_mysqldrbd inf: mysqlstore MS_mysqldrbd:Master               # colocation constraint: the filesystem must run on the DRBD Master
crm(live)configure# order mysqlstore_after_MS_mysqldrbd mandatory: MS_mysqldrbd:promote mysqlstore:start        # order constraint: promote DRBD first, then mount
crm(live)configure# verify
crm(live)configure# commit

   Check the cluster resource status at this point:

[root@node1 ~]# crm status
======================================================
Last updated: Wed Apr 23 19:07:27 2014
Last change: Wed Apr 23 18:55:52 2014 via cibadmin on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
=======================================================
Online: [ node1.shuishui.com node2.shuishui.com ]
-------------------------------------------------------
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node1.shuishui.com ]
     Slaves: [ node2.shuishui.com ]
 mysqlstore (ocf::heartbeat:Filesystem):    Started node1.shuishui.com

6.5 Configure the MySQL resource

crm(live)configure# primitive mysqld lsb:mysqld op monitor interval=20s timeout=20s on-fail=restart
crm(live)configure#
crm(live)configure# colocation mysqld_with_mysqlstore inf: mysqld mysqlstore
crm(live)configure#
crm(live)configure# verify
crm(live)configure#
crm(live)configure# order mysqlstore_before_mysqld inf: mysqlstore:start mysqld:start
crm(live)configure#
crm(live)configure# verify
crm(live)configure#
crm(live)configure# commit

   Check the cluster resource status at this point:

[root@node1 ~]# crm status
========================================================
Last updated: Wed Apr 23 22:10:46 2014
Last change: Wed Apr 23 20:52:58 2014 via cibadmin on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
4 Resources configured
========================================================
Online: [ node1.shuishui.com node2.shuishui.com ]
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node1.shuishui.com ]
     Slaves: [ node2.shuishui.com ]
 mysqlstore (ocf::heartbeat:Filesystem):    Started node1.shuishui.com
 mysqld (lsb:mysqld):   Started node1.shuishui.com

   Test that MySQL can be logged into normally:

[root@node1 ~]# /usr/local/mysql/bin/mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+

6.6 Configure the VIP resource

crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=172.16.7.1 cidr_netmask=16 op monitor interval=20s timeout=20s on-fail=restart
crm(live)configure#
crm(live)configure# colocation vip_with_mysqld inf: vip mysqld
crm(live)configure#
crm(live)configure# order vip_before_mysqld inf: vip mysqld
crm(live)configure#
crm(live)configure# verify
crm(live)configure# commit

   Check the cluster resource status at this point:

[root@node1 ~]# crm status
=======================================================
Last updated: Wed Apr 23 22:33:13 2014
Last change: Wed Apr 23 22:31:13 2014 via cibadmin on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured
=======================================================
Online: [ node1.shuishui.com node2.shuishui.com ]
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node1.shuishui.com ]
     Slaves: [ node2.shuishui.com ]
 mysqlstore (ocf::heartbeat:Filesystem):    Started node1.shuishui.com
 mysqld (lsb:mysqld):   Started node1.shuishui.com
 vip    (ocf::heartbeat:IPaddr):    Started node1.shuishui.com

6.7 Finally, display the complete configuration

node node1.shuishui.com
node node2.shuishui.com
primitive mysqld lsb:mysqld \
        op monitor interval="20s" timeout="20s" on-fail="restart"
primitive mysqldrbd ocf:linbit:drbd \
        params drbd_resource="mariadb" \
        op monitor role="Master" interval="50s" timeout="30s" \
        op monitor role="Slave" interval="60s" timeout="30s" \
        op start timeout="240s" interval="0" \
        op stop timeout="100s" interval="0"
primitive mysqlstore ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mydata" fstype="ext4" \
        op monitor interval="40s" timeout="40s" \
        op start timeout="60s" interval="0" \
        op stop timeout="60s" interval="0"
primitive vip ocf:heartbeat:IPaddr \
        params ip="172.16.7.1" cidr_netmask="16" \
        op monitor interval="20s" timeout="20s" on-fail="restart"
ms MS_mysqldrbd mysqldrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation mysqld_with_mysqlstore inf: mysqld mysqlstore
colocation mysqlstore_with_MS_mysqldrbd inf: mysqlstore MS_mysqldrbd:Master
colocation vip_with_mysqld inf: vip mysqld
order mysqlstore_after_MS_mysqldrbd inf: MS_mysqldrbd:promote mysqlstore:start
order mysqlstore_before_mysqld inf: mysqlstore:start mysqld:start
order vip_before_mysqld inf: vip mysqld
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"

7. Testing the MySQL HA cluster

7.1 Grant remote access to a user from the client network segment

MariaDB [(none)]>
MariaDB [(none)]> grant all on *.* to 'test'@'172.16.%.%' identified by 'test';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

7.2 Test from a remote client

   Log in to the MySQL server remotely through the virtual IP 172.16.7.1; the client IP is 172.16.7.10.

[root@node1 ~]# mysql -u test -h 172.16.7.1 -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

7.3 Simulate a failure

[root@node1 ~]# crm node standby
[root@node1 ~]# crm status
Last updated: Wed Apr 23 22:51:10 2014
Last change: Wed Apr 23 22:51:02 2014 via crm_attribute on node1.shuishui.com
Stack: classic openais (with plugin)
Current DC: node1.shuishui.com - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured
Node node1.shuishui.com: standby
Online: [ node2.shuishui.com ]
 Master/Slave Set: MS_mysqldrbd [mysqldrbd]
     Masters: [ node2.shuishui.com ]          # node2 has automatically become Master and all resources have moved to node2
     Stopped: [ node1.shuishui.com ]
 mysqlstore (ocf::heartbeat:Filesystem):    Started node2.shuishui.com
 mysqld (lsb:mysqld):   Started node2.shuishui.com
 vip    (ocf::heartbeat:IPaddr):    Started node2.shuishui.com
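
   After verifying the failover, node1 can be brought back into the cluster; whether the resources fail back depends on your resource-stickiness settings, which this article leaves at their defaults:

[root@node1 ~]# crm node online
[root@node1 ~]# crm status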

7.4 Log in to the VIP 172.16.7.1 from the remote client again

[root@node1 ~]# mysql -u test -h 172.16.7.1 -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.05 sec)

   The remote client can still reach the MySQL server without any trouble and is completely unaware that the service has automatically failed over to node2.


   The DRBD-based MySQL high-availability (HA) cluster has been built successfully!
