Building a Highly Available MySQL with MySQL + Corosync + Pacemaker + DRBD

This walkthrough focuses on building a highly available MySQL cluster; I won't dwell on anything else, so let's get straight into installation and configuration.

I. Environment Overview and Preparation

1. This setup uses two nodes: nod1.allen.com (172.16.14.1) and nod2.allen.com (172.16.14.2)

######Run the following commands on both NOD1 and NOD2
cat >> /etc/hosts << EOF
172.16.14.1 nod1.allen.com nod1
172.16.14.2 nod2.allen.com nod2
EOF
Note: this lets every node resolve the hostnames to their IP addresses (>> appends, so the existing localhost entries in /etc/hosts are preserved).
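A quick, illustrative sanity check that resolution works (run from NOD1; swap the names to test from NOD2):
ping -c 1 nod2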

2. Each node's hostname must match the output of "uname -n"

######Run on NOD1
sed -i 's@\(HOSTNAME=\).*@\1nod1.allen.com@g' /etc/sysconfig/network
hostname nod1.allen.com
######Run on NOD2
sed -i 's@\(HOSTNAME=\).*@\1nod2.allen.com@g' /etc/sysconfig/network
hostname nod2.allen.com
Note: editing the file only takes effect after a reboot, so we edit the file first and then run hostname to change the running hostname; this way no reboot is needed.

3. nod1 and nod2 each provide a partition of identical size as the DRBD device; here we create "/dev/sda3" on both nodes with a capacity of 2G

######Create the partition on both NOD1 and NOD2; the partition sizes must be identical
fdisk /dev/sda
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (7859-15665, default 7859):
Using default value 7859
Last cylinder, +cylinders or +size{K,M,G} (7859-15665, default 15665): +2G
Command (m for help): w
partx -a /dev/sda  #have the kernel re-read the partition table
######Check whether the kernel has recognized the new partition; if not, a reboot is required -- here it was not recognized, so we reboot
cat /proc/partitions
major minor  #blocks  name
   8        0  125829120 sda
   8        1     204800 sda1
   8        2   62914560 sda2
 253        0   20971520 dm-0
 253        1    2097152 dm-1
 253        2   10485760 dm-2
 253        3   20971520 dm-3
reboot

4. Disable SELinux, iptables, and NetworkManager on both servers

setenforce 0            #put SELinux into permissive mode
service iptables stop   #stop iptables
chkconfig iptables off  #keep iptables from starting at boot
service NetworkManager stop
chkconfig NetworkManager off
chkconfig --list NetworkManager
NetworkManager  0:off   1:off   2:off   3:off   4:off   5:off   6:off
chkconfig network on
chkconfig --list network
network         0:off   1:off   2:on    3:on    4:on    5:on    6:off
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Note: during this procedure, NetworkManager must be stopped and disabled at boot, and the network service must be enabled at boot; otherwise it will cause unnecessary trouble during the experiment and the cluster will not run properly.

5. Configure the YUM repositories and synchronize time; the two nodes' clocks must stay in sync (a time-sync sketch follows the EPEL install below). EPEL release download:

######Configure the EPEL repository
######Install on both NOD1 and NOD2
rpm -ivh epel-release-6-8.noarch.rpm
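A minimal time-sync sketch, assuming an NTP server is reachable; 172.16.0.1 below is a placeholder, so substitute your own server:

######Run on both NOD1 and NOD2; 172.16.0.1 is a placeholder NTP server
ntpdate 172.16.0.1
echo '*/5 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null' >> /var/spool/cron/root  #re-sync every 5 minutes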

6. Set up mutual SSH trust between the two nodes

[root@nod1 ~]# ssh-keygen -t rsa
[root@nod1 ~]# ssh-copy-id -i .ssh/id_rsa.pub nod2
==================================================
[root@nod2 ~]# ssh-keygen -t rsa
[root@nod2 ~]# ssh-copy-id -i .ssh/id_rsa.pub nod1

7. OS version: CentOS 6.4 x86_64

8. Software used (pacemaker and corosync are included on the installation DVD):

pssh-2.3.1-2.el6.x86_64 (download: see attachment)

crmsh-1.2.6-4.el6.x86_64 (download: see attachment)

drbd-8.4.3-33.el6.x86_64 (DRBD download: http://rpmfind.net)

drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64

mysql-5.5.33-linux2.6-x86_64 (download link)

pacemaker-1.1.8-7.el6.x86_64

corosync-1.4.1-15.el6.x86_64


II. Installing and Configuring DRBD

1. Install the DRBD packages on NOD1 and NOD2

######NOD1
[root@nod1 ~]# ls drbd-*
drbd-8.4.3-33.el6.x86_64.rpm  drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
[root@nod1 ~]# yum -y install drbd-*.rpm
######NOD2
[root@nod2 ~]# ls drbd-*
drbd-8.4.3-33.el6.x86_64.rpm  drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
[root@nod2 ~]# yum -y install drbd-*.rpm

2. Examine the DRBD configuration files

ll /etc/drbd.conf;ll /etc/drbd.d/
-rw-r--r-- 1 root root 133 May 14 21:12 /etc/drbd.conf #main configuration file
total 4
-rw-r--r-- 1 root root 1836 May 14 21:12 global_common.conf #global configuration file
######View the contents of the main configuration file
cat /etc/drbd.conf
######The main configuration file includes the global configuration file and the .res files under "drbd.d/"
# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";

3. Edit the configuration file as follows:

[root@nod1 ~]#vim /etc/drbd.d/global_common.conf
global {
    usage-count no;  #whether to take part in DRBD usage statistics; default is yes
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol C;      #DRBD replication protocol (C = synchronous)
    handlers {
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when chosing your poison.
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach; #on an I/O error, detach the disk from the device
        # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
    }
    net {
        cram-hmac-alg "sha1";       #HMAC algorithm used for peer authentication
        shared-secret "allendrbd";  #shared secret used for peer authentication
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
    syncer {
        rate 1024M;    #network rate for synchronization between primary and secondary
    }
}

4. Add the resource file:

[root@nod1 ~]# vim /etc/drbd.d/drbd.res
resource drbd {
  on nod1.allen.com {    #each host stanza begins with "on" followed by the hostname
    device    /dev/drbd0;#DRBD device name
    disk      /dev/sda3; #drbd0 is backed by partition "sda3"
    address   172.16.14.1:7789; #DRBD listen address and port
    meta-disk internal;
  }
  on nod2.allen.com {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   172.16.14.2:7789;
    meta-disk internal;
  }
}

5. Copy the configuration files to NOD2

[root@nod1 ~]# scp /etc/drbd.d/{global_common.conf,drbd.res} nod2:/etc/drbd.d/
The authenticity of host 'nod2 (172.16.14.2)' can't be established.
RSA key fingerprint is 29:d3:28:85:20:a1:1f:2a:11:e5:88:cd:25:d0:95:c7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'nod2' (RSA) to the list of known hosts.
root@nod2's password:
global_common.conf                                                             100% 1943     1.9KB/s   00:00
drbd.res                                                                       100%  318     0.3KB/s   00:00

6. Initialize the resource and start the service

######Initialize the resource and start the service on both NOD1 and NOD2
[root@nod1 ~]# drbdadm create-md drbd
Writing meta data...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.  #creation succeeded
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
######Start the service
[root@nod1 ~]# service drbd start
Starting DRBD resources: [
     create res: drbd
   prepare disk: drbd
    adjust disk: drbd
     adjust net: drbd
]
..........
***************************************************************
 DRBD's startup script waits for the peer node(s) to appear.
 - In case this node was already a degraded cluster before the
   reboot the timeout is 0 seconds. [degr-wfc-timeout]
 - If the peer was available before the reboot the timeout will
   expire after 0 seconds. [wfc-timeout]
   (These values are for resource 'drbd'; 0 sec -> wait forever)
 To abort waiting enter 'yes' [  12]: yes
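Before either node is promoted, both sides should report themselves as Secondary with inconsistent data; a quick check (the expected state is described in the comment, not captured output):

cat /proc/drbd   #expect cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent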

7. Kick off the initial device synchronization

[root@nod1 ~]# drbdadm -- --overwrite-data-of-peer primary drbd
[root@nod1 ~]# cat /proc/drbd     #watch the sync progress
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:1897624 nr:0 dw:0 dr:1901216 al:0 bm:115 lo:0 pe:3 ua:3 ap:0 ep:1 wo:f oos:207988
    [=================>..] sync'ed: 90.3% (207988/2103412)K
    finish: 0:00:07 speed: 26,792 (27,076) K/sec
######When synchronization completes, the status looks like this
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:2103412 nr:0 dw:0 dr:2104084 al:0 bm:129 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: "drbd" is the resource name.
######Sync progress can also be viewed with the following command
drbd-overview

8. Create the filesystem

######Format the DRBD device
[root@nod1 ~]# mkfs.ext4 /dev/drbd0

9. Keep the DRBD service from starting at boot on NOD1 and NOD2 (the cluster will manage it later)

[root@nod1 ~]# chkconfig drbd off
[root@nod1 ~]# chkconfig --list drbd
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
=====================================================================
[root@nod2 ~]# chkconfig drbd off
[root@nod2 ~]# chkconfig --list drbd
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off

III. Installing MySQL

1. Install and configure MySQL

######Install MySQL on NOD1
[root@nod1 ~]# mkdir /mydata
[root@nod1 ~]# mount /dev/drbd0 /mydata/
[root@nod1 ~]# mkdir /mydata/data
[root@nod1 ~]# tar xf mysql-5.5.33-linux2.6-x86_64.tar.gz -C /usr/local/
[root@nod1 ~]# cd /usr/local/
[root@nod1 local]# ln -s mysql-5.5.33-linux2.6-x86_64 mysql
[root@nod1 local]# cd mysql
[root@nod1 mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@nod1 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@nod1 mysql]# chmod +x /etc/init.d/mysqld
[root@nod1 mysql]# chkconfig --add mysqld
[root@nod1 mysql]# chkconfig mysqld off
[root@nod1 mysql]# vim /etc/my.cnf
datadir = /mydata/data
innodb_file_per_table = 1
[root@nod1 mysql]# echo "PATH=/usr/local/mysql/bin:$PATH" >> /etc/profile
[root@nod1 mysql]# . /etc/profile
[root@nod1 mysql]# useradd -r -u 306 mysql
[root@nod1 mysql]# chown mysql.mysql -R /mydata
[root@nod1 mysql]# chown root.mysql *
[root@nod1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
[root@nod1 mysql]# service mysqld start
Starting MySQL.....                                        [  OK  ]
[root@nod1 mysql]# chkconfig --list mysqld
mysqld          0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@nod1 mysql]# service mysqld stop
Shutting down MySQL.                                       [  OK  ]
######Install MySQL on NOD2
[root@nod2 ~]# scp nod1:/root/mysql-5.5.33-linux2.6-x86_64.tar.gz ./
[root@nod2 ~]# mkdir /mydata
[root@nod2 ~]# tar xf mysql-5.5.33-linux2.6-x86_64.tar.gz -C /usr/local/
[root@nod2 ~]# cd /usr/local/
[root@nod2 local]# ln -s mysql-5.5.33-linux2.6-x86_64 mysql
[root@nod2 local]# cd mysql
[root@nod2 mysql]# cp support-files/my-large.cnf /etc/my.cnf
######Add the following settings to the configuration file
[root@nod2 mysql]# vim /etc/my.cnf
datadir = /mydata/data
innodb_file_per_table = 1
[root@nod2 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@nod2 mysql]# chkconfig --add mysqld
[root@nod2 mysql]# chkconfig mysqld off
[root@nod2 mysql]# useradd -r -u 306 mysql
[root@nod2 mysql]# chown -R root.mysql *

2. Unmount the DRBD device on NOD1, then demote it

[root@nod1 ~]# drbd-overview
  0:drbd/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@nod1 ~]# umount /mydata/
[root@nod1 ~]# drbdadm secondary drbd
[root@nod1 ~]# drbd-overview
  0:drbd/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----

3. Promote DRBD to primary on NOD2, then mount the DRBD device

[root@nod2 ~]# drbd-overview
  0:drbd/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
[root@nod2 ~]# drbdadm primary drbd
[root@nod2 ~]# drbd-overview
  0:drbd/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@nod2 ~]# mount /dev/drbd0 /mydata/

4. Start the MySQL service on NOD2 for testing

[root@nod2 ~]# chown -R mysql.mysql /mydata
[root@nod2 ~]# service mysqld start
Starting MySQL..                                           [  OK  ]
[root@nod2 ~]# service mysqld stop
Shutting down MySQL.                                       [  OK  ]
[root@nod2 ~]# chkconfig --list mysqld
mysqld          0:off   1:off   2:off   3:off   4:off   5:off   6:off

5. Set DRBD to secondary on both nodes, like so:

[root@nod2 ~]# drbdadm secondary drbd
[root@nod2 ~]# drbd-overview
  0:drbd/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----

6. Unmount the DRBD device and stop the DRBD service on NOD1 and NOD2

[root@nod2 ~]# umount /mydata/
[root@nod2 ~]# service drbd stop
Stopping all DRBD resources: .
[root@nod1 ~]# service drbd stop
Stopping all DRBD resources: .



IV. Installing Corosync and Pacemaker

1. Install on both NOD1 and NOD2

[root@nod1 ~]# yum -y install crmsh*.rpm pssh*.rpm pacemaker corosync
[root@nod2 ~]# scp nod1:/root/{pssh*.rpm,crmsh*.rpm} ./
[root@nod2 ~]# yum -y install crmsh*.rpm pssh*.rpm pacemaker corosync

2. Configure Corosync on NOD1

[root@nod1 ~]# cd /etc/corosync/
[root@nod1 corosync]# ls
corosync.conf.example  corosync.conf.example.udpu  service.d  uidgid.d
[root@nod1 corosync]# cp corosync.conf.example corosync.conf
[root@nod1 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
    version: 2    #config version
    secauth: on   #enable authentication
    threads: 0    #number of threads used for authentication; 0 means unlimited
    interface {
        ringnumber: 0
        bindnetaddr: 172.16.0.0 #network used for cluster communication
        mcastaddr: 226.94.14.12 #multicast address
        mcastport: 5405         #multicast port
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no    #send output to standard error?
    to_logfile: yes  #log to a file?
    to_syslog: no    #log to syslog? best to keep only one of the two enabled
    logfile: /var/log/cluster/corosync.log #log path; the directory must be created by hand
    debug: off
    timestamp: on    #timestamp log entries
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}
service {                #enable Pacemaker support
    ver:   0
    name:  pacemaker
}
aisexec {                #run as root for OpenAIS; occasionally needed
    user:  root
    group: root
}

3. Generate the authentication key used for inter-node communication

[root@nod1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 152).
Press keys on your keyboard to generate entropy (bits = 216).
Note: if key generation stalls with the messages above, the kernel's entropy pool is running low; installing extra software can top it up (see the sketch below)
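One common workaround, assuming the rng-tools package is available in your configured repositories; rngd feeds /dev/urandom into the kernel entropy pool so corosync-keygen can finish:

######Top up the entropy pool (assumes rng-tools is in your YUM repos)
yum -y install rng-tools
rngd -r /dev/urandom   #feed the entropy pool
corosync-keygen        #re-run key generation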

4. Copy the configuration file and authentication key to NOD2

[root@nod1 corosync]# scp authkey corosync.conf nod2:/etc/corosync/
authkey                                    100%  128     0.1KB/s   00:00
corosync.conf                              100%  522     0.5KB/s   00:00

5. Start the Corosync service

[root@nod1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
######Check that the Corosync engine started normally
[root@nod1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [MAIN  ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Sep 19 18:44:36 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
######Check whether startup produced any errors; the messages below can be ignored
[root@nod1 ~]# grep ERROR: /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Sep 19 18:44:36 corosync [pcmk  ] ERROR: process_ais_conf:  Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
######Check that the initial membership notifications went out normally
[root@nod1 ~]# grep  TOTEM  /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Sep 19 18:44:36 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Sep 19 18:44:36 corosync [TOTEM ] The network interface [172.16.14.1] is now up.
Sep 19 18:44:36 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
######Check that Pacemaker started normally
[root@nod1 ~]# grep pcmk_startup /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Sep 19 18:44:36 corosync [pcmk  ] Logging: Initialized pcmk_startup
Sep 19 18:44:36 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Sep 19 18:44:36 corosync [pcmk  ] info: pcmk_startup: Service: 9
Sep 19 18:44:36 corosync [pcmk  ] info: pcmk_startup: Local hostname: nod1.allen.com

6. Start the Corosync service on NOD2

[root@nod1 ~]# ssh nod2 'service corosync start'
Starting Corosync Cluster Engine (corosync): [  OK  ]
######Check the startup status of the cluster nodes
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 19:01:33 2013
Last change: Thu Sep 19 18:49:09 2013 via crmd on nod1.allen.com
Stack: classic openais (with plugin)
Current DC: nod1.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
0 Resources configured.
Online: [ nod1.allen.com nod2.allen.com ] #both nodes are up and running

7. View the processes started by Corosync

[root@nod1 ~]# ps auxf
root     10336  0.3  1.2 556824  4940 ?        Ssl  18:44   0:04 corosync
305      10342  0.0  1.7  87440  7076 ?        S    18:44   0:01  \_ /usr/libexec/pacemaker/cib
root     10343  0.0  0.8  81460  3220 ?        S    18:44   0:00  \_ /usr/libexec/pacemaker/stonit
root     10344  0.0  0.7  73088  2940 ?        S    18:44   0:00  \_ /usr/libexec/pacemaker/lrmd
305      10345  0.0  0.7  85736  3060 ?        S    18:44   0:00  \_ /usr/libexec/pacemaker/attrd
305      10346  0.0  4.7 116932 18812 ?        S    18:44   0:00  \_ /usr/libexec/pacemaker/pengin
305      10347  0.0  1.0 143736  4316 ?        S    18:44   0:00  \_ /usr/libexec/pacemaker/crmd

V. Configuring Resources

1. Corosync enables STONITH by default, but this cluster has no STONITH device, which produces the errors below; STONITH must be disabled

[root@nod1 ~]# crm_verify -L -V
   error: unpack_resources:     Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources:     Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources:     NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
  -V may provide more details
######Disable STONITH and verify
[root@nod1 ~]# crm configure property stonith-enabled=false
[root@nod1 ~]# crm configure show
node nod1.allen.com
node nod2.allen.com
property $id="cib-bootstrap-options" \
    dc-version="1.1.8-7.el6-394e906" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    stonith-enabled="false"

2. View the resource agent classes the cluster supports

[root@nod1 ~]# crm ra classes
lsb
ocf / heartbeat linbit pacemaker redhat
service
stonith
Note: the linbit class is only present once the DRBD service is installed.

3. How do you list all the resource agents available under a given class?

crm ra list lsb
crm ra list ocf heartbeat
crm ra list ocf pacemaker
crm ra list stonith
crm ra list ocf linbit
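To see the parameters a particular agent accepts, crm ra info can be used; for example:

crm ra info ocf:heartbeat:IPaddr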

4. Configure the VIP and mysqld resources

[root@nod1 ~]# crm        #enter the crm interactive shell
crm(live)# configure
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# primitive MyVip ocf:heartbeat:IPaddr params ip="172.16.14.10"    #define the virtual IP resource
crm(live)configure# primitive Mysqld lsb:mysqld #define the MySQL service resource
crm(live)configure# verify     #check for syntax errors
crm(live)configure# commit     #commit the changes
crm(live)configure# show       #show the configuration
node nod1.allen.com
node nod2.allen.com
primitive MyVip ocf:heartbeat:IPaddr \
    params ip="172.16.14.10"
primitive Mysqld lsb:mysqld
property $id="cib-bootstrap-options" \
    dc-version="1.1.8-7.el6-394e906" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    stonith-enabled="false" \
    no-quorum-policy="ignore"

5. Configure the DRBD master/slave resource

crm(live)configure# primitive Drbd ocf:linbit:drbd params drbd_resource="drbd" op monitor interval=10s role="Master" op monitor interval=20s role="Slave" op start timeout=240s op stop timeout=100
crm(live)configure# master My_Drbd Drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show Drbd
primitive Drbd ocf:linbit:drbd \
    params drbd_resource="drbd" \
    op monitor interval="10s" role="Master" \
    op monitor interval="20s" role="Slave" \
    op start timeout="240s" interval="0" \
    op stop timeout="100s" interval="0"
crm(live)configure# show My_Drbd
ms My_Drbd Drbd \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

6. Define a filesystem resource

crm(live)configure# primitive FileSys ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype="ext4" op start timeout="60s" op stop timeout="60s"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show FileSys
primitive FileSys ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/mydata" fstype="ext4" \
    op start timeout="60s" interval="0" \
    op stop timeout="60s" interval="0"

7. Define colocation and ordering constraints between the resources

crm(live)configure# colocation FileSys_on_My_Drbd inf: FileSys My_Drbd:Master #keep the filesystem on the DRBD master
crm(live)configure# order FileSys_after_My_Drbd inf: My_Drbd:promote FileSys:start  #promote DRBD before the filesystem starts
crm(live)configure# verify
crm(live)configure# colocation Mysqld_on_FileSys inf: Mysqld FileSys #keep the MySQL service with the filesystem
crm(live)configure# order Mysqld_after_FileSys inf: FileSys Mysqld:start #start the filesystem before the MySQL service
crm(live)configure# verify
crm(live)configure# colocation MyVip_on_Mysqld inf: MyVip Mysqld #keep the virtual IP with the MySQL service
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# bye #leave the crm interactive session

8. The service status now looks like this:

[root@nod1 ~]# crm status
Last updated: Thu Sep 19 21:18:20 2013
Last change: Thu Sep 19 21:18:06 2013 via crmd on nod1.allen.com
Stack: classic openais (with plugin)
Current DC: nod2.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [ nod1.allen.com nod2.allen.com ]
 Master/Slave Set: My_Drbd [Drbd]
     Masters: [ nod2.allen.com ]
     Slaves: [ nod1.allen.com ]
 FileSys    (ocf::heartbeat:Filesystem):    Started nod2.allen.com
Failed actions:
    Mysqld_start_0 (node=nod1.allen.com, call=60, rc=1, status=Timed Out): unknown error
    MyVip_start_0 (node=nod2.allen.com, call=47, rc=1, status=complete): unknown error
    Mysqld_start_0 (node=nod2.allen.com, call=13, rc=1, status=complete): unknown error
    FileSys_start_0 (node=nod2.allen.com, call=39, rc=1, status=complete): unknown error
Note: these errors appear because the cluster probes whether services are running as each resource definition is committed, and it may try to start them before all resources and constraints are fully in place; run the following commands to clear the errors
[root@nod1 ~]# crm resource cleanup Mysqld
[root@nod1 ~]# crm resource cleanup MyVip
[root@nod1 ~]# crm resource cleanup FileSys

9. Check again after clearing the errors in the previous step:

[root@nod1 ~]# crm status
Last updated: Thu Sep 19 21:26:49 2013
Last change: Thu Sep 19 21:19:35 2013 via crmd on nod2.allen.com
Stack: classic openais (with plugin)
Current DC: nod2.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [ nod1.allen.com nod2.allen.com ]
 Master/Slave Set: My_Drbd [Drbd]
     Masters: [ nod1.allen.com ]
     Slaves: [ nod2.allen.com ]
 MyVip  (ocf::heartbeat:IPaddr):    Started nod1.allen.com
 Mysqld (lsb:mysqld):   Started nod1.allen.com
 FileSys    (ocf::heartbeat:Filesystem):    Started nod1.allen.com
======================================================================
Note: as shown above, the DRBD master, MyVip, Mysqld, and FileSys all run on NOD1 and are working normally

VI. Verifying That the Services Run Properly

1. On NOD1, verify that the mysqld service is running and that the virtual IP address and filesystem are configured

[root@nod1 ~]# netstat -anpt|grep mysql
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN      22564/mysqld
[root@nod1 ~]# mount | grep drbd0
/dev/drbd0 on /mydata type ext4 (rw)
[root@nod1 ~]# ifconfig eth0:0
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:3D:3F:44
          inet addr:172.16.14.10  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

2. Log in to MySQL and create a database for verification

[root@nod1 ~]# mysql
mysql> create database allen;
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| allen              |
| mysql              |
| performance_schema |
| test               |
+--------------------+
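For a more thorough check, a little real data can be written as well (the table name tb1 is only illustrative):

mysql> use allen;
mysql> create table tb1 (id int);
mysql> insert into tb1 values (1);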

3. Simulate a failure of the active node by putting it into "standby" and check whether the services migrate to the standby node; current active node: nod1.allen.com, standby node: nod2.allen.com

[root@nod1 ~]# crm node standby nod1.allen.com
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 22:23:50 2013
Last change: Thu Sep 19 22:23:42 2013 via crm_attribute on nod2.allen.com
Stack: classic openais (with plugin)
Current DC: nod1.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Node nod1.allen.com: standby
Online: [ nod2.allen.com ]
 Master/Slave Set: My_Drbd [Drbd]
     Masters: [ nod2.allen.com ]
     Stopped: [ Drbd:1 ]
 MyVip  (ocf::heartbeat:IPaddr):    Started nod2.allen.com
 Mysqld (lsb:mysqld):   Started nod2.allen.com
 FileSys    (ocf::heartbeat:Filesystem):    Started nod2.allen.com
----------------------------------------------------------------------
######As shown above, all services have failed over to the NOD2 server

4. On NOD2, log in to MySQL and verify that the "allen" database is present

[root@nod2 ~]# mysql
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| allen              |
| mysql              |
| performance_schema |
| test               |
+--------------------+

5. Suppose NOD1 is repaired and comes back online; the services on NOD2 will not automatically switch back to NOD1. Failback can be arranged, but it requires tuning resource stickiness (and location preferences); it is usually better not to switch back, to avoid the needless cost of another failover.

[root@nod1 ~]# crm node online nod1.allen.com
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 22:34:55 2013
Last change: Thu Sep 19 22:34:51 2013 via crm_attribute on nod1.allen.com
Stack: classic openais (with plugin)
Current DC: nod1.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [ nod1.allen.com nod2.allen.com ]
 Master/Slave Set: My_Drbd [Drbd]
     Masters: [ nod2.allen.com ]
     Slaves: [ nod1.allen.com ]
 MyVip  (ocf::heartbeat:IPaddr):    Started nod2.allen.com
 Mysqld (lsb:mysqld):   Started nod2.allen.com
 FileSys    (ocf::heartbeat:Filesystem):    Started nod2.allen.com

6. The command for setting resource stickiness; I won't test it here, but interested readers can give it a try:

crm configure rsc_defaults resource-stickiness=100
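If you do want services to gravitate back to a preferred node, stickiness is weighed against location scores; a hypothetical example (the constraint name prefer_nod1 and the score 50 are illustrative):

crm configure location prefer_nod1 MyVip 50: nod1.allen.com  #resources fail back to nod1 only if this score outweighs the accumulated stickiness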

As you can see, all the services work as expected. The MySQL high-availability setup is now complete, and both MySQL service failover and the data were verified. Thanks to all my readers for your attention and support; I'll keep working hard!
