Corosync+Pacemaker+DRBD+MySQL High-Availability (HA) Configuration

Operating system: CentOS 6.6 x64. This article installs corosync+pacemaker+drbd from RPM packages and mysql-5.6.29 from the binary distribution. The procedure is adapted from an earlier Corosync+Pacemaker+DRBD+NFS high-availability example configuration and then tested.

1、Preparing the two nodes

1. Configure the hosts file and hostnames on app1 and app2.

[root@app1 soft]# vi /etc/hosts  
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4   
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6   
192.168.0.24         app1   
192.168.0.25         app2   
10.10.10.24          app1-priv   
10.10.10.25          app2-priv

Note: the 10.10.10.x addresses are the heartbeat IPs, the 192.168.0.x addresses are the service IPs, and the VIP is 192.168.0.26.

 

2. Disable SELinux and the firewall

sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
setenforce 0
chkconfig iptables off
service iptables stop

 

3. Set up SSH key trust between the nodes. This is optional, but it makes administration easier.

app1:
[root@app1 ~]# ssh-keygen  -t rsa -f ~/.ssh/id_rsa  -P '' 
[root@app1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@app2

app2:
[root@app2 ~]# ssh-keygen  -t rsa -f ~/.ssh/id_rsa  -P ''
[root@app2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@app1
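
A quick check that the key-based trust works in both directions (a minimal verification, not part of the original steps):

[root@app1 ~]# ssh app2 hostname    # should print "app2" without prompting for a password
[root@app2 ~]# ssh app1 hostname    # should print "app1" without prompting for a password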

 

2、DRBD installation and configuration

1. Prepare the disk partitions on app1 and app2 (the hosts files were already configured above)

app1: /dev/sdb1  —> app2: /dev/sdb1
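
The article does not show how the partition was created; a minimal sketch using fdisk, assuming /dev/sdb is an empty disk dedicated to DRBD (run on both nodes):

# fdisk /dev/sdb        # n (new partition), p (primary), 1, accept the defaults, w (write)
# partprobe /dev/sdb    # re-read the partition table
# fdisk -l /dev/sdb     # verify that /dev/sdb1 now exists

Do not create a filesystem on /dev/sdb1 here; DRBD uses the raw partition and the filesystem is created later on /dev/drbd0.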

 

2. Install DRBD on app1 and app2

(1) Download the DRBD packages. On CentOS 6.6, only the kmod-drbd84-8.4.5-504.1 package works.

http://rpm.pbone.net/

drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm
kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm

# rpm -ivh drbd84-utils-8.9.5-1.el6.elrepo.x86_64.rpm kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:drbd84-utils           ########################################### [ 50%]
   2:kmod-drbd84            ########################################### [100%]
Working. This may take some time ...
Done.
#

 

(2) Load the DRBD kernel module

Run on both app1 and app2, and add the command to /etc/rc.local so that the module is loaded again at boot.
modprobe drbd
lsmod | grep drbd
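
One simple way to make the module load persist across reboots, as mentioned above (a sketch; adapt to your own conventions):

# echo "modprobe drbd" >> /etc/rc.local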

 

3. Create and edit the configuration file. Node 1 and node 2 use an identical configuration.

[root@app1 ~]# vi /etc/drbd.d/global_common.conf
global {
        usage-count no;
}
common {
        protocol C;
        disk {
                on-io-error detach;
                no-disk-flushes;
                no-md-flushes; 
        }
        net {
                sndbuf-size 512k;
                max-buffers     8000;
                unplug-watermark   1024;
                max-epoch-size  8000;
                cram-hmac-alg "sha1";
                shared-secret "hdhwXes23sYEhart8t";
                after-sb-0pri disconnect;
                after-sb-1pri disconnect;
                after-sb-2pri disconnect;
                rr-conflict disconnect;
        }
        syncer {
                rate 300M;
                al-extents 517;
        }
}

resource data {
      on app1 {
               device    /dev/drbd0;
               disk      /dev/sdb1;
               address   10.10.10.24:7788;
               meta-disk internal;
      }
      on app2 {
               device     /dev/drbd0;
               disk       /dev/sdb1;
               address    10.10.10.25:7788;
               meta-disk internal;
      }
}

 

4. Initialize the resource

Run on both app1 and app2:

# drbdadm create-md data

initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.

 

5. Start the service

Run on both app1 and app2 (alternatively, use drbdadm up data):

# service drbd start

Starting DRBD resources: [
     create res: data
   prepare disk: data
    adjust disk: data
     adjust net: data
]
..........
#

 

6. Check the status; both nodes should be in the Secondary state.

cat /proc/drbd       # or simply use the drbd-overview command

Node 1:
[root@app1 drbd.d]# cat /proc/drbd 
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by root@node1.magedu.com, 2015-01-02 12:06:20
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:20964116


Node 2:
[root@app2 drbd.d]# cat /proc/drbd 
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by root@node1.magedu.com, 2015-01-02 12:06:20
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:20964116

 

7. Promote one node to primary

One of the nodes needs to be set as Primary. On the node to be promoted, either of the following two commands will work:
drbdadm -- --overwrite-data-of-peer primary data  
drbdadm primary --force data


Check the synchronization status on the primary node:
[root@app1 drbd.d]# cat /proc/drbd 
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by root@node1.magedu.com, 2015-01-02 12:06:20
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:1229428 nr:0 dw:0 dr:1230100 al:0 bm:0 lo:0 pe:2 ua:0 ap:0 ep:1 wo:d oos:19735828
        [>...................] sync'ed:  5.9% (19272/20472)M
        finish: 0:27:58 speed: 11,744 (11,808) K/sec
[root@app1 drbd.d]#
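
Formatting can proceed while the background synchronization is still running, but if you prefer to wait for the initial sync to finish first, one way to watch it (optional):

# watch -n2 cat /proc/drbd     # wait until ds: shows UpToDate/UpToDate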

 

8. Create the filesystem

The filesystem can only be mounted on the Primary node, and the DRBD device can only be formatted after a primary has been set. Format it and test a manual mount:

[root@app1 ~]# mkfs.ext4 /dev/drbd0
[root@app1 ~]# mount /dev/drbd0 /data
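
If the /data mount point does not exist yet, create it first with mkdir -p /data (app2 gets its /data directory later, in step 10 of the next part). A quick verification:

[root@app1 ~]# df -h /data     # should show /dev/drbd0 mounted on /data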

 

3、Installing and configuring MySQL 5.6.x

1. Download the MySQL binary distribution on app1 and app2 and install it

wget http://mirrors.sohu.com/mysql/MySQL-5.6/mysql-5.6.29-linux-glibc2.5-x86_64.tar.gz
tar zxvf mysql-5.6.29-linux-glibc2.5-x86_64.tar.gz  -C /usr/local
cd /usr/local/
ln -sv mysql-5.6.29-linux-glibc2.5-x86_64 mysql
groupadd mysql
useradd -g mysql -M -s /sbin/nologin mysql
chown -R mysql:mysql /usr/local/mysql

 

2. Initialize the database on app1 (the data directory lives on the drbd0-synchronized volume)

/usr/local/mysql/scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/data/mysql3306
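
Before running mysql_install_db, the data directory must exist on the mounted DRBD volume and be writable by the mysql user; a minimal sketch, assuming /dev/drbd0 is currently mounted on /data on app1:

[root@app1 ~]# mkdir -p /data/mysql3306
[root@app1 ~]# chown -R mysql:mysql /data/mysql3306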

 

3. Create the configuration file and init script on app1 and app2

cd /usr/local/mysql
cp support-files/my-default.cnf /etc/my.cnf
cp support-files/mysql.server  /etc/rc.d/init.d/mysqld
chkconfig --add mysqld

 

4. On app1 and app2, create symlinks for the MySQL commands. Alternatively add the binaries to PATH, in which case this step can be skipped.

ln -sf /usr/local/mysql/bin/mysql /usr/bin/mysql
ln -sf /usr/local/mysql/bin/mysqldump /usr/bin/mysqldump
ln -sf /usr/local/mysql/bin/myisamchk /usr/bin/myisamchk
ln -sf /usr/local/mysql/bin/mysqld_safe /usr/bin/mysqld_safe

Or handle it by adding the path to the environment:

# vi /etc/profile
export PATH=/usr/local/mysql/bin/:$PATH
# source /etc/profile

ln -sv /usr/local/mysql/include  /usr/include/mysql
echo '/usr/local/mysql/lib' > /etc/ld.so.conf.d/mysql.conf
ldconfig

 

5. MySQL configuration file on app1 (keep the file identical on both nodes)

vi /etc/my.cnf

[client]
port        = 3306
default-character-set  = utf8
socket      = /tmp/mysql.sock
[mysqld]
character-set-server   = utf8
collation-server       = utf8_general_ci
port                   = 3306
socket                 = /tmp/mysql.sock
basedir                = /usr/local/mysql
datadir                = /data/mysql3306
skip-external-locking
key_buffer_size        = 16M
max_allowed_packet     = 1M
table_open_cache       = 64
sort_buffer_size       = 512K
net_buffer_length      = 8K
read_buffer_size       = 256K
read_rnd_buffer_size    = 512K
myisam_sort_buffer_size = 8M
log-bin                 = mysql-bin
binlog_format           = mixed
server-id               = 1
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash

[myisamchk]
key_buffer_size = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

 

6. Start MySQL. Do not configure it to start at boot.

service mysqld start

 

7. Set the administrator password and test it

# /usr/local/mysql/bin/mysqladmin -u root password 'admin'   # set the administrator password
# /usr/local/mysql/bin/mysql -u root -p   # test logging in with the password
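
By default root can only log in locally. If clients are supposed to reach MySQL through the VIP (192.168.0.26), you will probably also want an account that is allowed to connect over the network; a sketch using a hypothetical appuser account and appdb database (adjust names and password):

mysql> GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'192.168.0.%' IDENTIFIED BY 'appuser_password';
mysql> FLUSH PRIVILEGES;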

 

8. Copy the configuration file to app2

# scp /etc/my.cnf app2:/etc/

 

9. Stop MySQL on app1 and disable it at boot

[root@node1 ~]# service mysqld stop
[root@node1 data]# chkconfig mysqld off

 

10. Promote DRBD on app2 to primary and mount it

(1) Unmount /dev/drbd0 on app1

# umount /data/
# drbdadm secondary data 
# drbd-overview  
  0:data/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----

(2) After promoting DRBD on app2, test that MySQL starts.

# drbdadm primary data
# drbd-overview  
  0:data/0  Connected Primary/Secondary UpToDate/UpToDate C r-----  

# mkdir /data 
# mount /dev/drbd0 /data/ 
# service mysqld start
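
Once the manual test on app2 succeeds, it is advisable to stop MySQL again and leave the DRBD device unmounted before handing control over to the cluster (a suggested cleanup, not part of the original steps):

[root@app2 ~]# service mysqld stop
[root@app2 ~]# chkconfig mysqld off
[root@app2 ~]# umount /data

Pacemaker will decide which node becomes the DRBD master once the cluster resources are defined, so app2 can stay primary for now or be demoted with drbdadm secondary data.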

 

4、Installing corosync and pacemaker

1. Install corosync and pacemaker on app1 and app2

# yum install corosync pacemaker -y

2. Install crmsh on app1 and app2

Since 6.4, RHEL no longer ships the crmsh command-line cluster configuration tool, so crmsh has to be installed separately in order to manage cluster resources.
The crmsh RPMs can be downloaded from: http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/

[root@app1 crm]# yum install python-dateutil -y  
Note: python-pssh and pssh depend on the python-dateutil package

[root@app1 crm]# rpm -ivh pssh-2.3.1-4.2.x86_64.rpm python-pssh-2.3.1-4.2.x86_64.rpm crmsh-2.1-1.6.x86_64.rpm
warning: pssh-2.3.1-4.2.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 17280ddf: NOKEY
Preparing...                ########################################### [100%]
   1:python-pssh            ########################################### [ 33%]
   2:pssh                   ########################################### [ 67%]
   3:crmsh                  ########################################### [100%]
[root@app1 crm]#
[root@app1 crm]#

 

3. Create the corosync configuration file; identical on app1 and app2.

cd /etc/corosync/
cp corosync.conf.example corosync.conf

vi /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {   
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 10.10.10.0
                mcastaddr: 226.94.8.8
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: no
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

service {
        ver:  1                  
        name: pacemaker       
}
aisexec {
        user: root
        group:  root
}

 

4. Create the authentication key (the same key is used on app1 and app2)

Communication between the nodes requires authentication, which needs a shared key. After generation it is automatically saved in the current directory as authkey, with permissions 400.

[root@app1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 128).
Press keys on your keyboard to generate entropy (bits = 192).
Press keys on your keyboard to generate entropy (bits = 256).
Press keys on your keyboard to generate entropy (bits = 320).
Press keys on your keyboard to generate entropy (bits = 384).
Press keys on your keyboard to generate entropy (bits = 448).
Press keys on your keyboard to generate entropy (bits = 512).
Press keys on your keyboard to generate entropy (bits = 576).
Press keys on your keyboard to generate entropy (bits = 640).
Press keys on your keyboard to generate entropy (bits = 704).
Press keys on your keyboard to generate entropy (bits = 768).
Press keys on your keyboard to generate entropy (bits = 832).
Press keys on your keyboard to generate entropy (bits = 896).
Press keys on your keyboard to generate entropy (bits = 960).
Writing corosync key to /etc/corosync/authkey.
[root@app1 corosync]#

 

5. Copy the configuration files just created (corosync.conf and authkey) to app2. No per-node changes are needed, because bindnetaddr in corosync.conf is a network address (10.10.10.0) and is identical on both nodes.

# scp authkey corosync.conf  root@app2:/etc/corosync/  

 

6. Start the corosync and pacemaker services and verify that they come up correctly

Node 1:  
[root@app1 ~]# service corosync start   
Starting Corosync Cluster Engine (corosync):               [OK]

[root@app1 ~]# service pacemaker start
Starting Pacemaker Cluster Manager                         [OK]

Enable the services at boot:
chkconfig corosync on
chkconfig pacemaker on


Node 2:  
[root@app2 ~]# service corosync start   
Starting Corosync Cluster Engine (corosync):               [OK]

[root@app2 ~]# service pacemaker start
Starting Pacemaker Cluster Manager                         [OK]

Enable the services at boot:
chkconfig corosync on
chkconfig pacemaker on

 

7. Verify the corosync, pacemaker, and crmsh installation

(1) Check the node status

[root@app1 ~]# crm status
Last updated: Tue Jan 26 13:13:19 2016
Last change: Mon Jan 25 17:46:04 2016 via cibadmin on app1
Stack: classic openais (with plugin)
Current DC: app1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured

Online: [ app1 app2 ]

 

(2) Check the listening ports

# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name  
udp        0      0 10.10.10.25:5404            0.0.0.0:*                               2828/corosync      
udp        0      0 10.10.10.25:5405            0.0.0.0:*                               2828/corosync      
udp        0      0 226.94.8.8:5405             0.0.0.0:*                               2828/corosync      

 

(3) Check the log

[root@app1 corosync]# tail -f  /var/log/cluster/corosync.log

Key messages to look for in the log:
Jan 23 16:09:30 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Jan 23 16:09:30 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
....
Jan 23 16:09:30 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jan 23 16:09:31 corosync [TOTEM ] The network interface [10.10.10.24] is now up.
Jan 23 16:09:31 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan 23 16:09:48 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
[root@app1 corosync]#

 

5、Configuring pacemaker

1. Basic configuration

corosync enables STONITH by default, but the cluster we are building has no STONITH devices, so STONITH must be disabled in the cluster's global properties.

# crm
crm(live)# configure                                      ## enter configuration mode
crm(live)configure# property stonith-enabled=false        ## disable STONITH
crm(live)configure# property no-quorum-policy=ignore      ## what to do when the cluster has no quorum
crm(live)configure# rsc_defaults resource-stickiness=100  ## default resource stickiness (resources prefer to stay on their current node)
crm(live)configure# verify                                ## validate
crm(live)configure# commit                                ## commit only after validation passes
crm(live)configure# show                                  ## show the current configuration
node app1
node app2
property cib-bootstrap-options: \
        dc-version=1.1.11-97629de \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=2 \
        stonith-enabled=false \
        default-resource-stickiness=100 \
        no-quorum-policy=ignore
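
The configuration can also be sanity-checked from the shell with crm_verify (optional; it reads the live CIB and reports problems):

# crm_verify -L -V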

 

2. Resource configuration

# Usage tip: if verify reports errors you can quit without committing, or use edit to fix the configuration until it validates.
# crm configure edit    can be used to edit the configuration directly

 

(1) Add the VIP

Do not commit each resource individually; commit once after all resources and constraints have been defined.
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.0.26 cidr_netmask=24 nic=eth0:1 op monitor interval=30s timeout=20s on-fail=restart
crm(live)configure# verify 

 

(2) Add the DRBD resource

crm(live)configure# primitive mydrbd ocf:linbit:drbd params drbd_resource=data op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30 op start timeout=240 op stop timeout=100
crm(live)configure# verify

Make drbd a master/slave resource:
crm(live)configure# ms ms_mydrbd mydrbd meta master-max=1 master-node-max=1 clone-max=2  clone-node-max=1 notify=true
crm(live)configure# verify

 

(3) Filesystem mount resource:

crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/data fstype=ext4 op start timeout=60s op stop timeout=60s op monitor interval=30s timeout=40s on-fail=restart
crm(live)configure# verify

 

(4) Create the constraints. This is the critical part: the VIP, the DRBD primary, and the filesystem mount must all stay on the same node, and both the VIP and the mount depend on the DRBD master.

Create a group resource containing vip and mystore.
crm(live)configure# group g_service vip mystore
crm(live)configure# verify

Colocation constraint: the group runs where the DRBD master is
crm(live)configure# colocation c_g_service inf: g_service ms_mydrbd:Master

Colocation constraint: the mystore mount runs where the DRBD master is

crm(live)configure# colocation mystore_with_drbd_master inf: mystore ms_mydrbd:Master

Ordering constraint: promote DRBD first, then start the g_service group

crm(live)configure# order o_g_service inf: ms_mydrbd:promote g_service:start
crm(live)configure# verify
crm(live)configure# commit

 

(5) Add the mysql resource

crm(live)# configure  
crm(live)configure# primitive mysqld lsb:mysqld  op monitor interval=20 timeout=20 on-fail=restart
 
Colocate the mysqld service with the g_service group
crm(live)configure# colocation mysqld_with_g_service inf: mysqld g_service  
crm(live)configure# verify   
crm(live)configure# show   

Ordering: start mysqld only after the g_service group has started
crm(live)configure# order mysqld_after_g_service mandatory: g_service mysqld 
crm(live)configure# verify   
crm(live)configure# show   
crm(live)configure# commit
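
After the final commit, crm configure show should list something close to the following (an approximate summary of the resources and constraints defined above, not verbatim output):

primitive mydrbd ocf:linbit:drbd params drbd_resource=data op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30 op start timeout=240 op stop timeout=100
primitive mysqld lsb:mysqld op monitor interval=20 timeout=20 on-fail=restart
primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/data fstype=ext4 op start timeout=60s op stop timeout=60s op monitor interval=30s timeout=40s on-fail=restart
primitive vip ocf:heartbeat:IPaddr params ip=192.168.0.26 cidr_netmask=24 nic=eth0:1 op monitor interval=30s timeout=20s on-fail=restart
ms ms_mydrbd mydrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
group g_service vip mystore
colocation c_g_service inf: g_service ms_mydrbd:Master
colocation mysqld_with_g_service inf: mysqld g_service
colocation mystore_with_drbd_master inf: mystore ms_mydrbd:Master
order mysqld_after_g_service mandatory: g_service mysqld
order o_g_service inf: ms_mydrbd:promote g_service:start

(plus the global properties and resource defaults set in step 1)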

 

3. Check the status after the configuration is complete

[root@app1 ~]# crm status
Last updated: Fri Apr 29 14:59:14 2016
Last change: Fri Apr 29 14:59:05 2016 via cibadmin on app1
Stack: classic openais (with plugin)
Current DC: app1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured

Online: [ app1 app2 ]

Master/Slave Set: ms_mydrbd [mydrbd]
     Masters: [ app1 ]
     Slaves: [ app2 ]
mysqld (lsb:mysqld):   Started app1
Resource Group: g_service
     vip        (ocf::heartbeat:IPaddr):        Started app1
     mystore    (ocf::heartbeat:Filesystem):    Started app1

[root@app1 ~]#

 

4. Simulate a failover

(1) Put app1 into standby

[root@app1 mysql]# crm node standby app1

(2) Check the status again from app1: all resources migrated successfully.

[root@app1 ~]# crm status
Last updated: Fri Apr 29 15:12:01 2016
Last change: Fri Apr 29 15:01:49 2016 via crm_attribute on app1
Stack: classic openais (with plugin)
Current DC: app1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
5 Resources configured

Node app1: standby
Online: [ app2 ]

Master/Slave Set: ms_mydrbd [mydrbd]
     Masters: [ app2 ]
     Stopped: [ app1 ]
mysqld (lsb:mysqld):   Started app2
Resource Group: g_service
     vip        (ocf::heartbeat:IPaddr):        Started app2
     mystore    (ocf::heartbeat:Filesystem):    Started app2
[root@app1 ~]#

 

(3) MySQL login can now be tested on app2:

[root@app2 ~]# mysql -uroot -padmin
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.29-log MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> \q
Bye

 

(4) Check the DRBD mount on app2

[root@app2 ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_app2-lv_root   36G  5.0G   29G  16% /
tmpfs                       1004M   29M  976M   3% /dev/shm
/dev/sda1                    485M   39M  421M   9% /boot
/dev/drbd0                   5.0G  249M  4.5G   6% /data
[root@app2 ~]#
[root@app2 ~]#

# Note: failover tests sometimes leave warning messages that obscure the real status. They can be cleared as shown below; clean up whichever resource reported the failure, then run crm status again and the output should look normal.
Failed actions:
mystore_stop_0 on app1 'unknown error' (1): call=97, status=complete, last-rc-change='Tue Jan 26 14:39:21 2016', queued=6390ms, exec=0ms

[root@app1 ~]# crm resource cleanup mystore
Cleaning up mystore on app1
Cleaning up mystore on app2
Waiting for 2 replies from the CRMd.. OK
[root@app1 ~]#
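
When the standby test is finished, remember to bring app1 back online, otherwise it stays in standby and cannot take over again (a follow-up step not shown above):

[root@app1 ~]# crm node online app1
[root@app1 ~]# crm status    # app1 should show as Online again; resources stay on app2 because of resource-stickiness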

 

5. Summary

The biggest problem during failover is DRBD synchronization. After all, the data lives on disk, and if the two sides get out of sync the data becomes inconsistent. A standby test does not truly simulate a DRBD failure: in a real failover, after DRBD has been stopped on the failed node and the peer has taken over as primary, the stopped node can come back in an Unknown state, and its data then has to be re-synchronized before it can rejoin.
