***********************************************
1. Prepare the Lab Environment
2. Install Corosync + Crmsh
3. Install and Configure DRBD
4. Install the MySQL Server
5. Define Resources with corosync + crm
6. Test Whether MySQL Is Highly Available
************************************************
1. Prepare the Lab Environment
1.1 Server IP address plan
Drbd1: 172.16.10.3
Drbd2: 172.16.10.4
1.2 Server operating system
Drbd1: CentOS 6.4 x86_64
Drbd2: CentOS 6.4 x86_64
1.3 Set the hostnames and the hosts file
####drbd1 server############
sed -i 's@\(HOSTNAME=\).*@\1drbd1@g' /etc/sysconfig/network
hostname drbd1
[root@drbd1 ~]# echo "172.16.10.3 drbd1" >> /etc/hosts
[root@drbd1 ~]# echo "172.16.10.4 drbd2" >> /etc/hosts
[root@drbd1 ~]# ssh-keygen -t rsa
[root@drbd1 ~]# ssh-copy-id -i .ssh/id_rsa.pub drbd2
[root@drbd1 ~]# scp /etc/hosts drbd2:/etc/

####drbd2 server############
sed -i 's@\(HOSTNAME=\).*@\1drbd2@g' /etc/sysconfig/network
hostname drbd2
[root@drbd2 ~]# ssh-keygen -t rsa
[root@drbd2 ~]# ssh-copy-id -i .ssh/id_rsa.pub drbd1
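Before moving on, it is worth confirming that both name-to-IP mappings really ended up in the hosts file. A minimal sanity-check sketch (the `check_host` helper is illustrative, not part of the original steps); it reads hosts-format lines on stdin so it can be exercised against sample data:

```shell
# Check that a hostname maps to the expected IP in /etc/hosts-style input.
check_host() {
    # $1 = hostname, $2 = expected IP; reads hosts-format lines on stdin
    awk -v n="$1" -v ip="$2" '$1 == ip && $2 == n { found = 1 }
                              END { print (found ? "ok" : "missing") }'
}

# On the real hosts you would pipe /etc/hosts in; sample data is used here.
printf '172.16.10.3 drbd1\n172.16.10.4 drbd2\n' | check_host drbd1 172.16.10.3   # -> ok
printf '172.16.10.3 drbd1\n172.16.10.4 drbd2\n' | check_host drbd2 172.16.10.5   # -> missing
```

On the cluster nodes this would be run as `check_host drbd1 172.16.10.3 < /etc/hosts` on each machine.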
1.4 DRBD disk partition plan
On both drbd1 and drbd2, create a 5G disk partition (sda3). This step should be easy for everyone, so it is omitted here. Note that on CentOS 6 you must reboot the system before the kernel can read the new partition table.
2. Install Corosync + Crmsh
2.1 Install Corosync
###########drbd1#############
[root@drbd1 ~]# yum install corosync -y
###########drbd2#############
[root@drbd2 ~]# yum install corosync -y
2.2 Configure Corosync
[root@drbd1 ~]# cd /etc/corosync/
[root@drbd1 corosync]# cp corosync.conf.example corosync.conf
[root@drbd1 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
    version: 2
    secauth: on               # enable authentication
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 172.16.0.0    # change to this host's network address
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

amf {
    mode: disabled
}

#### add the following lines
service {
    ver: 0
    name: pacemaker
    # use_mgmtd: yes
}

aisexec {
    user: root
    group: root
}
2.3 Generate the authentication key used for inter-node communication, then copy the key and the configuration file to drbd2
[root@drbd1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
[root@drbd1 corosync]# scp corosync.conf authkey drbd2:/etc/corosync/
2.4 Create the corosync log directory on both nodes
[root@drbd1 corosync]# mkdir -pv /var/log/cluster
[root@drbd1 corosync]# ssh drbd2 "mkdir -pv /var/log/cluster"
2.5 Install Crmsh
[root@drbd1 ~]# wget ftp://195.220.108.108/linux/opensuse/factory/repo/oss/suse/x86_64/crmsh-1.2.6-0.rc2.1.1.x86_64.rpm
[root@drbd1 ~]# wget ftp://195.220.108.108/linux/opensuse/factory/repo/oss/suse/noarch/pssh-2.3.1-6.1.noarch.rpm
[root@drbd1 ~]# yum localinstall --nogpgcheck crmsh*.rpm pssh*.rpm -y
[root@drbd1 ~]# scp crmsh-1.2.6-0.rc2.1.1.x86_64.rpm pssh-2.3.1-6.1.noarch.rpm drbd2:/root/
[root@drbd1 ~]# ssh drbd2 "yum localinstall --nogpgcheck crmsh*.rpm pssh*.rpm -y"
2.6 Start Corosync
[root@drbd1 ~]# /etc/init.d/corosync start    # if it starts without errors, start node 2
[root@drbd1 ~]# ssh drbd2 "/etc/init.d/corosync start"
2.7 Check the cluster node status
[root@drbd1 ~]# crm status
Last updated: Thu Sep 19 11:06:54 2013
Last change: Thu Sep 19 11:03:21 2013 via crmd on drbd1
Stack: classic openais (with plugin)
Current DC: drbd2 - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
0 Resources configured.

Online: [ drbd1 drbd2 ]    # drbd1 and drbd2 are both online
[root@drbd1 ~]#
DRBD consists of two parts: a kernel module and user-space management tools. The DRBD kernel module has been merged into the mainline Linux kernel since 2.6.33, so if your kernel is newer than that you only need to install the management tools; otherwise you must install both the kernel-module package and the management tools, and the version numbers of the two must match.
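The decision above can be sketched as a small version comparison (the `needs_kmod` helper is illustrative, not a real DRBD tool): a kernel older than 2.6.33 needs the separate module package, a newer one does not.

```shell
# Decide whether a separate drbd kernel-module package is required
# for a given kernel version (x.y.z form).
needs_kmod() {
    # prints "yes" if kernel version $1 is older than 2.6.33, else "no"
    oldest=$(printf '%s\n%s\n' "$1" "2.6.33" | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)
    if [ "$oldest" = "$1" ] && [ "$1" != "2.6.33" ]; then
        echo yes
    else
        echo no
    fi
}

needs_kmod "2.6.32"   # -> yes: the stock CentOS 6 kernel still needs drbd-kmdl
needs_kmod "3.10.0"   # -> no: the module already ships with the kernel
```

On a live host you would feed it the leading part of `uname -r`.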
The DRBD versions currently available for CentOS 5 are 8.0, 8.2 and 8.3; the corresponding rpm packages are named drbd, drbd82 and drbd83, with kernel-module packages kmod-drbd, kmod-drbd82 and kmod-drbd83. For CentOS 6 the available version is 8.4, packaged as drbd and drbd-kmdl. When choosing packages, keep two things in mind: the drbd and drbd-kmdl versions must match each other, and the drbd-kmdl version must match the running kernel. Our platform is x86_64 running CentOS 6.4, so we need both the kernel module and the management tools. We use the latest 8.4 release here (drbd-8.4.3-33.el6.x86_64.rpm and drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm); the download location is ftp://rpmfind.net/linux/atrpms/, so fetch what you need from there.
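The second caveat, that drbd-kmdl must match the running kernel, can be checked mechanically. A hedged sketch (the `kmdl_matches_kernel` helper is illustrative): it simply tests whether the kernel version string is embedded in the rpm file name.

```shell
# Check that a drbd-kmdl package name embeds the running kernel's version.
kmdl_matches_kernel() {
    # $1 = drbd-kmdl rpm file name
    # $2 = kernel version (output of `uname -r` without the trailing arch)
    case "$1" in
        *"$2"*) echo match ;;
        *)      echo mismatch ;;
    esac
}

kmdl_matches_kernel "drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm" "2.6.32-358.el6"   # -> match
kmdl_matches_kernel "drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm" "2.6.32-431.el6"   # -> mismatch
```

A mismatch here is exactly the situation that leaves the drbd module unloadable after an unrelated kernel update.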
3. Install and Configure DRBD
3.1 Install the DRBD packages
[root@drbd1 ~]# wget ftp://195.220.108.108/linux/atrpms/el6-x86_64/atrpms/stable/drbd-8.4.3-33.el6.x86_64.rpm
[root@drbd1 ~]# wget ftp://195.220.108.108/linux/atrpms/el6-x86_64/atrpms/stable/drbd-kmdl-2.6.32-358.6.2.el6-8.4.3-33.el6.x86_64.rpm
[root@drbd1 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
warning: drbd-8.4.3-33.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY
Preparing...                ########################################### [100%]
   1:drbd-kmdl-2.6.32-358.el########################################### [ 50%]
   2:drbd                   ########################################### [100%]
[root@drbd1 ~]# scp drbd-* drbd2:/root/
drbd-8.4.3-33.el6.x86_64.rpm                       100%  283KB 283.3KB/s   00:00
drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm   100%  145KB 145.2KB/s   00:00
[root@drbd1 ~]# ssh drbd2 "rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm"
warning: drbd-8.4.3-33.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 66534c2b: NOKEY
Preparing...                ##################################################
drbd-kmdl-2.6.32-358.el6    ##################################################
drbd                        ##################################################
[root@drbd1 ~]#
3.2 Edit the main DRBD configuration file
[root@drbd1 ~]# vim /etc/drbd.d/global_common.conf
global {
    usage-count no;
    # minor-count dialog-refresh disable-ip-verification
}

common {
    protocol C;

    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }

    startup {
        #wfc-timeout 120;
        #degr-wfc-timeout 120;
    }

    disk {
        on-io-error detach;
        #fencing resource-only;
    }

    net {
        cram-hmac-alg "sha1";
        shared-secret "mydrbdlab";
    }

    syncer {
        rate 1000M;
    }
}
3.3 Define a resource named drbd
[root@drbd1 ~]# vim /etc/drbd.d/web.res
resource drbd {
    on drbd1 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   172.16.10.3:7789;
        meta-disk internal;
    }
    on drbd2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   172.16.10.4:7789;
        meta-disk internal;
    }
}
3.4 Copy the configuration files to drbd2
[root@drbd1 ~]# cd /etc/drbd.d/
[root@drbd1 drbd.d]# scp global_common.conf web.res drbd2:/etc/drbd.d/
global_common.conf              100% 1401     1.4KB/s   00:00
web.res                         100%  266     0.3KB/s   00:00
[root@drbd1 drbd.d]#
3.5 Initialize the defined resource on both nodes and start the service
[root@drbd1 ~]# drbdadm create-md drbd
NOT initializing bitmap
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
[root@drbd1 ~]# ssh drbd2 "drbdadm create-md drbd"
NOT initializing bitmap
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
[root@drbd1 ~]# /etc/init.d/drbd start
[root@drbd2 ~]# /etc/init.d/drbd start
3.6 Check the DRBD status
[root@drbd1 ~]# drbd-overview
  0:drbd/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
[root@drbd1 ~]#
The output above shows that both nodes are currently in the Secondary role, so next we need to promote one of them to Primary.
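The roles can also be pulled out of that output programmatically, which is handy in monitoring scripts. A small sketch (the sample line is hard-coded from the transcript above; on a live node you would capture `drbd-overview` instead): the third whitespace-separated field holds `local/peer` roles.

```shell
# Extract the local and peer DRBD roles from a drbd-overview line.
sample='0:drbd/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----'

roles=$(echo "$sample" | awk '{print $3}')   # e.g. "Secondary/Secondary"
local_role=${roles%/*}                       # text before the slash
peer_role=${roles#*/}                        # text after the slash

echo "local=$local_role peer=$peer_role"
```

Against the sample line this prints `local=Secondary peer=Secondary`, matching the state described above.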
3.7 Promote one node to Primary
[root@drbd1 ~]# drbdadm primary --force drbd
[root@drbd1 ~]# drbd-overview    # the initial sync has started
  0:drbd/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
    [>....................] sync'ed:  2.9% (4984/5128)M
[root@drbd1 ~]# drbd-overview    # check again once the sync has finished
  0:drbd/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@drbd1 ~]#
3.8 Create the mount point and format the device
[root@drbd1 ~]# mkdir -pv /mydata
[root@drbd1 ~]# mkfs.ext4 /dev/drbd0
3.9 Mount the filesystem and check the status
[root@drbd1 ~]# mount /dev/drbd0 /mydata/
[root@drbd1 ~]# drbd-overview
  0:drbd/0  Connected Primary/Secondary UpToDate/UpToDate C r----- /mydata ext4 5.0G 139M 4.6G 3%
[root@drbd1 ~]#
4. Install the MySQL Server
4.1 Install on drbd1 (the DRBD primary)
[root@drbd1 ~]# useradd -r -u 306 mysql
[root@drbd1 ~]# tar xf mysql-5.5.33-linux2.6-x86_64.tar.gz -C /usr/local/
[root@drbd1 ~]# cd /usr/local/
[root@drbd1 local]# ln -sv mysql-5.5.33-linux2.6-x86_64/ mysql
`mysql' -> `mysql-5.5.33-linux2.6-x86_64/'
[root@drbd1 local]# cd mysql
[root@drbd1 mysql]# chown root.mysql -R *
[root@drbd1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
[root@drbd1 mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@drbd1 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@drbd1 mysql]# vim /etc/my.cnf
thread_concurrency = 8
datadir = /mydata/data    # point MySQL at the data directory on the DRBD device
[root@drbd1 mysql]# service mysqld start
Note: before installing MySQL on drbd2, stop the service, unmount the filesystem, and demote the DRBD resource on drbd1:
[root@drbd1 mysql]# service mysqld stop
[root@drbd1 mysql]# umount /mydata/
[root@drbd1 mysql]# drbdadm secondary drbd    # demote the resource
4.2 Install on drbd2
[root@drbd2 ~]# useradd -r -u 306 mysql
[root@drbd2 ~]# drbdadm primary drbd
[root@drbd2 ~]# mkdir -pv /mydata/
[root@drbd2 ~]# chown -R mysql.mysql /mydata/
[root@drbd2 ~]# mount /dev/drbd0 /mydata/
[root@drbd2 ~]# tar xf mysql-5.5.33-linux2.6-x86_64.tar.gz -C /usr/local/
[root@drbd2 ~]# cd /usr/local/
[root@drbd2 local]# ln -sv mysql-5.5.33-linux2.6-x86_64/ mysql
[root@drbd2 local]# cd mysql
[root@drbd2 mysql]# chown root:mysql * -R
[root@drbd2 mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@drbd2 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
# note: no mysql_install_db here; the data directory on the DRBD device
# was already initialized on drbd1
4.3 Verify that MySQL can start normally on drbd2
[root@drbd2 ~]# mount /dev/drbd0 /mydata/
[root@drbd2 ~]# service mysqld start
If it starts correctly, continue with the following steps:
[root@drbd2 ~]# service mysqld stop
[root@drbd2 ~]# umount /mydata
[root@drbd2 ~]# chkconfig mysqld off
[root@drbd2 ~]# ssh drbd1 "chkconfig mysqld off"
5. Define Resources with corosync + crm
[root@drbd1 ~]# crm configure
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# commit
crm(live)configure# primitive webdrbd ocf:linbit:drbd params drbd_resource=drbd op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100
crm(live)configure# verify
crm(live)configure# master ms_webdrbd webdrbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# exit
Notes:
The `primitive webdrbd` command defines the DRBD resource and sets up monitoring for both the Master and Slave roles.
The `master ms_webdrbd` command defines the master/slave (multi-state) resource on top of it, again with monitoring.
5.1 Check the resource status
[root@drbd1 ~]# crm status
Last updated: Thu Sep 19 15:49:41 2013
Last change: Thu Sep 19 15:49:31 2013 via cibadmin on drbd1
Stack: classic openais (with plugin)
Current DC: drbd1 - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
2 Resources configured.

Online: [ drbd1 drbd2 ]

 Master/Slave Set: ms_webdrbd [webdrbd]
     Masters: [ drbd1 ]
     Slaves: [ drbd2 ]
If you see output like the above, everything is working; if not, re-check the previous steps. At this point DRBD can already switch between master and slave automatically.
5.1.1 Test whether the nodes switch over correctly
[root@drbd1 ~]# crm node standby    # put drbd1 into standby
[root@drbd1 ~]# crm status
Last updated: Thu Sep 19 15:54:19 2013
Last change: Thu Sep 19 15:54:13 2013 via crm_attribute on drbd1
Stack: classic openais (with plugin)
Current DC: drbd1 - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
2 Resources configured.

Node drbd1: standby
Online: [ drbd2 ]

 Master/Slave Set: ms_webdrbd [webdrbd]
     Masters: [ drbd2 ]    # the master role has switched to drbd2
     Stopped: [ webdrbd:1 ]
[root@drbd1 ~]#
[root@drbd1 ~]# crm node online    # bring drbd1 back online
[root@drbd1 ~]# crm status
Last updated: Thu Sep 19 15:55:37 2013
Last change: Thu Sep 19 15:55:31 2013 via crm_attribute on drbd1
Stack: classic openais (with plugin)
Current DC: drbd1 - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
2 Resources configured.

Online: [ drbd1 drbd2 ]

 Master/Slave Set: ms_webdrbd [webdrbd]
     Masters: [ drbd2 ]
     Slaves: [ drbd1 ]    # drbd1 is back online as slave
[root@drbd1 ~]#
5.2 Define the MySQL resources and their constraints
[root@drbd1 ~]# crm configure
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype=ext4 op monitor interval=40 timeout=40 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip="172.16.10.8" op monitor interval=20 timeout=20 on-fail=restart
crm(live)configure# verify
crm(live)configure# primitive myserver lsb:mysqld op monitor interval=20 timeout=20 on-fail=restart
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# exit
Notes:
`mystore` is the filesystem resource used to mount the DRBD device, with monitoring configured.
`myip` is the virtual IP, with monitoring configured.
`myserver` is the MySQL service resource, with monitoring configured.
5.3 Check the resource status
[root@drbd1 ~]# crm status
Last updated: Thu Sep 19 16:05:58 2013
Last change: Thu Sep 19 16:05:19 2013 via cibadmin on drbd1
Stack: classic openais (with plugin)
Current DC: drbd1 - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.

Online: [ drbd1 drbd2 ]

 Master/Slave Set: ms_webdrbd [webdrbd]
     Masters: [ drbd2 ]
     Slaves: [ drbd1 ]
 myip       (ocf::heartbeat:IPaddr):        Started drbd1
 mystore    (ocf::heartbeat:Filesystem):    Started drbd2

Failed actions:
    myserver_start_0 (node=drbd1, call=149, rc=1, status=complete): unknown error
    mystore_start_0 (node=drbd1, call=141, rc=1, status=complete): unknown error
    myserver_start_0 (node=drbd2, call=143, rc=1, status=complete): unknown error
A few transient start failures showed up here; clear them with resource cleanup:
[root@drbd1 ~]# crm resource cleanup mystore
[root@drbd1 ~]# crm resource cleanup myserver
5.4 Check the resource status again; it is now normal
[root@drbd1 ~]# crm status
Last updated: Thu Sep 19 16:07:39 2013
Last change: Thu Sep 19 16:07:31 2013 via crmd on drbd2
Stack: classic openais (with plugin)
Current DC: drbd1 - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.

Online: [ drbd1 drbd2 ]

 Master/Slave Set: ms_webdrbd [webdrbd]
     Masters: [ drbd2 ]
     Slaves: [ drbd1 ]
 myip       (ocf::heartbeat:IPaddr):        Started drbd1
 myserver   (lsb:mysqld):                   Started drbd2
 mystore    (ocf::heartbeat:Filesystem):    Started drbd2
[root@drbd1 ~]#
The resources are not all running on the same node, which is clearly not what we want. We define colocation constraints to keep all the resources on one node:
crm(live)configure# colocation mystore_with_ms_webdrbd inf: mystore ms_webdrbd:Master
crm(live)configure# verify
crm(live)configure# colocation myserver_with_mystore_with_myip inf: myserver mystore myip
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure#
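As an aside, not part of the original setup: crmsh can express the same intent more compactly with a resource group, since group members are implicitly colocated and started in the listed order. A hedged sketch using the resource names defined earlier (`myservice` is a name invented here):

```
crm(live)configure# group myservice mystore myserver myip
crm(live)configure# colocation myservice_with_ms_webdrbd inf: myservice ms_webdrbd:Master
crm(live)configure# order ms_webdrbd_promote_before_myservice inf: ms_webdrbd:promote myservice:start
crm(live)configure# verify
crm(live)configure# commit
```

With a group, the separate pairwise colocation and ordering constraints between mystore, myserver and myip become unnecessary; only the relation to the DRBD master still has to be stated explicitly.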
5.5 Check again whether everything runs on the same node
[root@drbd1 ~]# crm status
Last updated: Thu Sep 19 16:18:20 2013
Last change: Thu Sep 19 16:17:52 2013 via cibadmin on drbd1
Stack: classic openais (with plugin)
Current DC: drbd1 - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.

Online: [ drbd1 drbd2 ]

 Master/Slave Set: ms_webdrbd [webdrbd]
     Masters: [ drbd2 ]
     Slaves: [ drbd1 ]
 myip       (ocf::heartbeat:IPaddr):        Started drbd2
 myserver   (lsb:mysqld):                   Started drbd2
 mystore    (ocf::heartbeat:Filesystem):    Started drbd2
[root@drbd1 ~]#
As shown above, all resources now run on the same node.
5.6 The resources still have no start order; define ordering constraints
crm(live)configure# order ms_webdrbd_before_mystore inf: ms_webdrbd:promote mystore:start
crm(live)configure# show xml
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show xml
crm(live)configure# order mystore_before_myserver inf: mystore:start myserver:start
crm(live)configure# verify
crm(live)configure# order myserver_before_myip inf: myserver:start myip:start
crm(live)configure# verify
crm(live)configure# commit
6. Test Whether MySQL Is Highly Available
6.1 Simulate a node failure and check the status
[root@drbd1 ~]# ssh drbd2 "crm node standby"    # put drbd2 into standby
[root@drbd1 ~]# crm status
Last updated: Thu Sep 19 16:29:10 2013
Last change: Thu Sep 19 16:28:50 2013 via crm_attribute on drbd2
Stack: classic openais (with plugin)
Current DC: drbd1 - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.

Node drbd2: standby
Online: [ drbd1 ]

 Master/Slave Set: ms_webdrbd [webdrbd]
     Masters: [ drbd1 ]    # the resources have migrated to drbd1
     Stopped: [ webdrbd:1 ]
 myip       (ocf::heartbeat:IPaddr):        Started drbd1
 myserver   (lsb:mysqld):                   Started drbd1
 mystore    (ocf::heartbeat:Filesystem):    Started drbd1
[root@drbd1 ~]#
6.2 Simulate a MySQL service failure
[root@drbd1 ~]# service mysqld stop
#### use watch to monitor the MySQL port ####
[root@drbd1 ~]# watch "netstat -anpt | grep 3306"
Every 2.0s: netstat -anpt | grep 3306        Thu Sep 19 17:17:45 2013

tcp    0    0 0.0.0.0:3306    0.0.0.0:*    LISTEN    36400/mysqld
After the MySQL service is stopped, the cluster automatically starts it again (thanks to the on-fail=restart monitor operation defined earlier), so MySQL is now highly available!
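The behavior just observed can be sketched as the monitor/on-fail=restart decision below. This is an illustrative model, not Pacemaker's actual code: `on_monitor` takes an LSB status exit code (0 means running, 3 means stopped) and prints the action the cluster would take.

```shell
# Illustrative model of a monitor op with on-fail=restart semantics.
on_monitor() {
    # $1 = LSB status exit code of the service (0 = running)
    if [ "$1" -eq 0 ]; then
        echo "ok"        # resource is healthy, nothing to do
    else
        echo "restart"   # monitor failed, so on-fail=restart kicks in
    fi
}

on_monitor 0   # -> ok: service running, cluster leaves it alone
on_monitor 3   # -> restart: LSB "service not running" triggers a restart
```

This is why `service mysqld stop` above was followed almost immediately by mysqld listening on 3306 again.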
Summary:
To make sure the experiment succeeds, remember to create the mysql user with the same UID on both servers, otherwise the MySQL service may fail to start; and define the resource constraints carefully, otherwise resources may fail to start when they migrate between nodes.
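The UID check recommended above can be automated. A quick sketch (the sample passwd lines stand in for each node's real /etc/passwd; on the cluster you would pipe in `grep '^mysql:' /etc/passwd` locally and via ssh from the peer):

```shell
# Compare the mysql UID taken from two passwd-format inputs.
uid_of() { awk -F: -v u="$1" '$1 == u { print $3 }'; }

# Sample data standing in for each node's /etc/passwd.
node1_uid=$(printf 'mysql:x:306:306::/home/mysql:/sbin/nologin\n' | uid_of mysql)
node2_uid=$(printf 'mysql:x:306:306::/home/mysql:/sbin/nologin\n' | uid_of mysql)

if [ "$node1_uid" = "$node2_uid" ]; then
    echo "UIDs match: $node1_uid"
else
    echo "UID mismatch: $node1_uid vs $node2_uid"
fi
```

A mismatch here means the files created under /mydata/data on one node will be owned by the wrong user on the other, which is exactly the startup failure the summary warns about.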
That is all for this post. If you run into problems, your feedback and suggestions are welcome; let's discuss high availability topics together!