Test environment: CentOS 6.7 x86_64
Servers:
Master node: dm1, IP addresses: 10.0.0.61 (eth0), 192.168.3.150 (eth1, heartbeat)
Slave node: dm2, IP addresses: 10.0.0.62 (eth0), 192.168.3.160 (eth1, heartbeat)
VIP: 192.168.0.180
Part 1: DRBD Environment Setup
1. Host mapping
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.61 dm1
192.168.3.150 dm1
10.0.0.62 dm2
192.168.3.160 dm2
2. Time synchronization
# ntpdate 10.0.0.254
18 May 19:49:39 ntpdate[16332]: adjust time server 10.0.0.254 offset -0.023216 sec
3. Add the ELRepo repository
Official site: http://elrepo.org/tiki/tiki-index.php
The steps below were tested against CentOS 6.7:
a) Import the public key:
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
b) Install the repository:
# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
4. Install DRBD
DRBD official site: http://drbd.linbit.com/docs/install/
# yum install drbd84 kmod-drbd84 -y
5. On both servers, use partition /dev/sdb1 as the DRBD network-mirrored partition
# fdisk /dev/sdb //create partition /dev/sdb1 (both nodes)
# mkfs.ext4 /dev/sdb1 //format the partition; run only on the primary (dm1)
6. Configure NFS (run on both nodes)
# yum -y install rpcbind nfs-utils
# mkdir /data
# vi /etc/exports
/data *(rw,no_root_squash,no_all_squash,sync)
# service rpcbind start
# chkconfig rpcbind on
# chkconfig nfs off //NFS does not need to be started or enabled at boot; Heartbeat will handle that later.
# netstat -tunlp|grep rpc //verify that rpcbind started successfully
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 16537/rpcbind
tcp 0 0 :::111 :::* LISTEN 16537/rpcbind
udp 0 0 0.0.0.0:600 0.0.0.0:* 16537/rpcbind
udp 0 0 0.0.0.0:111 0.0.0.0:* 16537/rpcbind
udp 0 0 :::600 :::* 16537/rpcbind
udp 0 0 :::111 :::* 16537/rpcbind
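The export entry above grants every host read-write access to /data. As a reference sketch, here is the same line with each option annotated (comment lines starting with # are legal in /etc/exports, though the real file only needs the one line):

```
# /etc/exports format: <directory> <client>(<options>)
#   rw             - clients may read and write
#   no_root_squash - remote root keeps root privileges on the share
#   no_all_squash  - ordinary users' UIDs/GIDs are preserved as well
#   sync           - writes are committed to disk before the server replies
/data *(rw,no_root_squash,no_all_squash,sync)
```

The sync option trades some write performance for safety, which matters here since the share sits on a replicated DRBD device.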
7. Configure DRBD
# modprobe drbd //load the drbd module into the kernel (both nodes)
# lsmod | grep drbd //verify the module loaded (both nodes)
drbd 372759 3
libcrc32c 1246 1 drbd
Note: the output above shows that the drbd module loaded successfully.
# cat /etc/drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
Note: the configuration files on the primary and secondary nodes must be completely identical.
# cd /etc/drbd.d/
# vi nfs.res
//the resource is named nfs
resource nfs {
protocol C;
net {
cram-hmac-alg sha1;
shared-secret "abcd";
}
syncer {rate 30M;}
on dm1 {
device /dev/drbd1;
disk /dev/sdb1;
address 10.0.0.61:7788;
meta-disk internal;
}
on dm2 {
device /dev/drbd1;
disk /dev/sdb1;
address 10.0.0.62:7788;
meta-disk internal;
}
}
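Since nfs.res (and the rest of /etc/drbd.d) must be byte-identical on both nodes, comparing checksums is a quick sanity check. A minimal sketch: it compares two local stand-in copies, and the ssh line in the comment shows how you might fetch the peer's checksum on a real cluster (hostnames follow this article's setup):

```shell
#!/bin/sh
# Compare md5 checksums of two copies of a DRBD resource file.
# On the cluster you would compare against the peer's copy, e.g.:
#   ssh dm2 md5sum /etc/drbd.d/nfs.res
cmp_md5() {
    a=$(md5sum "$1" | awk '{print $1}')
    b=$(md5sum "$2" | awk '{print $1}')
    [ "$a" = "$b" ] && echo "configs match" || echo "configs differ"
}

# demo with throwaway copies standing in for dm1's and dm2's files
printf 'resource nfs { protocol C; }\n' > /tmp/nfs.res.dm1
cp /tmp/nfs.res.dm1 /tmp/nfs.res.dm2
cmp_md5 /tmp/nfs.res.dm1 /tmp/nfs.res.dm2   # prints: configs match
```

A mismatched resource file is a common cause of the two nodes refusing to connect, so this check is worth running before starting DRBD.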
8. Start DRBD
# drbdadm create-md nfs //initialize metadata for the nfs resource configured above (both nodes)
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
Success
# service drbd start //(both nodes)
# chkconfig drbd on //(both nodes)
# cat /proc/drbd
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11
1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:6827344
# drbdsetup /dev/drbd1 primary //promote this node to primary (these steps run on the primary node only)
# drbdadm primary nfs
# drbdadm -- --overwrite-data-of-peer primary nfs
# cat /proc/drbd
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11
1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:1624064 nr:0 dw:0 dr:1624728 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5203280
[===>................] sync'ed: 23.9% (5080/6664)M
finish: 0:02:03 speed: 42,116 (36,088) K/sec
9. Using DRBD
# mkfs.ext4 /dev/drbd1
# mount /dev/drbd1 /data //the DRBD partition is now ready to use
Note: no operations are allowed on the DRBD device on the secondary node, not even read-only ones. All reads and writes happen on the primary; only when the primary goes down can the secondary be promoted to primary and take over reads and writes.
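Given that restriction, it is worth confirming which role the local node currently holds before touching the device. A small sketch that extracts the local role from a /proc/drbd status line (the sample string stands in for the live file; on a real node you would read /proc/drbd itself):

```shell
#!/bin/sh
# Parse the ro:<local>/<peer> field out of a /proc/drbd status line.
# Sample input standing in for `cat /proc/drbd` on a live node.
status='1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'

drbd_local_role() {
    # isolate the ro: field, then keep the part before the slash (local role)
    echo "$1" | sed -n 's/.*ro:\([^ ]*\).*/\1/p' | cut -d/ -f1
}

drbd_local_role "$status"   # prints: Primary
```

`drbdadm role nfs` reports the same information; the parsing above is just a convenient form for scripting a "mount only if primary" guard.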
Part 2: Heartbeat Environment Setup
Detailed Heartbeat installation and configuration: http://linuxzkq.blog.51cto.com/9379412/1771152
Only the configuration files are given below, for reference:
# vi authkeys //enable the two lines below. Three authentication methods are available: CRC (a cyclic redundancy check), the SHA1 hash, and the MD5 hash. The secret can be set to anything, but it must be identical on both nodes.
auth 3
3 md5 Hello!
# chmod 600 authkeys //restrict the auth file to mode 600
# cd /etc/ha.d/
# cat ha.cf //dm1
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth1 192.168.3.160
auto_failback off
node dm1
node dm2
ping 10.0.0.254
respawn hacluster /usr/lib64/heartbeat/ipfail
# cat ha.cf //dm2
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth1 192.168.3.150
auto_failback off
node dm1
node dm2
ping 10.0.0.254
respawn hacluster /usr/lib64/heartbeat/ipfail
# tail haresources //the configuration is identical on the primary and the backup machine
#node1 10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
#
# Regarding the node-names in this file:
#
# They must match the names of the nodes listed in ha.cf, which in turn
# must match the `uname -n` of some node in the cluster. So they aren't
# virtual in any sense of the word.
#
#dm1 IPaddr::192.168.0.180/32/eth0 drbddisk::nfs Filesystem::/dev/drbd1::/data::ext4 nfs
dm1 IPaddr::192.168.0.180/24/eth0 drbd_primary
Notes:
drbddisk::data <== starts the drbd data resource; equivalent to running /etc/ha.d/resource.d/drbddisk data stop/start
Filesystem::/dev/drbd1::/data::ext4 <== mounts the DRBD partition on /data; equivalent to running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /data ext4 stop/start, i.e. to running mount /dev/drbd1 /data
nfs <== starts the NFS service script; equivalent to /etc/init.d/nfs stop/start
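Heartbeat reads each haresources entry as a node name followed by an ordered list of resources, with each resource split on "::" into a script name plus arguments. A minimal sketch of that interpretation, using this article's own entry:

```shell
#!/bin/sh
# Split a haresources entry the way Heartbeat interprets it: the first field
# is the node name, the remaining fields are resources started left to right
# (and released right to left); each resource is <script>::<arg>::<arg>...
entry='dm1 IPaddr::192.168.0.180/24/eth0 drbd_primary'

node=${entry%% *}        # everything before the first space -> dm1
resources=${entry#* }    # the rest -> the ordered resource list

echo "node: $node"
for res in $resources; do
    case $res in
        *::*) script=${res%%::*}; args=${res#*::} ;;
        *)    script=$res;        args='' ;;
    esac
    echo "script: $script  args: $args"
done
```

So the active entry above tells Heartbeat: on dm1, bring up the VIP via the IPaddr script with 192.168.0.180/24/eth0 as its argument, then run the drbd_primary script (with no arguments) described next.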
Resource switchover script:
The drbd_primary resource group names the services Heartbeat is to manage, i.e. the services Heartbeat can start and stop. Each managed service must be written as a script that accepts start/stop arguments and placed in /etc/init.d/ or /etc/ha.d/resource.d/; Heartbeat looks the script up by name in those directories and runs it to start or stop the service.
This script handles the DRBD switchover:
# cd /etc/init.d
# cat drbd_primary //the script is exactly the same on both machines
#!/bin/sh
case "$1" in
start)
drbdadm primary nfs
mount /dev/drbd1 /data
/etc/init.d/nfs start
;;
stop)
/etc/init.d/nfs stop
umount /dev/drbd1
drbdadm secondary nfs
;;
esac
exit 0
# chmod 755 drbd_primary
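Since the real commands need a live cluster, the script's dispatch logic can be checked locally with a stub in which each action is replaced by an echo. Note that stop undoes the start steps in reverse order (NFS down, unmount, then demote), which is what keeps the switchover clean:

```shell
#!/bin/sh
# Stub of drbd_primary: the real commands are replaced by echoes so the
# start/stop dispatch can be exercised without a cluster.
drbd_primary_stub() {
    case "$1" in
    start)
        echo "drbdadm primary nfs"
        echo "mount /dev/drbd1 /data"
        echo "/etc/init.d/nfs start"
        ;;
    stop)
        echo "/etc/init.d/nfs stop"
        echo "umount /dev/drbd1"
        echo "drbdadm secondary nfs"
        ;;
    esac
}

drbd_primary_stub start
```

If the stop branch demoted DRBD before unmounting, the umount would race against a read-only device; preserving this ordering is the main thing to check when modifying the script.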
Start Heartbeat (primary first, then backup):
# /etc/init.d/heartbeat start
# chkconfig heartbeat on
# netstat -tunlp|grep hear
udp 0 0 0.0.0.0:694 0.0.0.0:* 2447/heartbeat: wri
udp 0 0 0.0.0.0:58330 0.0.0.0:* 2447/heartbeat: wri
# ip a|grep eth0 //dm1
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 10.0.0.61/24 brd 10.0.0.255 scope global eth0
inet 192.168.0.180/32 scope global eth0
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 2.9G 10G 23% /
tmpfs 238M 0 238M 0% /dev/shm
/dev/sda1 190M 52M 129M 29% /boot
/dev/drbd1 6.3G 15M 6.0G 1% /data
# df -h //dm2
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 2.5G 11G 19% /
tmpfs 238M 0 238M 0% /dev/shm
/dev/sda1 190M 52M 129M 29% /boot
# ip a|grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 10.0.0.62/24 brd 10.0.0.255 scope global eth0
# netstat -tunlp|grep hear
udp 0 0 0.0.0.0:694 0.0.0.0:* 1789/heartbeat: wri
udp 0 0 0.0.0.0:18128 0.0.0.0:* 1789/heartbeat: wri
Part 3: Testing
1. Test that an NFS client can read and write the shared directory
# cd /data
# mkdir test //on dm1
[root@lb_s tmp]# ls //NFS client
[root@lb_s tmp]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 1.6G 12G 12% /
tmpfs 238M 0 238M 0% /dev/shm
/dev/sda1 190M 51M 129M 29% /boot
[root@lb_s tmp]# yum -y install rpcbind nfs-utils
[root@lb_s tmp]# service rpcbind start
[root@lb_s tmp]# chkconfig rpcbind on
[root@lb_s tmp]# ping 192.168.0.180
PING 192.168.0.180 (192.168.0.180) 56(84) bytes of data.
64 bytes from 192.168.0.180: icmp_seq=52 ttl=64 time=0.242 ms
64 bytes from 192.168.0.180: icmp_seq=53 ttl=64 time=0.434 ms
64 bytes from 192.168.0.180: icmp_seq=54 ttl=64 time=0.364 ms
64 bytes from 192.168.0.180: icmp_seq=55 ttl=64 time=0.310 ms
64 bytes from 192.168.0.180: icmp_seq=56 ttl=64 time=0.308 ms
[root@lb_s tmp]# showmount -e 192.168.0.180
Export list for 192.168.0.180:
/data (everyone)
[root@lb_s tmp]# mount -t nfs 192.168.0.180:/data /media
[root@lb_s ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 1.6G 12G 12% /
tmpfs 238M 0 238M 0% /dev/shm
/dev/sda1 190M 51M 129M 29% /boot
192.168.0.180:/data 6.3G 15M 6.0G 1% /media
[root@lb_s ~]# ll /media/
total 0
drwxrwxrwx 2 root root 16384 May 18 22:21 lost+found
drwxrwxrwx 2 root root 4096 May 18 23:18 test
[root@lb_s ~]# cd /media/
[root@lb_s media]# ls
lost+found test
[root@lb_s media]# touch 333 //the NFS client reads and writes the shared directory successfully
[root@lb_s media]# ls
333 lost+found test
[root@dm1 ~]# ll /data
total 0
-rw-r--r-- 1 root root 0 May 20 20:35 333
drwxrwxrwx 2 root root 16384 May 18 22:21 lost+found
drwxrwxrwx 2 root root 4096 May 18 23:18 test
2. Test high availability
[root@dm1 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
[root@dm1 ~]# ip a|grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 10.0.0.61/24 brd 10.0.0.255 scope global eth0
[root@dm1 ~]# df -h //the DRBD resources have been released
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 2.9G 10G 23% /
tmpfs 238M 0 238M 0% /dev/shm
/dev/sda1 190M 52M 129M 29% /boot
[root@dm1 ~]# cat /proc/drbd //dm1's DRBD role has switched from primary to secondary
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11
1: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:88 nr:16 dw:104 dr:1366 al:3 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@dm2 ~]# ip a|grep eth0 //the VIP has floated over successfully
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 10.0.0.62/24 brd 10.0.0.255 scope global eth0
inet 192.168.0.180/24 scope global eth0
[root@dm2 ~]# df -h //the DRBD resource switched over and is mounted
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 2.5G 11G 19% /
tmpfs 238M 0 238M 0% /dev/shm
/dev/sda1 190M 52M 129M 29% /boot
/dev/drbd1 6.3G 15M 6.0G 1% /data
[root@dm2 ~]# cat /proc/drbd //dm2's DRBD role has switched from secondary to primary
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11
1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:40 nr:112 dw:152 dr:1366 al:2 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@dm2 ~]# ll /data //the data has synchronized successfully as well
total 0
-rw-r--r-- 1 root root 0 May 20 20:35 333
drwxrwxrwx 2 root root 16384 May 18 22:21 lost+found
drwxrwxrwx 2 root root 4096 May 18 23:18 test
This completes the DRBD+Heartbeat+NFS high-availability setup. The failover test above covers only one failure scenario; there are of course several others, which are left for you to try!