Our two production NFS servers run as a primary/backup pair, with data synchronized from primary to backup via rsync. The workload is mostly images: tens of millions of small image files. Under this architecture the NFS service is a single point of failure, and the rsync-based synchronization is not real-time, so consistency cannot be guaranteed. To protect service availability and data safety, we need a new architecture that removes the NFS single point of failure and synchronizes data in real time.
And then? There was no "and then".
Below is an admittedly ugly diagram of the new architecture. It has been deployed in our test environment and has gone through some (not entirely exhaustive) testing.
Architecture topology:
Brief description:
The two NFS servers talk to the other internal business servers through the em1 NIC; em2 carries the heartbeat traffic between the two NFS servers, and em3 carries the DRBD data-synchronization traffic.
The two image servers in front consume the cluster through the VIP 192.168.0.219 that the NFS cluster exposes.
1. Project infrastructure and environment
1) Hardware
Hardware configuration of the two existing NFS storage servers:
CPU: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz
MEM: 16G
Raid: RAID 1
Disk: SSD 200G x 2
NICs: 4 onboard gigabit NICs (Link is up at 1000 Mbps, full duplex)
Hardware configuration of the two front-end static image servers: omitted
2) Network
Floating VIP: 192.168.0.219   # floats between M1 and M2 and serves client traffic
Network configuration of the two existing NFS storage servers:
Hostname: M1.redhat.sx
  em1: 192.168.0.210    internal network
  em2: 172.16.0.210     heartbeat link
  em3: 172.16.100.210   DRBD gigabit data link
Hostname: M2.redhat.sx
  em1: 192.168.0.211    internal network
  em2: 172.16.0.211     heartbeat link
  em3: 172.16.100.211   DRBD gigabit data link
3) System environment
Kernel version: 2.6.32-504.el6.x86_64
OS version: CentOS 6.5
Architecture: x86_64
Firewall rules flushed; SELinux disabled
4) Software versions
heartbeat-3.0.4-2.el6.x86_64
drbd-8.4.3
rpcbind-0.2.0-11.el6.x86_64
nfs-utils-1.2.3-54.el6.x86_64
2. Base service configuration
Only the M1 configuration is shown here; M2 is configured identically.
1) Time synchronization
On M1:
[root@M1 ~]# ntpdate pool.ntp.org
12 Nov 14:45:15 ntpdate[27898]: adjust time server 42.96.167.209 offset 0.044720 sec
On M2:
[root@M2 ~]# ntpdate pool.ntp.org
12 Nov 14:45:06 ntpdate[24447]: adjust time server 42.96.167.209 offset 0.063174 sec
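A one-shot ntpdate drifts again over time. One way to keep the pair loosely in sync is a periodic cron entry; the sketch below is our own suggestion (the five-minute interval is an assumption, not part of the original setup), and it parameterizes the crontab path so it can be exercised safely:

```shell
# Append a periodic ntpdate job to root's crontab (sketch).
# CRON_FILE would normally be /var/spool/cron/root; it is parameterized
# here so the snippet can be tried without touching the real crontab.
CRON_FILE="${CRON_FILE:-/tmp/demo-crontab}"

LINE='*/5 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1'

# Only append if the entry is not already present (idempotent).
grep -qF "$LINE" "$CRON_FILE" 2>/dev/null || echo "$LINE" >> "$CRON_FILE"

cat "$CRON_FILE"
```

Running the snippet twice leaves a single entry, which matters when it is baked into provisioning scripts.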
2) The /etc/hosts file
On M1:
[root@M1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.210 M1.redhat.sx
192.168.0.211 M2.redhat.sx
On M2:
[root@M2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.210 M1.redhat.sx
192.168.0.211 M2.redhat.sx
三、增長主機間路由
首先先驗證 M1 和 M2 的服務器 IP 是否合乎規劃
M1端:
[root@M1 ~]# ifconfig|egrep 'Link encap|inet addr' # 驗證現有 IP 信息 em1 Link encap:Ethernet HWaddr B8:CA:3A:F1:00:2F inet addr:192.168.0.210 Bcast:192.168.0.255 Mask:255.255.255.0 em2 Link encap:Ethernet HWaddr B8:CA:3A:F1:00:30 inet addr:172.16.0.210 Bcast:172.16.0.255 Mask:255.255.255.0 em3 Link encap:Ethernet HWaddr B8:CA:3A:F1:00:31 inet addr:172.16.100.210 Bcast:172.16.100.255 Mask:255.255.255.0 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0
M2端:
[root@M2 ~]# ifconfig|egrep 'Link encap|inet addr' em1 Link encap:Ethernet HWaddr B8:CA:3A:F1:DE:37 inet addr:192.168.0.211 Bcast:192.168.0.255 Mask:255.255.255.0 em2 Link encap:Ethernet HWaddr B8:CA:3A:F1:DE:38 inet addr:172.16.0.211 Bcast:172.16.0.255 Mask:255.255.255.0 em3 Link encap:Ethernet HWaddr B8:CA:3A:F1:DE:39 inet addr:172.16.100.211 Bcast:172.16.100.255 Mask:255.255.255.0 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0
Check the existing routes, then add end-to-end host routes for the heartbeat and DRBD links, so that heartbeat checks and data synchronization run over their dedicated interfaces without interference.
On M1:
[root@M1 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.100.0    0.0.0.0         255.255.255.0   U     0      0        0 em3
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 em2
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 em2
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 em3
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 em1
[root@M1 network-scripts]# /sbin/route add -host 172.16.0.211 dev em2
[root@M1 network-scripts]# /sbin/route add -host 172.16.100.211 dev em3
[root@M1 network-scripts]# echo '/sbin/route add -host 172.16.0.211 dev em2' >> /etc/rc.local
[root@M1 network-scripts]# echo '/sbin/route add -host 172.16.100.211 dev em3' >> /etc/rc.local
[root@M1 network-scripts]# tail -2 /etc/rc.local
/sbin/route add -host 172.16.0.211 dev em2
/sbin/route add -host 172.16.100.211 dev em3
[root@M1 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.0.211    0.0.0.0         255.255.255.255 UH    0      0        0 em2
172.16.100.211  0.0.0.0         255.255.255.255 UH    0      0        0 em3
172.16.100.0    0.0.0.0         255.255.255.0   U     0      0        0 em3
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 em2
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 em2
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 em3
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 em1
[root@M1 network-scripts]# traceroute 172.16.0.211
traceroute to 172.16.0.211 (172.16.0.211), 30 hops max, 60 byte packets
 1  172.16.0.211 (172.16.0.211)  0.820 ms  0.846 ms  0.928 ms
[root@M1 network-scripts]# traceroute 172.16.100.211
traceroute to 172.16.100.211 (172.16.100.211), 30 hops max, 60 byte packets
 1  172.16.100.211 (172.16.100.211)  0.291 ms  0.273 ms  0.257 ms
On M2:
[root@M2 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.100.0    0.0.0.0         255.255.255.0   U     0      0        0 em3
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 em2
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 em2
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 em3
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 em1
[root@M2 network-scripts]# /sbin/route add -host 172.16.0.210 dev em2
[root@M2 network-scripts]# /sbin/route add -host 172.16.100.210 dev em3
[root@M2 network-scripts]# echo '/sbin/route add -host 172.16.0.210 dev em2' >> /etc/rc.local
[root@M2 network-scripts]# echo '/sbin/route add -host 172.16.100.210 dev em3' >> /etc/rc.local
[root@M2 network-scripts]# tail -2 /etc/rc.local
/sbin/route add -host 172.16.0.210 dev em2
/sbin/route add -host 172.16.100.210 dev em3
[root@M2 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.0.210    0.0.0.0         255.255.255.255 UH    0      0        0 em2
172.16.100.210  0.0.0.0         255.255.255.255 UH    0      0        0 em3
172.16.100.0    0.0.0.0         255.255.255.0   U     0      0        0 em3
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 em2
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 em1
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 em2
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 em3
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 em1
[root@M2 network-scripts]# traceroute 172.16.0.210
traceroute to 172.16.0.210 (172.16.0.210), 30 hops max, 60 byte packets
 1  172.16.0.210 (172.16.0.210)  0.816 ms  0.843 ms  0.922 ms
[root@M2 network-scripts]# traceroute 172.16.100.210
traceroute to 172.16.100.210 (172.16.100.210), 30 hops max, 60 byte packets
 1  172.16.100.210 (172.16.100.210)  0.256 ms  0.232 ms  0.215 ms
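Appending to /etc/rc.local only restores the routes at boot. RHEL/CentOS also supports per-interface route files that the network scripts re-apply whenever the interface comes up. A sketch of that alternative, with the target directory parameterized so it can be run anywhere (in real use it is /etc/sysconfig/network-scripts):

```shell
# Persist the host routes via interface route files instead of rc.local
# (RHEL/CentOS convention: /etc/sysconfig/network-scripts/route-<iface>).
# SCRIPTS_DIR is parameterized so this sketch can run outside a real node.
SCRIPTS_DIR="${SCRIPTS_DIR:-/tmp/network-scripts-demo}"
mkdir -p "$SCRIPTS_DIR"

# On M1: reach the peer's heartbeat and DRBD addresses via em2/em3.
echo '172.16.0.211/32 dev em2'   > "$SCRIPTS_DIR/route-em2"
echo '172.16.100.211/32 dev em3' > "$SCRIPTS_DIR/route-em3"

cat "$SCRIPTS_DIR/route-em2" "$SCRIPTS_DIR/route-em3"
```

On M2 the same files would point at the .210 addresses. The advantage over rc.local is that an `ifdown em2; ifup em2` cycle also restores the route.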
3. Deploying heartbeat
Only the M1 installation is shown here; repeat the same steps on M2.
1) Install heartbeat
[root@M1 ~]# cd /etc/yum.repos.d/
[root@M1 yum.repos.d]# wget http://mirrors.163.com/.help/CentOS6-Base-163.repo
[root@M1 yum.repos.d]# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@M1 yum.repos.d]# sed -i 's@#baseurl@baseurl@g' *
[root@M1 yum.repos.d]# sed -i 's@mirrorlist@#mirrorlist@g' *
[root@M1 yum.repos.d]# yum install heartbeat -y   # occasionally this command needs to be run twice
二、配置heartbeat服務
[root@M1 yum.repos.d]# cd /usr/share/doc/heartbeat-3.0.4/ [root@M1 heartbeat-3.0.4]# ll |egrep 'ha.cf|authkeys|haresources' -rw-r--r--. 1 root root 645 Dec 3 2013 authkeys # heartbeat服務的認證文件 -rw-r--r--. 1 root root 10502 Dec 3 2013 ha.cf # heartbeat服務主配置文件 -rw-r--r--. 1 root root 5905 Dec 3 2013 haresources # heartbeat資源文件 [root@M1 heartbeat-3.0.4]# cp ha.cf authkeys haresources /etc/ha.d/ [root@M1 heartbeat-3.0.4]# cd /etc/ha.d/ [root@M1 ha.d]# ls authkeys ha.cf harc haresources rc.d README.config resource.d shellfuncs
注意:主備節點兩端的配置文件(ha.cf,authkeys,haresource)徹底相同,下面是各個節點的文件內容
針對heartbeat的配置,主要就是修改ha.cf、authkeys、haresources這三個文件,下面我列出這三個文件的配置信息,你們僅做參考!
a. The ha.cf file
[root@M1 ~]# cat /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 10
warntime 6
#initdead 120
udpport 694
#bcast em2
mcast em2 225.0.0.192 694 1 0
auto_failback on
respawn hacluster /usr/lib64/heartbeat/ipfail
node M1.redhat.sx
node M2.redhat.sx
ping 192.168.0.1
b. The authkeys file
[root@M1 ha.d]# cat authkeys
auth 1           # which of the keys below to use
1 crc            # no encryption
#2 sha1 HI!      # sha1 authentication
#3 md5 Hello!    # md5 authentication
[root@M1 ha.d]# chmod 600 authkeys   # this file must be mode 600 or heartbeat will refuse to start
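crc gives integrity checking but no real authentication; if the heartbeat link is not fully trusted, sha1 with a random key is the safer choice. A sketch of generating such a file (the randomly generated key and the /tmp path are illustrative; in real use the file is /etc/ha.d/authkeys on both nodes, with the same key):

```shell
# Generate an authkeys file using sha1 with a random shared key (sketch).
# AUTHKEYS would normally be /etc/ha.d/authkeys on both nodes.
AUTHKEYS="${AUTHKEYS:-/tmp/authkeys-demo}"

KEY=$(openssl rand -hex 16)   # 32 hex chars of random key material

cat > "$AUTHKEYS" <<EOF
auth 2
2 sha1 $KEY
EOF

chmod 600 "$AUTHKEYS"         # heartbeat insists on mode 600
stat -c '%a %n' "$AUTHKEYS"
```

Copy the resulting file to the peer node verbatim; the keys must match or the nodes will ignore each other's heartbeats.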
c. The haresources file
[root@M1 ha.d]# cat haresources
M1.redhat.sx IPaddr::192.168.0.219/24/em1
#NFS IPaddr::192.168.0.219/24/em1 drbddisk::data Filesystem::/dev/drbd0::/data::ext4 rpcbind nfsd
Note: the nfsd here is not shipped with heartbeat; you have to write it yourself.
The script must meet the following requirements:
1. It is executable.
2. It lives in /etc/ha.d/resource.d or /etc/init.d.
3. It implements both a start and a stop action.
The script itself is shown further below.
4) Start heartbeat
[root@M1 ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@M1 ha.d]# chkconfig heartbeat off
Note: start-on-boot is disabled on purpose; after a server reboot, heartbeat is started manually.
5) Test heartbeat
Before running these tests, complete all of the steps above on M2 as well!
a. Normal state
[root@M1 ha.d]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1   # the VIP defined in the heartbeat resources file
[root@M2 ha.d]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
In the normal state, the primary node M1 holds the VIP and M2 does not.
b. Simulated failure of the primary node
[root@M1 ha.d]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
[root@M1 ha.d]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
[root@M2 ha.d]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
After M1 goes down, the VIP fails over to M2, which becomes the primary node.
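The "who owns the VIP" check above is easy to turn into a monitoring probe. A minimal sketch (the function name and the idea of feeding it `ip a` output are our own):

```shell
# Return success if the given `ip a` output contains the cluster VIP (sketch).
VIP="192.168.0.219"

has_vip() {
    # $1: output of `ip a` captured from the node being checked
    echo "$1" | grep -q "inet ${VIP}/"
}

# Sample outputs, as captured in the failover test above.
M1_OUT="inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1"
M2_OUT="inet 192.168.0.211/24 brd 192.168.0.255 scope global em1"

has_vip "$M1_OUT" && echo "M1 holds the VIP"
has_vip "$M2_OUT" || echo "M2 does not hold the VIP"
```

In practice the probe would run `ip a` over SSH against each node and alert when neither (or both) reports the VIP, the latter being a split-brain symptom.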
c. Recovery of the primary node
[root@M1 ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@M1 ha.d]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
Once M1 recovers, it reclaims the VIP (auto_failback is on).
4. Installing and deploying DRBD
1) Add (and initialize) the new disk
Details omitted.
2) Install DRBD
DRBD can be installed either from yum or from source. Since the yum repositories available to us at the time carried no DRBD rpm, we built it from source.
[root@M1 ~]# yum -y install gcc gcc-c++ kernel-devel kernel-headers flex make
[root@M1 ~]# cd /usr/local/src
[root@M1 src]# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
[root@M1 src]# tar zxf drbd-8.4.3.tar.gz
[root@M1 src]# cd drbd-8.4.3
[root@M1 drbd-8.4.3]# ./configure --prefix=/usr/local/drbd --with-km --with-heartbeat
[root@M1 drbd-8.4.3]# make KDIR=/usr/src/kernels/2.6.32-504.el6.x86_64/
[root@M1 drbd-8.4.3]# make install
[root@M1 drbd-8.4.3]# mkdir -p /usr/local/drbd/var/run/drbd
[root@M1 drbd-8.4.3]# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/init.d/
[root@M1 drbd-8.4.3]# chmod +x /etc/init.d/drbd
[root@M1 drbd-8.4.3]# modprobe drbd   # load the drbd module into the kernel
[root@M1 drbd-8.4.3]# lsmod|grep drbd   # verify the module loaded correctly
drbd                  310236  3
libcrc32c               1246  1 drbd
三、配置DRBD
有關DRBD涉及到的配置文件主要是global_common.conf和用戶自定義的資源文件(固然,該資源文件能夠寫到global_common.conf中)。
注意:M1和M2這兩個主備節點的如下配置文件徹底同樣
[root@M1 ~]# cat /usr/local/drbd/etc/drbd.d/global_common.conf global { usage-count no; } common { protocol C; disk { on-io-error detach; # 配置I/O錯誤處理策略爲分離 no-disk-flushes; no-md-flushes; } net { cram-hmac-alg "sha1"; # 設置加密算法 shared-secret "allendrbd"; # 設置加密密鑰 sndbuf-size 512k; max-buffers 8000; unplug-watermark 1024; max-epoch-size 8000; after-sb-0pri disconnect; after-sb-1pri disconnect; after-sb-2pri disconnect; rr-conflict disconnect; } syncer { rate 1024M; # 設置主備節點同步時的網絡速率 al-extents 517; } } [root@M1 ~]# cat /usr/local/drbd/etc/drbd.d/drbd.res resource drbd { # 定義一個drbd的資源名 on M1.redhat.sx { # 主機說明以on開頭,後面跟主機名稱 device /dev/drbd0; # drbd設備名稱 disk /dev/mapper/VolGroup-lv_drbd; # drbd0 使用的是邏輯卷/dev/mapper/VolGroup-lv_drbd address 172.16.100.210:7789; # 設置DRBD監聽地址與端口 meta-disk internal; # 設置元數據盤爲內部模式 } on M2.redhat.sx { device /dev/drbd0; disk /dev/mapper/VolGroup-lv_drbd; address 172.16.100.211:7789; meta-disk internal; } }
四、初始化meta分區
[root@M1 drbd]# drbdadm create-md drbd Writing meta data... initializing activity log NOT initializing bitmap New drbd meta data block successfully created.
5) Start DRBD
Watch how the DRBD device state changes on M1 and M2 before and after the service is started.
On M1:
[root@M1 drbd]# cat /proc/drbd   # drbd device state before starting
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
[root@M1 drbd]# drbdadm up all   # bring up drbd; the init script can also be used
[root@M1 drbd]# cat /proc/drbd   # drbd device state after starting
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:133615596
On M2:
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
[root@M2 ~]# drbdadm up all
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:133615596
6) Initial synchronization: designate the primary node (its data overwrites the peer, bringing both sides into consistency)
On M1:
[root@M1 drbd]# drbdadm -- --overwrite-data-of-peer primary drbd
[root@M1 drbd]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:140132 nr:0 dw:0 dr:144024 al:0 bm:8 lo:0 pe:17 ua:26 ap:0 ep:1 wo:d oos:133477612
    [>....................] sync'ed:  0.2% (130348/130480)M
    finish: 0:16:07 speed: 137,984 (137,984) K/sec
On M2:
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:461440 dw:461312 dr:0 al:0 bm:28 lo:2 pe:75 ua:1 ap:0 ep:1 wo:d oos:133154284
    [>....................] sync'ed:  0.4% (130032/130480)M
    finish: 0:19:13 speed: 115,328 (115,328) want: 102,400 K/sec
State after synchronization completes:
On M1:
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:133615596 nr:0 dw:0 dr:133616260 al:0 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
On M2:
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:133615596 dw:133615596 dr:0 al:0 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
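The cs (connection state), ro (roles) and ds (disk states) fields of /proc/drbd are what this procedure keeps checking, so a tiny parser helps when wiring DRBD into monitoring. A sketch (the helper function is ours; it reads the status line of device 0):

```shell
# Extract the cs/ro/ds fields of drbd device 0 from /proc/drbd-style
# output (sketch for monitoring scripts).
drbd_status() {
    # $1: contents of /proc/drbd
    echo "$1" | awk '/^ *0:/ {
        for (i = 1; i <= NF; i++)
            if ($i ~ /^(cs|ro|ds):/) printf "%s ", $i
        print ""
    }'
}

SAMPLE=' 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'
drbd_status "$SAMPLE"
```

A healthy pair after the initial sync should report cs:Connected with ds:UpToDate/UpToDate; anything else (WFConnection, Inconsistent, DUnknown) is worth alerting on.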
7) Mount the DRBD device at the /data directory
[root@M1 drbd]# mkfs.ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
8355840 inodes, 33403899 blocks
1670194 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1020 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@M1 drbd]# mount /dev/drbd0 /data/
[root@M1 drbd]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   50G  5.6G   42G  12% /
tmpfs                         7.8G     0  7.8G   0% /dev/shm
/dev/sda1                     477M   46M  406M  11% /boot
/dev/drbd0                    126G   60M  119G   1% /data
8) Verify that writes on the primary are replicated to the secondary
On M1:
[root@M1 drbd]# dd if=/dev/zero of=/data/test bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.26333 s, 850 MB/s
[root@M1 drbd]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:135840788 nr:0 dw:2225192 dr:133617369 al:619 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
[root@M1 drbd]# umount /data/
[root@M1 drbd]# drbdadm down drbd   # bring down the resource named drbd
On M2:
[root@M2 ~]# cat /proc/drbd   # after the primary takes the resource down, the peer role shows as Unknown
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:136889524 dw:136889524 dr:0 al:0 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
[root@M2 ~]# drbdadm primary drbd   # promote this node to primary
[root@M2 ~]# mount /dev/drbd0 /data
[root@M2 ~]# cd /data
[root@M2 data]# ls   # the data is still there
lost+found  test
[root@M2 data]# du -sh test
1.1G    test
[root@M2 data]# cat /proc/drbd   # current drbd device state
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:136889524 dw:136889548 dr:1045 al:3 bm:8156 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:24
5. Installing and deploying NFS
Again shown only for M1; M2 is configured the same way.
1) Install NFS
[root@M1 drbd]# yum install nfs-utils rpcbind -y
[root@M2 ~]# yum install nfs-utils rpcbind -y
2) Configure the NFS export
[root@M1 drbd]# cat /etc/exports
/data 192.168.0.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)
[root@M2 ~]# cat /etc/exports
/data 192.168.0.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)
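One classic /etc/exports pitfall: a stray space between the client spec and the option list silently changes the meaning (the options then apply to all hosts). A quick lint, sketched; the checker function and its rule are our own minimal take, and `exportfs -ra` remains the real authority:

```shell
# Minimal /etc/exports sanity check (sketch): every non-comment line
# should look like "<path> <clientspec>(<options>)" with no space before "(".
check_exports() {
    # $1: contents of an exports file; prints offending lines, if any
    echo "$1" | grep -vE '^[[:space:]]*(#|$)' | grep -Ev '^/[^ ]+ +[^ ]+\([^)]+\)[[:space:]]*$'
}

GOOD='/data 192.168.0.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)'
BAD='/data 192.168.0.0/24 (rw,sync)'   # stray space: options would apply to every host

check_exports "$GOOD" || echo "good line passes"
check_exports "$BAD" && echo "bad line flagged"
```

Wiring this into the deployment script for both M1 and M2 catches the typo before `exportfs` quietly accepts it.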
3) Start rpcbind and nfs
[root@M1 drbd]# /etc/init.d/rpcbind start;chkconfig rpcbind off
[root@M1 drbd]# /etc/init.d/nfs start;chkconfig nfs off
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@M2 drbd]# /etc/init.d/rpcbind start;chkconfig rpcbind off
[root@M2 drbd]# /etc/init.d/nfs start;chkconfig nfs off
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
4) Test NFS
[root@C1 ~]# mount -t nfs -o noatime,nodiratime 192.168.0.219:/data /xxxxx/
[root@C1 ~]# df -h|grep data
192.168.0.219:/data           126G  1.1G  118G   1% /data
[root@C1 ~]# cd /data
[root@C1 data]# ls
lost+found  test
[root@C1 data]# echo 'nolinux' >> nihao
[root@C1 data]# ls
lost+found  nihao  test
[root@C1 data]# cat nihao
nolinux
6. Integrating heartbeat, DRBD, and NFS
Note: the heartbeat files and scripts modified below must be kept identical on M1 and M2!
1) Update the heartbeat resources file
Extend heartbeat's resource definition so it also manages the DRBD service, the filesystem mount, and the NFS service:
[root@M1 ~]# cat /etc/ha.d/haresources
M1.redhat.sx IPaddr::192.168.0.219/24/em1 drbddisk::drbd Filesystem::/dev/drbd0::/data::ext4 nfsd
Note that the IPaddr and drbddisk scripts referenced here live in /etc/ha.d/resource.d/, which ships with many service-management scripts for heartbeat to call. The trailing nfsd is not included with heartbeat by default; the script is given below.
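heartbeat splits each haresources entry on "::": the first token names a script in /etc/ha.d/resource.d/ (or /etc/init.d/), the remaining tokens become its arguments, and resources are started left to right and stopped in reverse. A sketch of that parsing (the demo function is ours, not heartbeat code):

```shell
# Show how a haresources line maps to resource scripts and arguments (sketch).
parse_haresources() {
    # $1: a haresources line; the first field is the node name
    local line=$1
    set -- $line
    shift   # drop the node name
    for rsc in "$@"; do
        script=${rsc%%::*}                                    # script name before the first ::
        args=$(echo "${rsc#"$script"}" | sed 's/^:://; s/::/ /g')
        echo "script=$script args=$args"
    done
}

parse_haresources 'M1.redhat.sx IPaddr::192.168.0.219/24/em1 drbddisk::drbd Filesystem::/dev/drbd0::/data::ext4 nfsd'
```

This ordering is why the line above works: the VIP comes up first, then DRBD is promoted, then the filesystem is mounted, and only then is NFS started; on failover the stop sequence unwinds in the opposite order.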
[root@M1 /]# vim /etc/ha.d/resource.d/nfsd
#!/bin/bash
#
case $1 in
    start)
        /etc/init.d/nfs restart
        ;;
    stop)
        for proc in rpc.mountd rpc.rquotad nfsd
        do
            killall -9 $proc
        done
        ;;
esac
[root@M1 /]# chmod 755 /etc/ha.d/resource.d/nfsd
The system does ship with an nfs init script, but when heartbeat calls it, it fails to kill the NFS processes off completely, which is why we wrote this one ourselves.
2) Restart heartbeat to bring up the highly available NFS service
Perform the following steps in order!
[root@M1 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
[root@M2 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
[root@M1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@M2 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@M1 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M2 ~]# ip a |grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:24936 nr:13016 dw:37920 dr:17307 al:15 bm:5 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:84 nr:24 dw:37896 dr:10589 al:14 bm:5 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Mount test from client C1:
[root@C1 ~]# mount 192.168.0.219:/data /data
[root@C1 ~]# df -h |grep data
192.168.0.219:/data           126G   60M  119G   1% /data
As shown, client C1 can mount, through the VIP, the NFS share exported by the highly available NFS storage.
3) Failover tests
Now put the NFS HA cluster through failure scenarios and verify that the service switches over correctly.
a. Is NFS still served after heartbeat is stopped?
State of M1 before heartbeat is stopped on M1:
[root@M1 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:8803768 nr:3736832 dw:12540596 dr:5252 al:2578 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 before heartbeat is stopped on M1:
[root@M2 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4014352 nr:11417156 dw:15431508 dr:5941 al:1168 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Stop heartbeat on M1:
[root@M1 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
State of M1 after heartbeat is stopped:
[root@M1 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:11417152 nr:4014300 dw:15431448 dr:7037 al:3221 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 after heartbeat is stopped on M1:
[root@M2 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:4014300 nr:11417152 dw:15431452 dr:5941 al:1168 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Restore heartbeat on M1 and check whether the resources fail back from M2.
Restart heartbeat on M1:
[root@M1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
State of M1 after heartbeat is restored:
[root@M1 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:11417156 nr:4014352 dw:15431504 dr:7874 al:3221 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 after heartbeat on M1 is restored:
[root@M2 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4014352 nr:11417156 dw:15431508 dr:5941 al:1168 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Impact of the NFS failover as observed on client C1:
[root@C1 ~]# for i in `seq 1 10000`;do dd if=/dev/zero of=/data/test$i bs=10M count=1;stat /data/test$i|grep 'Access: 2014';done   # only part of the output is shown here
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 15.1816 s, 691 kB/s
Access: 2014-11-12 23:26:15.945546803 +0800
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.20511 s, 51.1 MB/s
Access: 2014-11-12 23:28:11.687931979 +0800
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.20316 s, 51.6 MB/s
Access: 2014-11-12 23:28:11.900936657 +0800
Note: empirically, the NFS client hangs for about two minutes across the failover. We tried many approaches, but this problem remains unsolved!
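To put a number on that stall, a small client-side probe that times each write makes the gap visible without eyeballing `stat` output. A sketch (the probe script, threshold and defaults are our own; in real use TARGET_DIR points at the NFS mount and ITERATIONS runs much longer):

```shell
# Write a small file once per iteration and report any write that stalls
# longer than THRESHOLD seconds (sketch of how the failover gap was measured).
TARGET_DIR="${TARGET_DIR:-/tmp/nfs-probe-demo}"
THRESHOLD=${THRESHOLD:-5}
ITERATIONS=${ITERATIONS:-3}

mkdir -p "$TARGET_DIR"
for i in $(seq 1 "$ITERATIONS"); do
    start=$(date +%s)
    dd if=/dev/zero of="$TARGET_DIR/probe$i" bs=64k count=1 2>/dev/null
    elapsed=$(( $(date +%s) - start ))
    [ "$elapsed" -gt "$THRESHOLD" ] && echo "probe$i stalled for ${elapsed}s"
done
echo "wrote $ITERATIONS probe files"
```

During a controlled failover, the probes written across the switchover are the ones that report multi-second (here, roughly two-minute) stalls.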
b. Is NFS still served after the non-heartbeat network goes down?
State of M1 before em1 on M1 goes down:
[root@M1 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:11417156 nr:4014352 dw:15431504 dr:7874 al:3221 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Take down em1 on M1:
[root@M1 ~]# ifdown em1
State of M1 after em1 goes down (reached from M2 over the heartbeat link via SSH):
[root@M1 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:11993288 nr:4024660 dw:16017944 dr:8890 al:3222 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 after em1 on M1 goes down:
[root@M2 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:4024620 nr:11993288 dw:16017908 dr:7090 al:1171 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Bring em1 on M1 back up:
[root@M1 ~]# ifup em1
State of M1 after em1 is restored:
[root@M1 ~]# ip a |grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.210/24 brd 192.168.0.255 scope global em1
    inet 192.168.0.219/24 brd 192.168.0.255 scope global secondary em1
[root@M1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M1.redhat.sx, 2014-11-11 16:20:26
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:11993292 nr:4024680 dw:16017968 dr:9727 al:3222 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
State of M2 after em1 on M1 is restored:
[root@M2 ~]# ip a|grep em1
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    inet 192.168.0.211/24 brd 192.168.0.255 scope global em1
[root@M2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@M2.redhat.sx, 2014-11-11 16:25:08
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4024680 nr:11993292 dw:16017972 dr:7102 al:1171 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
Split-brain with heartbeat and keepalived is not covered here; I will write about it in a separate post.
The above is the design I wrote during our company's recent storage overhaul, shared here for reference.
During later testing we found that, because NFS communicates via RPC and is therefore tied to the rpcbind mechanism, clients stall for one to two minutes after the NFS server fails over. Under heavy client writes it can take even longer; even with no client writes at all, it still takes over a minute. For that reason we eventually abandoned this architecture. Fellow 51CTO bloggers: how do you solve the client-side delay caused by an NFS server failover?
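One knob the post did not explore fully is the client mount options, which govern how long the client blocks when the server vanishes. A hedged sketch (the values are illustrative, not a tested configuration from this setup; note that `soft` can surface I/O errors to applications on writes, so it trades safety for responsiveness):

```shell
# Client mount options that influence failover behavior (illustrative values).
# timeo is in tenths of a second; retrans is the number of retries before the
# client reports "server not responding" and, with soft, fails the request.
OPTS="noatime,nodiratime,tcp,timeo=50,retrans=3"

echo "mount -t nfs -o $OPTS 192.168.0.219:/data /data"
```

With the default `hard` semantics the client retries indefinitely, which is exactly the minute-plus hang observed above; shortening timeo/retrans or using `soft` shrinks the window at the cost of possible application-visible errors.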