A Detailed Guide to Building a Highly Available, Multipath-Redundant GFS2 Cluster Filesystem

2014.06

Lab topology diagram:

How it works:

Lab goal: build a GFS2 cluster filesystem with the RHCS cluster suite so that different nodes can read from and write to it at the same time; use multipath to provide redundant paths between the nodes and the FC fabric and between the FC fabric and the shared storage; and finally mirror the storage to achieve high availability.

GFS2 (Global File System 2) is the most widely used cluster filesystem. Developed by Red Hat, it allows all cluster nodes to access the storage in parallel. Metadata is normally kept in a partition or logical volume on the shared or replicated storage device.

 

Lab environment:

[root@storage1 ~] # uname -r
2.6.32-279.el6.x86_64
[root@storage1 ~] # cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)
[root@storage1 ~] # /etc/rc.d/init.d/iptables status
iptables: Firewall is not running.
[root@storage1 ~] # getenforce
Disabled

Lab steps:

1. Preparation

0) Set up a management host (192.168.100.102, manager.rsyslog.org): generate an SSH key pair and push the public key to every node.

[root@manager ~] # ssh-keygen  \\ generate the public/private key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
……
[root@manager ~] # for i in {1..6}; do ssh-copy-id -i 192.168.100.17$i; done \\ copy the public key to /root/.ssh/ on every node
root@192.168.100.171's password:
Now try logging into the machine, with "ssh '192.168.100.171'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting
..……
[root@manager ~] # ssh node1  \\ test logging in
Last login: Sat Jun  8 17:58:51 2013 from 192.168.100.31
[root@node1 ~] #

1) Configure dual NICs: following the topology diagram, give every node two NICs with the corresponding IP addresses (a sketch of the ifcfg files follows the verification below).

[root@storage1 ~] # ifconfig eth0 | grep "inet addr" | awk -F[:" "]+ '{ print $4 }'
192.168.100.171
[root@storage1 ~] # ifconfig eth1 | grep "inet addr" | awk -F[:" "]+ '{ print $4 }'
192.168.200.171
……
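The interface configuration itself is not shown above; on RHEL 6 it normally lives in ifcfg files. A minimal sketch for storage1, assuming static addressing taken from the topology (repeat with the appropriate addresses on the other hosts):

[root@storage1 ~] # cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.171
NETMASK=255.255.255.0
[root@storage1 ~] # cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.200.171
NETMASK=255.255.255.0
[root@storage1 ~] # /etc/rc.d/init.d/network restart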

2) Configure the hosts file and push it to all nodes. (DNS would also work, but hosts resolution is always faster than DNS, and a DNS server failure would leave the nodes unable to resolve each other or the storage and bring everything down.)

[root@manager ~] # cat /etc/hosts
127.0.0.1 localhost localhost.rsyslog.org
192.168.100.102 manager  manager.rsyslog.org
192.168.100.171 storage1 storage1.rsyslog.org
192.168.200.171 storage1 storage1.rsyslog.org
192.168.100.172 storage2 storage2.rsyslog.org
192.168.200.172 storage2 storage2.rsyslog.org
192.168.100.173 node1 node1.rsyslog.org
192.168.200.173 node1 node1.rsyslog.org
192.168.100.174 node2 node2.rsyslog.org
192.168.200.174 node2 node2.rsyslog.org
192.168.100.175 node3 node3.rsyslog.org
192.168.200.175 node3 node3.rsyslog.org
192.168.100.176 node4 node4.rsyslog.org
192.168.200.176 node4 node4.rsyslog.org
[root@manager ~] # for i in {1..6}; do scp /etc/hosts 192.168.100.17$i:/etc/ ; done
hosts                                                                           100%  591     0.6KB /s    00:00
hosts                                                                           100%  591     0.6KB /s    00:00
hosts                                                                           100%  591     0.6KB /s    00:00
hosts                                                                           100%  591     0.6KB /s    00:00
hosts                                                                           100%  591     0.6KB /s    00:00
hosts                                                                           100%  591     0.6KB /s    00:00

3) Configure the yum repositories (mount the installation DVD at /media/cdrom on every node; if that is inconvenient, export the ISO over NFS and mount the NFS share on each node instead. Note: different OS releases place the RHCS packages in different directories on the media, so the baseurl paths may differ).

[root@manager ~] # cat /etc/yum.repos.d/rhel-gfs2.repo
[rhel-cdrom]
name=RHEL6U3-cdrom
baseurl=file:///media/cdrom
enabled=1
gpgcheck=0
[rhel-cdrom-HighAvailability]
name=RHEL6U3-HighAvailability
baseurl=file:///media/cdrom/HighAvailability
enabled=1
gpgcheck=0
[rhel-cdrom-ResilientStorage]
name=RHEL6U3-ResilientStorage
baseurl=file:///media/cdrom/ResilientStorage
enabled=1
gpgcheck=0
[rhel-cdrom-LoadBalancer]
name=RHEL6U3-LoadBalancer
baseurl=file:///media/cdrom/LoadBalancer
enabled=1
gpgcheck=0
[rhel-cdrom-ScalableFileSystem]
name=RHEL6U3-ScalableFileSystem
baseurl=file:///media/cdrom/ScalableFileSystem
enabled=1
gpgcheck=0
[root@manager ~] # for i in {1..6}; do scp /etc/yum.repos.d/rhel-gfs2.repo  192.168.100.17$i:/etc/yum.repos.d ; done
rhel-gfs2.repo                                                                  100%  588     0.6KB /s    00:00
rhel-gfs2.repo                                                                  100%  588     0.6KB /s    00:00
rhel-gfs2.repo                                                                  100%  588     0.6KB /s    00:00
rhel-gfs2.repo                                                                  100%  588     0.6KB /s    00:00
rhel-gfs2.repo                                                                  100%  588     0.6KB /s    00:00
rhel-gfs2.repo                                                                  100%  588     0.6KB /s    00:00
[root@manager ~] # for i in {1..6}; do ssh 192.168.100.17$i "yum clean all && yum makecache"; done
Loaded plugins: product- id , security, subscription-manager
Updating certificate-based repositories.
Unable to  read  consumer identity
……

4) The clocks must be in sync. Consider setting up an NTP server, syncing against Internet time if the hosts are online, or at least setting the same time on each host with the date command. A minimal sketch is shown below.
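A minimal sketch of a one-shot sync against a public NTP server (pool.ntp.org is only a placeholder, and the ntpdate package is assumed to be available in the configured repos):

[root@manager ~] # for i in {1..6}; do ssh 192.168.100.17$i "yum -y install ntpdate && ntpdate pool.ntp.org && hwclock -w"; done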

2. Install luci and ricci (luci on the management host, ricci on the nodes)

Luci is the web-based server side of Conga; it makes it easy to manage the whole RHCS cluster from a web UI, and every operation writes the corresponding configuration into /etc/cluster/cluster.conf.

[root@manager ~] # yum install luci -y
[root@manager ~] # /etc/rc.d/init.d/luci start \\ the output below indicates a successful setup; note: installing luci pulls in many python packages, which should come from the installation media, otherwise starting luci may throw errors.
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `manager.rsyslog.org' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
(none suitable found, you can still do it manually as mentioned above)
Generating a 2048 bit RSA private key
writing new private key to  '/var/lib/luci/certs/host.pem'
Starting saslauthd:                                       [  OK  ]
Start luci...                                              [  OK  ]
Point your web browser to https://manager.rsyslog.org:8084 (or equivalent) to access luci
[root@manager ~] # for i in {1..4}; do ssh node$i "yum install ricci -y"; done
[root@manager ~] # for i in {1..4}; do ssh node$i "chkconfig ricci on && /etc/rc.d/init.d/ricci start"; done
[root@manager ~] # for i in {1..4}; do ssh node$i "echo '123.com' | passwd ricci --stdin"; done  \\ricci設置密碼,在Conga web頁面添加節點的時候須要輸入ricci密碼。
更改用戶 ricci 的密碼 。
passwd : 全部的身份驗證令牌已經成功更新。
……

3. Install the RHCS cluster suite through the luci web management UI

https://manager.rsyslog.org:8084 or https://192.168.100.102:8084

Add nodes node1-node3 (three for now; a fourth node will be added later). The password is each node's ricci password. Check "Download Packages" (with the yum repos configured on each node, this automatically installs cman, rgmanager, and their dependencies) and "Enable Shared Storage Support" (this installs the storage-related packages, with support for the gfs2 filesystem, DLM locking, clvm logical volumes, and so on). An equivalent command-line sketch follows.
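For reference, roughly the same result can be achieved from the management host with the ccs command-line tool instead of the luci GUI. This is only a sketch under the assumptions that the ccs package is available in the HighAvailability repo and that ricci is running on the nodes; unlike luci's "Download Packages" option, ccs does not install cman/rgmanager for you.

[root@manager ~] # yum install ccs -y
[root@manager ~] # ccs -h node1 --createcluster rsyslog    \\ prompts for node1's ricci password
[root@manager ~] # ccs -h node1 --addnode node1
[root@manager ~] # ccs -h node1 --addnode node2
[root@manager ~] # ccs -h node1 --addnode node3
[root@manager ~] # ccs -h node1 --sync --activate          \\ push cluster.conf to all nodes listed in it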

The installation proceeds as follows:

After installation completes, the status of all nodes:

Opening a node shows all of its cluster-related services running.

Log in to any node and check that the services are enabled at boot for runlevels 2-5.

[root@manager ~] # ssh node1 "chkconfig --list | grep cman"
cman            0:off    1:off    2:on    3:on    4:on    5:on    6:off
[root@manager ~] # ssh node1 "chkconfig --list | grep rgmanager"
rgmanager       0:off    1:off    2:on    3:on    4:on    5:on    6:off
[root@manager ~] # ssh node1 "chkconfig --list | grep clvmd"
clvmd           0:off    1:off    2:on    3:on    4:on    5:on    6:off
[root@node2 ~] # cat /etc/cluster/cluster.conf   \\ view the cluster configuration; this part must be identical on every node.
<?xml version="1.0"?>
<cluster config_version="1" name="rsyslog">
<clusternodes>
<clusternode name="node1" nodeid="1"/>
<clusternode name="node2" nodeid="2"/>
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<fencedevices/>
<rm/>
</cluster>
[root@node2 ~] # clustat  \\ check the cluster node status (clustat -i 1 refreshes it continuously)
Cluster Status  for  rsyslog @ Sat Jun  8 00:03:40 2013
Member Status: Quorate
Member Name                                              ID   Status
------ ----                                              ---- ------
node1                                                        1 Online
node2                                                        2 Online, Local
node3                                                        3 Online

4. Install the storage management software and export the disk

[root@storage1 ~] # fdisk /dev/sda  \\ create a 2GB logical partition to export
Command (m  for  help): n
Command action
e   extended
p   primary partition (1-4)
e
Selected partition 4
First cylinder (1562-2610, default 1562):
Using default value 1562
Last cylinder, +cylinders or +size{K,M,G} (1562-2610, default 2610): +4G
Command (m  for  help): n
First cylinder (1562-2084, default 1562):
Using default value 1562
Last cylinder, +cylinders or +size{K,M,G} (1562-2084, default 2084): +2G
Command (m  for  help): w
……
[root@storage1 ~] # partx -a /dev/sda
[root@storage1 ~] # ll /dev/sda*
sda   sda1  sda2  sda3  sda4  sda5
[root@storage1 ~] # yum install scsi-target-utils -y  \\ install the iSCSI target tools
[root@storage1 ~] # vim /etc/tgt/targets.conf \\ configure the exported disk
<target iqn.2013.05.org.rsyslog:storage1.sda5>
<backing-store /dev/sda5>
scsi_id storage1_id
scsi_sn storage1_sn
</backing-store>
incominguser xiaonuo 081ac67e74a6bb13b7a22b8a89e7177b \\ require username/password (CHAP) access
initiator-address 192.168.100.173  \\ allowed initiator IP addresses
initiator-address 192.168.100.174
initiator-address 192.168.100.175
initiator-address 192.168.100.176
initiator-address 192.168.200.173
initiator-address 192.168.200.174
initiator-address 192.168.200.175
initiator-address 192.168.200.176
</target>
[root@storage1 ~] # /etc/rc.d/init.d/tgtd start  && chkconfig tgtd on
[root@storage1 ~] # tgtadm --lld iscsi --mode target --op show  \\ verify that the export succeeded
Target 1: iqn.2013.05.org.rsyslog:storage1.sda5
……
LUN: 1
Type: disk
SCSI ID: storage1_id
SCSI SN: storage1_sn
Size: 2151 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store  type : rdwr
Backing store path:  /dev/sda5
Backing store flags:
Account information:
xiaonuo
ACL information:
192.168.100.173
192.168.100.174
192.168.100.175
192.168.100.176
192.168.200.173
192.168.200.174
192.168.200.175
192.168.200.176
[root@manager ~] # for i in {1..3}; do ssh node$i "yum -y install iscsi-initiator-utils"; done \\節點安裝iscsi客戶端軟件
[root@node1 ~] # vim /etc/iscsi/iscsid.conf  \\全部節點配置文件加上如下3行,設置帳戶密碼
node.session.auth.authmethod = CHAP
node.session.auth.username = xiaonuo
node.session.auth.password = 081ac67e74a6bb13b7a22b8a89e7177b
[root@manager ~] # for i in {1..3}; do ssh node$i "iscsiadm -m discovery -t st -p 192.168.100.171"; done \\發現共享設備
192.168.100.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
192.168.100.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
192.168.100.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
[root@manager ~] # for i in {1..3}; do ssh node$i "iscsiadm -m discovery -t st -p 192.168.200.171"; done
192.168.200.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
192.168.200.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
192.168.200.171:3260,1 iqn.2013.05.org.rsyslog:storage1.sda5
[root@manager ~] # for i in {1..3}; do ssh node$i "iscsiadm -m node -l"; done \\註冊iscsi共享設備
Logging  in  to [iface: default, target: iqn.2013.05.org.rsyslog:storage1.sda5, portal: 192.168.200.171,3260] (multiple)
Logging  in  to [iface: default, target: iqn.2013.05.org.rsyslog:storage1.sda5, portal: 192.168.100.171,3260] (multiple)
Login to [iface: default, target: iqn.2013.05.org.rsyslog:storage1.sda5, portal: 192.168.200.171,3260] successful.
Login to [iface: default, target: iqn.2013.05.org.rsyslog:storage1.sda5, portal: 192.168.100.171,3260] successful.
……
[root@storage1 ~] # tgtadm --lld iscsi --op show --mode conn --tid 1 \\ check the connections on the iSCSI server
Session: 12
Connection: 0
Initiator: iqn.1994-05.com.redhat:a12e282371a1
IP Address: 192.168.200.175
Session: 11
Connection: 0
Initiator: iqn.1994-05.com.redhat:a12e282371a1
IP Address: 192.168.100.175
…….
[root@node1 ~] # netstat -nlatp | grep 3260
tcp        0      0 192.168.200.173:37946       192.168.200.171:3260        ESTABLISHED 37565 /iscsid
tcp        0      0 192.168.100.173:54306       192.168.100.171:3260        ESTABLISHED 37565 /iscsid
[root@node1 ~] # ll /dev/sd*   \\ each node now sees two additional iSCSI devices
sda   sda1  sda2  sda3  sdb   sdc

5. Install and configure multipath for path redundancy

[root@manager ~] # for i in {1..3}; do ssh node$i "yum -y install device-mapper-*"; done
[root@manager ~] # for i in {1..3}; do ssh node$i "mpathconf --enable"; done \\生成配置文件
[root@node1 ~] # /sbin/scsi_id -g -u /dev/sdb \\查看導入設備的WWID
1storage1_id
[root@node1 ~] # /sbin/scsi_id -g -u /dev/sdc
1storage1_id
[root@node1 ~] # vim /etc/multipath.conf
multipaths {
multipath {
wwid                    1storage1_id  \\ WWID of the exported device
alias                    iscsi1 \\ alias for the multipath device
path_grouping_policy    multibus
path_selector            "round-robin 0"
failback                manual
rr_weight               priorities
no_path_retry           5
}
}
[root@node1 ~] # /etc/rc.d/init.d/multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@node1 ~] # ll /dev/mapper/iscsi1
lrwxrwxrwx 1 root root 7 Jun  7 23:58  /dev/mapper/iscsi1  -> .. /dm-0
[root@node1 ~] # multipath -ll  \\ verify that the paths are bound together
iscsi1 (1storage1_id) dm-0 IET,VIRTUAL-DISK
size=2.0G features= '1 queue_if_no_path'  hwhandler= '0'  wp=rw
`-+- policy= 'round-robin 0'  prio=1 status=active
|- 20:0:0:1 sdb 8:16 active ready running
`- 19:0:0:1 sdc 8:32 active ready running
…… \\ the other two nodes are configured the same way (see the sketch below)
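A minimal sketch of repeating the same configuration on node2 and node3 from node1 (multipathd itself was already installed on them by the device-mapper-* loop above):

[root@node1 ~] # for i in 2 3; do scp /etc/multipath.conf node$i:/etc/; ssh node$i "chkconfig multipathd on && /etc/rc.d/init.d/multipathd start"; done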

6. Create a clvm logical volume on the nodes and create the GFS2 cluster filesystem

[root@node1 ~] # pvcreate /dev/mapper/iscsi1  \\ create a PV on the multipath device
Writing physical volume data to disk  "/dev/mapper/iscsi1"
Physical volume  "/dev/mapper/iscsi1"  successfully created
[root@node1 ~] # vgcreate cvg0 /dev/mapper/iscsi1 \\ create the VG
Clustered volume group  "cvg0"  successfully created
[root@node1 ~] # lvcreate -L +1G cvg0 -n clv0 \\ create a 1GB LV
Logical volume  "clv0"  created
[root@node1 ~] # lvs  \\ check the LV from node1
LV   VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
clv0 cvg0 -wi-a--- 1.00g
[root@node2 ~] # lvs  \\ check the LV from node2
LV   VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
clv0 cvg0 -wi-a--- 1.00g
[root@manager ~] # for i in {1..3}; do ssh node$i "lvmconf --enable-cluster"; done \\打開DLM鎖機制,在web配置時候,若是勾選了「Enable Shared Storage Support」,則默認就打開了。
[root@node2 ~] # mkfs.gfs2 -j 3 -p lock_dlm -t rsyslog:web /dev/cvg0/clv0  \\建立gfs2集羣文件系統,並設置節點爲3個,鎖協議爲lock_dlm
This will destroy any data on  /dev/cvg0/clv0 .
It appears to contain: symbolic link to `.. /dm-1 '
Are you sure you want to proceed? [y /n ] y
Device:                     /dev/cvg0/clv0
Blocksize:                 4096
Device Size                1.00 GB (262144 blocks)
Filesystem Size:           1.00 GB (262142 blocks)
Journals:                  3
Resource Groups:           4
Locking Protocol:           "lock_dlm"
Lock Table:                 "rsyslog:web"
UUID:                      7c293387-b59a-1105-cb26-4ffc41b5ae3b

7. Create a mirror of storage1 on storage2 for backup and high availability

1) Create a 2GB iSCSI backing device on storage2, the same size as on storage1, and configure targets.conf (a sketch of the preparation steps is shown below).
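The partitioning and target setup on storage2 are not shown in the original run. A minimal sketch, assuming the same layout as storage1 (a roughly 2GB logical partition /dev/sda5 backing the target defined next):

[root@storage2 ~] # fdisk /dev/sda   \\ create an extended partition, then a +2G logical partition (/dev/sda5), as was done on storage1
[root@storage2 ~] # partx -a /dev/sda
[root@storage2 ~] # yum install scsi-target-utils -y
[root@storage2 ~] # /etc/rc.d/init.d/tgtd start && chkconfig tgtd on   \\ start tgtd after filling in targets.conf as shown below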

[root@storage2 ~] # vim /etc/tgt/targets.conf
<target iqn.2013.05.org.rsyslog:storage2.sda5>
<backing-store /dev/sda5>
scsi_id storage2_id
scsi_sn storage2_sn
</backing-store>
incominguser xiaonuo 081ac67e74a6bb13b7a22b8a89e7177b
initiator-address 192.168.100.173
initiator-address 192.168.100.174
initiator-address 192.168.100.175
initiator-address 192.168.100.176
initiator-address 192.168.200.173
initiator-address 192.168.200.174
initiator-address 192.168.200.175
initiator-address 192.168.200.176
</target>

2) Import the storage2 device on each node

[root@manager ~] # for i in {1..3}; do ssh node$i "iscsiadm -m discovery -t st -p 192.168.100.172"; done
192.168.100.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
192.168.100.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
192.168.100.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
[root@manager ~] # for i in {1..3}; do ssh node$i "iscsiadm -m discovery -t st -p 192.168.200.172"; done
192.168.200.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
192.168.200.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
192.168.200.172:3260,1 iqn.2013.05.org.rsyslog:storage2.sda5
[root@manager ~] # for i in {1..3}; do ssh node$i "iscsiadm -m node -l"; done
Logging  in  to [iface: default, target: iqn.2013.05.org.rsyslog:storage2.sda5, portal: 192.168.100.172,3260] (multiple)
Logging  in  to [iface: default, target: iqn.2013.05.org.rsyslog:storage2.sda5, portal: 192.168.200.172,3260] (multiple)
Login to [iface: default, target: iqn.2013.05.org.rsyslog:storage2.sda5, portal: 192.168.100.172,3260] successful.
Login to [iface: default, target: iqn.2013.05.org.rsyslog:storage2.sda5, portal: 192.168.200.172,3260] successful.
……

 

 

3) Configure multipath

[root@node1 ~] # ll /dev/sd*
sda   sda1  sda2  sda3  sdb   sdc   sdd   sde
[root@node1 ~] # /sbin/scsi_id -g -u /dev/sdd
1storage2_id
[root@node1 ~] # /sbin/scsi_id -g -u /dev/sde
1storage2_id
[root@node1 ~] # vim /etc/multipath.conf   \\ the other two nodes are configured the same way
multipaths {
multipath {
wwid                    1storage1_id
alias                    iscsi1
path_grouping_policy    multibus
path_selector            "round-robin 0"
failback                manual
rr_weight               priorities
no_path_retry           5
}
multipath {
wwid                    1storage2_id
alias                    iscsi2
path_grouping_policy    multibus
path_selector            "round-robin 0"
failback                manual
rr_weight               priorities
no_path_retry           5
}
}
[root@node1 ~] # /etc/rc.d/init.d/multipathd reload
Reloading multipathd:                                      [  OK  ]
[root@node1 ~] # multipath -ll
iscsi2 (1storage2_id) dm-2 IET,VIRTUAL-DISK
size=2.0G features= '1 queue_if_no_path'  hwhandler= '0'  wp=rw
|-+- policy= 'round-robin 0'  prio=1 status=active
| `- 21:0:0:1 sde 8:64 active ready running
`-+- policy= 'round-robin 0'  prio=1 status=enabled
`- 22:0:0:1 sdd 8:48 active ready running
iscsi1 (1storage1_id) dm-0 IET,VIRTUAL-DISK
size=2.0G features= '1 queue_if_no_path'  hwhandler= '0'  wp=rw
`-+- policy= 'round-robin 0'  prio=1 status=active
|- 20:0:0:1 sdb 8:16 active ready running
`- 19:0:0:1 sdc 8:32 active ready running

4) Add the new iSCSI device to volume group cvg0.

[root@node3 ~] # pvcreate /dev/mapper/iscsi2
Writing physical volume data to disk  "/dev/mapper/iscsi2"
Physical volume  "/dev/mapper/iscsi2"  successfully created
[root@node3 ~] # vgextend cvg0 /dev/mapper/iscsi2
Volume group  "cvg0"  successfully extended
[root@node3 ~] # vgs
VG    #PV #LV #SN Attr   VSize VFree
cvg0   2   1   0 wz--nc 4.00g 3.00g

 

5) Install cmirror and, on a node, create a mirror of storage1 onto storage2

[root@manager ~] # for i in {1..3}; do ssh node$i "yum install cmirror -y"; done
[root@manager ~] # for i in {1..3}; do ssh node$i "/etc/rc.d/init.d/cmirrord start && chkconfig cmirrord on"; done
[root@node3 ~] # dmsetup ls --tree  \\ state before the mirror is created
iscsi2 (253:2)
├─ (8:48)
└─ (8:64)
cvg0-clv0 (253:1)
└─iscsi1 (253:0)
├─ (8:32)
└─ (8:16)
[root@node3 ~] # lvs  \\ state before the mirror is created
LV   VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
clv0 cvg0 -wi-a--- 1.00g
[root@node3 ~] # lvconvert -m 1 /dev/cvg0/clv0 /dev/mapper/iscsi1 /dev/mapper/iscsi2 \\ convert the existing LV into a mirror; the copy progress is shown below
cvg0 /clv0 : Converted: 0.4%
cvg0 /clv0 : Converted: 10.9%
cvg0 /clv0 : Converted: 18.4%
cvg0 /clv0 : Converted: 28.1%
cvg0 /clv0 : Converted: 42.6%
cvg0 /clv0 : Converted: 56.6%
cvg0 /clv0 : Converted: 70.3%
cvg0 /clv0 : Converted: 85.9%
cvg0 /clv0 : Converted: 100.0%
[root@node2 ~] # lvs \\ during the conversion, clv0 on storage1 is being copied to storage2
LV   VG   Attr     LSize Pool Origin Data%  Move Log       Copy%  Convert
clv0 cvg0 mwi-a-m- 1.00g                         clv0_mlog   6.64
[root@node2 ~] # lvs
LV   VG   Attr     LSize Pool Origin Data%  Move Log       Copy%  Convert
clv0 cvg0 mwi-a-m- 1.00g                         clv0_mlog 100.00
[root@node3 ~] # dmsetup ls --tree \\ the exported iSCSI devices now form a mirror
cvg0-clv0 (253:1)
├─cvg0-clv0_mimage_1 (253:5)
│  └─iscsi2 (253:2)
│     ├─ (8:48)
│     └─ (8:64)
├─cvg0-clv0_mimage_0 (253:4)
│  └─iscsi1 (253:0)
│     ├─ (8:32)
│     └─ (8:16)
└─cvg0-clv0_mlog (253:3)
└─iscsi2 (253:2)
├─ (8:48)
└─ (8:64)

 

8. Cluster management

1) Growing the clvm-backed GFS2 filesystem when it runs out of space

[root@node3 ~] # lvextend -L +200M /dev/cvg0/clv0
Extending 2 mirror images.
Extending logical volume clv0 to 1.20 GiB
Logical volume clv0 successfully resized
[root@node3 ~] # gfs2_grow /opt \\ grow the filesystem to match the LV
Error: The device has grown by  less  than one Resource Group (RG).
The device grew by 200MB.  One RG is 255MB  for  this  file  system.
gfs2_grow complete.
[root@node3 ~] # lvs
LV   VG   Attr     LSize Pool Origin Data%  Move Log       Copy%  Convert
clv0 cvg0 mwi-aom- 1.20g                         clv0_mlog  96.08
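Note that gfs2_grow reports the device grew by less than one resource group (one RG is 255MB on this filesystem), so the 200MB extension added no usable space. A minimal sketch of extending by at least one full RG instead (the 512M figure is only illustrative):

[root@node3 ~] # lvextend -L +512M /dev/cvg0/clv0
[root@node3 ~] # gfs2_grow /opt
[root@node3 ~] # df -h /opt    \\ the mounted GFS2 filesystem should now show the extra space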

2) Adding a new node to the cluster when more nodes are needed

The steps are as follows:

1> Install ricci

[root@node4 ~] # yum install ricci -y

 

2> Log in to the luci web UI and add the new node via ricci

3> Import the shared storage devices

[root@node4 ~] # iscsiadm -m discovery -t st -p 192.168.100.171
[root@node4 ~] # iscsiadm -m discovery -t st -p 192.168.100.172
[root@node4 ~] # iscsiadm -m discovery -t st -p 192.168.200.172
[root@node4 ~] # iscsiadm -m discovery -t st -p 192.168.200.171
[root@node2 ~] # scp /etc/iscsi/iscsid.conf node4:/etc/iscsi/
[root@node4 ~] # iscsiadm -m node -l

4> Configure multipath

[root@node4 ~] # yum -y install device-mapper-*
[root@node4 ~] # mpathconf --enable  \\ generate the configuration file
[root@node4 ~] # scp node1:/etc/multipath.conf /etc/ \\ or simply copy the config from another node and use it as-is
[root@node4 ~] # /etc/rc.d/init.d/multipathd start
[root@node4 ~] # lvs
LV   VG   Attr     LSize Pool Origin Data%  Move Log       Copy%  Convert
clv0 cvg0 mwi---m- 1.20g                         clv0_mlog
[root@node4 ~] # multipath -ll
iscsi2 (1storage2_id) dm-0 IET,VIRTUAL-DISK
size=2.0G features= '1 queue_if_no_path'  hwhandler= '0'  wp=rw
`-+- policy= 'round-robin 0'  prio=1 status=active
|- 11:0:0:1 sdb 8:16 active ready running
`- 12:0:0:1 sdd 8:48 active ready running
iscsi1 (1storage1_id) dm-1 IET,VIRTUAL-DISK
size=2.0G features= '1 queue_if_no_path'  hwhandler= '0'  wp=rw
`-+- policy= 'round-robin 0'  prio=1 status=active
|- 13:0:0:1 sde 8:64 active ready running
`- 14:0:0:1 sdc 8:32 active ready running

5> Install cmirror for mirror support

[root@node4 ~] # yum install cmirror -y
[root@node4 ~] # /etc/rc.d/init.d/cmirrord start && chkconfig cmirrord on

6> On a node where the filesystem is already mounted, add a journal for the new node, then mount and use it on node4. (Note: if the new node cannot see /dev/cvg0/clv0, or lvs does not show the mirror copy, or dmsetup ls --tree does not show the mirror layout, simply reboot that node and it will take effect.)

[root@node4 ~] # mount /dev/cvg0/clv0 /opt/  \\ not enough journals
Too many nodes mounting filesystem, no  free  journals
[root@node2 ~] # gfs2_jadd -j 1 /opt  \\ add one more journal
Filesystem:             /opt
Old Journals           3
New Journals           4
[root@node4 ~] # mount /dev/cvg0/clv0 /opt/
[root@node4 ~] # ll /opt/
total 4
-rw-r--r-- 1 root root 210 Jun  8 00:42  test .txt
[root@node4 ~] #

9. Overall testing

1) Test that the multipath redundancy works

[root@node2 ~] # ifdown eth1 \\ take one NIC down to simulate a single-path failure
[root@node2 ~] # multipath -ll
iscsi2 (1storage2_id) dm-1 IET,VIRTUAL-DISK
size=2.0G features= '1 queue_if_no_path'  hwhandler= '0'  wp=rw
`-+- policy= 'round-robin 0'  prio=1 status=active
|- 4:0:0:1 sde 8:64 failed faulty running  \\ this path has failed
`- 3:0:0:1 sdd 8:48 active ready  running
iscsi1 (1storage1_id) dm-0 IET,VIRTUAL-DISK
size=2.0G features= '1 queue_if_no_path'  hwhandler= '0'  wp=rw
`-+- policy= 'round-robin 0'  prio=1 status=active
|- 6:0:0:1 sdc 8:32 active ready  running
`- 5:0:0:1 sdb 8:16 failed faulty running \\ this path has failed
[root@node2 opt] # mount | grep opt
/dev/mapper/cvg0-clv0  on  /opt  type  gfs2 (rw,relatime,hostdata=jid=0)
[root@node2 opt] # touch test  \\ a single-path failure does not affect normal use of the cluster filesystem
[root@node2 ~] # ifup eth1 \\ bring the NIC back up
[root@node2 opt] # multipath -ll \\ check that the redundant paths have recovered
iscsi2 (1storage2_id) dm-1 IET,VIRTUAL-DISK
size=2.0G features= '1 queue_if_no_path'  hwhandler= '0'  wp=rw
`-+- policy= 'round-robin 0'  prio=1 status=active
|- 4:0:0:1 sde 8:64 active ready running
`- 3:0:0:1 sdd 8:48 active ready running
iscsi1 (1storage1_id) dm-0 IET,VIRTUAL-DISK
size=2.0G features= '1 queue_if_no_path'  hwhandler= '0'  wp=rw
`-+- policy= 'round-robin 0'  prio=1 status=active
|- 6:0:0:1 sdc 8:32 active ready running
`- 5:0:0:1 sdb 8:16 active ready running

2) Test that cluster nodes can read and write the GFS2 filesystem at the same time

[root@manager ~] # for i in {1..3}; do ssh node$i "mount /dev/cvg0/clv0 /opt"; done
[root@node1 ~] # while :; do echo node1 >>/opt/test.txt;sleep 1; done \\ node1 keeps appending "node1" to test.txt
[root@node2 ~] # while :; do echo node2 >>/opt/test.txt;sleep 1; done \\ node2 keeps appending "node2" to test.txt
[root@node3 ~] # tail -f /opt/test.txt  \\ node3 reads the data written concurrently by node1 and node2
node1
node2
node1
node2
………

3) Test whether everything keeps working when one storage unit fails

[root@node1 ~] # lvs \\ the LV while the mirror is healthy
LV   VG   Attr     LSize Pool Origin Data%  Move Log       Copy%  Convert
clv0 cvg0 mwi-a-m- 1.20g                         clv0_mlog 100.00
[root@storage1 ~] # ifdown eth1 && ifdown eth0 \\ take both NICs on storage1 down, equivalent to storage1 going down
[root@node2 opt] # lvs  \\ the LV after storage1 has gone down
/dev/mapper/iscsi1 read  failed after 0 of 4096 at 2150563840: Input /output  error
/dev/mapper/iscsi1 read  failed after 0 of 4096 at 2150637568: Input /output  error
/dev/mapper/iscsi1 read  failed after 0 of 4096 at 0: Input /output  error
/dev/mapper/iscsi1 read  failed after 0 of 4096 at 4096: Input /output  error
/dev/sdb read  failed after 0 of 4096 at 0: Input /output  error
/dev/sdb read  failed after 0 of 4096 at 2150563840: Input /output  error
/dev/sdb read  failed after 0 of 4096 at 2150637568: Input /output  error
/dev/sdb read  failed after 0 of 4096 at 4096: Input /output  error
/dev/sdc read  failed after 0 of 4096 at 0: Input /output  error
/dev/sdc read  failed after 0 of 4096 at 2150563840: Input /output  error
/dev/sdc read  failed after 0 of 4096 at 2150637568: Input /output  error
/dev/sdc read  failed after 0 of 4096 at 4096: Input /output  error
Couldn't  find  device with uuid ziwJmg-Si56-l742-R3Nx-h0rK-KggJ-NdCigs.
LV   VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
clv0 cvg0 -wi-ao-- 1.20g
[root@node2 opt] # cp /var/log/messages .  \\ copy data into the mounted directory; losing one storage unit does not affect reads or writes.
[root@node2 opt] # ll messages
-rw------- 1 root root 1988955 Jun  8 18:08 messages
[root@node2 opt] # dmsetup ls --tree
cvg0-clv0 (253:5)
└─iscsi2 (253:1)
├─ (8:48)
└─ (8:64)
iscsi1 (253:0)
├─ (8:16)
└─ (8:32)
[root@node2 opt] # vgs \\ check the VG state
WARNING: Inconsistent metadata found  for  VG cvg0 - updating to use version 11
Missing device  /dev/mapper/iscsi1  reappeared, updating metadata  for  VG cvg0 to version 11.
VG    #PV #LV #SN Attr   VSize VFree
cvg0   2   1   0 wz--nc 4.00g 2.80g
[root@node2 opt] # lvconvert -m 1 /dev/cvg0/clv0 /dev/mapper/iscsi1 \\ rebuild the mirror
cvg0 /clv0 : Converted: 0.0%
cvg0 /clv0 : Converted: 8.5%
[root@node1 ~] # lvs
LV   VG   Attr     LSize Pool Origin Data%  Move Log       Copy%  Convert
clv0 cvg0 mwi-a-m- 1.20g                         clv0_mlog  77.45
[root@node1 ~] # lvs
LV   VG   Attr     LSize Pool Origin Data%  Move Log       Copy%  Convert
clv0 cvg0 mwi-a-m- 1.20g                         clv0_mlog  82.35
[root@node1 ~] # dmsetup ls --tree
cvg0-clv0 (253:5)
├─cvg0-clv0_mimage_1 (253:4)
│  └─iscsi1 (253:1)
│     ├─ (8:64)
│     └─ (8:48)
├─cvg0-clv0_mimage_0 (253:3)
│  └─iscsi2 (253:0)
│     ├─ (8:16)
│     └─ (8:32)
└─cvg0-clv0_mlog (253:2)
└─iscsi1 (253:1)
├─ (8:64)
└─ (8:48)
[root@node1 ~] # ll /opt/messages  \\ the data is still there
-rw------- 1 root root 1988955 Jun  8 18:08  /opt/messages