CentOS 6.5 x64 RHCS GFS Configuration

Lab environment for this article:  
CentOS 6.5 x64, RHCS, GFS

Configuration notes (this article originally appeared at http://koumm.blog.51cto.com):

1. iSCSI shared storage is provided by Openfiler.  
2. Fencing is implemented with a virtual fence on VMware ESXi5.    
3. The vmware-fence-soap agent on CentOS 6.5 provides the RHCS fence device function.    
4. An RHCS lab environment is built to test GFS2 functionality.

 

Related references for RHCS configuration on RHEL/CentOS 5.x:

IBM x3650 M3 + GFS + IPMI fence production configuration example
http://koumm.blog.51cto.com/703525/1544971

Red Hat 5.8 x64 RHCS Oracle 10gR2 HA configuration in practice (note: vmware-fence-soap)
http://koumm.blog.51cto.com/703525/1161791

 

 

I. Preparing the Base Environment

1. Prepare the network environment

On the node01 and node02 nodes:

# cat /etc/hosts

192.168.0.181  node01.abc.com node01    
192.168.0.182  node02.abc.com node02
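
RHCS expects each node's own hostname to match the name used in the cluster configuration. A minimal check on each node (shown for node01; adjust for node02, and note this is an added sanity check, not part of the original steps):

# hostname
node01.abc.com
# vi /etc/sysconfig/network
HOSTNAME=node01.abc.com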

2. Configure the YUM repository

(1) Mount the installation ISO

# mount /dev/cdrom /mnt

(2) Configure the YUM client

Note: the local installation media is used as the yum repository.

# vi /etc/yum.repos.d/rhel.repo

[rhel]  
name=rhel6    
baseurl=file:///mnt    
enabled=1    
gpgcheck=0
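
Optionally verify that the repository is usable before installing anything (a quick check, not part of the original procedure):

# yum clean all
# yum repolist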

(3) Openfiler iSCSI storage configuration

Configuration omitted; the planned disk layout is as follows:  
qdisk 100MB    
data  150GB

(4) Attach the storage on node01 and node02

# yum install iscsi-initiator-utils  
# chkconfig iscsid on    
# service iscsid start

# iscsiadm -m discovery -t sendtargets -p 192.168.0.187  
192.168.0.187:3260,1 iqn.2006-01.com.openfiler:tsn.dea898a36535

# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.dea898a36535 -p 192.168.0.187 -l
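
After the login, the exported LUNs appear as local block devices; a quick check (the sdb/sdc device names assumed here are the ones used throughout the rest of this article):

# iscsiadm -m session
# fdisk -l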

 

II. Installing the RHCS Packages

1. Install luci and the RHCS packages on node01

1) Install the RHCS packages on node01 (the management node). luci is the management-side package and is installed only on the management node.

yum -y install luci cman modcluster ricci gfs2-utils rgmanager lvm2-cluster

2) Install the RHCS packages on node02

yum -y install cman modcluster ricci gfs2-utils rgmanager lvm2-cluster

3) On node01 and node02, set the ricci user's password on each node

passwd ricci

4) Configure the RHCS services to start at boot

chkconfig ricci on  
chkconfig rgmanager on    
chkconfig cman on    
service ricci start    
service rgmanager start    
service cman start

# The startup output is shown below. Note that cman fails at this point because /etc/cluster/cluster.conf does not exist yet; it is generated later when the cluster is created in luci.

Starting oddjobd:                                          [  OK  ]  
generating SSL certificates...  done    
Generating NSS database...  done    
Starting ricci:                                            [  OK  ]    
Starting Cluster Service Manager:                          [  OK  ]    
Starting cluster:    
   Checking if cluster has been disabled at boot...        [  OK  ]    
   Checking Network Manager...                             [  OK  ]    
   Global setup...                                         [  OK  ]    
   Loading kernel modules...                               [  OK  ]    
   Mounting configfs...                                    [  OK  ]    
   Starting cman... xmlconfig cannot find /etc/cluster/cluster.conf    
                                                           [FAILED]    
Stopping cluster:    
   Leaving fence domain...                                 [  OK  ]    
   Stopping gfs_controld...                                [  OK  ]    
   Stopping dlm_controld...                                [  OK  ]    
   Stopping fenced...                                      [  OK  ]    
   Stopping cman...                                        [  OK  ]    
   Unloading kernel modules...                             [  OK  ]    
   Unmounting configfs...                                  [  OK  ]    
#

2. Enable and start the luci service on the management node node01

1) Start the luci service

chkconfig luci on  
service luci start

Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `node01' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):  
        (none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key  
writing new private key to '/var/lib/luci/certs/host.pem'    
Start luci...                                              [  OK  ]    
Point your web browser to https://node01:8084 (or equivalent) to access luci

2) Access the management URL; in RHCS 6 you log in with the root user and password.

https://node01:8084    
root/111111

 

III. RHCS Cluster Configuration

1. Add the cluster

Log in to the management interface, click Manage Clusters --> Create, and fill in the following:

Cluster Name: gfs

Node Name        Password     Ricci Hostname     Ricci Port  
node01.abc.com   111111       node01.abc.com     11111    
node02.abc.com   111111       node02.abc.com     11111

Select the following option, then submit:  
Use locally installed packages.

Note: this step generates the cluster configuration file /etc/cluster/cluster.conf.
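
Once the cluster has been created, cman should start cleanly on both nodes and the membership can be checked; a quick sanity check (not part of the original walkthrough):

# service cman start
# cman_tool status
# clustat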

2. Add Fence Devices

Note:    
For RHCS to deliver full cluster functionality, fencing must be implemented. Since no physical servers are available in this setup, the virtual fencing of VMware ESXi 5.x is used to provide the fence device function.    
It is only because a usable fence device is available that the RHCS functionality can be fully tested.

(1) Log in to the management interface and click cluster -> Fence Devices  
(2) Click "Add" and select VMware Fencing (SOAP Interface)    
(3) Name: "ESXi_fence"    
(4) IP Address or Hostname: "192.168.0.21" (the ESXi host address)    
(5) Login: "root"    
(6) Password: "111111"

3. Bind a fence device to each node

Add the fence for node one

1) Click the node01.abc.com node, then Add Fence Method; enter node01_fence here;  
2) Add a fence instance and select "ESXi_fence" VMware Fencing (SOAP Interface)    
3) VM NAME "kvm_node1"    
4) VM UUID "564d6fbf-05fb-1dd1-fb66-7ea3c85dcfdf", and check the ssl option

Note: VM NAME is the virtual machine name, and VM UUID is the "uuid.location" value in the virtual machine's .vmx file, in the string format shown below.

# /usr/sbin/fence_vmware_soap -a 192.168.0.21 -z -l root -p 111111 -n kvm_node2 -o list  
kvm_node2,564d4c42-e7fd-db62-3878-57f77df2475e    
kvm_node1,564d6fbf-05fb-1dd1-fb66-7ea3c85dcfdf

Add the fence for node two

1) Click the node02.abc.com node, then Add Fence Method; enter node02_fence here;  
2) Add a fence instance and select "ESXi_fence" VMware Fencing (SOAP Interface)    
3) VM NAME "kvm_node2"    
4) VM UUID "564d4c42-e7fd-db62-3878-57f77df2475e", and check the ssl option

# Example of manually testing the fence function:

# /usr/sbin/fence_vmware_soap -a 192.168.0.21 -z -l root -p 111111 -n kvm_node2 -o reboot    
Status: ON

Options:  
-o : accepts list, status, reboot, and other actions
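
A non-disruptive way to exercise the fence agent is to query a node's power state instead of rebooting it, using the same ESXi address and credentials as above:

# /usr/sbin/fence_vmware_soap -a 192.168.0.21 -z -l root -p 111111 -n kvm_node1 -o status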

4. Add the Failover Domain

Name "gfs_failover"  
Prioritized    
Restricted    
node01.abc.com    1    
node02.abc.com    1

5. Configure the GFS service

(1) GFS service configuration

On both node01 and node02, enable CLVM's integrated cluster locking service

lvmconf --enable-cluster   
chkconfig clvmd on

service clvmd start    
Activating VG(s):   No volume groups found      [  OK  ]
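
lvmconf --enable-cluster simply switches LVM to cluster-wide locking; the effect can be confirmed in /etc/lvm/lvm.conf, where locking_type should now be 3 (clustered locking via clvmd):

# grep locking_type /etc/lvm/lvm.conf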

(2) On either node, partition the disk to create sdc1; it will then be formatted as GFS2.

On node01:

# pvcreate /dev/sdc1  
  Physical volume "/dev/sdc1" successfully created

# pvs  
  PV         VG       Fmt  Attr PSize   PFree 
  /dev/sda2  vg_node01 lvm2 a--   39.51g      0    
  /dev/sdc1           lvm2 a--  156.25g 156.25g

# vgcreate gfsvg /dev/sdc1  
  Clustered volume group "gfsvg" successfully created

# lvcreate -l +100%FREE -n data gfsvg  
  Logical volume "data" created
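
Optionally verify the clustered volume group and the new logical volume before continuing (just a check, not part of the original steps):

# vgs
# lvs gfsvg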

On node02:  
# /etc/init.d/clvmd start

(3) Format the GFS file system

On node01:

[root@node01 ~]# mkfs.gfs2 -p lock_dlm -t gfs:gfs2 -j 2 /dev/gfsvg/data  
This will destroy any data on /dev/gfsvg/data.    
It appears to contain: symbolic link to `../dm-2'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/gfsvg/data  
Blocksize:                 4096    
Device Size                156.25 GB (40958976 blocks)    
Filesystem Size:           156.25 GB (40958975 blocks)    
Journals:                  2    
Resource Groups:           625    
Locking Protocol:          "lock_dlm"    
Lock Table:                "gfs:gfs2"    
UUID:                      e28655c6-29e6-b813-138f-0b22d3b15321

Notes:    
In gfs:gfs2, gfs is the cluster name and gfs2 is a user-defined name, essentially a label.    
-j specifies the number of hosts (journals) that will mount this file system; if not specified it defaults to 1, i.e. only the management node.    
This lab has two nodes (see the journal note below).
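
If a third node were added later, an additional journal could be added to the mounted file system; a sketch using gfs2_jadd, run on a node where /vmdata is already mounted:

# gfs2_jadd -j 1 /vmdata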

6. Mount the GFS file system

Create the GFS mount point on node01 and node02

# mkdir /vmdata

(1) Mount manually on node01 and node02 for testing; once mounted, create a file to verify cluster file system behavior (see the sketch below).  
# mount.gfs2 /dev/gfsvg/data /vmdata
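
A minimal cross-node check, assuming both nodes have /vmdata mounted: a file created on one node should be visible immediately on the other.

On node01:
# touch /vmdata/test_from_node01

On node02:
# ls -l /vmdata/test_from_node01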

(2) Configure automatic mounting at boot (see the note after the fstab entry)  
# vi /etc/fstab    
/dev/gfsvg/data   /vmdata gfs2 defaults 0 0
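
On RHEL/CentOS 6, GFS2 entries in /etc/fstab are normally mounted by the gfs2 init script once the cluster stack is up, so that service should be enabled as well (this assumes the stock init scripts shipped with gfs2-utils; adjust if your setup differs):

# chkconfig gfs2 on
# service gfs2 start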

[root@node01 vmdata]# df -h

Filesystem                    Size  Used Avail Use% Mounted on  
/dev/mapper/vg_node01-lv_root   36G  3.8G   30G  12% /    
tmpfs                         1.9G   32M  1.9G   2% /dev/shm    
/dev/sda1                     485M   39M  421M   9% /boot    
/dev/gfsvg/data               157G  259M  156G   1% /vmdata

7. Configure the quorum disk

Note:  
# The quorum disk is a shared disk and does not need to be large; in this example it is created on the ~100MB /dev/sdb1.

[root@node01 ~]# fdisk -l

Disk /dev/sdb: 134 MB, 134217728 bytes  
5 heads, 52 sectors/track, 1008 cylinders    
Units = cylinders of 260 * 512 = 133120 bytes    
Sector size (logical/physical): 512 bytes / 512 bytes    
I/O size (minimum/optimal): 512 bytes / 512 bytes    
Disk identifier: 0x80cdfae9

   Device Boot      Start         End      Blocks   Id  System  
/dev/sdb1               1        1008      131014   83  Linux

(1) Create the quorum disk

[root@node01 ~]# mkqdisk -c /dev/sdb1 -l myqdisk  
mkqdisk v0.6.0    
Writing new quorum disk label 'myqdisk' to /dev/sdb1.    
WARNING: About to destroy all data on /dev/sdb1; proceed [N/y] ? y    
Initializing status block for node 1...    
Initializing status block for node 2...    
Initializing status block for node 3...    
Initializing status block for node 4...    
Initializing status block for node 5...    
Initializing status block for node 6...    
Initializing status block for node 7...    
Initializing status block for node 8...    
Initializing status block for node 9...    
Initializing status block for node 10...    
Initializing status block for node 11...    
Initializing status block for node 12...    
Initializing status block for node 13...    
Initializing status block for node 14...    
Initializing status block for node 15...    
Initializing status block for node 16...

(2) View the quorum disk information

[root@node01 ~]# mkqdisk -L  
mkqdisk v3.0.12.1

/dev/block/8:17:  
/dev/disk/by-id/scsi-14f504e46494c455242553273306c2d4b72697a2d544e6b4f-part1:    
/dev/disk/by-path/ip-192.168.0.187:3260-iscsi-iqn.2006-01.com.openfiler:tsn.dea898a36535-lun-0-part1:    
/dev/sdb1:    
        Magic:                eb7a62c2    
        Label:                myqdisk    
        Created:              Thu Jan  1 23:42:00 2015    
        Host:                 node02.abc.com    
        Kernel Sector Size:   512    
        Recorded Sector Size: 512

(3) Configure the quorum disk (qdisk)

# In the management interface, go to Manage Clusters --> gfs --> Configure --> QDisk

Device          : /dev/sdb1

Path to program : ping -c3 -t2 192.168.0.253  
Interval        : 3    
Score           : 2    
TKO             : 10    
Minimum Score   : 1

# Click Apply

(4) Start the qdiskd service

chkconfig qdiskd on  
service qdiskd start    
clustat -l

[root@node01 ~]# clustat -l

Cluster Status for gfs @ Thu Jan  1 23:50:53 2015  
Member Status: Quorate

Member Name                                                     ID   Status  
------ ----                                                     ---- ------    
node01.abc.com                                                      1 Online, Local    
node02.abc.com                                                      2 Online    
/dev/sdb1                                                           0 Online, Quorum Disk

[root@node01 ~]#
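
With the quorum disk online, the vote count can also be double-checked; cman_tool status should show the quorum device contributing to the expected votes (3 in this two-node plus qdisk setup, matching expected_votes in cluster.conf):

# cman_tool status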

 

8. Test GFS

1) On node02, trigger a kernel crash to simulate a node failure:

# echo c > /proc/sysrq-trigger

2) On node01, watch the log to observe the fence action:

# tail -f /var/log/messages    
Jan  2 01:37:47 node01 ricci: startup succeeded    
Jan  2 01:37:47 node01 rgmanager[2196]: I am node #1    
Jan  2 01:37:47 node01 rgmanager[2196]: Resource Group Manager Starting    
Jan  2 01:37:47 node01 rgmanager[2196]: Loading Service Data    
Jan  2 01:37:49 node01 rgmanager[2196]: Initializing Services    
Jan  2 01:37:49 node01 rgmanager[2196]: Services Initialized    
Jan  2 01:37:49 node01 rgmanager[2196]: State change: Local UP    
Jan  2 01:37:49 node01 rgmanager[2196]: State change: node02.abc.com UP    
Jan  2 01:37:52 node01 polkitd[3125]: started daemon version 0.96 using authority implementation `local' version `0.96'    
Jan  2 01:37:52 node01 rtkit-daemon[3131]: Sucessfully made thread 3129 of process 3129 (/usr/bin/pulseaudio) owned by '42' high priority at nice level -11.    
Jan  2 01:40:52 node01 qdiskd[1430]: Assuming master role    
Jan  2 01:40:53 node01 qdiskd[1430]: Writing eviction notice for node 2    
Jan  2 01:40:54 node01 qdiskd[1430]: Node 2 evicted    
Jan  2 01:40:55 node01 corosync[1378]:   [TOTEM ] A processor failed, forming new configuration.    
Jan  2 01:40:57 node01 corosync[1378]:   [QUORUM] Members[1]: 1    
Jan  2 01:40:57 node01 corosync[1378]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.    
Jan  2 01:40:57 node01 kernel: dlm: closing connection to node 2    
Jan  2 01:40:57 node01 corosync[1378]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.0.181) ; members(old:2 left:1)    
Jan  2 01:40:57 node01 corosync[1378]:   [MAIN  ] Completed service synchronization, ready to provide service.    
Jan  2 01:40:57 node01 kernel: GFS2: fsid=gfs:gfs2.1: jid=0: Trying to acquire journal lock...    
Jan  2 01:40:57 node01 fenced[1522]: fencing node node02.abc.com    
Jan  2 01:40:57 node01 rgmanager[2196]: State change: node02.abc.com DOWN    
Jan  2 01:41:11 node01 fenced[1522]: fence node02.abc.com success    
Jan  2 01:41:12 node01 kernel: GFS2: fsid=gfs:gfs2.1: jid=0: Looking at journal...    
Jan  2 01:41:12 node01 kernel: GFS2: fsid=gfs:gfs2.1: jid=0: Done    
Jan  2 01:41:30 node01 corosync[1378]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.    
Jan  2 01:41:30 node01 corosync[1378]:   [QUORUM] Members[2]: 1 2    
Jan  2 01:41:30 node01 corosync[1378]:   [QUORUM] Members[2]: 1 2    
Jan  2 01:41:30 node01 corosync[1378]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.0.181) ; members(old:1 left:0)    
Jan  2 01:41:30 node01 corosync[1378]:   [MAIN  ] Completed service synchronization, ready to provide service.    
Jan  2 01:41:38 node01 qdiskd[1430]: Node 2 shutdown    
Jan  2 01:41:50 node01 kernel: dlm: got connection from 2    
Jan  2 01:41:59 node01 rgmanager[2196]: State change: node02.abc.com UP

Note: the fence function works correctly, and the GFS file system remained available throughout.

9. Configuration file

cat /etc/cluster/cluster.conf

<?xml version="1.0"?>  
<cluster config_version="9" name="gfs">    
        <clusternodes>    
                <clusternode name="node01.abc.com" nodeid="1">    
                        <fence>    
                                <method name="node01_fence">    
                                        <device name="ESXi_fence" port="kvm_node1" ssl="on" uuid="564d6fbf-05fb-1dd1-fb66-7ea3c85dcfdf"/>    
                                </method>    
                        </fence>    
                </clusternode>    
                <clusternode name="node02.abc.com" nodeid="2">    
                        <fence>    
                                <method name="node02_fence">    
                                        <device name="ESXi_fence" port="kvm_node2" ssl="on" uuid="564d4c42-e7fd-db62-3878-57f77df2475e"/>    
                                </method>    
                        </fence>    
                </clusternode>    
        </clusternodes>    
        <cman expected_votes="3"/>    
        <fencedevices>    
                <fencedevice agent="fence_vmware_soap" ipaddr="192.168.0.21" login="root" name="ESXi_fence" passwd="111111"/>    
        </fencedevices>    
        <rm>    
                <failoverdomains>    
                        <failoverdomain name="gfs_failover" nofailback="1" ordered="1">    
                                <failoverdomainnode name="node01.abc.com" priority="1"/>    
                                <failoverdomainnode name="node02.abc.com" priority="1"/>    
                        </failoverdomain>    
                </failoverdomains>    
        </rm>    
        <quorumd device="/dev/sdb1" min_score="1">    
                <heuristic interval="3" program="ping -c3 -t2 192.168.0.253" score="2" tko="10"/>    
        </quorumd>    
</cluster>
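
If cluster.conf is ever edited by hand, it can be validated and pushed to the other node with the standard RHCS tools (remember to increment config_version first):

# ccs_config_validate
# cman_tool version -r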

 

Configuration summary:

1. With FC storage, the network-related problems that can affect an IP SAN would not occur.

2. When configuring an IP SAN, it is best to use a dedicated NIC and network segment. After the initial installation succeeded, the NIC was changed to a bridged interface while setting up a KVM virtualization environment; the network still worked, but GFS would no longer start. A sentence found on a foreign site finally explained the cause, and after switching the IP SAN connection back to a dedicated, fixed interface, the GFS environment returned to normal.

https://www.mail-archive.com/linux-cluster@redhat.com/msg03800.html

You'll need to check the routing of the interfaces. The most common cause of this sort of error is having two interfaces on the same physical (or internal) network.

3. In a real environment, a physical fence device can be used in place of fence_vmware_soap to provide fencing.
