Implementing Shared Storage with RHCS + iSCSI + CLVM + GFS2

Environment:

node1 (10.11.8.187): target node --> install: corosync, scsi-target-utils

node2 (10.11.8.186), node3 (10.11.8.200): initiator nodes --> install: corosync, iscsi-initiator-utils, gfs2-utils, lvm2-cluster

The installation steps are not repeated here; refer to the previous articles.
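
For reference, a minimal sketch of the install commands (assuming yum and, on RHEL, the Resilient Storage add-on repository are available):

# on node1 (the target)
yum install -y corosync scsi-target-utils

# on node2 and node3 (the initiators)
yum install -y corosync iscsi-initiator-utils gfs2-utils lvm2-cluster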

Target configuration:

[root@node1 ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2016.com.shiina:storage.disk1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.2016.com.shiina:node2
            Connection: 0
                IP Address: 10.11.8.186
        I_T nexus: 2
            Initiator: iqn.1994-05.com.redhat:e6175c7b6952
            Connection: 0
                IP Address: 10.11.8.200
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 8590 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags:
    Account information:
    ACL information:
        10.11.0.0/16
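
For reference, a target with the LUN and ACL shown above could be built roughly like this with tgtadm (a sketch only; these runtime settings are lost on reboot unless they are also written to /etc/tgt/targets.conf):

# create target 1 with the IQN shown above
tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2016.com.shiina:storage.disk1
# export /dev/sdb as LUN 1
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/sdb
# allow initiators from the 10.11.0.0/16 network
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address 10.11.0.0/16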

Discover and log in to the target on node2 and node3:

[root@node2 ~]# iscsiadm -m discovery -t sendtargets -p 10.11.8.187
10.11.8.187:3260,1 iqn.2016.com.shiina:storage.disk1
[root@node2 ~]# iscsiadm -m node -T iqn.2016.com.shiina:storage.disk1 -p 10.11.8.187 -l
Logging in to [iface: default, target: iqn.2016.com.shiina:storage.disk1, portal: 10.11.8.187,3260] (multiple)
Login to [iface: default, target: iqn.2016.com.shiina:storage.disk1, portal: 10.11.8.187,3260] successful.
[root@node2 ~]# fdisk -l
...
...
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1044     8385898+  83  Linux
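
Note in the target's I_T nexus list that node2 logs in with a customized IQN while node3 still uses the distribution default. The initiator IQN is set in /etc/iscsi/initiatorname.iscsi (a sketch of the assumed content on node2; restart the iscsi service after editing it, then repeat the same discovery and login on node3):

# /etc/iscsi/initiatorname.iscsi on node2
InitiatorName=iqn.2016.com.shiina:node2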

Configure RHCS and add a cluster; see the previous article for the detailed steps.
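
The cluster name chosen here must match the clustername part of the GFS2 lock table created below (cluster:lvmstor, i.e. the cluster is named "cluster"). A minimal /etc/cluster/cluster.conf for the two initiator nodes might look roughly like this (a sketch only; node names must resolve to the cluster interfaces, and a real deployment also needs fencing configured):

<?xml version="1.0"?>
<cluster name="cluster" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node2" nodeid="1"/>
    <clusternode name="node3" nodeid="2"/>
  </clusternodes>
</cluster>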

Configure CLVM:

lvm2-cluster uses the regular lvm tools, but to make lvm cluster-aware the lvm configuration must be changed:

[root@node2 ~]# lvmconf --enable-cluster

Alternatively, edit /etc/lvm/lvm.conf directly and set locking_type = 3.
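
To confirm the change took effect, check the locking type in lvm.conf, for example:

grep locking_type /etc/lvm/lvm.conf    # the active (uncommented) line should now read locking_type = 3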

1. Start the clvmd service:

[root@node2 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "vol0" now active
  clvmd not running on node node3
                                                           [  OK  ]
[root@node3 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "vol0" now active
                                                           [  OK  ]
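
If the cluster stack from the previous article is the usual RHEL 6 RHCS layering (cman on top of corosync), the relevant services can also be enabled at boot on node2 and node3, roughly (a sketch):

chkconfig cman on
chkconfig clvmd on
chkconfig gfs2 on    # the gfs2 init script mounts GFS2 entries from /etc/fstab at boot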

2. Create the clustered logical volume:

[root@node2 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@node2 ~]# vgcreate cluster_vg /dev/sdb
  Clustered volume group "cluster_vg" successfully created
[root@node2 ~]# lvcreate -L 2G -n clusterlv cluster_vg
  Logical volume "clusterlv" created.
[root@node2 ~]# lvs
  LV        VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  clusterlv cluster_vg -wi-a-----  2.00g                                                    
  root      vol0       -wi-ao----  4.88g                                                    
  usr       vol0       -wi-ao---- 13.92g
[root@node3 ~]# lvs
  LV        VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  clusterlv cluster_vg -wi-a-----  2.00g                                                    
  root      vol0       -wi-ao----  4.88g                                                    
  usr       vol0       -wi-ao---- 13.92g

Create the GFS2 filesystem:

mkfs.gfs2
  -j #: number of journals; the filesystem can be mounted simultaneously by as many nodes as it has journals
  -J #: size of each journal, 128MB by default
  -p {lock_dlm|lock_nolock}: locking protocol to use
  -t <name>: lock table name in the format clustername:locktablename; clustername is the name of the cluster this node belongs to, and locktablename must be unique within that cluster
      (it identifies which locks belong to which device when the cluster has more than one shared storage)
gfs2_tool: view and tune filesystem parameters
gfs2_jadd -j #: add journals (see the sketch at the end of this article)
gfs2_grow: grow a gfs2 filesystem (see the sketch at the end of this article)
[root@node2 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t cluster:lvmstor /dev/cluster_vg/clusterlv
This will destroy any data on /dev/cluster_vg/clusterlv.
It appears to contain: symbolic link to `../dm-2'
 
Are you sure you want to proceed? [y/n] y
 
Device:                    /dev/cluster_vg/clusterlv
Blocksize:                 4096
Device Size                2.00 GB (524288 blocks)
Filesystem Size:           2.00 GB (524288 blocks)
Journals:                  2
Resource Groups:           8
Locking Protocol:          "lock_dlm"
Lock Table:                "cluster:lvmstor"
UUID:                      a3462622-2403-f3e4-b2ae-136b9274fa1d

Mount and test the filesystem:

[root@node2 ~]# mount -t gfs2 /dev/cluster_vg/clusterlv /mnt
[root@node2 ~]# touch /mnt/1.txt
[root@node2 ~]# ls /mnt/
1.txt
[root@node3 ~]# mount -t gfs2 /dev/cluster_vg/clusterlv /mnt
[root@node3 ~]# ls /mnt/
1.txt
 
[root@node3 ~]# touch /mnt/2.txt #create the file while both node2 and node3 have the filesystem mounted
[root@node2 ~]# ls /mnt/
1.txt  2.txt #the new file is visible on node2 as expected
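
If a third node later needs to mount the filesystem, or the logical volume is extended, gfs2_jadd and gfs2_grow operate on the mounted filesystem, roughly as follows (a sketch):

# add one more journal so an additional node can mount
gfs2_jadd -j 1 /mnt
# extend the LV, then grow the filesystem to fill the new space
lvextend -L +1G /dev/cluster_vg/clusterlv
gfs2_grow /mnt

For a persistent mount, an /etc/fstab entry such as the following could be used; _netdev delays the mount until networking (and thus the iSCSI session) is up:

/dev/cluster_vg/clusterlv  /mnt  gfs2  defaults,_netdev  0 0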