1. Introduction to GFS2
GFS2 is an advanced cluster file system based on GFS. It keeps the cluster file system metadata synchronized across all hosts and manages file locks, and it requires the Red Hat Cluster Suite. GFS2 can also grow, i.e. have its capacity adjusted, but only when the underlying disk supports dynamic resizing, which is exactly what the CLVM setup in this article provides.
Lab environment:
192.168.30.119 tgtd.luojianlong.com OS: CentOS 6.4 x86_64, management server, iscsi-target server
192.168.30.115 node1.luojianlong.com OS: CentOS 6.4 x86_64, iscsi-initiator server
192.168.30.116 node2.luojianlong.com OS: CentOS 6.4 x86_64, iscsi-initiator server
192.168.30.117 node3.luojianlong.com OS: CentOS 6.4 x86_64, iscsi-initiator server
How it works:
node1, node2 and node3 each log in to and mount the storage device exported by the tgtd server through the iSCSI initiator, and RHCS is used to build a highly available GFS2 cluster file system so that all three nodes can read from and write to the storage device at the same time.
The topology diagram is shown below:
2. Preparation
Configure the hosts file on each of the four servers so that the nodes can be resolved by name, set up passwordless SSH key login from the management node to each cluster node, stop NetworkManager, and disable it at boot.
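ssh-copy-id relies on an RSA key pair already existing on the management node; if one has not been generated yet, it can be created first (a minimal sketch; the empty passphrase is a convenience choice, not a requirement):

[root@tgtd ~]# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa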
[root@tgtd ~]# cat /etc/hosts
192.168.30.115 node1.luojianlong.com node1
192.168.30.116 node2.luojianlong.com node2
192.168.30.117 node3.luojianlong.com node3
[root@tgtd ~]# ssh-copy-id -i node1
The authenticity of host 'node1 (192.168.30.115)' can't be established.
RSA key fingerprint is 66:2e:28:75:ba:34:5e:b1:40:66:af:ba:37:80:20:3f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.30.115' (RSA) to the list of known hosts.
root@node1's password:
Now try logging into the machine, with "ssh 'node1'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

[root@tgtd ~]# ssh-copy-id -i node2
The authenticity of host 'node2 (192.168.30.116)' can't be established.
RSA key fingerprint is 66:2e:28:75:ba:34:5e:b1:40:66:af:ba:37:80:20:3f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.30.116' (RSA) to the list of known hosts.
root@node2's password:
Now try logging into the machine, with "ssh 'node2'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

[root@tgtd ~]# ssh-copy-id -i node3
The authenticity of host 'node3 (192.168.30.117)' can't be established.
RSA key fingerprint is 66:2e:28:75:ba:34:5e:b1:40:66:af:ba:37:80:20:3f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3,192.168.30.117' (RSA) to the list of known hosts.
root@node3's password:
Now try logging into the machine, with "ssh 'node3'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

[root@tgtd ~]# for I in {1..3}; do scp /etc/hosts node$I:/etc/; done
hosts                                        100%  129     0.1KB/s   00:00
hosts                                        100%  129     0.1KB/s   00:00
hosts
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'service NetworkManager stop'; done
Stopping NetworkManager daemon:                            [  OK  ]
Stopping NetworkManager daemon:                            [  OK  ]
Stopping NetworkManager daemon:                            [  OK  ]
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'chkconfig NetworkManager off'; done
Stop the iptables service and put SELinux in permissive mode on each node.
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'service iptables stop'; done
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'setenforce 0'; done
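Note that service iptables stop and setenforce 0 only last until the next reboot. To make both changes persistent, something along these lines is usually added as well (shown as a sketch; editing /etc/selinux/config by hand works just as well as the sed one-liner):

[root@tgtd ~]# for I in {1..3}; do ssh node$I 'chkconfig iptables off'; done
[root@tgtd ~]# for I in {1..3}; do ssh node$I "sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config"; done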
Cluster installation
The core components of RHCS are cman and rgmanager: cman is the openais-based cluster infrastructure layer, and rgmanager is the resource manager. Cluster resources in RHCS are configured by editing the main configuration file /etc/cluster/cluster.conf, and the editing only needs to be done on one node, while cman and rgmanager must be installed on every node in the cluster. Here the packages are installed on all of the cluster nodes.
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'yum -y install cman rgmanager'; done
Creating the cluster configuration file
The RHCS configuration file is /etc/cluster/cluster.conf; every node must have an identical copy. It does not exist by default, so it has to be created first, which the ccs_tool command can do. In addition, each cluster identifies itself by a cluster ID, so a cluster name has to be chosen when the configuration file is created; here it is assumed to be tcluster. This command needs to be run on a node of the cluster.
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'ccs_tool create tcluster'; done
View the content of the generated configuration file:
[root@node1 cluster]# cat cluster.conf
<?xml version="1.0"?>
<cluster name="tcluster" config_version="1">

  <clusternodes>
  </clusternodes>

  <fencedevices>
  </fencedevices>

  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>
# The ccs_tool command is used to update the CCS configuration file online
Adding nodes to the cluster
An RHCS cluster can only be started once the nodes and the related fence devices have been configured, so the nodes need to be added to the cluster configuration file first. When a node is added, at least a node id must be configured for it (the id of each node must be unique); the addnode subcommand of ccs_tool does this. The three cluster nodes planned above can be added to the cluster with the following commands.
[root@node1 ~]# ccs_tool addnode -n 1 node1.luojianlong.com
[root@node1 ~]# ccs_tool addnode -n 2 node2.luojianlong.com
[root@node1 ~]# ccs_tool addnode -n 3 node3.luojianlong.com
View the nodes that have been added and their related information:
[root@node1 ~]# ccs_tool lsnode

Cluster name: tcluster, config_version: 4

Nodename                        Votes Nodeid Fencetype
node1.luojianlong.com              1    1
node2.luojianlong.com              1    2
node3.luojianlong.com              1    3
Copy the configuration file to the other two nodes:
[root@node1 ~]# scp /etc/cluster/cluster.conf node2:/etc/cluster/
[root@node1 ~]# scp /etc/cluster/cluster.conf node3:/etc/cluster/
Starting the cluster
An RHCS cluster only enters its normal working state after all of its nodes have started, so the cman service needs to be started on all cluster nodes together. This is done by running the following command against each node:
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'service cman start'; done
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel config...                             [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel config...                             [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel config...                             [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'service rgmanager start'; done
Starting Cluster Service Manager:                          [  OK  ]
Starting Cluster Service Manager:                          [  OK  ]
Starting Cluster Service Manager:                          [  OK  ]
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'chkconfig rgmanager on'; done
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'chkconfig cman on'; done
View the cluster status information:
[root@node1 ~]# clustat
Cluster Status for tcluster @ Tue Apr 1 16:45:23 2014
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node1.luojianlong.com                       1 Online, Local
 node2.luojianlong.com                       2 Online
 node3.luojianlong.com                       3 Online
The status subcommand of cman_tool displays cluster information from the perspective of the current node:
[root@node1 ~]# cman_tool status
Version: 6.2.0
Config Version: 4
Cluster Name: tcluster
Cluster Id: 10646
Cluster Member: Yes
Cluster Generation: 28
Membership state: Cluster-Member
Nodes: 3
Expected votes: 3
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 8
Flags:
Ports Bound: 0 177
Node name: node1.luojianlong.com
Node ID: 1
Multicast addresses: 239.192.41.191
Node addresses: 192.168.30.115
The nodes subcommand of cman_tool lists information about every node in the cluster:
[root@node1 ~]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M     24   2014-04-01 16:37:57  node1.luojianlong.com
   2   M     24   2014-04-01 16:37:57  node2.luojianlong.com
   3   M     28   2014-04-01 16:38:11  node3.luojianlong.com
The services subcommand of cman_tool lists information about each service in the cluster:
[root@node1 ~]# cman_tool services
fence domain
member count  3
victim count  0
victim now    0
master nodeid 1
wait state    none
members       1 2 3

dlm lockspaces
name          rgmanager
id            0x5231f3eb
flags         0x00000000
change        member 3 joined 1 remove 0 failed 0 seq 3,3
members       1 2 3
Install scsi-target-utils on the tgtd server:
[root@tgtd ~]# yum -y install scsi-target-utils
[root@tgtd ~]# cp /etc/tgt/targets.conf /etc/tgt/targets.conf.bak
Edit the target configuration file and define the target:
[root@tgtd ~]# vi /etc/tgt/targets.conf
# Add the following content
<target iqn.2014-04.com.luojianlong:target1>
    backing-store /dev/sdb
    initiator-address 192.168.30.0/24
</target>
[root@tgtd ~]# service tgtd restart
backing-store: specifies the backend device to be shared
initiator-address: the network address(es) of clients authorized to access the target
incominguser: sets the username and password required to log in (CHAP authentication)
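The target defined above does not use incominguser. As a hedged illustration only (the username and password below are made up), a targets.conf entry with CHAP authentication could look like this; the initiators would then need matching CHAP credentials configured in /etc/iscsi/iscsid.conf:

<target iqn.2014-04.com.luojianlong:target1>
    backing-store /dev/sdb
    initiator-address 192.168.30.0/24
    # hypothetical credentials, replace with your own
    incominguser iscsiuser iscsipass
</target>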
Start the target and check it:
[root@tgtd ~]# tgtadm -L iscsi -m target -o show
Target 1: iqn.2014-04.com.luojianlong:target1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 10737 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags:
    Account information:
    ACL information:
        192.168.30.0/24
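To have the target exported again automatically after the storage server reboots, it is probably also worth enabling tgtd at boot (not part of the original steps):

[root@tgtd ~]# chkconfig tgtd on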
Configure the three nodes to log in to the storage device on the tgtd server with the iSCSI initiator:
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'iscsiadm -m discovery -t st -p 192.168.30.119'; done
192.168.30.119:3260,1 iqn.2014-04.com.luojianlong:target1
192.168.30.119:3260,1 iqn.2014-04.com.luojianlong:target1
192.168.30.119:3260,1 iqn.2014-04.com.luojianlong:target1
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'iscsiadm -m node -T iqn.2014-04.com.luojianlong:target1 -p 192.168.30.119:3260 -l'; done
Logging in to [iface: default, target: iqn.2014-04.com.luojianlong:target1, portal: 192.168.30.119,3260] (multiple)
Login to [iface: default, target: iqn.2014-04.com.luojianlong:target1, portal: 192.168.30.119,3260] successful.
Logging in to [iface: default, target: iqn.2014-04.com.luojianlong:target1, portal: 192.168.30.119,3260] (multiple)
Login to [iface: default, target: iqn.2014-04.com.luojianlong:target1, portal: 192.168.30.119,3260] successful.
Logging in to [iface: default, target: iqn.2014-04.com.luojianlong:target1, portal: 192.168.30.119,3260] (multiple)
Login to [iface: default, target: iqn.2014-04.com.luojianlong:target1, portal: 192.168.30.119,3260] successful.
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'fdisk -l /dev/sdb'; done

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
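On CentOS 6 the iscsi service normally re-logs in to recorded targets at boot; if automatic login should be made explicit, a sketch along these lines (using the same target and portal as above) should work:

[root@tgtd ~]# for I in {1..3}; do ssh node$I 'iscsiadm -m node -T iqn.2014-04.com.luojianlong:target1 -p 192.168.30.119:3260 --op update -n node.startup -v automatic'; done
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'chkconfig iscsi on'; done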
Create partitions on the shared disk from one of the nodes:
[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x7ac42a91.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-10240, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-10240, default 10240): +5G

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (5122-10240, default 5122):
Using default value 5122
Last cylinder, +cylinders or +size{K,M,G} (5122-10240, default 10240): +5G
Value out of range.
Last cylinder, +cylinders or +size{K,M,G} (5122-10240, default 10240): +4G

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7ac42a91

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        5121     5243888   83  Linux
/dev/sdb2            5122        9218     4195328   83  Linux
Configuring the GFS2 file system
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'yum -y install gfs2-utils'; done
Use the gfs2 tools to create a GFS2 cluster file system on the /dev/sdb1 partition created earlier; the following command can be used:
[root@node1 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t tcluster:sdb1 /dev/sdb1
This will destroy any data on /dev/sdb1.
It appears to contain: Linux GFS2 Filesystem (blocksize 4096, lockproto lock_dlm)

Are you sure you want to proceed? [y/n] y

Device:                    /dev/sdb1
Blocksize:                 4096
Device Size                5.00 GB (1310972 blocks)
Filesystem Size:           5.00 GB (1310970 blocks)
Journals:                  3
Resource Groups:           21
Locking Protocol:          "lock_dlm"
Lock Table:                "tcluster:sdb1"
UUID:                      478dac97-c25f-5bc8-a719-0d385fea23e3
mkfs.gfs2 is the GFS2 file system creation tool. Its commonly used options are listed below; an illustrative example follows the list.
-b BlockSize: specifies the file system block size; the minimum is 512 and the default is 4096;
-J MegaBytes: specifies the size of the GFS2 journal area; the default is 128MB and the minimum is 8MB;
-j Number: specifies how many journals are created with the GFS2 file system; in general one journal is needed for each client that will mount it;
-p LockProtoName: the name of the locking protocol to use, usually either lock_dlm or lock_nolock;
-t LockTableName: the lock table name. A cluster file system needs a lock table name so that cluster nodes know which cluster file system a file lock belongs to. The lock table name has the form clustername:fsname, where clustername must match the cluster name in the cluster configuration file, so only nodes within that cluster can access this cluster file system; in addition, each file system name must be unique within the same cluster.
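As a purely illustrative combination of these options (the device name, journal count, journal size and file system name here are hypothetical and not part of this setup):

mkfs.gfs2 -b 4096 -J 64 -j 4 -p lock_dlm -t tcluster:data1 /dev/clustervg/somelv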
After formatting, reboot node1, node2 and node3 so that every node re-reads the new partition table; otherwise the GFS2 partition just created cannot be mounted.
[root@node1 ~]# mount /dev/sdb1 /mnt/
[root@node1 ~]# cp /etc/fstab /mnt/
# Also mount /dev/sdb1 on node2 and node3
[root@node2 ~]# mount /dev/sdb1 /mnt/
[root@node3 ~]# mount /dev/sdb1 /mnt/
# Write data into the mounted directory on node1 and watch the mounted directory on node2 and node3
[root@node2 mnt]# tail -f fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root /                       ext4    defaults        1 1
UUID=db4bad23-32a8-44a6-bdee-1585ce9e13ac /boot      ext4    defaults        1 2
/dev/mapper/VolGroup-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
[root@node3 mnt]# tail -f fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root /                       ext4    defaults        1 1
UUID=db4bad23-32a8-44a6-bdee-1585ce9e13ac /boot      ext4    defaults        1 2
/dev/mapper/VolGroup-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
[root@node1 mnt]# echo "hello" >> fstab
[root@node2 mnt]# tail -f fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root /                       ext4    defaults        1 1
UUID=db4bad23-32a8-44a6-bdee-1585ce9e13ac /boot      ext4    defaults        1 2
/dev/mapper/VolGroup-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
hello
[root@node3 mnt]# tail -f fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root /                       ext4    defaults        1 1
UUID=db4bad23-32a8-44a6-bdee-1585ce9e13ac /boot      ext4    defaults        1 2
/dev/mapper/VolGroup-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
hello
The output above shows that node2 and node3 both see the data change.
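The mounts above are manual. If the GFS2 file system should come back automatically after a reboot, one possible approach (a sketch; the mount options shown are a common but not mandatory choice) is an fstab entry on each node plus enabling the gfs2 init script, which mounts gfs2 entries from /etc/fstab at boot:

[root@node1 ~]# echo '/dev/sdb1  /mnt  gfs2  defaults,noatime  0 0' >> /etc/fstab
[root@node1 ~]# chkconfig gfs2 on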
3. Configuring CLVM (clustered logical volumes)
Install lvm2-cluster on the RHCS cluster nodes:
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'yum -y install lvm2-cluster'; done
Enable the cluster feature of LVM on each RHCS node:
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'lvmconf --enable-cluster'; done
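lvmconf --enable-cluster works by switching the LVM locking type to clustered (DLM-based) locking in /etc/lvm/lvm.conf; after running it, locking_type should normally be 3, which can be spot-checked with:

[root@tgtd ~]# for I in {1..3}; do ssh node$I "grep '^ *locking_type' /etc/lvm/lvm.conf"; done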
Start the clvmd service on each RHCS node:
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'service clvmd start'; done
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "VolGroup" now active
  clvmd not running on node node3.luojianlong.com
  clvmd not running on node node2.luojianlong.com
                                                           [  OK  ]
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "VolGroup" now active
  clvmd not running on node node3.luojianlong.com
                                                           [  OK  ]
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "VolGroup" now active
                                                           [  OK  ]
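The steps above enable cman and rgmanager at boot but not clvmd; if the clustered volume group should be activated automatically after a reboot, enabling clvmd at boot is probably also wanted:

[root@tgtd ~]# for I in {1..3}; do ssh node$I 'chkconfig clvmd on'; done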
Create the physical volume, volume group, and logical volume with the same commands used to manage standalone logical volumes:
[root@node1 ~]# pvcreate /dev/sdb2
  Physical volume "/dev/sdb2" successfully created
[root@node1 ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup lvm2 a--  29.51g    0
  /dev/sdb2           lvm2 a--   4.00g 4.00g
# At this point the newly created physical volume is also visible on the other nodes
Create the volume group and the logical volume:
[root@node1 ~]# vgcreate clustervg /dev/sdb2
  Clustered volume group "clustervg" successfully created
[root@node1 ~]# lvcreate -L 2G -n clusterlv clustervg
  Logical volume "clusterlv" created
[root@node1 ~]# lvs
  LV        VG        Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root   VolGroup  -wi-ao---- 25.63g
  lv_swap   VolGroup  -wi-ao----  3.88g
  clusterlv clustervg -wi-a-----  2.00g
在其餘節點也能看到對應的邏輯卷
[root@tgtd ~]# for I in {1..3}; do ssh node$I 'lvs'; done
  LV        VG        Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root   VolGroup  -wi-ao---- 25.63g
  lv_swap   VolGroup  -wi-ao----  3.88g
  clusterlv clustervg -wi-a-----  2.00g
  LV        VG        Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root   VolGroup  -wi-ao---- 25.63g
  lv_swap   VolGroup  -wi-ao----  3.88g
  clusterlv clustervg -wi-a-----  2.00g
  LV        VG        Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root   VolGroup  -wi-ao---- 25.63g
  lv_swap   VolGroup  -wi-ao----  3.88g
  clusterlv clustervg -wi-a-----  2.00g
Format the logical volume:
[root@node1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t tcluster:clusterlv /dev/clustervg/clusterlv
This will destroy any data on /dev/clustervg/clusterlv.
It appears to contain: symbolic link to `../dm-2'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/clustervg/clusterlv
Blocksize:                 4096
Device Size                2.00 GB (524288 blocks)
Filesystem Size:           2.00 GB (524288 blocks)
Journals:                  2
Resource Groups:           8
Locking Protocol:          "lock_dlm"
Lock Table:                "tcluster:clusterlv"
UUID:                      c8fbef88-970d-92c4-7b66-72499406fa9c
Mount the logical volume:
[root@node1 ~]# mount /dev/clustervg/clusterlv /media/
[root@node2 ~]# mount /dev/clustervg/clusterlv /media/
[root@node3 ~]# mount /dev/clustervg/clusterlv /media/
Too many nodes mounting filesystem, no free journals
# node3 cannot mount the file system because only 2 journals were created; one more needs to be added
[root@node1 ~]# gfs2_jadd -j 1 /dev/clustervg/clusterlv
Filesystem:            /media
Old Journals           2
New Journals           3
# Then mount it on node3
[root@node3 ~]# mount /dev/clustervg/clusterlv /media/
[root@node1 ~]# df -hT
Filesystem                      Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root    ext4    26G  6.1G   18G  26% /
tmpfs                           tmpfs  1.9G   38M  1.9G   2% /dev/shm
/dev/sda1                       ext4   485M   65M  395M  15% /boot
/dev/sdb1                       gfs2   5.1G  388M  4.7G   8% /mnt
/dev/mapper/clustervg-clusterlv gfs2   2.0G  388M  1.7G  19% /media
Extend the logical volume:
[root@node1 ~]# lvextend -L +2G /dev/clustervg/clusterlv
  Extending logical volume clusterlv to 4.00 GiB
  Logical volume clusterlv successfully resized
[root@node1 ~]# gfs2_grow /dev/clustervg/clusterlv
FS: Mount Point: /media
FS: Device:      /dev/dm-2
FS: Size:        524288 (0x80000)
FS: RG size:     65533 (0xfffd)
DEV: Size:       1048576 (0x100000)
The file system grew by 2048MB.
gfs2_grow complete.
[root@node1 ~]# df -hT
Filesystem                      Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root    ext4    26G  6.1G   18G  26% /
tmpfs                           tmpfs  1.9G   38M  1.9G   2% /dev/shm
/dev/sda1                       ext4   485M   65M  395M  15% /boot
/dev/sdb1                       gfs2   5.1G  388M  4.7G   8% /mnt
/dev/mapper/clustervg-clusterlv gfs2   4.0G  388M  3.7G  10% /media
The output shows that the logical volume has been extended.
At this point, the shared-storage configuration with RHCS, GFS2, iSCSI and CLVM is complete.