Above we set up GFS through the graphical interface; here we implement it using the command-line interface.
All five nodes use the same configuration.
Configure the /etc/hosts file
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.130 t-lg-kvm-001
192.168.1.132 t-lg-kvm-002
192.168.1.134 t-lg-kvm-003
192.168.1.138 t-lg-kvm-005
192.168.1.140 t-lg-kvm-006
Network settings
Disable NetworkManager:
# service NetworkManager stop
# chkconfig NetworkManager off
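With NetworkManager disabled, the classic network service has to manage the interfaces. If it is not already enabled on your nodes, something along these lines should cover it (a minimal sketch):
# chkconfig network on
# service network restart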
Disable SELinux
Edit /etc/selinux/config and set SELINUX=disabled:
# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Make the change take effect for the current session:
# setenforce 0
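You can confirm the runtime state with getenforce; it should report Permissive now and Disabled after the next reboot:
# getenforce
Permissive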
Configure time synchronization
Time synchronization is already configured on all five nodes.
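A quick way to double-check that the clocks agree is to read the time on all five nodes in one pass. This sketch assumes passwordless SSH between the nodes:
# for h in t-lg-kvm-001 t-lg-kvm-002 t-lg-kvm-003 t-lg-kvm-005 t-lg-kvm-006; do ssh $h date; done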
The GFS2-related packages are included on the CentOS installation media; proceed as follows:
1. Mount the ISO files on 192.168.1.130
# mount -o loop /opt/CentOS-6.5-x86_64-bin-DVD1.iso /var/www/html/DVD1
# mount -o loop /opt/CentOS-6.5-x86_64-bin-DVD2.iso /var/www/html/DVD2
2. On 192.168.1.130, edit /etc/yum.repos.d/CentOS-Media.repo:
# vi /etc/yum.repos.d/CentOS-Media.repo
[c6-media]
name=CentOS-$releasever - Media
baseurl=file:///var/www/html/DVD1
        file:///var/www/html/DVD2
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
3. Start the httpd service on 192.168.1.130 so the other compute nodes can use it
# service httpd start
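It is also worth enabling httpd at boot (the nodes may be rebooted later) and verifying that the repository is reachable over HTTP; the repomd.xml path below assumes the DVD's repodata directory sits at the root of the mount point:
# chkconfig httpd on
# curl -I http://192.168.1.130/DVD1/repodata/repomd.xml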
4. Configure the yum repository on the other four compute nodes
# vi /etc/yum.repos.d/CentOS-Media.repo
[c6-media]
name=CentOS-$releasever - Media
baseurl=http://192.168.1.130/DVD1
        http://192.168.1.130/DVD2
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
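After saving the repo file, you can confirm on each of the four nodes that yum sees the new repository:
# yum clean all
# yum repolist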
Run the following commands on each of the five compute nodes to install the GFS2 software.
Install cman and rgmanager:
# yum install -y rgmanager cman
Install clvm:
# yum install -y lvm2-cluster
Install gfs2:
# yum install -y gfs*
Run the following commands on each of the five compute nodes to configure the firewall rules:
# iptables -A INPUT -p udp -m udp --dport 5404 -j ACCEPT
# iptables -A INPUT -p udp -m udp --dport 5405 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 21064 -j ACCEPT
# service iptables save
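To confirm the rules were applied, you can list the INPUT chain and filter for the cluster ports (corosync uses UDP 5404/5405 and dlm uses TCP 21064):
# iptables -L INPUT -n | grep -E '5404|5405|21064'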
After completing the steps above, it is recommended to reboot the compute nodes; otherwise the cman service may fail to start.
The cluster only needs to be configured on one compute node and the configuration then synchronized to the others. For example, configure it on 192.168.1.130:
1. Create the cluster
On 192.168.1.130, run:
root@t-lg-kvm-001:/# ccs_tool create kvmcluster
2. Configure the cluster nodes
One compute node is temporarily out of service because of a NIC problem, so only five compute nodes take part in this configuration. Add them to the cluster; on 192.168.1.130, run:
root@t-lg-kvm-001:/# ccs_tool addnode -n 1 t-lg-kvm-001
root@t-lg-kvm-001:/# ccs_tool addnode -n 2 t-lg-kvm-002
root@t-lg-kvm-001:/# ccs_tool addnode -n 3 t-lg-kvm-003
root@t-lg-kvm-001:/# ccs_tool addnode -n 4 t-lg-kvm-005
root@t-lg-kvm-001:/# ccs_tool addnode -n 5 t-lg-kvm-006
View the cluster:
root@t-lg-kvm-001:/root# ccs_tool lsnode
Cluster name: kvmcluster, config_version: 24
Nodename Votes Nodeid Fencetype
t-lg-kvm-001 1 1
t-lg-kvm-002 1 2
t-lg-kvm-003 1 3
t-lg-kvm-005 1 4
t-lg-kvm-006 1 5
3. Synchronize the configuration file from 192.168.1.130 to the other nodes
root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.132:/etc/cluster/
root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.134:/etc/cluster/
root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.138:/etc/cluster/
root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.140:/etc/cluster/
4. Start the cman service on each node
Run on all five compute nodes:
# service cman start
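Once cman is running on all nodes, the membership can be checked from any node; cman_tool is part of the cman package installed earlier:
# cman_tool nodes
# cman_tool status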
The cluster configuration is now complete; next, configure clvm.
Enable clustered LVM
Run the following command on every node in the cluster to enable clustered LVM:
# lvmconf --enable-cluster
Verify that clustered LVM is enabled:
# cat /etc/lvm/lvm.conf | grep "locking_type = 3"
locking_type = 3
If locking_type = 3 is returned, clustered LVM is enabled.
Start the clvmd service
Start the clvmd service on every node:
# service clvmd start
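A quick status check confirms the daemon is running (clvmd needs cman to be up, so start it only after the previous step):
# service clvmd status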
Create LVM volumes on a cluster node
This step only needs to be run on one node, for example 192.168.1.130:
View the shared storage:
# fdisk -l
Disk /dev/sda: 599.0 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000de0e7
Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 524288 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 72825 584434688 8e Linux LVM
Disk /dev/mapper/vg01-lv01: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg01-lv_swap: 537.7 GB, 537676218368 bytes
255 heads, 63 sectors/track, 65368 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg01-lv_bmc: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
There are six LUNs in total, 1 TB each.
Create the clustered physical volumes:
root@t-lg-kvm-001:/root# pvcreate /dev/sdb
root@t-lg-kvm-001:/root# pvcreate /dev/sdc
root@t-lg-kvm-001:/root# pvcreate /dev/sdd
root@t-lg-kvm-001:/root# pvcreate /dev/sde
root@t-lg-kvm-001:/root# pvcreate /dev/sdf
root@t-lg-kvm-001:/root# pvcreate /dev/sdg
Create the clustered volume group:
root@t-lg-kvm-001:/root# vgcreate kvmvg /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
Clustered volume group "kvmvg" successfully created
root@t-lg-kvm-001:/root# vgs
VG #PV #LV #SN Attr VSize VFree
kvmvg 6 0 0 wz--nc 5.86t 5.86t
vg01 1 3 0 wz--n- 557.36g 1.61g
Create the clustered logical volume:
root@t-lg-kvm-001:/root# lvcreate -L 5998G -n kvmlv kvmvg
Logical volume "kvmlv" created
root@t-lg-kvm-001:/root# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
kvmlv kvmvg -wi-a----- 5.86t
lv01 vg01 -wi-ao---- 50.00g
lv_bmc vg01 -wi-ao---- 5.00g
lv_swap vg01 -wi-ao---- 500.75g
The clustered logical volume is now created; once it has been created on one node, it is visible on all the other nodes.
You can log in to the other nodes and run lvs to confirm the logical volume is visible there.
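For example, the check can be scripted from 192.168.1.130 across the remaining nodes (again assuming passwordless SSH):
# for h in t-lg-kvm-002 t-lg-kvm-003 t-lg-kvm-005 t-lg-kvm-006; do ssh $h lvs kvmvg; done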
1. Format the logical volume as a cluster file system
This only needs to be done on one machine, for example 192.168.1.130:
root@t-lg-kvm-001:/root# mkfs.gfs2 -j 7 -p lock_dlm -t kvmcluster:sharedstorage /dev/kvmvg/kvmlv
This will destroy any data on /dev/kvmvg/kvmlv.
It appears to contain: symbolic link to `../dm-3'
Are you sure you want to proceed? [y/n] y
Device: /dev/kvmvg/kvmlv
Blocksize: 4096
Device Size 5998.00 GB (1572339712 blocks)
Filesystem Size: 5998.00 GB (1572339710 blocks)
Journals: 7
Resource Groups: 7998
Locking Protocol: "lock_dlm"
Lock Table: "kvmcluster:sharedstorage"
UUID: 39f35f4a-e42a-164f-9438-967679e48f9f
2. Mount the cluster file system at /openstack/instances
The mount command must be run on every node in the cluster:
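If the mount point does not already exist on a node, create it first (it is not created by any earlier step shown here):
# mkdir -p /openstack/instances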
# mount -t gfs2 /dev/kvmvg/kvmlv /openstack/instances/
Check the mount:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg01-lv01 50G 12G 35G 26% /
tmpfs 379G 29M 379G 1% /dev/shm
/dev/mapper/vg01-lv_bmc 5.0G 138M 4.6G 3% /bmc
/dev/sda1 504M 47M 433M 10% /boot
/dev/mapper/kvmvg-kvmlv 5.9T 906M 5.9T 1% /openstack/instances
Configure the file system to mount automatically at boot:
# echo "/dev/kvmvg/kvmlv /openstack/instances gfs2 defaults 0 0" >> /etc/fstab
Start the rgmanager service:
# service rgmanager start
Enable the services to start at boot:
# chkconfig clvmd on
# chkconfig cman on
# chkconfig rgmanager on
# chkconfig gfs2 on
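You can verify the boot-time settings afterwards with:
# chkconfig --list | grep -E 'cman|clvmd|rgmanager|gfs2'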
3. Set permissions on the mount directory
Because the mount directory is used by OpenStack to store virtual machines, its ownership must be set to nova:nova.
Run on any node in the cluster:
# chown -R nova:nova /openstack/instances/
Check on each node that the directory ownership is correct:
# ls -lh /openstack/
total 4.0K
drwxr-xr-x 7 nova nova 3.8K May 26 14:12 instances