Red Hat GFS Cluster File System Configuration Guide

This section gives a brief walkthrough of configuring Red Hat's cluster file system, GFS. Unlike ordinary file systems such as ext3, UFS, or NTFS, a cluster file system uses distributed lock management, which allows multiple nodes to read and write the same files concurrently. The mainstream cluster file systems are IBM's GPFS, Oracle's OCFS, and Red Hat's GFS. I must admit that before leaving 766.com there were three technical goals I had never gotten around to: Oracle Data Guard, Grid Control, and GFS. As of today, all three have finally, more or less, been accomplished! The next step is to store the archived logs of a RAC environment on GFS.

1: Environment

Node 1 IP: 192.168.1.51/24
OS: RHEL 5.4 64-bit (KVM virtual machine)
Hostname: dg51.yang.com

Node 2 IP: 192.168.1.52/24
OS: RHEL 5.4 64-bit (KVM virtual machine)
Hostname: dg52.yang.com

Shared storage IP: 192.168.1.100/24
OS: RHEL 6.0 64-bit
Hostname: rhel6.yang.com

2: Configure the shared storage and partition it
[root@dg51 ~]# fdisk -l /dev/sda

Disk /dev/sda: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       10240    10485744   83  Linux

For this part of the setup (the iSCSI shared storage), refer to: http://ylw6006.blog.51cto.com/470441/580568
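
For reference, a minimal sketch of the initiator side on both nodes, assuming the target on 192.168.1.100 is exported as described in the linked post:

[root@dg51 ~]# yum -y install iscsi-initiator-utils
[root@dg51 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.100
[root@dg51 ~]# iscsiadm -m node -l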

3: Install the cluster storage package groups; both nodes need this configuration

[root@dg51 ~]# cat /etc/yum.repos.d/base.repo
[base]
name=base
baseurl=ftp://192.168.1.100/pub/iso5/Server
gpgcheck=0
enabled=1

[Cluster]
name=Cluster
baseurl=ftp://192.168.1.100/pub/iso5/Cluster
gpgcheck=0
enabled=1

[ClusterStorage]
name=ClusterStorage
baseurl=ftp://192.168.1.100/pub/iso5/ClusterStorage
gpgcheck=0
enabled=1

[root@dg51 ~]# yum -y groupinstall "Cluster Storage"  "Clustering"
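
To check that the key pieces landed, something along these lines can be run on each node (package names as found on RHEL 5 media; adjust if yours differ):

[root@dg51 ~]# rpm -q cman rgmanager ricci lvm2-cluster gfs-utils kmod-gfs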

4: Create the configuration file and start the related daemons; do the same configuration on both nodes
[root@dg51 ~]# system-config-cluster

[root@dg51 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster config_version="2" name="dg_gfs">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="dg51.yang.com" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="dg52.yang.com" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>

Here the configuration file is simply copied over to node 2:
[root@dg51 ~]# scp /etc/cluster/cluster.conf dg52:/etc/cluster/
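
A quick sanity check that both nodes read the same membership (ccs_tool ships with the cluster suite):

[root@dg51 ~]# ccs_tool lsnode
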
[root@dg51 ~]# lvmconf --enable-cluster
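
lvmconf --enable-cluster switches LVM to cluster-wide locking; the effect can be confirmed in /etc/lvm/lvm.conf (locking_type 3 is the clustered DLM locking):

[root@dg51 ~]# grep locking_type /etc/lvm/lvm.conf
    locking_type = 3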

[root@dg51 ~]# service rgmanager start
Starting Cluster Service Manager: [  OK  ]
[root@dg51 ~]# chkconfig rgmanager on

[root@dg51 ~]# service ricci start
Starting oddjobd: [  OK  ]
generating SSL certificates...  done
Starting ricci: [  OK  ]
[root@dg51 ~]# chkconfig ricci on

[root@dg51 ~]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done  [  OK  ]
[root@dg51 ~]# chkconfig cman on

[root@dg51 ~]# service clvmd start
Starting clvmd: [  OK  ]
Activating VGs: [  OK  ]
[root@dg51 ~]# chkconfig clvmd on
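
cman has to be running before clvmd can activate clustered volume groups, so it is worth double-checking that all of these services are enabled at boot:

[root@dg51 ~]# chkconfig --list | egrep 'cman|clvmd|rgmanager|ricci'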

[root@dg51 ~]# clustat
Cluster Status for dg_gfs @ Sat Dec 10 17:25:09 2011
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 dg51.yang.com                               1 Online, Local
 dg52.yang.com                               2 Online
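
The same membership and quorum information is available from the cman side, e.g.:

[root@dg51 ~]# cman_tool status
[root@dg51 ~]# cman_tool nodes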

5: Carve out LVM on the shared storage; this only needs to be done on one node

[root@dg51 ~]# fdisk -l /dev/sda (before starting, the partition type must be changed to 8e)

Disk /dev/sda: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       10240    10485744   8e  Linux LVM
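
If the partition still shows type 83, a quick sketch of the change inside fdisk (t to set the type, 8e for Linux LVM, w to write), followed by making the kernel re-read the table:

[root@dg51 ~]# fdisk /dev/sda        # t -> 8e -> w
[root@dg51 ~]# partprobe /dev/sda    # run on both nodes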

[root@dg51 ~]# pvcreate /dev/sda1
  Physical volume "/dev/sda1" successfully created
[root@dg51 ~]# vgcreate dg_gfs /dev/sda1
  Clustered volume group "dg_gfs" successfully created
[root@dg51 ~]# vgdisplay dg_gfs
  --- Volume group ---
  VG Name               dg_gfs
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               10.00 GB
  PE Size               4.00 MB
  Total PE              2559
  Alloc PE / Size       0 / 0  
  Free  PE / Size       2559 / 10.00 GB
  VG UUID               hMivT2-FIuF-QX2N-EXU9-CaZF-wR5a-8QS7t4

[root@dg51 ~]# lvcreate -n gfs1 -l 2559 dg_gfs
  Logical volume "gfs1" created

If the following error appears, simply restart the clvmd daemon on both nodes:
[root@dg51 ~]# lvcreate -n gfs1 -l 2559 dg_gfs
  Error locking on node dg52.yang.com: Volume group for uuid not found: hMivT2FIuFQX2NEXU9CaZFwR5a8QS7t4Ft4RMjI9V6a3jUudYQe0i1IygtIlaHxc
  Aborting. Failed to activate new LV to wipe the start of it.

[root@dg51 ~]# service clvmd restart
Deactivating VG dg_gfs:   0 logical volume(s) in volume group "dg_gfs" now active [  OK  ]
Stopping clvm:  [  OK  ]
Starting clvmd: [  OK  ]
Activating VGs:   0 logical volume(s) in volume group "dg_gfs" now active

[root@dg51 ~]# service clvmd status
clvmd (pid 5494) is running...
active volumes: gfs1 [  OK  ]

[root@dg51 ~]# lvscan
  ACTIVE            '/dev/dg_gfs/gfs1' [10.00 GB] inherit

[root@dg52 ~]# lvscan
  ACTIVE            '/dev/dg_gfs/gfs1' [10.00 GB] inherit

6: Format the LVM volume

[root@dg51 ~]# gfs_mkfs -h
Usage:

gfs_mkfs [options] <device>

Options:

  -b <bytes>       Filesystem block size
  -D               Enable debugging code
  -h               Print this help, then exit
  -J <MB>          Size of journals
  -j <num>         Number of journals
  -O               Don't ask for confirmation
  -p <name>        Name of the locking protocol
  -q               Don't print anything
  -r <MB>          Resource Group Size
  -s <blocks>      Journal segment size
  -t <name>        Name of the lock table
  -V               Print program version information, then exit
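
Note that -t takes the form <clustername>:<fsname>, where <clustername> must match the name attribute in cluster.conf (dg_gfs here), and -j should be at least the number of nodes that will mount the filesystem. The cluster name can be double-checked with:

[root@dg51 ~]# cman_tool status | grep -i "cluster name"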

[root@dg51 ~]# gfs_mkfs -p lock_dlm  -t dg_gfs:gfs -j 2 /dev/dg_gfs/gfs1
This will destroy any data on /dev/dg_gfs/gfs1.
  It appears to contain a gfs filesystem.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/dg_gfs/gfs1
Blocksize:                 4096
Filesystem Size:           2554644
Journals:                  2
Resource Groups:           40
Locking Protocol:          lock_dlm
Lock Table:                dg_gfs:gfs

Syncing...
All Done
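
Two journals match the two nodes here; if a third node is added later, an extra journal can be added online with gfs_jadd (for GFS1 this needs free space beyond the current filesystem, so extend the logical volume first), e.g.:

[root@dg51 ~]# gfs_jadd -j 1 /dg_archivelog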

7: Mount on both nodes and test writing data

[root@dg51 ~]# mount -t gfs /dev/mapper/dg_gfs-gfs1 /dg_archivelog/
[root@dg51 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda3              28G  8.6G   18G  34% /
/dev/vda1              99M   12M   83M  13% /boot
tmpfs                 391M     0  391M   0% /dev/shm
/dev/mapper/dg_gfs-gfs1
                      9.8G   20K  9.8G   1% /dg_archivelog

[root@dg52 ~]# mkdir /dg_archivelog2
[root@dg52 ~]# mount -t gfs /dev/dg_gfs/gfs1 /dg_archivelog2/
[root@dg52 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda3              28G  9.1G   17G  36% /
/dev/vda1              99M   12M   83M  13% /boot
tmpfs                 391M     0  391M   0% /dev/shm
/dev/mapper/dg_gfs-gfs1
                      9.8G   20K  9.8G   1% /dg_archivelog2

[root@dg52 ~]# cp /etc/hosts /dg_archivelog2/
[root@dg51 ~]# cat /dg_archivelog/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.1.51            dg51.yang.com   dg51
192.168.1.52            dg52.yang.com   dg52
192.168.1.55            grid5.yang.com  grid5

To mount the filesystem automatically at boot, an entry can be added to /etc/fstab (in my testing this did not take effect; putting the mount in /etc/rc.local works instead):
[root@dg51 ~]# tail -1 /etc/fstab
/dev/mapper/dg_gfs-gfs1  /dg_archivelog         gfs     defaults        0 0
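
A minimal version of the rc.local workaround mentioned above (on RHEL 5 the gfs init script is also supposed to mount fstab-listed GFS filesystems at boot, so "chkconfig gfs on" may be worth trying as well):

[root@dg51 ~]# echo "mount -t gfs /dev/mapper/dg_gfs-gfs1 /dg_archivelog" >> /etc/rc.local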

8: Performance test

[root@dg51 ~]# dd if=/dev/zero of=/dg_archivelog/gfs_test bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 9.11253 seconds, 115 MB/s
With shared storage emulated over iSCSI, the I/O performance is mediocre; this setup is only good for learning.
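
Note that dd going through the page cache can flatter the numbers; a variant that bypasses the cache gives a more honest figure (oflag=direct is supported by dd on RHEL 5):

[root@dg51 ~]# dd if=/dev/zero of=/dg_archivelog/gfs_test bs=10M count=100 oflag=direct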

9: Managing the cluster from a web browser

[root@dg51 ~]# luci_admin  init
Initializing the luci server

Creating the 'admin' user

Enter password:
Confirm password:

Please wait...
The admin password has been successfully set.
Generating SSL certificates...
The luci server has been successfully initialized
You must restart the luci server for changes to take effect.
Run "service luci restart" to do so

[root@dg51 ~]# service luci restart
Shutting down luci: [  OK  ]
Starting luci: Generating https SSL certificates...  done [  OK  ]

Point your web browser to https://dg51.yang.com:8084 to access luci

 
