Deploying a GlusterFS Distributed Storage Cluster on CentOS 7

0) Environment preparation

GlusterFS needs at least two servers, preferably with identical configurations. Each server has two disks: one for the operating system and one dedicated to GlusterFS.

192.168.10.239 GlusterFS-master (master node) CentOS 7.4
192.168.10.212 GlusterFS-slave (slave node) CentOS 7.4
192.168.10.213 Client (client)
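In this walkthrough the bricks end up under /opt/gluster/data on the system disk (which is why the volume is later created with the force option). If the second disk really is dedicated to GlusterFS as described above, it should be formatted and mounted first; a minimal sketch, assuming the disk shows up as /dev/sdb (device name and mount point are illustrative):

[root@GlusterFS-master ~]# mkfs.xfs /dev/sdb
[root@GlusterFS-master ~]# mkdir -p /opt/gluster
[root@GlusterFS-master ~]# mount /dev/sdb /opt/gluster
[root@GlusterFS-master ~]# echo '/dev/sdb /opt/gluster xfs defaults 0 0' >> /etc/fstab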
----------------------------------------------------------------------------------------

Because GlusterFS communicates over the network, firewall rules must be arranged for the environment in advance and SELinux must be disabled.
Here I simply turn off the firewall and SELinux on all three servers.
[root@GlusterFS-master ~]# setenforce 0
[root@GlusterFS-master ~]# getenforce
[root@GlusterFS-master ~]# cat /etc/sysconfig/selinux |grep "SELINUX=disabled"
SELINUX=disabled

[root@GlusterFS-master ~]# systemctl stop firewalld
[root@GlusterFS-master ~]# systemctl disable firewalld
[root@GlusterFS-master ~]# firewall-cmd --state
not running
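If you would rather keep firewalld running, the GlusterFS ports can be opened instead of disabling the firewall entirely; a sketch (one brick port is allocated per brick starting at 49152, so the range below is an assumption sized for a handful of bricks):

[root@GlusterFS-master ~]# firewall-cmd --permanent --add-port=24007-24008/tcp
[root@GlusterFS-master ~]# firewall-cmd --permanent --add-port=49152-49160/tcp
[root@GlusterFS-master ~]# firewall-cmd --reload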

------------------------------------------------------------------------------------------
Since GlusterFS has no notion of dedicated metadata servers, every server is configured the same way. First set the hostnames (if you only ever use IP addresses rather than hostnames, the hosts entries are not needed):
[root@GlusterFS-master ~]# hostnamectl --static set-hostname GlusterFS-master
[root@GlusterFS-master ~]# cat /etc/hostname
GlusterFS-master
[root@GlusterFS-master ~]# vim /etc/hosts
.....
192.168.10.239 GlusterFS-master
192.168.10.212 GlusterFS-slave

[root@GlusterFS-slave ~]# hostnamectl --static set-hostname GlusterFS-slave
[root@GlusterFS-slave ~]# cat /etc/hostname
GlusterFS-slave
[root@GlusterFS-slave ~]# vim /etc/hosts
......
192.168.10.239 GlusterFS-master
192.168.10.212 GlusterFS-slave

------------------------------------------------------------------------------------------
Clock synchronization
Consistent time is very important inside a cluster: if the servers' clocks drift apart, inter-node communication can run into trouble and the cluster may even fail.
Here the clocks are synchronized over the network to make sure both servers agree on the time:
[root@GlusterFS-master ~]# yum install -y ntpdate
[root@GlusterFS-master ~]# ntpdate ntp1.aliyun.com
[root@GlusterFS-master ~]# date

[root@GlusterFS-slave ~]# yum install -y ntpdate
[root@GlusterFS-slave ~]# ntpdate ntp1.aliyun.com
[root@GlusterFS-slave ~]# date
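ntpdate only corrects the clock once; to keep the clocks from drifting apart again, a periodic job can be scheduled, for example a cron entry like the following (an illustrative addition, not part of the original steps):

[root@GlusterFS-master ~]# crontab -l
*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1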
1) Install dependencies (run on both GlusterFS-master and GlusterFS-slave)

[root@GlusterFS-master ~]# yum install -y flex bison openssl openssl-devel acl libacl libacl-devel sqlite-devel \
libxml2-devel python-devel make cmake gcc gcc-c++ autoconf automake libtool unzip zip

2) Install userspace-rcu-master and glusterfs-3.6.9 (run on both GlusterFS-master and GlusterFS-slave)

1) Download glusterfs-3.6.9.tar.gz and userspace-rcu-master.zip
Baidu Netdisk download link: https://pan.baidu.com/s/1DyKxt0TnO3aNx59mVfJCZA
Extraction password: ywq8

Put both packages into the /usr/local/src directory
[root@GlusterFS-master ~]# cd /usr/local/src/
[root@GlusterFS-master src]# ll
total 6444
-rw-r--r--. 1 root root 6106554 Feb 29 2016 glusterfs-3.6.9.tar.gz
-rw-r--r--. 1 root root 490091 Apr 8 09:58 userspace-rcu-master.zip

2) Install userspace-rcu-master
[root@GlusterFS-master src]# unzip /usr/local/src/userspace-rcu-master.zip -d /usr/local/
[root@GlusterFS-master src]# cd /usr/local/userspace-rcu-master/
[root@GlusterFS-master userspace-rcu-master]# ./bootstrap
[root@GlusterFS-master userspace-rcu-master]# ./configure
[root@GlusterFS-master userspace-rcu-master]# make && make install
[root@GlusterFS-master userspace-rcu-master]# ldconfig

3) Install glusterfs-3.6.9
[root@GlusterFS-master userspace-rcu-master]# tar -zxvf /usr/local/src/glusterfs-3.6.9.tar.gz -C /usr/local/
[root@GlusterFS-master userspace-rcu-master]# cd /usr/local/glusterfs-3.6.9/
[root@GlusterFS-master glusterfs-3.6.9]# ./configure --prefix=/usr/local/glusterfs
[root@GlusterFS-master glusterfs-3.6.9]# make && make install

Add the environment variables
[root@GlusterFS-master glusterfs-3.6.9]# vim /etc/profile      //append the following at the bottom of the file
......
export GLUSTERFS_HOME=/usr/local/glusterfs
export PATH=$PATH:$GLUSTERFS_HOME/sbin

[root@GlusterFS-master glusterfs-3.6.9]# source /etc/profile

4) Start glusterfs
[root@GlusterFS-master ~]# /usr/local/glusterfs/sbin/glusterd
[root@GlusterFS-master ~]# ps -ef|grep glusterd
root 852 1 0 10:14 ? 00:00:00 /usr/local/glusterfs/sbin/glusterd
root 984 26217 0 10:14 pts/1 00:00:00 grep --color=auto glusterd
[root@GlusterFS-master ~]# lsof -i:24007
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
glusterd 852 root 9u IPv4 123605 0t0 TCP *:24007 (LISTEN)
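Note that a source build does not install a systemd unit, so glusterd will not come back by itself after a reboot. A minimal unit file sketch (paths assume the /usr/local/glusterfs prefix used above; adjust as needed):

[root@GlusterFS-master ~]# cat /etc/systemd/system/glusterd.service
[Unit]
Description=GlusterFS management daemon
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/glusterfs/sbin/glusterd
KillMode=process

[Install]
WantedBy=multi-user.target

[root@GlusterFS-master ~]# systemctl daemon-reload
[root@GlusterFS-master ~]# systemctl enable glusterd    //takes effect on the next boot; glusterd is already running here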

3) Build the GlusterFS distributed storage cluster (done on GlusterFS-master here; any node would do)

1) Run the following command to add the 192.168.10.212 node to the cluster (either the IP address or the node's hostname can be used); run one such command for every node that needs to join:
[root@GlusterFS-master ~]# gluster peer probe 192.168.10.212
peer probe: success.

2) Check the cluster status:
[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.10.212
Uuid: f8e69297-4690-488e-b765-c1c404810d6a
State: Peer in Cluster (Connected)

3) Check the volume information (no volume has been created yet, so nothing is listed):
[root@GlusterFS-master ~]# gluster volume info
No volumes present

4) Create the data storage directory (on both GlusterFS-master and GlusterFS-slave)
[root@GlusterFS-master ~]# mkdir -p /opt/gluster/data

5) Create the replicated volume models on the directory just created (replica 2 means two copies are stored, i.e. as many copies as there are nodes here; the bricks that follow give each server's storage directory):
[root@GlusterFS-master ~]# gluster volume create models replica 2 192.168.10.239:/opt/gluster/data 192.168.10.212:/opt/gluster/data force

6) Check the volume information again
[root@GlusterFS-master ~]# gluster volume info

Volume Name: models
Type: Replicate
Volume ID: f1945b0b-67d6-4202-9198-639244ab0a6a
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/opt/gluster/data
Brick2: 192.168.10.212:/opt/gluster/data

7) Start models
[root@GlusterFS-master ~]# gluster volume start models

8) Gluster performance tuning
a) First enable the quota on the target volume
[root@GlusterFS-master ~]# gluster volume quota models enable

b) Limit the models root directory to at most 5GB (5GB is not mandatory; set it according to the actual disk size)
[root@GlusterFS-master ~]# gluster volume quota models limit-usage / 5GB
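The limit that was just set can be verified with the quota list subcommand:
[root@GlusterFS-master ~]# gluster volume quota models list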

c) Set the cache size (128MB is not mandatory; adjust it to the actual resources)
[root@GlusterFS-master ~]# gluster volume set models performance.cache-size 128MB

d) Enable asynchronous, background flushing
[root@GlusterFS-master ~]# gluster volume set models performance.flush-behind on

e) Set the number of IO threads to 32
[root@GlusterFS-master ~]# gluster volume set models performance.io-thread-count 32

f) Enable write-behind (data is written to the cache first and then flushed to disk)
[root@GlusterFS-master ~]# gluster volume set models performance.write-behind on

g) Check the volume information after tuning
[root@GlusterFS-master ~]# gluster volume info

Volume Name: models
Type: Replicate
Volume ID: f1945b0b-67d6-4202-9198-639244ab0a6a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.10.239:/opt/gluster/data
Brick2: 192.168.10.212:/opt/gluster/data
Options Reconfigured:
performance.write-behind: on
performance.io-thread-count: 32
performance.flush-behind: on
performance.cache-size: 128MB
features.quota: on
4) Deploy the client and mount the GlusterFS filesystem's bricks (storage units) (run on the Client machine)

At this point most of the work of building the GlusterFS distributed storage cluster is done. What remains is to mount a directory on the client so that operations on that mount point
are synchronized to the storage cluster, and then write some files to test it:

1) Install the gluster client
[root@Client ~]# yum install -y glusterfs glusterfs-fuse

2) Create the mount point directory
[root@Client ~]# mkdir -p /opt/gfsmount

3) Mount GlusterFS
[root@Client ~]# mount -t glusterfs 192.168.10.239:models /opt/gfsmount/
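The mount above does not persist across reboots. If persistence is wanted, an /etc/fstab entry along these lines should work (a sketch; _netdev delays the mount until the network is up):
192.168.10.239:models  /opt/gfsmount  glusterfs  defaults,_netdev  0 0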

4) Check the mount
[root@Client ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 38G 4.3G 33G 12% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.6M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/vda1 1014M 143M 872M 15% /boot
/dev/mapper/centos-home 19G 33M 19G 1% /home
tmpfs 380M 0 380M 0% /run/user/0
overlay 38G 4.3G 33G 12% /var/lib/docker/overlay2/9904ac8cbcba967de3262dc0d5e230c64ad3c1c53b588048e263767d36df8c1a/merged
shm 64M 0 64M 0% /var/lib/docker/containers/222ec7f21b2495591613e0d1061e4405cd57f99ffaf41dbba1a98c350cd70f60/mounts/shm
192.168.10.239:models 38G 3.9G 34G 11% /opt/gfsmount

5) Test. Create two large files of 30MB and 300MB respectively; both are written quickly.
[root@Client ~]# time dd if=/dev/zero of=/opt/gfsmount/kevin bs=30M count=1
1+0 records in
1+0 records out
31457280 bytes (31 MB) copied, 0.140109 s, 225 MB/s

real 0m0.152s
user 0m0.001s
sys 0m0.036s

[root@Client ~]# time dd if=/dev/zero of=/opt/gfsmount/grace bs=300M count=1
1+0 records in
1+0 records out
314572800 bytes (315 MB) copied, 1.07577 s, 292 MB/s

real 0m1.106s
user 0m0.001s
sys 0m0.351s

[root@Client ~]# cd /opt/gfsmount/
[root@Client gfsmount]# du -sh *
300M grace
30M kevin
[root@Client gfsmount]# mkdir test
[root@Client gfsmount]# ll
total 337924
-rw-r--r--. 1 root root 314572800 Apr 7 22:41 grace
-rw-r--r--. 1 root root 31457280 Apr 7 22:41 kevin
drwxr-xr-x. 2 root root 4096 Apr 7 22:43 test

6) Check the data stored on the cluster (on both GlusterFS-master and GlusterFS-slave)
[root@GlusterFS-master ~]# cd /opt/gluster/data/
[root@GlusterFS-master data]# ll
total 337920
-rw-r--r--. 2 root root 314572800 Apr 8 10:41 grace
-rw-r--r--. 2 root root 31457280 Apr 8 10:41 kevin
drwxr-xr-x. 2 root root 6 Apr 8 10:43 test

[root@GlusterFS-slave ~]# cd /opt/gluster/data/
[root@GlusterFS-slave data]# ll
total 337920
-rw-r--r--. 2 root root 314572800 Apr 7 22:41 grace
-rw-r--r--. 2 root root 31457280 Apr 7 22:41 kevin
drwxr-xr-x. 2 root root 6 Apr 7 22:43 test
Note: each gluster node holds a full copy of the data, consistent with the earlier step of creating the replicated volume models on the specified directories (replica 2 means two copies are stored).

5) Common GlusterFS commands

1) List all volumes in GlusterFS
[root@GlusterFS-master ~]# gluster volume list
models

2) Start a volume, e.g. the volume named models
[root@GlusterFS-master ~]# gluster volume start models

3) Stop a volume, e.g. the volume named models
[root@GlusterFS-master ~]# gluster volume stop models

4) Delete a volume, e.g. the volume named models
[root@GlusterFS-master ~]# gluster volume delete models

5) Verify the GlusterFS cluster. The following three commands can be used
[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.10.212
Uuid: f8e69297-4690-488e-b765-c1c404810d6a
State: Peer in Cluster (Connected)

[root@GlusterFS-master ~]# gluster pool list
UUID Hostname State
f8e69297-4690-488e-b765-c1c404810d6a 192.168.10.212 Connected
5dfd40e2-096b-40b5-bee3-003b57a39007 localhost Connected

[root@GlusterFS-master ~]# gluster volume status
Status of volume: models
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.10.239:/opt/gluster/data 49152 Y 1055
Brick 192.168.10.212:/opt/gluster/data 49152 Y 32586
NFS Server on localhost N/A N N/A
Self-heal Daemon on localhost N/A Y 1074
Quota Daemon on localhost N/A Y 1108
NFS Server on 192.168.10.212 N/A N N/A
Self-heal Daemon on 192.168.10.212 N/A Y 32605
Quota Daemon on 192.168.10.212 N/A Y 32614

Task Status of Volume models
------------------------------------------------------------------------------
There are no active volume tasks


6) Remove nodes from the GlusterFS cluster; several can be removed at once. The following removes the glusterfs3 and glusterfs4 nodes from the cluster.
[root@GlusterFS-master ~]# gluster peer detach glusterfs3 glusterfs4

7) Expand a volume (because the replica count is set to 2, machines must be added in multiples of 2, i.e. 2, 4, 6, 8...)
For example, add the glusterfs3 and glusterfs4 nodes and merge their bricks into the volume; the resulting volume name is glusterfs_data.
[root@GlusterFS-master ~]# gluster peer probe glusterfs3
[root@GlusterFS-master ~]# gluster peer probe glusterfs4
[root@GlusterFS-master ~]# gluster volume add-brick glusterfs_data glusterfs3:/opt/gluster/data glusterfs4:/opt/gluster/data force

8) Rebalance a volume (glusterfs_data is the volume name)
[root@GlusterFS-master ~]# gluster volume rebalance glusterfs_data start
[root@GlusterFS-master ~]# gluster volume rebalance glusterfs_data status
[root@GlusterFS-master ~]# gluster volume rebalance glusterfs_data stop

Rebalancing requires that the volume distribute data across at least two bricks (i.e. more than one distribute subvolume).
In the example above, the models volume has only a single distribute subvolume, so the rebalance operation is refused:
[root@GlusterFS-master ~]# gluster volume list
models
[root@GlusterFS-master ~]# gluster volume rebalance models start
volume rebalance: models: failed: Volume models is not a distribute volume or contains only 1 brick.
Not performing rebalance
[root@GlusterFS-master ~]#

9) Shrink a volume (before the bricks are removed, gluster first migrates the data elsewhere) (gv0 is the volume name)
[root@GlusterFS-master ~]# gluster volume remove-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs4:/data/brick1/gv0 start //start the migration
[root@GlusterFS-master ~]# gluster volume remove-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs4:/data/brick1/gv0 status //check the migration status
[root@GlusterFS-master ~]# gluster volume remove-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs4:/data/brick1/gv0 commit //commit once the migration has finished

10) Migrate a volume (replace a brick)

#Migrate the data on glusterfs3 to glusterfs5; first add glusterfs5 to the cluster
[root@GlusterFS-master ~]# gluster peer probe glusterfs5

#Start the migration
[root@GlusterFS-master ~]# gluster volume replace-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs5:/data/brick1/gv0 start

#Check the migration status
[root@GlusterFS-master ~]# gluster volume replace-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs5:/data/brick1/gv0 status

#Commit after the data migration is complete
[root@GlusterFS-master ~]# gluster volume replace-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs5:/data/brick1/gv0 commit

#If the machine holding the old brick (glusterfs3 here) has failed and can no longer run, force the commit
[root@GlusterFS-master ~]# gluster volume replace-brick gv0 glusterfs3:/data/brick1/gv0 glusterfs5:/data/brick1/gv0 commit force

#Heal (sync) the entire volume
[root@GlusterFS-master ~]# gluster volume heal gfs full

11) Access authorization. The following allows client machines on the 192.168 network to access this glusterfs volume.
[root@GlusterFS-master ~]# gluster volume set gfs auth.allow 192.168.*
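auth.allow also accepts a comma-separated list, so individual hosts and subnets can be combined, for example:
[root@GlusterFS-master ~]# gluster volume set gfs auth.allow 192.168.10.213,192.168.11.*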
6) A few closing notes

With the operations above, the GlusterFS distributed storage cluster environment is fully set up. A few points worth summarizing:
1) If a GlusterFS node machine is rebooted, then after the reboot (see the command sketch after this list):
a) the glusterFS service (glusterd) needs to be started
b) the volume models (the storage volume) needs to be started
c) the directory /opt/gfsmount/ needs to be remounted on the client
d) after remounting, the directory /opt/gfsmount/ needs to be entered again
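A rough recovery sequence for the points above, assuming the source-install paths used in this document (first two commands on the gluster node, the rest on the client):

[root@GlusterFS-master ~]# /usr/local/glusterfs/sbin/glusterd
[root@GlusterFS-master ~]# gluster volume start models
[root@Client ~]# mount -t glusterfs 192.168.10.239:models /opt/gfsmount/
[root@Client ~]# cd /opt/gfsmount/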

2) Note: when two partitions are mounted onto the same mount point, the first one mounted is not overwritten but only temporarily hidden. For example, if you first run "mount /dev/sda1 /opt/gfsmount/" and then "mount /dev/sda2 /opt/gfsmount/", the contents of /dev/sda1 are hidden for the time being; as soon as the second partition is unmounted with "umount /dev/sda2", a "cd /opt/gfsmount/" shows the contents of the first partition again.
