Lab requirements:
· Four machines with GlusterFS installed, forming one cluster
· The client stores a Docker registry on the GlusterFS file system
· The disk space of the four nodes is NOT combined into one large volume; instead, every node keeps a full copy of the data, to keep the data safe
Environment plan
server
node1: 192.168.0.165  hostname: glusterfs1
node2: 192.168.0.157  hostname: glusterfs2
node3: 192.168.0.166  hostname: glusterfs3
node4: 192.168.0.150  hostname: glusterfs4
client
192.168.0.164  hostname: master3
Preparation
· Disable the firewall and SELinux on all hosts
· Edit the hosts file so that all hosts can resolve each other by name
192.168.0.165 glusterfs1
192.168.0.157 glusterfs2
192.168.0.166 glusterfs3
192.168.0.150 glusterfs4
192.168.0.164 master3
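A minimal sketch of the preparation commands, assuming CentOS 7 with firewalld (the exact service names are an assumption, not from the original):
# systemctl stop firewalld && systemctl disable firewalld                # firewall off now and after reboot
# setenforce 0                                                           # SELinux permissive for the running system
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # and across reboots
# cat >> /etc/hosts <<EOF
192.168.0.165 glusterfs1
192.168.0.157 glusterfs2
192.168.0.166 glusterfs3
192.168.0.150 glusterfs4
192.168.0.164 master3
EOF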
Installation
Server side
1. Install the GlusterFS packages on nodes glusterfs{1-4}
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
# yum install -y glusterfs glusterfs-server glusterfs-fuse
If a librcu error appears, install userspace-rcu-0.7.9-1.el7.x86_64 first.
# service glusterd start
# chkconfig glusterd on
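A quick check that the install worked on each node (illustrative, not part of the original steps):
# glusterfs --version        # confirm the installed version
# service glusterd status    # glusterd should be running on every node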
2. On glusterfs1, configure the whole GlusterFS cluster by probing each node into the trusted pool
[root@glusterfs1 ~]# gluster peer probe glusterfs1
peer probe: success: on localhost not needed
[root@glusterfs1 ~]# gluster peer probe glusterfs2
peer probe: success
[root@glusterfs1 ~]# gluster peer probe glusterfs3
peer probe: success
[root@glusterfs1 ~]# gluster peer probe glusterfs4
peer probe: success
3. Check peer status
[root@glusterfs1 ~]# gluster peer status
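On glusterfs1 the output should list the other three peers, roughly like this (the UUIDs below are placeholders, not real values):
Number of Peers: 3

Hostname: glusterfs2
Uuid: <uuid-of-glusterfs2>
State: Peer in Cluster (Connected)

Hostname: glusterfs3
Uuid: <uuid-of-glusterfs3>
State: Peer in Cluster (Connected)

Hostname: glusterfs4
Uuid: <uuid-of-glusterfs4>
State: Peer in Cluster (Connected)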
4. Create the data storage directory on glusterfs{1-4}
# mkdir -p /usr/local/share/models
5. Create the GlusterFS volume on glusterfs1
Note:
With "replica 4", each of the 4 nodes stores a full copy of the data: one piece of data is stored 4 times, once per node.
Without "replica 4", the disk space of the 4 nodes is combined into one big distributed volume instead.
The trailing "force" is needed because the bricks live on the root filesystem, which gluster otherwise refuses to use.
[root@glusterfs1 ~]# gluster volume create models replica 4 glusterfs1:/usr/local/share/models glusterfs2:/usr/local/share/models glusterfs3:/usr/local/share/models glusterfs4:/usr/local/share/models force
volume create: models: success: please start the volume to access data
6. Start the volume
[root@glusterfs1 ~]# gluster volume start models
7. Check the volume
[root@glusterfs1 ~]# gluster volume info
Volume Name: models
Type: Replicate
Volume ID: b81587ff-5dd6-49b9-b46b-afe5df38d8c7
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/usr/local/share/models
Brick2: glusterfs2:/usr/local/share/models
Brick3: glusterfs3:/usr/local/share/models
Brick4: glusterfs4:/usr/local/share/models
Options Reconfigured:
performance.readdir-ahead: on
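A quick health check after starting (illustrative, not in the original walkthrough):
[root@glusterfs1 ~]# gluster volume status models    # every brick should show Online "Y"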
Client side
1. Install the GlusterFS client and mount the GlusterFS file system
[root@master3 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
[root@master3 ~]# yum install -y glusterfs glusterfs-fuse
[root@master3 ~]# mkdir -p /mnt/models
[root@master3 ~]# mount -t glusterfs -o ro glusterfs1:models /mnt/models/
"-o ro" mounts the volume read-only.
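To make the mount survive reboots, an /etc/fstab entry along these lines should work (a common pattern offered as an assumption; _netdev defers mounting until the network is up):
# /etc/fstab
glusterfs1:models  /mnt/models  glusterfs  defaults,ro,_netdev  0 0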
2. Check the result
[root@master3 ~]# df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/vda3          289G  5.6G  284G   2% /
devtmpfs           3.9G     0  3.9G   0% /dev
tmpfs              3.9G   80K  3.9G   1% /dev/shm
tmpfs              3.9G  169M  3.7G   5% /run
tmpfs              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1         1014M  128M  887M  13% /boot
glusterfs1:models  189G  3.5G  186G   2% /mnt/models
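To verify that every node really holds a copy, one could mount the volume read-write at a second mount point and write a test file (a sketch; /mnt/models-rw and the file name are assumptions):
[root@master3 ~]# mkdir -p /mnt/models-rw
[root@master3 ~]# mount -t glusterfs glusterfs1:models /mnt/models-rw
[root@master3 ~]# touch /mnt/models-rw/replica-test
Then on each of glusterfs{1-4} the file should appear directly in the brick directory:
# ls /usr/local/share/models/replica-test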
Other commands
Delete a GlusterFS volume
# gluster volume stop models      # stop it first
# gluster volume delete models    # then delete it
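If the brick directories are to be reused for a new volume afterwards, gluster's extended attributes have to be cleared first (a common cleanup step, offered here as an assumption rather than part of the original recipe):
# setfattr -x trusted.glusterfs.volume-id /usr/local/share/models
# setfattr -x trusted.gfid /usr/local/share/models
# rm -rf /usr/local/share/models/.glusterfs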
Detach a GlusterFS node (remove it from the trusted pool)
# gluster peer detach glusterfs4
ACL access control
# gluster volume set models auth.allow 10.60.1.*,10.70.1.*
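To go back to allowing all clients, the option can be reset (illustrative):
# gluster volume reset models auth.allow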
Add GlusterFS nodes
# gluster peer probe sc2-log5
# gluster peer probe sc2-log6
# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster
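After adding bricks to a distributed volume, existing data is not spread onto them automatically; a rebalance does that (a standard follow-up, assumed here rather than taken from the original):
# gluster volume rebalance models start
# gluster volume rebalance models status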
Migrate GlusterFS data (drain and remove bricks)
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit
Repair GlusterFS data (when node 1 is down)
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force
# gluster volume heal models full
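The healing progress can be watched with (illustrative):
# gluster volume heal models info    # lists entries still pending heal, per brick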