Distributed Storage System: GlusterFS

GlusterFS: a distributed file system

Gluster File System (GlusterFS) is free software, developed mainly by Z RESEARCH with a team of a dozen or so developers, and the project has been very active recently. It is used mainly in cluster systems and scales well. The software is well structured and easy to extend and configure; by combining its modules flexibly, solutions can be tailored to specific needs. It addresses the following problems: network storage, united storage (aggregating the storage space of multiple nodes), redundant backup, and load balancing of large files (striping).
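As a rough sketch of how these capabilities map onto GlusterFS volume types (the volume names and brick paths below are illustrative, not part of this setup):

gluster volume create dist-vol 192.168.70.71:/data/brick1 192.168.70.72:/data/brick1   # distributed: files spread across bricks (united storage)

gluster volume create repl-vol replica 2 192.168.70.71:/data/brick2 192.168.70.72:/data/brick2   # replicated: every file mirrored on both bricks (redundancy)

gluster volume create stripe-vol stripe 2 192.168.70.71:/data/brick3 192.168.70.72:/data/brick3   # striped: large files split into chunks (load balancing for big files)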

1.1 System Environment

Prepare three machines:

[root@node1 data]# cat /etc/redhat-release

CentOS release 6.8 (Final)

 

Node1   192.168.70.71   server

Node2   192.168.70.72   server

Node3   192.168.70.73   client

1.2 Firewall Configuration

vi /etc/sysconfig/iptables:

    -A INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT

    -A INPUT -m state --state NEW -m tcp -p tcp --dport 49152:49162 -j ACCEPT

service iptables restart
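If you prefer not to edit the file by hand, the same rules can be added from the shell (a sketch assuming the stock CentOS 6 iptables service; 24007-24008 are the glusterd management ports, and 49152 and up are used by the brick processes):

iptables -I INPUT -m state --state NEW -p tcp --dport 24007:24008 -j ACCEPT

iptables -I INPUT -m state --state NEW -p tcp --dport 49152:49162 -j ACCEPT

service iptables save

iptables -L INPUT -n | grep -E '24007|49152'   # verify the rules are loaded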

 

1.3 Installing GlusterFS

Perform the following steps on both servers (node1 and node2):

Installation method 1:

wget -l 1 -nd -nc -r -A.rpm http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/epel-6Server/x86_64/

 

wget -l 1 -nd -nc -r -A.rpm http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/epel-6Server/noarch/

wget -nc http://download.gluster.org/pub/gluster/nfs-ganesha/2.3.0/EPEL.repo/epel-6Server/x86_64/nfs-ganesha-gluster-2.3.0-1.el6.x86_64.rpm

yum install * -y

Method 2 (a different version):

mkdir /tools

cd /tools

wget -l 1 -nd -nc -r -A.rpm http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.5/RHEL/epel-6/x86_64/

yum install *.rpm
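With either method, a quick way to confirm that the packages were installed before starting the daemon:

glusterfs --version        # should report 3.7.x (or 3.5.5 with method 2)

rpm -qa | grep glusterfs   # list the installed glusterfs packages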

Start the gluster service:

/etc/init.d/glusterd start

Enable the GlusterFS service at boot:

chkconfig glusterd on
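To confirm the daemon is running and registered for startup:

/etc/init.d/glusterd status

chkconfig --list glusterd   # runlevels 2-5 should show "on"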

1.4 Gluster Server Configuration

1.4.1 Configure the storage pool

1.4.2 Add a trusted storage pool

[root@node1 /]# gluster peer probe 192.168.70.72

peer probe: success.

1.4.3 Check the peer status

[root@node1 /]# gluster peer status

Number of Peers: 1

 

Hostname: 192.168.70.72

Uuid: fdc6c52d-8393-458a-bf02-c1ff60a0ac1b

State: Accepted peer request (Connected)

1.4.4 Remove a node

[root@node1 /]# gluster peer detach 192.168.70.72

peer detach: success

[root@node1 /]# gluster peer status

Number of Peers: 0
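Since the peer was detached here only as a demonstration, probe it again before creating the replicated volume in the next section:

gluster peer probe 192.168.70.72

gluster peer status   # should once more report "Number of Peers: 1"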

1.5 Create a GlusterFS Volume

On node1 and node2, create the brick directory:

mkdir /data/gfs/

Create the volume:

gluster volume create vg0 replica 2 192.168.70.71:/data/gfs 192.168.70.72:/data/gfs force

volume create: vg0: success: please start the volume to access data

View the volume information:

[root@node1 /]# gluster volume info

 

Volume Name: vg0

Type: Replicate

Volume ID: 6aff1f4f-8efe-4ed0-879e-95df483a86a2

Status: Created

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: 192.168.70.71:/data/gfs

Brick2: 192.168.70.72:/data/gfs

[root@node1 /]# gluster volume status

Volume vg0 is not started

Start the volume:

[root@node1 /]# gluster volume start vg0

volume start: vg0: success
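After starting, the same status commands used above should now show the volume as started and each brick online:

gluster volume info vg0     # Status should now read "Started"

gluster volume status vg0   # each brick should be listed with a TCP port and PID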

1.6 Client Installation

yum -y install glusterfs glusterfs-fuse

mkdir -p /mnt/gfs

mount -t glusterfs 192.168.70.71:/vg0 /mnt/gfs
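To mount at boot and to sanity-check the replication, something along these lines can be used (the fstab entry and the test file name are illustrative):

echo "192.168.70.71:/vg0 /mnt/gfs glusterfs defaults,_netdev 0 0" >> /etc/fstab   # persistent mount

echo hello > /mnt/gfs/test.txt   # write a file from the client

ls /data/gfs/                    # run on node1 and node2; the file should appear on both bricks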

# Expanding the volume (since the replica count is 2, machines must be added in multiples of 2: 2, 4, 6, 8, ...)

gluster peer probe 192.168.70.74 # add the node

gluster peer probe 192.168.70.75 # add the node

gluster volume add-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs # add the new bricks to the volume
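Adding bricks does not redistribute existing data by itself; a rebalance is normally run afterwards so the new bricks take their share:

gluster volume rebalance vg0 start

gluster volume rebalance vg0 status   # wait until the rebalance reports completed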

Shrinking the volume (before shrinking, gluster must first migrate the data off the bricks being removed):

gluster volume remove-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs start # start the migration

gluster volume remove-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs status # check the migration status

gluster volume remove-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs commit # commit once the migration has finished
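Once the remove-brick has been committed, the now-unused nodes can also be dropped from the trusted pool (same detach command as in 1.4.4):

gluster peer detach 192.168.70.74

gluster peer detach 192.168.70.75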

# Migrating a brick (replace-brick)

gluster peer probe 192.168.70.75 # to migrate the data on 192.168.70.76 to 192.168.70.75, first add 192.168.70.75 to the cluster

gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs start # start the migration

gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs status # check the migration status

gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs commit # commit once the data migration has finished

gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs commit force # if 192.168.70.76 has failed and can no longer run, force the commit

gluster volume heal vg0 full # resync the entire volume
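The heal progress can be checked with the related info commands:

gluster volume heal vg0 info               # files still pending heal

gluster volume heal vg0 info split-brain   # files in split-brain, if any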
