Environment preparation:
OS version: CentOS Linux release 7.5.1804 (Core)
glusterfs: 3.6.9
userspace-rcu-master: from GitHub
Hardware resources:
10.200.22.152 GlusterFS-master (referred to below as 152)
10.200.22.151 GlusterFS-slave (referred to below as 151)
yum install -y flex bison openssl openssl-devel acl libacl libacl-devel sqlite-devel libxml2-devel python-devel make cmake gcc gcc-c++ autoconf automake libtool unzip zip wget
1) Enter the /usr/local/src directory and download userspace-rcu-master.zip:
cd /usr/local/src && wget https://github.com/urcu/userspace-rcu/archive/master.zip
2) Unzip, compile, and install:
unzip /usr/local/src/master -d /usr/local/
cd /usr/local/userspace-rcu-master/
./bootstrap
./configure
make && make install
ldconfig
1) Enter the /usr/local/src directory and download glusterfs-3.6.9.tar.gz:
cd /usr/local/src && wget https://download.gluster.org/pub/gluster/glusterfs/old-releases/3.6/3.6.9/glusterfs-3.6.9.tar.gz
2) Unzip, compile, and install:
tar -zxvf /usr/local/src/glusterfs-3.6.9.tar.gz -C /usr/local/
cd /usr/local/glusterfs-3.6.9/
./configure --prefix=/usr/local/glusterfs
make && make install
3) Add the environment variables
vi /etc/profile
# add the following at the top of the file
export GLUSTERFS_HOME=/usr/local/glusterfs
export PATH=$PATH:$GLUSTERFS_HOME/sbin
source /etc/profile   # reload the configuration so it takes effect
4) Start glusterfs
/usr/local/glusterfs/sbin/glusterd
5) Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
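If disabling the firewall outright is not acceptable, the Gluster ports can be opened instead. This is a hedged configuration fragment assuming firewalld is in use; the port ranges follow GlusterFS 3.x defaults (24007-24008 for glusterd management, one port per brick starting at 49152), so verify the actual brick ports with `gluster volume status` before relying on it.

```shell
# Alternative to disabling firewalld: open only the Gluster ports.
firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
firewall-cmd --permanent --add-port=49152-49156/tcp   # brick ports (one per brick)
firewall-cmd --reload
```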
Appendix: installing Gluster from the YUM repository
rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
wget -P /etc/yum.repos.d https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.19/CentOS/glusterfs-epel.repo   # pick the version that fits your needs
yum -y install glusterfs glusterfs-fuse glusterfs-server
systemctl start glusterd.service
systemctl enable glusterd.service
1) Run the following command on 152 to add the 151 node to the cluster:
gluster peer probe 10.200.22.151
2) Check the cluster status:
[root@GlusterFS-master ~]# gluster peer status
Number of Peers: 1

Hostname: 10.200.22.151
Uuid: d2426768-81e9-486c-808b-d4716b1cd8ec
State: Peer in Cluster (Connected)
3) Check the volume information:
[root@GlusterFS-master ~]# gluster volume info
No volumes present
4) Create the data storage directory on 152 and 151 (every node in the cluster must be configured):
mkdir -p /data
5) Create the replicated volume models on the directory just created (replica 2 means two copies are stored; the servers' storage directories follow):
[root@GlusterFS-master ~]# gluster volume create models replica 2 10.200.22.152:/data 10.200.22.151:/data force
volume create: models: success: please start the volume to access data
Notes on the GlusterFS volume types (the official guide illustrates them well): https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/
Default mode, i.e. DHT, also called a distributed volume: each file is placed by hash on a single server node.
Command format: gluster volume create test-volume server1:/exp1 server2:/exp2
Replicated mode, i.e. AFR: pass replica x when creating the volume and each file is replicated to x nodes. A 3-node arbiter replica is now recommended, because 2 nodes can split-brain.
Command format:
gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
gluster volume create test-volume replica 3 arbiter 1 transport tcp server1:/exp1 server2:/exp2 server3:/exp3
Distributed-replicated mode, at least 4 nodes.
Command format: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Dispersed mode, at least 3 nodes.
Command format: gluster volume create test-volume disperse 3 server{1..3}:/bricks/test-volume
Distributed-dispersed mode: creates a distributed dispersed volume. The disperse keyword and <count> are mandatory, and the number of bricks given on the command line must be a multiple of the disperse count.
Command format: gluster volume create <volname> disperse 3 server1:/brick{1..6}
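The "multiple of" rules above are easy to get wrong on the command line. A minimal sketch of the arithmetic, using a hypothetical helper that is not part of the gluster CLI:

```shell
# Sanity-check that a brick count fits the replica/disperse factor,
# mirroring the "brick count must be a multiple" rules above.
check_brick_count() {
  factor=$1; count=$2
  if [ "$count" -ge "$factor" ] && [ $((count % factor)) -eq 0 ]; then
    echo "ok: $count bricks fit factor $factor"
  else
    echo "error: $count bricks do not fit factor $factor"
  fi
}
check_brick_count 2 4   # distributed-replicated, replica 2: ok
check_brick_count 3 6   # distributed-dispersed, disperse 3: ok
check_brick_count 3 4   # invalid: 4 is not a multiple of 3
```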
6) Check the volume information again
[root@GlusterFS-master ~]# gluster volume info

Volume Name: models
Type: Replicate
Volume ID: f2792167-cbab-4279-9d6d-77dc6559afa7
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.200.22.152:/data
Brick2: 10.200.22.151:/data
7) Start models
[root@GlusterFS-master ~]# gluster volume start models
volume start: models: success
8) Gluster performance tuning
# Enable quota on the volume
gluster volume quota models enable
# Cap the models volume at 10GB in total (adjust to the actual disk size)
gluster volume quota models limit-usage / 10GB
# Set the cache size (128MB is not absolute; size it to your disks)
gluster volume set models performance.cache-size 128MB
# Enable flush-behind (asynchronous, background flushing)
gluster volume set models performance.flush-behind on
# Use 32 io threads
gluster volume set models performance.io-thread-count 32
# Enable write-behind (writes go to the cache first, then to disk)
gluster volume set models performance.write-behind on
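The volume-set tuning commands can also be applied in one loop. A sketch that only echoes the commands so it runs anywhere; drop the `echo` on a node where `gluster` is actually available:

```shell
# Dry-run: print the tuning commands for the models volume.
VOLUME=models
cmds=$(while read -r opt val; do
  echo "gluster volume set $VOLUME $opt $val"
done <<'EOF'
performance.cache-size 128MB
performance.flush-behind on
performance.io-thread-count 32
performance.write-behind on
EOF
)
printf '%s\n' "$cmds"
```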
9) Check the volume information after tuning
[root@GlusterFS-master ~]# gluster volume info

Volume Name: models
Type: Replicate
Volume ID: f2792167-cbab-4279-9d6d-77dc6559afa7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.200.22.152:/data
Brick2: 10.200.22.151:/data
Options Reconfigured:
performance.write-behind: on
performance.io-thread-count: 32
performance.flush-behind: on
performance.cache-size: 128MB
features.quota: on
1) Install the gluster client:
yum install -y glusterfs glusterfs-fuse
2) Create the mount point directory:
mkdir -p /opt/gfsmount
3) Mount the volume:
mount -t glusterfs 10.200.22.151:models /opt/gfsmount/
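To make the mount survive reboots, an /etc/fstab entry along these lines could be added. The `_netdev` option is an assumption here, used to delay the mount until the network is up:

```
# /etc/fstab — mount the models volume at boot (sketch)
10.200.22.151:/models  /opt/gfsmount  glusterfs  defaults,_netdev  0 0
```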
4) Check the mount:
df -h
5) Test write performance:
time dd if=/dev/zero of=/opt/gfsmount/hello bs=10M count=1
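Beyond raw throughput, a quick way to see replication in action is to write several small files and then compare the bricks. A sketch that defaults to a temp directory as a stand-in for the mount point so it runs anywhere; on a real client, set MOUNT=/opt/gfsmount first:

```shell
# Write 5 small files and count them back. On the replica 2 volume,
# the same files should then appear under /data on both 152 and 151.
MOUNT=${MOUNT:-$(mktemp -d)}
for i in $(seq 1 5); do
  echo "payload $i" > "$MOUNT/copy-test-$i"
done
count=$(ls "$MOUNT" | grep -c '^copy-test-')
echo "$count files written"
```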
6) Check how the data is stored in the cluster (run on both nodes, 152 and 151):
cd /data && ll
1) Delete a volume
gluster volume stop models
gluster volume delete models
2) Remove machines from the cluster
gluster peer detach glusterfs3 glusterfs4
3) Expand a volume (since the replica count is 2, machines must be added 2 at a time: 4, 6, 8, ...)
gluster peer probe glusterfs3   # add the node
gluster peer probe glusterfs4   # add the node
gluster volume add-brick models glusterfs3:/data/brick1/models glusterfs4:/data/brick1/models force   # add the bricks to the volume
4) Rebalance a volume
gluster volume rebalance models start
gluster volume rebalance models status
gluster volume rebalance models stop
5) Shrink a volume (gluster needs to migrate the data elsewhere before the bricks are removed)
gluster volume remove-brick models glusterfs3:/data/brick1/models glusterfs4:/data/brick1/models start    # start the migration
gluster volume remove-brick models glusterfs3:/data/brick1/models glusterfs4:/data/brick1/models status   # check the migration status
gluster volume remove-brick models glusterfs3:/data/brick1/models glusterfs4:/data/brick1/models commit   # commit once the migration finishes
6) Migrate a volume
gluster peer probe glusterfs5   # to migrate glusterfs3's data to glusterfs5, first add glusterfs5 to the cluster
gluster volume replace-brick models glusterfs3:/data/brick1/models glusterfs5:/data/brick1/models start    # start the migration
gluster volume replace-brick models glusterfs3:/data/brick1/models glusterfs5:/data/brick1/models status   # check the migration status
gluster volume replace-brick models glusterfs3:/data/brick1/models glusterfs5:/data/brick1/models commit   # commit once the data has been migrated
gluster volume replace-brick models glusterfs3:/data/brick1/models glusterfs5:/data/brick1/models commit force   # if the source machine has failed and can no longer run, force the commit
gluster volume heal models full   # re-synchronize the whole volume
7) Restrict client access
gluster volume set models auth.allow 10.200.*
List all GlusterFS volumes:
gluster volume list
Show the status and information of all volumes:
gluster volume status
gluster volume info
Start a volume: gluster volume start models    # start the volume named models
Stop a volume: gluster volume stop models      # stop the volume named models
Delete a volume: gluster volume delete models  # delete the volume named models
List the cluster nodes:
gluster pool list
After a restart, note the following:
1. The glusterFS service must be started
2. The models volume must be started
3. The directory /opt/gfsmount/ must be remounted
4. After mounting, re-enter /opt/gfsmount/
systemctl stop firewalld.service
gluster volume start models
mount -t glusterfs 10.200.22.151:models /opt/gfsmount/
cd /opt/gfsmount/