GlusterFS Deployment and Maintenance Scripts: This One Article Is All You Need!

GlusterFS basics will not be covered here; after all, if you found this article, you must already have some understanding of it.

Preparing for an Offline Deployment of GlusterFS

That's right, offline deployment is exactly what we want, because a customer environment may well be an isolated LAN. To deploy offline, the first task is to build the GlusterFS installation packages. This article uses GlusterFS 4.1.0 as an example, deployed on CentOS 7.4.
Bonus: if you don't want to go through all this and would rather just grab the results, skip straight to the next section, "Writing the Deployment Script".
Install the rpmbuild tool

yum -y install rpm-build

Install the build tools and related dependencies

yum install -y flex bison openssl-devel libacl-devel sqlite-devel libxml2-devel libtool automake autoconf gcc attr python-devel unzip

The userspace-rcu-master dependency is a bit special: only the source archive could be found, so it has to be compiled manually.

cp userspace-rcu-master.zip /tmp
cd /tmp
unzip userspace-rcu-master.zip
cd userspace-rcu-master
./bootstrap
./configure
make
make install
ldconfig
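
If the build succeeded, the liburcu shared libraries should now be visible to the dynamic linker. A quick sanity check (the library names below are what userspace-rcu normally installs; exact versions vary by system):

ldconfig -p | grep liburcu
# expect entries such as liburcu.so and liburcu-bp.so, typically under /usr/local/lib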

Install the glusterfs source package (this step installs two files, glusterfs-4.1.0.tar.gz and glusterfs.spec, into /root/rpmbuild/SOURCES and /root/rpmbuild/SPECS respectively).

cp glusterfs-4.1.0-1.el7.centos.src.rpm /tmp
rpm -i glusterfs-4.1.0-1.el7.centos.src.rpm

Generate the RPM packages

cd /root/rpmbuild/SPECS
rpmbuild -bb glusterfs.spec

After this step, you will find the packages in the /root/rpmbuild/RPMS/x86_64 directory, including glusterfs-4.1.0-1.el7.centos.x86_64.rpm.
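
If you need to carry the result to the offline site in one piece, you can bundle the freshly built RPMs. A minimal sketch, assuming the rpmbuild output directory from the step above (the bundle name offline-gfs-rpms is just illustrative):

mkdir -p /tmp/offline-gfs-rpms
cp /root/rpmbuild/RPMS/x86_64/glusterfs-*.rpm /tmp/offline-gfs-rpms/
tar czf /tmp/offline-gfs-rpms.tar.gz -C /tmp offline-gfs-rpms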

Writing the Deployment Script

This script deploys the volume in distributed + replicated mode, with two replicas of every file, which means the number of server nodes must be even (a short sketch after this list illustrates how bricks pair into replica sets). Before running the script, first place these four files in the /tmp directory:
glusterfs-4.1.0-install.sh (the script file below)
libsqlite3-devel-3.25.2-alt2.x86_64.rpm
libuserspace-rcu-0.10.1-alt1.x86_64.rpm
glusterfs-4.1.0-1.el7.centos.x86_64.rpm
The .sh file is the script below; I have uploaded the other three packages to Baidu Netdisk. Link: 百度網盤
Extraction code: 1rym
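
To make the distributed + replicated layout concrete: in a replica 2 volume, GlusterFS groups consecutive bricks into replica pairs, and files are then distributed across the pairs. A hypothetical four-node example (the IP addresses are placeholders; the paths match the script below):

gluster volume create test-volume replica 2 transport tcp \
    192.168.0.1:/data/gfsdata 192.168.0.2:/data/gfsdata \
    192.168.0.3:/data/gfsdata 192.168.0.4:/data/gfsdata force
# Bricks 1 and 2 form one replica pair, bricks 3 and 4 the other;
# each file lands on exactly one pair and is stored twice.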
Without further ado, here is the code:

#!/bin/bash
# Author haotaro

# IP addresses of all nodes, comma-separated; the count must be even
NODES=$1
# IP of the server running this script; it also acts as the client
MY_IP=$2
# root passwords of the nodes, comma-separated, in the same order as NODES
PASSWORDS=$3

NODES=${NODES//,/ }
PASSWORDS=(${PASSWORDS//,/ })

nodes_arr=($NODES)
# If the node count is odd, drop one server (never MY_IP) from the list
if [ $(( ${#nodes_arr[@]} % 2 )) -ne 0 ]; then
    for ((i=0;i<${#nodes_arr[@]};i++)); do
        if [ ${nodes_arr[$i]} != $MY_IP  ]; then
            unset nodes_arr[$i]
            break
        fi
    done
fi
NODES="${nodes_arr[*]}"

NODES_STR=${NODES// /,}
###############Initialization################
# ROOT_PASS=$PASSWORD
# Gluster peers
# NODES=(192.168.0.108 192.168.0.105 192.168.0.188 192.168.0.157 192.168.0.167 192.168.0.149 192.168.0.178 192.168.0.181)

# Gluster volumes
# The volume is named test-volume; /data/gfsdata on each node is where the data actually lives
volume=(test-volume /data/gfsdata $NODES_STR)
# VOLUMES holds the names of the volume-definition arrays above; they are dereferenced via eval in Step 4
VOLUMES=(volume)
# Client mount point
MOUNT_POINT=/mnt/gfs
#############################################
 
# Get MY_IP
# if [ "${MY_IP}" == "" ];then
#         MY_IP=$(python -c "import socket;socket=socket.socket();socket.connect(('8.8.8.8',53));print socket.getsockname()[0];")
# fi
 
# Step 1. Install sshpass
sudo yum -y install sshpass
 
# Step 2. Install GlusterFS on every node.
 
cat > /tmp/tmp_install_gfs.sh << _wrtend_
#!/bin/bash
yum -y install rpcbind
rpm -i --force --nodeps /tmp/libuserspace-rcu-0.10.1-alt1.x86_64.rpm
rpm -i --force --nodeps /tmp/libsqlite3-devel-3.25.2-alt2.x86_64.rpm
yum -y install /tmp/glusterfs-4.1.0-1.el7.centos.x86_64.rpm
# create the log directory and the brick directory used by the volume
mkdir -p /var/log/glusterfs /data/gfsdata
systemctl daemon-reload
systemctl start glusterd.service
systemctl enable glusterd.service

sleep 5
_wrtend_

sudo chmod +x /tmp/tmp_install_gfs.sh

i=0 
for node in ${NODES[@]}; do
    if [ "${MY_IP}" != "$node" ];then
        echo $node install start
        sudo sshpass -p ${PASSWORDS[$i]} scp -o StrictHostKeyChecking=no /tmp/glusterfs-4.1.0-1.el7.centos.x86_64.rpm ${node}:/tmp/
        sudo sshpass -p ${PASSWORDS[$i]} scp -o StrictHostKeyChecking=no /tmp/libuserspace-rcu-0.10.1-alt1.x86_64.rpm ${node}:/tmp/
        sudo sshpass -p ${PASSWORDS[$i]} scp -o StrictHostKeyChecking=no /tmp/libsqlite3-devel-3.25.2-alt2.x86_64.rpm ${node}:/tmp/
        sudo sshpass -p ${PASSWORDS[$i]} scp -o StrictHostKeyChecking=no /tmp/tmp_install_gfs.sh ${node}:/tmp/
        sudo sshpass -p ${PASSWORDS[$i]} ssh -o StrictHostKeyChecking=no root@${node} /tmp/tmp_install_gfs.sh
        echo $node install end
    fi
    let i+=1
done
 
sudo /tmp/tmp_install_gfs.sh
 
# Step 3. Probe the peers
k=0
for node in ${NODES[@]}; do
    if [ "${MY_IP}" != "$node" ];then
        sudo gluster peer probe ${node}
        sudo sshpass -p ${PASSWORDS[$k]} ssh root@${node} gluster peer probe ${MY_IP}
    fi 
    let k+=1    
done
 
sleep 2
 
# Step 4. Verify peer status, then create and start the volume
conn_peer_num=`gluster peer status | grep Connected | wc -l`
conn_peer_num=`expr $conn_peer_num + 1`
 
if [ ${conn_peer_num} -ge ${#nodes_arr[@]} ];then
    echo "All peers have been attached."
    for vol in ${VOLUMES[@]};do
        eval vol_info=(\${$vol[@]})
        eval vol_nodes=(${vol_info[2]//,/ })
        vol_path=""
        for node in ${vol_nodes[@]};do
            vol_path=$vol_path$node:${vol_info[1]}" "
        done
 
        # create volume
        sudo gluster volume create ${vol_info[0]} replica 2 transport tcp ${vol_path} force
        # start volume
        sudo gluster volume start ${vol_info[0]}
    done 
else
    echo "Attach peers error"
    exit 1
fi

# Set up the client mount
sudo mkdir -p ${MOUNT_POINT}
sudo mount -t glusterfs ${MY_IP}:${vol_info[0]} ${MOUNT_POINT}

echo "mount success"

Run the command:
sh /tmp/glusterfs-4.1.0-install.sh <server1>,<server2>,...,<serverN> <thisServer> <password1>,<password2>,...,<passwordN>
Here, thisServer is the server that runs the script; it must be one of the servers in the list above, and besides serving as a storage node it also hosts the client. Once the script completes, you can run gluster peer status to check the cluster state.
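
Beyond gluster peer status, a few read-only commands confirm that the volume actually came up (test-volume and /mnt/gfs match the script above):

gluster peer status
gluster volume info test-volume
gluster volume status test-volume
df -h /mnt/gfs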

Preventing the Client Mount from Failing Midway

This is a blunt approach: check the mount every ten seconds and, if it has gone away, remount it.

#!/bin/bash
HISTFILE=/root/.bash_history
set -o history

echo "export PROMPT_COMMAND='history -a; history -c; history -r; $PROMPT_COMMAND'" >> /root/.bashrc
shopt -s histappend
echo "export HISTTIMEFORMAT='%F %T '" >> /root/.bashrc
source /root/.bashrc

# GlusterFS server address (first script argument)
GLUSTERFS_SERVER=$1

# Check the GlusterFS mount forever, every ten seconds
while true
do
    test_result=`df -h | grep "% /mnt/gfs"`
    if [ -z "$test_result" ]
    then
        date=`date`
        echo "["$date"]Unmounted...restart mounting..." >> /var/log/test_mount_log

        # Dump shell history to check whether someone manually unmounted the GlusterFS client
        history -w
        echo "-------------------HISTORY--------------------" >> /var/log/test_mount_log
        history >> /var/log/test_mount_log
        echo "---------------------END----------------------" >> /var/log/test_mount_log

        # Remount on the same mount point the check above looks at
        mount -t glusterfs $GLUSTERFS_SERVER:test-volume /mnt/gfs

        echo "["$date"]Mounted" >> /var/log/test_mount_log
    fi
    sleep 10
done

Once it is running, you can use the df -h command to check whether the mount succeeded.
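
If the watchdog should survive reboots rather than being started by hand, one option is to wrap it in a systemd unit. A minimal sketch, assuming the script above is saved as /usr/local/bin/gfs-mount-watchdog.sh (the path, the unit name, and <serverIP> are all placeholders):

# /etc/systemd/system/gfs-mount-watchdog.service
[Unit]
Description=Remount the GlusterFS client if the mount disappears
After=network-online.target glusterd.service

[Service]
ExecStart=/usr/local/bin/gfs-mount-watchdog.sh <serverIP>
Restart=always

[Install]
WantedBy=multi-user.target

After placing the file, activate it with systemctl daemon-reload, then systemctl enable gfs-mount-watchdog.service and systemctl start gfs-mount-watchdog.service.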

Cleaning Up GlusterFS

Run the following commands on every service node.
Clean up the data

systemctl stop glusterd
rm -rf /data/gfsdata
rm -rf /var/lib/glusterd
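
Deleting the directories alone leaves the volume defined in the cluster. While the cluster is still reachable, it is cleaner to stop and delete the volume first (run once, on any one node); and if you want to reuse the brick directory for a new volume instead of deleting it, the GlusterFS extended attributes on it must be cleared as well:

gluster volume stop test-volume
gluster volume delete test-volume
# only needed when keeping the brick directory for reuse:
setfattr -x trusted.glusterfs.volume-id /data/gfsdata
rm -rf /data/gfsdata/.glusterfs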

Remove the installed packages

rpm -qa | grep gluster
yum -y remove <package-name>
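
If several gluster packages are installed, the query can be piped straight into yum so they are all removed in one go:

rpm -qa | grep gluster | xargs -r yum -y remove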

Miscellaneous

For other GlusterFS maintenance techniques, such as handling split-brain, just go to the official documentation and search for the keyword "split"; plenty of handy tips will come up, so they are not repeated here.
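
As a quick starting point, the heal commands show whether any files are currently in split-brain (the volume name matches the one created above):

gluster volume heal test-volume info
gluster volume heal test-volume info split-brain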
