Author: 獨筆孤行 @ TaoCloud
DRBD (Distributed Replicated Block Device) is a software-based, shared-nothing storage replication solution that mirrors the content of block devices between servers. You can loosely think of it as RAID over the network.

DRBD's core functionality is implemented in the Linux kernel, at the lowest level of the I/O stack: DRBD sits below the filesystem, closer to the kernel and its I/O path than the filesystem itself.
| Node | Hostname | IP address | Disks | OS |
|---|---|---|---|---|
| Node 1 | node1 | 172.16.201.53 | sda, sdb | CentOS 7.6 |
| Node 2 | node2 | 172.16.201.54 | sda, sdb | CentOS 7.6 |
Disable the firewall and SELinux
```bash
# Configure on both nodes
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
```
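A quick optional check that both settings took effect (`getenforce` reports Permissive until a reboot, after which the config file applies):

```bash
# Verify firewalld is stopped and SELinux is no longer enforcing
systemctl is-active firewalld   # expect: inactive
getenforce                      # expect: Permissive (Disabled after reboot)
```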
Configure the EPEL repository
```bash
# Install on both nodes
yum install epel-release
```
If your yum repositories provide the complete set of DRBD packages, you can install directly with yum; if some packages cannot be found, build them from source instead. Choose one of the two methods below.
```bash
yum install drbd drbd-bash-completion drbd-udev drbd-utils kmod-drbd
```
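If the repository had all of the packages, a quick optional query of the RPM database confirms what was installed:

```bash
# List installed DRBD-related packages
rpm -qa | grep -i drbd
```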
Installing via yum may fail to locate the kmod-drbd package, in which case you need to build from source.
2.1 Prepare the build environment
```bash
yum update
yum -y install gcc gcc-c++ make automake autoconf help2man libxslt libxslt-devel flex rpm-build kernel-devel pygobject2 pygobject2-devel
reboot
```
2.2 Download the source packages from the official site
Get the download URLs for the source packages from the official site, https://www.linbit.com/en/drbd-community/drbd-download/, then download them:
```bash
wget https://www.linbit.com/downloads/drbd/9.0/drbd-9.0.21-1.tar.gz
wget https://www.linbit.com/downloads/drbd/utils/drbd-utils-9.13.0.tar.gz
wget https://www.linbit.com/downloads/drbdmanage/drbdmanage-0.99.18.tar.gz
mkdir -p rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
mkdir DRBD9
```
2.3 Build the RPM packages
```bash
tar xvf drbd-9.0.21-1.tar.gz
cd drbd-9.0.21-1
make kmp-rpm
cp /root/rpmbuild/RPMS/x86_64/*.rpm /root/DRBD9/
```
```bash
cd ~   # return to /root, where the tarballs were downloaded
tar xvf drbdmanage-0.99.18.tar.gz
cd drbdmanage-0.99.18
make rpm
cp dist/drbdmanage-0.99.18*.rpm /root/DRBD9/
```
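Both builds drop their RPMs into /root/DRBD9, so a quick optional listing confirms everything is in place before installing:

```bash
# The kmod/kernel RPMs from drbd and the drbdmanage RPMs should all be here
ls -1 /root/DRBD9/*.rpm
```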
2.4 Install DRBD
```bash
# Install on both nodes
cd /root/DRBD9
yum install drbd-kernel-debuginfo-9.0.21-1.x86_64.rpm drbdmanage-0.99.18-1.noarch.rpm drbdmanage-0.99.18-1.src.rpm kmod-drbd-9.0.21_3.10.0_1160.6.1-1.x86_64.rpm
```
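Before initializing the cluster, it is worth confirming that the DRBD 9 kernel module actually loads against the running kernel (an optional sanity check; drbdmanage loads the module on demand otherwise):

```bash
# Load the DRBD kernel module and confirm the version that was built
modprobe drbd
modinfo drbd | grep -E '^(filename|version)'
cat /proc/drbd   # DRBD 9 still reports its version here
```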
1. Create the volume group on the primary node
```bash
# Run on node 1
pvcreate /dev/sdb1
vgcreate drbdpool /dev/sdb1
```
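Optionally verify the physical volume and the volume group; drbdpool is the default VG name that drbdmanage looks for:

```bash
# Confirm the PV exists and the drbdpool VG was created on it
pvs /dev/sdb1
vgs drbdpool
```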
2. Initialize the DRBD cluster and add a node
```bash
# Run on node 1
[root@node1 ~]# drbdmanage init 172.16.201.53
You are going to initialize a new drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage
Confirm: yes/no: yes
Empty drbdmanage control volume initialized on '/dev/drbd0'.
Empty drbdmanage control volume initialized on '/dev/drbd1'.
Waiting for server: .
Operation completed successfully

# Add node 2
[root@node1 ~]# drbdmanage add-node node2 172.16.201.54
Operation completed successfully
Operation completed successfully
Host key verification failed.
Give leader time to contact the new node
Operation completed successfully
Operation completed successfully

Join command for node node2:
drbdmanage join -p 6999 172.16.201.54 1 node1 172.16.201.53 0 G3F1h/pAcGwV1LnlxhFE
```
Record the last line of the output, `drbdmanage join -p 6999 172.16.201.54 1 node1 172.16.201.53 0 G3F1h/pAcGwV1LnlxhFE`, and run it on node 2 to join the cluster. (drbdmanage normally tries to run this join over SSH by itself; the "Host key verification failed" line above means it could not, so the command has to be executed manually on node 2.)
3. Create the volume group on the secondary node
```bash
# Run on node 2
pvcreate /dev/sdb
vgcreate drbdpool /dev/sdb
```
4. Join the secondary node to the cluster
```bash
# Run on node 2
[root@node2 ~]# drbdmanage join -p 6999 172.16.201.54 1 node1 172.16.201.53 0 G3F1h/pAcGwV1LnlxhFE
You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage
Confirm: yes/no: yes
Waiting for server to start up (can take up to 1 min)
Operation completed successfully
```
5. Check the cluster status
```bash
# Run on node 1; the output below indicates a healthy state
[root@node1 ~]# drbdadm status
.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
```
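You can also ask drbdmanage itself for its view of the cluster (exact output formatting varies by version):

```bash
# Both nodes should be listed with state "ok"
drbdmanage list-nodes
```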
6. Create a resource
```bash
# Run on node 1
# Create the resource test01
[root@node1 ~]# drbdmanage add-resource test01
Operation completed successfully
[root@node1 ~]# drbdmanage list-resources
+----------------+
| Name   | State |
|----------------|
| test01 | ok    |
+----------------+
```
7. Create a volume
```bash
# Run on node 1
# Create a 5 GB volume in resource test01
[root@node1 ~]# drbdmanage add-volume test01 5GB
Operation completed successfully
[root@node1 ~]# drbdmanage list-volumes
+-----------------------------------------------------------------------------+
| Name   | Vol ID |     Size | Minor | | State |
|-----------------------------------------------------------------------------|
| test01 |      0 | 4.66 GiB |   100 | | ok    |
+-----------------------------------------------------------------------------+
[root@node1 ~]#
```

Note that the requested 5 GB is decimal; drbdmanage reports it in binary units as 4.66 GiB (5 × 10⁹ bytes ÷ 2³⁰).
8. Deploy the resource
The trailing `2` is the number of nodes the resource is deployed to.
```bash
# Run on node 1
[root@node1 ~]# drbdmanage deploy-resource test01 2
Operation completed successfully

# Immediately after deployment the peer disk is Inconsistent while the initial sync runs
[root@node1 ~]# drbdadm status
.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
test01 role:Secondary
  disk:UpToDate
  node2 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:5.70

# Once the sync completes, the status looks like this
[root@node1 ~]# drbdadm status
.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
test01 role:Secondary
  disk:UpToDate
  node2 role:Secondary
    peer-disk:UpToDate
```
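To follow the initial sync without rerunning the command by hand, something like the following works:

```bash
# Refresh the status every second until peer-disk shows UpToDate
watch -n 1 drbdadm status test01
```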
9. With the DRBD device configured, create a filesystem and mount it
```bash
# Run on node 1
# The number in /dev/drbd<minor> is the Minor value reported by "drbdmanage list-volumes"
[root@node1 ~]# mkfs.xfs /dev/drbd100
meta-data=/dev/drbd100           isize=512    agcount=4, agsize=305176 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1220703, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mount /dev/drbd100 /mnt/
[root@node1 ~]# echo "Hello World" > /mnt/test.txt
[root@node1 ~]# ll /mnt/
total 4
-rw-r--r-- 1 root root 12 Nov 26 15:43 test.txt
[root@node1 ~]# cat /mnt/test.txt
Hello World
```
10. To mount the DRBD device on node 2, do the following:
```bash
# On node 1: unmount /mnt and demote to Secondary
[root@node1 ~]# umount /mnt/
[root@node1 ~]# drbdadm secondary test01

# On node 2: promote to Primary
[root@node2 ~]# drbdadm primary test01
[root@node2 ~]# mount /dev/drbd100 /mnt/
[root@node2 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs                   tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs                   tmpfs     3.9G  8.9M  3.9G   1% /run
tmpfs                   tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root xfs        35G  1.5G   34G   5% /
/dev/sda1               xfs      1014M  190M  825M  19% /boot
tmpfs                   tmpfs     783M     0  783M   0% /run/user/0
/dev/drbd100            xfs       4.7G   33M  4.7G   1% /mnt
[root@node2 ~]# ls -l /mnt/
total 4
-rw-r--r-- 1 root root 12 Nov 26 15:43 test.txt
[root@node2 ~]# cat /mnt/test.txt
Hello World
```
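For repeated switchovers, the four manual steps above can be wrapped in a small helper. The sketch below is illustrative only: the resource name, device, mount point, and hostnames are the ones used in this article, and it assumes passwordless root SSH between the nodes (it does no fencing or failure handling):

```bash
#!/bin/bash
# Illustrative DRBD switchover helper (assumes passwordless root SSH).
# Moves the mounted resource from $FROM to $TO; no fencing, not production-ready.
RES=test01
DEV=/dev/drbd100
MNT=/mnt
FROM=node1
TO=node2

set -e
ssh "root@$FROM" "umount $MNT && drbdadm secondary $RES"
ssh "root@$TO"   "drbdadm primary $RES && mount $DEV $MNT"
echo "$RES is now mounted on $TO:$MNT"
```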
Follow the WeChat official account 「雲實戰」 with any further questions.