Provision three new CentOS 7.2 machines (plus a fourth, gluster00, used in the later multi-brick examples).
1 System environment configuration (minimal install)
IP assignment: the servers are 192.168.0.191, 192.168.0.192, 192.168.0.193, and 192.168.0.190
1.1 Set the hostnames to gluster01, gluster02, gluster03, and gluster00
Run the matching command on each machine:
hostnamectl set-hostname gluster01
hostnamectl set-hostname gluster02
hostnamectl set-hostname gluster03
hostnamectl set-hostname gluster00
1.2 Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/g' /etc/selinux/config
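The setenforce 0 above takes effect immediately, while the sed edit makes the change permanent after a reboot. A minimal sketch for checking the sed expression against a scratch copy first (the temp file stands in for the real /etc/selinux/config):

```shell
# Sketch: dry-run the SELinux edit on a scratch copy, so the sed expression
# can be confirmed before touching the real /etc/selinux/config
conf=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$conf"
sed -i 's/enforcing/disabled/g' "$conf"
grep '^SELINUX=' "$conf"    # should now print SELINUX=disabled
rm -f "$conf"
```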
1.3 Configure the yum repositories
cd /etc/yum.repos.d/
wget http://mirrors.aliyun.com/repo/Centos-7.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo
yum -y install epel-release
yum install -y vim lrzsz wget
1.4 Edit the hosts file on every machine and add the IP-to-hostname mappings
vim /etc/hosts
192.168.0.191 gluster01
192.168.0.192 gluster02
192.168.0.193 gluster03
192.168.0.190 gluster00
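Instead of editing by hand on every node, the entries above can be appended idempotently, so re-running the setup does not duplicate lines. A sketch; the HOSTS_FILE override is an assumption added here for dry runs and defaults to the real /etc/hosts:

```shell
# Sketch: append each mapping only if it is missing (idempotent).
# HOSTS_FILE is overridable for testing; it defaults to the real /etc/hosts.
hosts_file="${HOSTS_FILE:-/etc/hosts}"
for entry in '192.168.0.191 gluster01' '192.168.0.192 gluster02' \
             '192.168.0.193 gluster03' '192.168.0.190 gluster00'; do
  grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
done
```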
1.5 Format the data disk on each of the three machines as XFS
[root@localhost ~]# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0006241e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      616447      307200   83  Linux
/dev/sda2          616448     4810751     2097152   82  Linux swap / Solaris
/dev/sda3         4810752    62914559    29051904   83  Linux

Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@localhost ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x1cc01f75.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-62914559, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-62914559, default 62914559):
Using default value 62914559
Partition 1 of type Linux and of size 30 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
After partitioning, format the partition as XFS:
[root@localhost ~]# mkfs -t xfs /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=1966016 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=7864064, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=3839, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mount the data partition /dev/sdb1:
[root@localhost ~]# mkdir -p /data/k8s
[root@localhost ~]# echo '/dev/sdb1 /data/k8s xfs defaults 1 2' >> /etc/fstab
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        28G  993M   27G   4% /
devtmpfs        904M     0  904M   0% /dev
tmpfs           913M     0  913M   0% /dev/shm
tmpfs           913M  8.6M  904M   1% /run
tmpfs           913M     0  913M   0% /sys/fs/cgroup
/dev/sda1       297M  108M  189M  37% /boot
tmpfs           183M     0  183M   0% /run/user/0
/dev/sdb1        30G   33M   30G   1% /data/k8s
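The fields in the fstab line appended above are, in order: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A tiny sketch of a helper (hypothetical, not part of the original setup) that keeps the format consistent when more data disks are added:

```shell
# Sketch: emit a consistent fstab line for an XFS data-disk mount.
# Fields: device, mount point, fstype, options, dump flag, fsck pass.
fstab_entry() {
  printf '%s %s xfs defaults 1 2\n' "$1" "$2"
}
fstab_entry /dev/sdb1 /data/k8s
# -> /dev/sdb1 /data/k8s xfs defaults 1 2
```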
2 Install and configure the GlusterFS server
yum install centos-release-gluster -y
yum install -y glusterfs glusterfs-server glusterfs-fuse
yum install glusterfs-rdma -y
systemctl start glusterd
systemctl enable glusterd
Check the version (glusterfs 6.1 is the latest release; the stripe volume type has been deprecated):
[root@gluster01 k8s]# gluster --version
glusterfs 6.1
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
Pick any node, for example gluster01, and assemble the GlusterFS cluster by probing the other nodes from it (on node 1, add nodes 2, 3, and 0):
gluster peer probe gluster01    # can be skipped; probing the local node is unnecessary
gluster peer probe gluster02
gluster peer probe gluster03
gluster peer probe gluster00
Check the cluster status:
[root@gluster01 ~]# gluster peer status
Number of Peers: 3

Hostname: gluster02
Uuid: 710e5f59-9f93-451a-b092-43acd480dd4b
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: f7664912-0038-4b86-8aee-88644df1221c
State: Peer in Cluster (Connected)

Hostname: gluster00
Uuid: d90cabcd-5afb-49d2-a50b-197f10cb00bf
State: Peer in Cluster (Connected)
[root@gluster01 ~]#
To remove a node from the cluster:
gluster peer detach gluster00
3 Examples of the different volume types
https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-dispersed-volumes
--------------------------------------------------------------------------------------------------------------------
Example 1: create a replicated volume
gluster volume create k8s-data replica 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s force
[root@gluster01 ~]# gluster volume create k8s-data replica 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s force
volume create: k8s-data: success: please start the volume to access data
[root@gluster01 ~]# gluster volume info k8s-data

Volume Name: k8s-data
Type: Replicate
Volume ID: 7c70dd22-2e6c-4f7b-b2a4-d9ce579fe506
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/k8s
Brick2: gluster02:/data/k8s
Brick3: gluster03:/data/k8s
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@gluster01 ~]# gluster volume status
Status of volume: k8s-data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/data/k8s                   49152     0          Y       2300
Brick gluster02:/data/k8s                   49152     0          Y       2249
Brick gluster03:/data/k8s                   49152     0          Y       2246
Self-heal Daemon on localhost               N/A       N/A        Y       2321
Self-heal Daemon on gluster03               N/A       N/A        Y       2267
Self-heal Daemon on gluster02               N/A       N/A        Y       2270

Task Status of Volume k8s-data
------------------------------------------------------------------------------
There are no active volume tasks
Delete the replicated volume:
[root@gluster01 ~]# gluster volume stop k8s-data
[root@gluster01 ~]# gluster volume delete k8s-data
[root@gluster01 ~]# gluster volume stop k8s-data
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: k8s-data: success
[root@gluster01 ~]# gluster volume delete k8s-data
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: k8s-data: success
[root@gluster01 ~]#
--------------------------------------------------------------------------------------------------------------------
Example 2: create a distributed replicated volume (with replica 2, the brick count must be a multiple of 2: exactly 2 bricks gives a plain replicated volume, 4 or more gives a distributed replicated volume)
The order in which the bricks (filesystems or block devices) are listed at creation time has a big impact on data protection.
Each group of replica_count consecutive bricks forms one replica set, and all replica sets are combined into one volume-wide distribution set.
To make sure members of a replica set are not placed on the same node, list the first brick on every server, then the next brick on every server in the same order, and so on.
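The ordering rule above can be sketched as a small loop that emits the bricks in the replica-safe order (first brick path on every server, then the next path on every server); the server and path lists here match this document's nodes:

```shell
# Sketch: generate the brick list in the replica-safe order -- outer loop over
# brick paths, inner loop over servers, so consecutive bricks sit on
# different nodes
servers='gluster01 gluster02 gluster03 gluster00'
paths='/data/k8s /k8s'
for p in $paths; do
  for s in $servers; do
    printf '%s:%s ' "$s" "$p"
  done
done
echo
```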
gluster volume create k8s-data replica 2 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s force
[root@gluster01 k8s]# gluster volume create k8s-data replica 2 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s force
volume create: k8s-data: success: please start the volume to access data
[root@gluster01 k8s]# gluster volume info k8s-data

Volume Name: k8s-data
Type: Distributed-Replicate
Volume ID: b9e31332-56b9-4fe1-988e-2d01ce236fc8
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/k8s
Brick2: gluster02:/data/k8s
Brick3: gluster03:/data/k8s
Brick4: gluster00:/data/k8s
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Official examples of distributed replicated volumes:
For example, a four-node distributed (replicated) volume with two-way mirroring:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Creation of test-volume has been successful
Please start the volume to access data.
For example, to create a six-node distributed (replicated) volume with two-way mirroring:
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6
Creation of test-volume has been successful
Please start the volume to access data.
-----------------------------------------------------------------------------------------------------------------
Example: create a dispersed volume
Command to create the dispersed volume:
gluster volume create test-volume disperse 4 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s force
[root@gluster01 k8s]# gluster volume create test-volume disperse 4 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s force
There isn't an optimal redundancy value for this configuration. Do you want to create the volume with redundancy 1 ? (y/n) y
volume create: test-volume: success: please start the volume to access data
[root@gluster01 k8s]# gluster volume info test-volume

Volume Name: test-volume
Type: Disperse
Volume ID: 57de6cdf-36d4-4ba1-b30f-931a70999024
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x (3 + 1) = 4
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/k8s
Brick2: gluster02:/data/k8s
Brick3: gluster03:/data/k8s
Brick4: gluster00:/data/k8s
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
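The "(3 + 1)" above means 3 data bricks plus 1 redundancy brick: the volume tolerates the loss of 1 brick, and usable capacity is (bricks - redundancy) brick-sizes. A rough arithmetic sketch, assuming the equal 30GB bricks used in this document:

```shell
# Sketch: usable capacity of a dispersed volume with equal-sized bricks --
# usable = brick_size * (bricks - redundancy)
bricks=4; redundancy=1; brick_gb=30
usable_gb=$(( brick_gb * (bricks - redundancy) ))
raw_gb=$(( brick_gb * bricks ))
echo "disperse $bricks / redundancy $redundancy: ${usable_gb}GB usable of ${raw_gb}GB raw"
```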
---------------------------------------------------------------------------------------------
Example: distributed dispersed volume. disperse must be greater than 2, and the total brick count must be a multiple of the disperse count (with disperse 3 here, 6 bricks follow).
Because we are short of machines and disks, a /k8s directory on the root filesystem serves as the extra bricks; in production, use additional machines or new data disks (members of one redundancy set must not share the same disk).
gluster volume create test-volume disperse 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s gluster01:/k8s gluster02:/k8s force
[root@gluster01 k8s]# gluster volume create test-volume disperse 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s gluster01:/k8s gluster02:/k8s force
volume create: test-volume: success: please start the volume to access data
[root@gluster01 k8s]# gluster volume info test-volume

Volume Name: test-volume
Type: Distributed-Disperse
Volume ID: c6bda235-8c92-43be-b524-ff6567988008
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/k8s
Brick2: gluster02:/data/k8s
Brick3: gluster03:/data/k8s
Brick4: gluster00:/data/k8s
Brick5: gluster01:/k8s
Brick6: gluster02:/k8s
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@gluster01 k8s]# gluster volume start test-volume
volume start: test-volume: success
4 Install the client
On another machine, 192.168.0.195:
yum install centos-release-gluster -y    # required so the client packages match the server version; a version mismatch can cause the mount to fail
yum install -y glusterfs glusterfs-fuse glusterfs-rdma
增長hosts解析vim /etc/hosts
192.168.0.191 gluster01
192.168.0.192 gluster02
192.168.0.190 gluster00
192.168.0.193 gluster03
Create the mount point:
mkdir /data
Mount the volume:
mount -t glusterfs gluster01:/test-volume /data
[root@localhost ~]# mount -t glusterfs gluster01:test-volume /data
[root@localhost ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/sda3               28G  1.1G   27G   4% /
devtmpfs               904M     0  904M   0% /dev
tmpfs                  913M     0  913M   0% /dev/shm
tmpfs                  913M  8.6M  904M   1% /run
tmpfs                  913M     0  913M   0% /sys/fs/cgroup
/dev/sda1              297M  108M  189M  37% /boot
tmpfs                  183M     0  183M   0% /run/user/0
gluster01:test-volume  116G  4.1G  112G   4% /data
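To make the client mount survive reboots, an fstab entry can be added as well. A sketch (not from the original walkthrough): backup-volfile-servers is the FUSE mount option naming fallback servers for fetching the volume file if gluster01 is down at mount time, and _netdev delays the mount until the network is up; the FSTAB_FILE override is an assumption added for dry runs:

```shell
# Sketch: persist the gluster mount across reboots with an idempotent fstab
# entry. FSTAB_FILE is overridable for testing; defaults to the real /etc/fstab.
fstab_file="${FSTAB_FILE:-/etc/fstab}"
entry='gluster01:/test-volume /data glusterfs defaults,_netdev,backup-volfile-servers=gluster02:gluster03 0 0'
grep -qF "$entry" "$fstab_file" || echo "$entry" >> "$fstab_file"
```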
Test
Copy /var/log/messages to /data.
Then check on the server machines:
[root@gluster01 k8s]# ll /data/k8s/
total 156
-rw------- 2 root root 153600 May 29 21:27 messages
[root@gluster02 k8s]# ll /data/k8s/
total 156
-rw------- 2 root root 153600 May 29 21:27 messages
[root@gluster03 k8s]# ll /data/k8s/
total 156
-rw------- 2 root root 153600 May 29 21:27 messages
The file has been replicated to all three machines.
Note: in every distributed replicated and distributed dispersed volume, adjacent bricks (in the order given on the command line) are grouped into replica/disperse sets.
For example, with disperse 3 gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s gluster05:/data/k8s gluster06:/data/k8s
the first three bricks form one set and the last three form another: a file placed in the first set is stored across gluster01, gluster02, and gluster03, and a file placed in the second set across gluster00, gluster05, and gluster06.
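The grouping described above can be visualized with a one-liner that splits the brick list into sets of the disperse (or replica) count, one set per output line:

```shell
# Sketch: show which bricks fall into the same set for a given count --
# xargs -n 3 prints three bricks per line, i.e. one disperse set per line
bricks='gluster01:/data/k8s gluster02:/data/k8s gluster03:/data/k8s gluster00:/data/k8s gluster05:/data/k8s gluster06:/data/k8s'
echo $bricks | xargs -n 3
```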
---------------------------------------------------------------------------------------------------
An additional example for reference:
Create the brick directories on the three machines:
[root@gluster01 k8s]# mkdir -p /data/k8s/conf_data
[root@gluster01 k8s]# mkdir -p /data/k8s/log_data
[root@gluster01 k8s]# mkdir -p /k8s/log_data
[root@gluster01 k8s]# mkdir -p /k8s/conf_data
On gluster01, create the volumes k8s-log and k8s-conf:
[root@gluster01 k8s]# gluster volume create k8s-log disperse 3 gluster01:/data/k8s/log_data gluster02:/data/k8s/log_data gluster03:/data/k8s/log_data gluster01:/k8s/log_data gluster02:/k8s/log_data gluster03:/k8s/log_data force
volume create: k8s-log: success: please start the volume to access data
[root@gluster01 k8s]# gluster volume start k8s-log
volume start: k8s-log: success
[root@gluster01 k8s]# gluster volume create k8s-conf disperse 3 gluster01:/data/k8s/conf_data gluster02:/data/k8s/conf_data gluster03:/data/k8s/conf_data gluster01:/k8s/conf_data gluster02:/k8s/conf_data gluster03:/k8s/conf_data force
volume create: k8s-conf: success: please start the volume to access data
[root@gluster01 k8s]# gluster volume start k8s-conf
volume start: k8s-conf: success
Mount them on any machine with the client installed:
[root@gluster02 k8s]# mkdir -p /mnt/log
[root@gluster02 k8s]# mount -t glusterfs gluster01:/k8s-log /mnt/log
[root@gluster02 k8s]# df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/sda3            28G  1.2G   27G   5% /
devtmpfs            904M     0  904M   0% /dev
tmpfs               913M     0  913M   0% /dev/shm
tmpfs               913M  8.6M  904M   1% /run
tmpfs               913M     0  913M   0% /sys/fs/cgroup
/dev/sdb1            30G   33M   30G   1% /data/k8s
/dev/sda1           297M  108M  189M  37% /boot
tmpfs               183M     0  183M   0% /run/user/0
gluster02:/k8s-log  116G  3.6G  112G   4% /mnt/log
[root@gluster03 k8s]# mkdir -p /mnt/conf
[root@gluster03 k8s]# mount -t glusterfs gluster01:/k8s-conf /mnt/conf
[root@gluster03 k8s]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda3             28G  1.2G   27G   5% /
devtmpfs             904M     0  904M   0% /dev
tmpfs                913M     0  913M   0% /dev/shm
tmpfs                913M  8.6M  904M   1% /run
tmpfs                913M     0  913M   0% /sys/fs/cgroup
/dev/sdb1             30G   33M   30G   1% /data/k8s
/dev/sda1            297M  108M  189M  37% /boot
tmpfs                183M     0  183M   0% /run/user/0
gluster01:/k8s-conf  116G  3.6G  112G   4% /mnt/conf
------------------------------------------------------------------------------