I am using GlusterFS for cloud storage, so here are some brief notes.
Check the peer status. The current GlusterFS cluster is made up of this node plus nodes .11 and .12, three nodes in total.
List-1

[root@master1 /]# gluster peer status
Number of Peers: 2

Hostname: 192.168.33.11
Uuid: 8c22b08f-7232-4ac9-b5d8-8262db2d4ee7
State: Peer in Cluster (Connected)

Hostname: 192.168.33.12
Uuid: 7906f9a9-c58b-4c6e-93af-f4d9960b6220
State: Peer in Cluster (Connected)
You can also list the pool, as in List-2.
List-2

[root@master1 /]# gluster pool list
UUID                                    Hostname        State
8c22b08f-7232-4ac9-b5d8-8262db2d4ee7    192.168.33.11   Connected
7906f9a9-c58b-4c6e-93af-f4d9960b6220    192.168.33.12   Connected
a2d23b65-381e-45ea-a488-e9fee45e5928    localhost       Connected
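When scripting a health check, the `pool list` output can be filtered with standard tools. A minimal sketch: the sample text below is copied from List-2 so the script is self-contained; on a live cluster you would pipe `gluster pool list` into the same awk filter instead.

```shell
#!/bin/sh
# Count peers in the "Connected" state from `gluster pool list` output.
# The sample here mirrors List-2; replace it with the real command on a cluster.
sample='UUID                                    Hostname        State
8c22b08f-7232-4ac9-b5d8-8262db2d4ee7    192.168.33.11   Connected
7906f9a9-c58b-4c6e-93af-f4d9960b6220    192.168.33.12   Connected
a2d23b65-381e-45ea-a488-e9fee45e5928    localhost       Connected'

# Skip the header row; keep rows whose State column reads "Connected".
connected=$(printf '%s\n' "$sample" | awk 'NR > 1 && $3 == "Connected"' | wc -l)
echo "connected peers: $connected"
```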
Add a worker node, as in List-3: probe node .13 to join it to the pool. Either a hostname or an IP address can be used.
List-3

[root@master1 /]# gluster peer probe -h
-h is an invalid address
Usage: peer probe { <HOSTNAME> | <IP-address> }

[root@master1 /]# gluster peer probe 192.168.33.13
Remove a worker node with the detach command, as in List-4; afterwards, run peer status again to see the change.
List-4

gluster peer detach HOSTNAME
gluster peer detach 192.168.33.13
Create a volume, as in List-5. The /data_gluster directory must already exist on 10/11/12, otherwise creation fails with a directory-not-found error; if there are warnings, append force. Note that without replica, only a single copy is kept in the cluster and the data is not replicated to the other nodes.
List-5
gluster volume create hive_db_volume replica 3 192.168.33.10:/data_gluster/hive_db_volume \
    192.168.33.11:/data_gluster/hive_db_volume 192.168.33.12:/data_gluster/hive_db_volume

# with force appended
gluster volume create hive_db_volume replica 3 192.168.33.10:/data_gluster/hive_db_volume \
    192.168.33.11:/data_gluster/hive_db_volume 192.168.33.12:/data_gluster/hive_db_volume force
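Since the brick directory must exist on every node before the create, the preparation can be scripted. A sketch that prints one command per node as a dry run; to actually create the directories you would drop the echo and run the ssh commands for real (passwordless root SSH between nodes is an assumption, not part of the original setup):

```shell
#!/bin/sh
# Dry run: print the mkdir command for each node. Remove the echo to
# execute them over SSH (assumes passwordless root SSH, hypothetical here).
count=0
for host in 192.168.33.10 192.168.33.11 192.168.33.12; do
    echo "ssh root@$host mkdir -p /data_gluster/hive_db_volume"
    count=$((count + 1))
done
echo "prepared $count brick paths"
```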
Start the volume with volume start, as in List-6. Because I had already started this volume, the CLI reports that it is already started.
List-6
[root@master1 /]# gluster volume start hive_db_volume
volume start: hive_db_volume: failed: Volume hive_db_volume already started

# inspect the volume's info
[root@master1 /]# gluster volume info hive_db_volume

Volume Name: hive_db_volume
Type: Replicate
Volume ID: b34d2970-27b9-421a-8680-c242b38946e5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.33.10:/data_gluster/hive_db_volume
Brick2: 192.168.33.11:/data_gluster/hive_db_volume
Brick3: 192.168.33.12:/data_gluster/hive_db_volume
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
The volume must be mounted before use; do not operate on /data_gluster/hive_db_volume directly. As in List-7, once it is mounted we can store data under /mnt/gluster/hive_db on node 10. Note that we write data to /mnt/gluster/hive_db by hand and GlusterFS replicates it into /data_gluster/hive_db_volume automatically; never manipulate /data_gluster/hive_db_volume directly, and above all never delete the data inside it by hand.
List-7
# create a mount point under /mnt
mkdir -p /mnt/gluster/hive_db

# mount with the following command; hive_db_volume is the volume created earlier
mount -t glusterfs 192.168.33.10:/hive_db_volume /mnt/gluster/hive_db
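If the mount should survive a reboot, it can also be recorded in /etc/fstab; the `_netdev` option defers the mount until the network is up. A sketch of the entry, using the same addresses and paths as List-7:

```
# /etc/fstab entry for the glusterfs mount (config fragment)
192.168.33.10:/hive_db_volume  /mnt/gluster/hive_db  glusterfs  defaults,_netdev  0 0
```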
Operate on files under /mnt/gluster/hive_db on node 10 and then check the other machines, as in List-8: create a file hello containing "hello world", then look inside /data_gluster/hive_db_volume (the brick path we specified in List-5) and the file we just created is there. The same file also shows up under /data_gluster/hive_db_volume on 11 and 12.
Only operations under /mnt/gluster/hive_db on node 10 take effect; working in that directory on 11/12 does nothing, because in List-7 the volume was mounted only on node 10.
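The directory on 11/12 behaves like node 10's mount point only after those nodes mount the volume themselves; any of the three server addresses can serve as the mount source. A sketch of the same mount run on node 11:

```
# On 192.168.33.11: mount the same volume locally; afterwards files written
# under /mnt/gluster/hive_db on either node are visible on both.
mkdir -p /mnt/gluster/hive_db
mount -t glusterfs 192.168.33.11:/hive_db_volume /mnt/gluster/hive_db
```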
List-8
[root@master1 hive_db]# pwd
/mnt/gluster/hive_db
[root@master1 hive_db]# more hello
hello world

# then look inside /data_gluster/hive_db_volume, as follows
[root@master1 hive_db_volume]# pwd
/data_gluster/hive_db_volume
[root@master1 hive_db_volume]# more hello
hello world

# check on node 11
[root@node1 hive_db_volume]# pwd
/data_gluster/hive_db_volume
[root@node1 hive_db_volume]# more hello
hello world

# node 12 looks the same
Remove a brick, as in List-9.
List-9
[root@master1 /]# gluster volume remove-brick hive_db_volume replica 2 192.168.33.12:/data_gluster/hive_db_volume force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
After the operation in List-9, inspect the volume again, as in List-10: compared with List-6, one brick is gone. This should make it fairly clear what a brick is: roughly speaking, the volume's data is stored on those three bricks, GlusterFS keeps the bricks in sync for us, and removing a brick simply reduces the number of bricks holding the data.
List-10
[root@master1 hive_db_volume]# gluster volume info hive_db_volume

Volume Name: hive_db_volume
Type: Replicate
Volume ID: b34d2970-27b9-421a-8680-c242b38946e5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.33.10:/data_gluster/hive_db_volume
Brick2: 192.168.33.11:/data_gluster/hive_db_volume
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Add a brick: a brick removed earlier can be re-added with add-brick, as in List-11. Inspecting the volume afterwards shows one more brick than List-10, namely 192.168.33.12:/data_gluster/hive_db_volume.
List-11
[root@master1 /]# gluster volume add-brick hive_db_volume replica 3 192.168.33.12:/data_gluster/hive_db_volume force
volume add-brick: success

[root@master1 /]# gluster volume info hive_db_volume

Volume Name: hive_db_volume
Type: Replicate
Volume ID: b34d2970-27b9-421a-8680-c242b38946e5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.33.10:/data_gluster/hive_db_volume
Brick2: 192.168.33.11:/data_gluster/hive_db_volume
Brick3: 192.168.33.12:/data_gluster/hive_db_volume
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
A brick can also be replaced, using the replace-brick command, as in List-12.
List-12
gluster volume replace-brick hive_db_volume 192.168.33.12:/data_gluster/hive_db_volume \
    192.168.33.12:/data_gluster/hive_db_volume2 commit force
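After a replace-brick, the self-heal daemon copies the existing data onto the new brick. Its progress can be watched with the heal subcommands; the output depends on the cluster's state, so none is shown here:

```
# List entries still pending heal on each brick
gluster volume heal hive_db_volume info

# Per-brick counts of entries pending heal
gluster volume heal hive_db_volume statistics heal-count
```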
A volume is usable only while started; it can be stopped with stop, as in List-13, after which it can no longer be used.
List-13
[root@master1 hive_db_volume]# gluster volume stop hive_db_volume
Delete a volume with the delete command.
List-14
[root@master1 hive_db_volume]# gluster volume delete hive_db_volume
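Both stop and delete ask for y/n confirmation; in scripts the prompts can be suppressed with the gluster CLI's `--mode=script` flag. A sketch of the full teardown; note that deleting the volume does not erase the data already written to the bricks, so the /data_gluster/hive_db_volume directories remain and must be cleaned up by hand:

```
# Non-interactive teardown: stop the volume, then delete its definition
gluster --mode=script volume stop hive_db_volume
gluster --mode=script volume delete hive_db_volume
```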