OpenStack Operations in Practice, Part 9: Integrating Cinder with GlusterFS

1. Overview

    Cinder is OpenStack's block storage service and provides persistent volumes for instances. Because Cinder is pluggable, it can support all kinds of storage backends, including professional FC arrays from vendors such as EMC, NetApp, HP, IBM, and Huawei; a storage vendor only needs to develop a driver that interfaces with Cinder. Cinder also supports open-source distributed storage such as GlusterFS, Ceph, Sheepdog, and NFS, which makes inexpensive IP-SAN-style storage possible. This article builds distributed storage with GlusterFS for Cinder to consume.

2. Building the GlusterFS storage

    GlusterFS is an open-source distributed storage solution that supports several volume types: 1. replicated (similar to RAID 1); 2. striped (similar to RAID 0); 3. distributed-replicated; 4. striped-replicated, i.e. striping combined with replication (similar to RAID 10). The striped-replicated type is the one used in this article.

  1. Environment

    The GlusterFS cluster in this article consists of two machines, 10.1.112.55 and 10.1.112.56. Each machine has 11 disks of 3 TB each, /dev/sdb through /dev/sdl, mounted at /data2 through /data12, as shown below:

[root@YiZhuang_10_1_112_55 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.9G  2.6G  6.9G  27% /
tmpfs                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1            1008M   82M  876M   9% /boot
/dev/sda4             257G  188M  244G   1% /data1
/dev/sdb1             2.8T  118M  2.8T   1% /data2
/dev/sdc1             2.8T  118M  2.8T   1% /data3
/dev/sdd1             2.8T  118M  2.8T   1% /data4
/dev/sde1             2.8T  118M  2.8T   1% /data5
/dev/sdf1             2.8T  118M  2.8T   1% /data6
/dev/sdg1             2.8T  118M  2.8T   1% /data7
/dev/sdh1             2.8T  118M  2.8T   1% /data8
/dev/sdi1             2.8T  117M  2.8T   1% /data9
/dev/sdj1             2.8T  118M  2.8T   1% /data10
/dev/sdk1             2.8T  118M  2.8T   1% /data11
/dev/sdl1             2.8T  118M  2.8T   1% /data12
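
The article does not show how these filesystems were prepared; here is a minimal sketch, assuming each disk carries a single partition formatted with XFS (device names and mount points taken from the table above):

# Sketch only: prepare one brick filesystem per disk (run on both nodes).
# Assumes each /dev/sdX already holds a single partition /dev/sdX1.
num=2
for disk in b c d e f g h i j k l; do
    mkfs.xfs -f /dev/sd${disk}1                                # format the partition
    mkdir -p /data${num}                                       # create the mount point
    echo "/dev/sd${disk}1 /data${num} xfs defaults 0 0" >> /etc/fstab
    mount /data${num}                                          # mount via the new fstab entry
    num=$((num + 1))
done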

The architecture is as follows (the original diagram is omitted): each of the two nodes contributes 11 bricks to a single striped-replicated volume, which the cinder-volume hosts mount over TCP.

2. Probe the GlusterFS peer

[root@YiZhuang_10_1_112_55 ~]# gluster peer probe 10.1.112.56
peer probe: success. 
# Check the peer status
[root@YiZhuang_10_1_112_55 ~]# gluster peer status
Number of Peers: 1
Hostname: 10.1.112.56
Uuid: a720fd05-4fa7-4ff7-924e-2d8a40e48c18
State: Peer in Cluster (Connected)
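
If the probe fails, first make sure the glusterd daemon is running on both nodes (a sketch using the CentOS 6-era init scripts seen throughout this article):

# Run on both 10.1.112.55 and 10.1.112.56 before probing.
service glusterd start
chkconfig glusterd on      # start automatically at boot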


3. Create the volume from the bricks, striped into 11 pieces and replicated across the two machines (similar to RAID 10)

[root@YiZhuang_10_1_112_55 ~]# gluster volume create openstack_cinder stripe 11 replica 2 transport tcp \
10.1.112.55:/data2/cinder 10.1.112.56:/data2/cinder \
10.1.112.55:/data3/cinder 10.1.112.56:/data3/cinder \
10.1.112.55:/data4/cinder 10.1.112.56:/data4/cinder \
10.1.112.55:/data5/cinder 10.1.112.56:/data5/cinder \
10.1.112.55:/data6/cinder 10.1.112.56:/data6/cinder \
10.1.112.55:/data7/cinder 10.1.112.56:/data7/cinder \
10.1.112.55:/data8/cinder 10.1.112.56:/data8/cinder \
10.1.112.55:/data9/cinder 10.1.112.56:/data9/cinder \
10.1.112.55:/data10/cinder 10.1.112.56:/data10/cinder \
10.1.112.55:/data11/cinder 10.1.112.56:/data11/cinder \
10.1.112.55:/data12/cinder 10.1.112.56:/data12/cinder
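
With replica 2, GlusterFS treats each consecutive pair of bricks in the list as a replica set, which is why the .55 and .56 paths alternate: every stripe lands on one disk of each machine. Typing 22 brick paths by hand is error-prone; a sketch that builds the same list with a loop:

# Build the alternating brick list instead of typing 22 paths by hand.
bricks=""
for num in {2..12}; do
    bricks="$bricks 10.1.112.55:/data${num}/cinder 10.1.112.56:/data${num}/cinder"
done
gluster volume create openstack_cinder stripe 11 replica 2 transport tcp $bricks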

4. Inspect the structure of the GlusterFS volume

[root@YiZhuang_10_1_112_55 ~]# gluster volume info
 
Volume Name: openstack_cinder
Type: Striped-Replicate
Volume ID: c55ff01b-3be0-4514-b622-83677f95924a
Status: Started
Number of Bricks: 1 x 11 x 2 = 22
Transport-type: tcp
Bricks:
Brick1: 10.1.112.55:/data2/cinder
Brick2: 10.1.112.56:/data2/cinder
Brick3: 10.1.112.55:/data3/cinder
Brick4: 10.1.112.56:/data3/cinder
Brick5: 10.1.112.55:/data4/cinder
Brick6: 10.1.112.56:/data4/cinder
Brick7: 10.1.112.55:/data5/cinder
Brick8: 10.1.112.56:/data5/cinder
Brick9: 10.1.112.55:/data6/cinder
Brick10: 10.1.112.56:/data6/cinder
Brick11: 10.1.112.55:/data7/cinder
Brick12: 10.1.112.56:/data7/cinder
Brick13: 10.1.112.55:/data8/cinder
Brick14: 10.1.112.56:/data8/cinder
Brick15: 10.1.112.55:/data9/cinder
Brick16: 10.1.112.56:/data9/cinder
Brick17: 10.1.112.55:/data10/cinder
Brick18: 10.1.112.56:/data10/cinder
Brick19: 10.1.112.55:/data11/cinder
Brick20: 10.1.112.56:/data11/cinder
Brick21: 10.1.112.55:/data12/cinder
Brick22: 10.1.112.56:/data12/cinder

5. Start the GlusterFS volume

[root@YiZhuang_10_1_112_55 ~]# gluster volume start openstack_cinder        # start the volume

# Check the status of the GlusterFS volume
[root@YiZhuang_10_1_112_55 ~]# gluster volume status
Status of volume: openstack_cinder
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.1.112.55:/data2/cinder                         59152   Y       4121
Brick 10.1.112.56:/data2/cinder                         59152   Y       43596
Brick 10.1.112.55:/data3/cinder                         59153   Y       4132
Brick 10.1.112.56:/data3/cinder                         59153   Y       43607
Brick 10.1.112.55:/data4/cinder                         59154   Y       4143
Brick 10.1.112.56:/data4/cinder                         59154   Y       43618
Brick 10.1.112.55:/data5/cinder                         59155   Y       4154
Brick 10.1.112.56:/data5/cinder                         59155   Y       43629
Brick 10.1.112.55:/data6/cinder                         59156   Y       4165
Brick 10.1.112.56:/data6/cinder                         59156   Y       43640
Brick 10.1.112.55:/data7/cinder                         59157   Y       4176
Brick 10.1.112.56:/data7/cinder                         59157   Y       43651
Brick 10.1.112.55:/data8/cinder                         59158   Y       4187
Brick 10.1.112.56:/data8/cinder                         59158   Y       43662
Brick 10.1.112.55:/data9/cinder                         59159   Y       4198
Brick 10.1.112.56:/data9/cinder                         59159   Y       43673
Brick 10.1.112.55:/data10/cinder                        59160   Y       4209
Brick 10.1.112.56:/data10/cinder                        59160   Y       43684
Brick 10.1.112.55:/data11/cinder                        59161   Y       4220
Brick 10.1.112.56:/data11/cinder                        59161   Y       43695
Brick 10.1.112.55:/data12/cinder                        59162   Y       4231
Brick 10.1.112.56:/data12/cinder                        59162   Y       43706
NFS Server on localhost                                 2049    Y       4244
Self-heal Daemon on localhost                           N/A     Y       4251
NFS Server on 10.1.112.56                               2049    Y       43718
Self-heal Daemon on 10.1.112.56                         N/A     Y       43727
 
Task Status of Volume openstack_cinder
------------------------------------------------------------------------------
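
Optionally, before putting the volume into production, mounting can be restricted to trusted clients with the auth.allow volume option (the address pattern below is illustrative):

# Allow mounts only from the 10.1.112.* range (adjust to your network).
gluster volume set openstack_cinder auth.allow 10.1.112.*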

6. Mount test

[root@YiZhuang_10_1_112_56 ~]# mount.glusterfs 10.1.112.56:openstack_cinder /media/ 
[root@YiZhuang_10_1_112_56 ~]# df
Filesystem             1K-blocks    Used   Available Use% Mounted on
/dev/sda2               10321208 2488348     7308572  26% /
tmpfs                    8140364       0     8140364   0% /dev/shm
/dev/sda1                1032088   83596      896064   9% /boot
/dev/sda4              268751588  191660   254908060   1% /data1
/dev/sdb1             2928834296   32972  2928801324   1% /data2
/dev/sdc1             2928834296   32972  2928801324   1% /data3
/dev/sdd1             2928834296   32972  2928801324   1% /data4
/dev/sde1             2928834296   32972  2928801324   1% /data5
/dev/sdf1             2928834296   32972  2928801324   1% /data6
/dev/sdg1             2928834296   32972  2928801324   1% /data7
/dev/sdh1             2928834296   32972  2928801324   1% /data8
/dev/sdi1             2928834296   32972  2928801324   1% /data9
/dev/sdj1             2928834296   32972  2928801324   1% /data10
/dev/sdk1             2928834296   32972  2928801324   1% /data11
/dev/sdl1             2928834296   32972  2928801324   1% /data12
10.1.112.56:openstack_cinder
                     32217177216  362752 32216814464   1% /media        # mounted successfully
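
This test mount does not survive a reboot. If a permanent client mount were wanted, an fstab entry along these lines could be used (illustrative only; cinder-volume manages its own mounts under /var/lib/cinder/volumes):

# /etc/fstab entry for a persistent GlusterFS client mount (sketch).
10.1.112.56:/openstack_cinder  /media  glusterfs  defaults,_netdev  0 0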

3. Integrating Cinder with GlusterFS

  1. The configuration on the cinder-volume side (/etc/cinder/cinder.conf) is as follows:

[DEFAULT]
enabled_backends = glusterfs
[glusterfs]                                                          # appended at the end of the file
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver      # the GlusterFS driver
glusterfs_shares_config = /etc/cinder/shares.conf                    # file listing the GlusterFS shares
glusterfs_mount_point_base = /var/lib/cinder/volumes                 # base directory for mount points
volume_backend_name = glusterfs                                      # backend name, matched against the Cinder type on the controller

2. Configure the GlusterFS shares file

[root@YiZhuang_10_1_112_55 ~]# vim /etc/cinder/shares.conf 
10.1.112.55:/openstack_cinder

3. Restart the cinder-volume service

[root@YiZhuang_10_1_112_55 init.d]# chkconfig openstack-cinder-volume on
[root@YiZhuang_10_1_112_55 init.d]# service  openstack-cinder-volume restart
Stopping openstack-cinder-volume:                          [  OK  ]
Starting openstack-cinder-volume:                          [  OK  ]

Perform the same steps on both machines, then check /var/log/cinder/volume.log for errors.
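
A quick way to check both points at once (the log path is from the note above; the grep pattern is only an example):

# Look for recent errors and confirm the share is mounted.
grep -i error /var/log/cinder/volume.log | tail
mount | grep glusterfs      # the share should appear under /var/lib/cinder/volumes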

4. Check the service status on the controller node

[root@controller ~]# cinder service-list
+------------------+--------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |              Host              | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+--------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |        controller              | nova | enabled |   up  | 2016-01-22T08:52:14.000000 |       None      |
|  cinder-volume   | YiZhuang_10_1_112_55@glusterfs | nova | enabled |   up  | 2016-01-22T08:52:17.000000 |       None      |    # the glusterfs backends are up and working
|  cinder-volume   | YiZhuang_10_1_112_56@glusterfs | nova | enabled |   up  | 2016-01-22T08:52:04.000000 |       None      |
+------------------+--------------------------------+------+---------+-------+----------------------------+-----------------+

5. Create a volume type on the controller

[root@controller ~]# cinder type-create glusterfs
+--------------------------------------+------------+
|                  ID                  |    Name    |
+--------------------------------------+------------+
| 6688e8f9-e744-4c21-b570-fd81b099d4c0 | glusterfs  |
+--------------------------------------+------------+

6. Link the Cinder type to volume_backend_name on the controller

[root@controller ~]# cinder type-key 6688e8f9-e744-4c21-b570-fd81b099d4c0 set volume_backend_name=glusterfs
# Check the type's extra specs

[root@controller~]# cinder extra-specs-list
+--------------------------------------+-----------+----------------------------------------+
|                  ID                  |    Name   |              extra_specs               |
+--------------------------------------+-----------+----------------------------------------+
| 6688e8f9-e744-4c21-b570-fd81b099d4c0 | glusterfs | {u'volume_backend_name': u'glusterfs'} |    # the association is in place
+--------------------------------------+-----------+----------------------------------------+

7. Restart the Cinder services on the controller

[root@LuGu_10_1_81_205 ~]# /etc/init.d/openstack-cinder-api  restart
Stopping openstack-cinder-api:                             [  OK  ]
Starting openstack-cinder-api:                             [  OK  ]
[root@LuGu_10_1_81_205 ~]# /etc/init.d/openstack-cinder-scheduler restart
Stopping openstack-cinder-scheduler:                       [  OK  ]
Starting openstack-cinder-scheduler:                       [  OK  ]

4. Functional testing

  1. Create a Cinder volume

[root@controller ~]# cinder create --display-name "test1" --volume-type glusterfs 10        # specify the Cinder volume type
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-01-22T09:01:48.978864      |
| display_description |                 None                 |
|     display_name    |                test1                 |
|      encrypted      |                False                 |
|          id         | 3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |              glusterfs               |
+---------------------+--------------------------------------+
[root@controller ~]# cinder show  3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001 
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|           created_at           |      2016-01-22T09:01:48.000000      |
|      display_description       |                 None                 |
|          display_name          |                test1                 |
|           encrypted            |                False                 |
|               id               | 3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001 |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |    YiZhuang_10_1_112_55@glusterfs    |        # scheduled to the 10.1.112.55 machine
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   a49b16d5324a4d20bde2217b17200485   |
|              size              |                  10                  |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |              available               |        # created successfully; status is available
|          volume_type           |              glusterfs               |
+--------------------------------+--------------------------------------+
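
Once the volume reaches the available state it can be attached to an instance; a hedged example using the nova client of this era (the instance ID is illustrative):

# Attach the new volume to a running instance ("auto" lets nova pick the device).
nova volume-attach <instance-id> 3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001 auto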

2. Verify the striping on GlusterFS

[root@YiZhuang_10_1_112_56 ~]# for num in {2..12}
> do
> ls -lh /data${num}/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
> done
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data2/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data3/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data4/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data5/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data6/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data7/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data8/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data9/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data10/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data11/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
-rw-rw-rw- 2 root root 931M Jan 22 17:01 /data12/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001    # the 10 GB volume is striped into 11 files, one per disk (10 GiB / 11 ≈ 931 MiB each), balancing the load; the other machine holds an identical copy

3. Compare the MD5 checksums of the data on the two machines

[root@YiZhuang_10_1_112_56 ~]# for num in {2..12}; do md5sum /data${num}/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001; done      
e0f8c6646f8ce81fe6be0b12f1511aa1  /data2/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e0f8c6646f8ce81fe6be0b12f1511aa1  /data3/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e0f8c6646f8ce81fe6be0b12f1511aa1  /data4/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data5/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data6/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data7/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data8/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data9/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data10/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data11/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data12/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001

On the other machine:
[root@YiZhuang_10_1_112_55 ~]# for num in {2..12} ; do md5sum /data${num}/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001; done      
e0f8c6646f8ce81fe6be0b12f1511aa1  /data2/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e0f8c6646f8ce81fe6be0b12f1511aa1  /data3/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e0f8c6646f8ce81fe6be0b12f1511aa1  /data4/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data5/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data6/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data7/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data8/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data9/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data10/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data11/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001
e18d850c6c53cdeb2e346fcd28c7a189  /data12/cinder/volume-3f0577c0-2e64-4c8d-a8a8-2b8da6b8d001

The MD5 checksums match exactly, which confirms that the files on the two machines are identical mirrors of each other. The GlusterFS and Cinder integration is complete.

5. Summary

    Within OpenStack, Cinder is the service that manages volumes, and management is its main job; the storage itself is delivered by a dedicated storage solution, such as the open-source distributed GlusterFS used in this article. Cinder can also define a different type for each backend: a commercial storage array, an SSD-backed GlusterFS, and a SATA-backed Ceph cluster can each be given their own type and assigned to different virtual machines to satisfy different performance requirements, as sketched below. See the OpenStack Cinder configuration documentation for details.
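
A hedged sketch of such a multi-backend /etc/cinder/cinder.conf (the backend names, the second shares file, and the Ceph pool are illustrative assumptions, not part of the original setup):

[DEFAULT]
enabled_backends = glusterfs_ssd,ceph_sata

[glusterfs_ssd]                                                      # illustrative SSD-backed GlusterFS backend
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares_ssd.conf
volume_backend_name = glusterfs_ssd

[ceph_sata]                                                          # illustrative SATA-backed Ceph backend
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
volume_backend_name = ceph_sata

Each backend then gets its own cinder type-create / cinder type-key pair, exactly as in section 3.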
