1. Requirements
Glance is the image service in OpenStack and supports multiple backends: images can be stored on the local filesystem, an HTTP server, the Ceph distributed storage system, or open-source distributed filesystems such as GlusterFS and Sheepdog. This article walks through integrating glance with ceph.
At the moment glance stores images on the local filesystem, under the default path /var/lib/glance/images. Once the local filesystem is replaced by ceph, the images already on the system can no longer be used, so it is recommended to delete the existing images first, deploy ceph, and then re-upload everything into ceph.
2. How it works
Ceph's rbd interface is consumed through libvirt, so libvirt and qemu must be installed on the client machines. In an OpenStack deployment there are three places that need storage:

1. Glance images, stored locally by default under /var/lib/glance/images;
2. Nova instance disks, stored locally by default under /var/lib/nova/instances;
3. Cinder volumes, which use LVM by default.
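Only the glance piece is covered in this article, but for orientation, a common layout (a sketch only; the vms and volumes pool names are conventional examples, not something set up here) dedicates one ceph pool to each consumer:

# one pool per OpenStack storage consumer; names and pg counts are illustrative
ceph osd pool create images 128      # glance image store (created in the next section)
ceph osd pool create vms 128         # nova instance disks
ceph osd pool create volumes 128     # cinder volumes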
3. Integrating glance with ceph
1. Create a storage pool
1. Ceph creates one pool named rbd by default:

[root@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,
[root@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
  nothing is going on

2. Create a pool named images with a pg_num of 128:

[root@controller_10_1_2_230 ~]# ceph osd pool create images 128
pool 'images' created

3. Check the pool's pg_num and pgp_num:

[root@controller_10_1_2_230 ~]# ceph osd pool get images pg_num
pg_num: 128
[root@controller_10_1_2_230 ~]# ceph osd pool get images pgp_num
pgp_num: 128

4. List the pools in the cluster:

[root@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,1 images,
[root@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
  nothing is going on

pool images id 1     # the new pool, with id 1
  nothing is going on
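The pg_num of 128 is not arbitrary; a common rule of thumb (general guidance, not something stated in the original setup) is roughly (number of OSDs x 100) / replica count, rounded up to the next power of two. This cluster has two OSDs (osd.0 and osd.1), so 128 is a sensible value:

# check how many replicas the pool keeps, then estimate pg_num
ceph osd pool get images size
# e.g. 2 OSDs x 100 / 2 replicas = 100, rounded up to the next power of two -> 128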
2. Configure the ceph client
1. Glance (the glance-api service) acts as a ceph client, so it needs the ceph configuration file; just copy one over from a ceph monitor node. In my environment the controller node and the ceph monitor are the same machine, so nothing needs to be done here.

# if the controller node and the ceph monitor node are separate machines, copy the file over
[root@controller_10_1_2_230 ~]# scp /etc/ceph/ceph.conf root@controller_10_1_2_230:/etc/ceph/
ceph.conf

2. Install the client rpm package:

[root@controller_10_1_2_230 ~]# yum install python-rbd -y
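A quick sanity check (just a sketch, not part of the original steps) is to make sure the python bindings that glance's rbd driver imports are actually present:

# both modules must import cleanly, otherwise the rbd store cannot start
python -c "import rados, rbd"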
3. Configure ceph authentication
1. Add an authentication key for glance:

[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'class-read object_prefix rbd_children,allow rwx pool=images'
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==

2. Check the auth list:

[root@controller_10_1_2_230 ~]# ceph auth list
installed auth entries:

osd.0
        key: AQDsx6lWYGehDxAAGwcYP9jDvH2Zaa8JlGwj1Q==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQD1x6lWQCYBERAAjIKO1LVpj8FvVefDvNQZSA==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQCexqlWQL6OGBAA2v5LsYEB5VgLyq/K2huY3A==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQCexqlWUMNRMRAAZEp/UlhQuaixMcNy5d5pPw==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: AQCexqlWQFfpJBAAfPCx4sTLNztBESyFKys9LQ==
        caps: [mon] allow profile bootstrap-osd
client.glance     # the credentials glance uses to connect to ceph
        key: AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
        caps: [mon] allow r
        caps: [osd] class-read object_prefix rbd_children,allow rwx pool=images

3. Export the glance key to a keyring file readable by the glance user:

[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
# export the key to the client keyring file
[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
[root@controller_10_1_2_230 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@controller_10_1_2_230 ~]# ll /etc/ceph/ceph.client.glance.keyring
-rw-r--r-- 1 glance glance 64 Jan 28 17:17 /etc/ceph/ceph.client.glance.keyring
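Before touching glance it is worth confirming that the exported credentials actually work; any read-only command run as client.glance will do (this check is mine, not from the original walkthrough):

# the mon 'allow r' cap is enough to read the cluster status
ceph --id glance --keyring /etc/ceph/ceph.client.glance.keyring -s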
4. Configure glance to use ceph as the backend store
1. Back up the glance-api configuration file so it can be restored if needed:

[root@controller_10_1_2_230 ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig

2. Edit the glance configuration file so it connects to ceph:

[root@controller_10_1_2_230 ~]# vim /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = messaging
rabbit_hosts = 10.1.2.230:5672
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_ha_queues = True
rabbit_durable_queues = False
rabbit_userid = glance
rabbit_password = GLANCE_MQPASS
rabbit_virtual_host = /glance
default_store=rbd                          # backend store glance writes to
known_stores=glance.store.rbd.Store        # enable the rbd store driver
rbd_store_ceph_conf=/etc/ceph/ceph.conf    # ceph config file; it lists the monitors, which the client contacts to authenticate
rbd_store_user=glance                      # cephx user, i.e. the client.glance user created above
rbd_store_pool=images                      # pool the images are written to
rbd_store_chunk_size=8                     # chunk size in MB that each image is striped into

3. Restart the glance services:

[root@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-api restart
Stopping openstack-glance-api:                             [  OK  ]
Starting openstack-glance-api:                             [  OK  ]
[root@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-registry restart
Stopping openstack-glance-registry:                        [  OK  ]
Starting openstack-glance-registry:                        [  OK  ]
[root@controller_10_1_2_230 ~]# tail -2 /etc/glance/glance-api.conf
# location strategy defined by the 'location_strategy' config option.
#store_type_preference =
[root@controller_10_1_2_230 ~]# tail -2 /var/log/glance/registry.log
2016-01-28 18:40:25.231 21890 INFO glance.wsgi.server [-] Started child 21896
2016-01-28 18:40:25.232 21896 INFO glance.wsgi.server [-] (21896) wsgi starting up on http://0.0.0.0:9191/
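Under the hood the rbd store simply talks to the images pool as client.glance, so the same access can be exercised by hand (an illustrative check of my own, not from the original text); before anything is uploaded the listing should come back empty:

# list the images pool with the same cephx user glance is configured with
rbd --id glance -p images ls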
5. Test the glance and ceph integration
[root@controller_10_1_2_230 ~]# glance --debug image-create --name glance_ceph_test --disk-format qcow2 --container-format bare --file cirros-0.3.3-x86_64-disk.img
curl -i -X POST -H 'x-image-meta-container_format: bare' -H 'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 'x-image-meta-size: 13200896' -H 'x-image-meta-is_public: False' -H 'X-Auth-Token: 062af9027a85487997d176c9f1e963f2' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: qcow2' -H 'x-image-meta-name: glance_ceph_test' -d '<open file u'cirros-0.3.3-x86_64-disk.img', mode 'rb' at 0x1ba24b0>' http://controller:9292/v1/images

HTTP/1.1 201 Created
content-length: 489
etag: 133eae9fb1c98f45894a4e60d8736619
location: http://controller:9292/v1/images/348a90e8-3631-4a66-a45d-590ec6413e7d
date: Thu, 28 Jan 2016 10:42:06 GMT
content-type: application/json
x-openstack-request-id: req-b993bc0b-447e-49b4-a8ce-bd7765199d5a

{"image": {"status": "active", "deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": "2016-01-28T10:42:06", "owner": "ef4b83a909dc4689b663ff2c70022478", "min_disk": 0, "is_public": false, "deleted_at": null, "id": "348a90e8-3631-4a66-a45d-590ec6413e7d", "size": 13200896, "virtual_size": null, "name": "glance_ceph_test", "checksum": "133eae9fb1c98f45894a4e60d8736619", "created_at": "2016-01-28T10:42:04", "disk_format": "qcow2", "properties": {}, "protected": false}}

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 133eae9fb1c98f45894a4e60d8736619     |
| container_format | bare                                 |
| created_at       | 2016-01-28T10:42:04                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 348a90e8-3631-4a66-a45d-590ec6413e7d |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | glance_ceph_test                     |
| owner            | ef4b83a909dc4689b663ff2c70022478     |
| protected        | False                                |
| size             | 13200896                             |
| status           | active                               |
| updated_at       | 2016-01-28T10:42:06                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@controller_10_1_2_230 ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 56e96957-1308-45c7-9c66-1afff680b217 | cirros-0.3.3-x86_64 | qcow2       | bare             | 13200896 | active |
| 348a90e8-3631-4a66-a45d-590ec6413e7d | glance_ceph_test    | qcow2       | bare             | 13200896 | active |   # upload succeeded
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
6. Inspect the data in the ceph pool
[root@controller_10_1_2_230 ~]# rados -p images ls
rbd_directory
rbd_header.10d7caaf292
rbd_data.10dd1fd73446.0000000000000001
rbd_id.348a90e8-3631-4a66-a45d-590ec6413e7d
rbd_header.10dd1fd73446
rbd_data.10d7caaf292.0000000000000000
rbd_data.10dd1fd73446.0000000000000000
rbd_id.8a09b280-5916-44c6-9ce8-33bb57a09dad

# the glance image data is now stored in ceph
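Each glance image is stored as one RBD image named after its UUID: the rbd_id.<uuid> object maps that name to an internal prefix, rbd_header.<prefix> holds the image metadata, and the rbd_data.<prefix>.* objects hold the actual data chunks. To tie the listing above back to the uploaded image (an illustrative command, not part of the original transcript):

# block_name_prefix in the output should match one of the rbd_data.* prefixes above,
# and the object size should be 8 MB, matching rbd_store_chunk_size=8
rbd -p images info 348a90e8-3631-4a66-a45d-590ec6413e7d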
4. Summary
Storing glance images in ceph is a very good solution: it protects the image data, and because glance and nova can share the same ceph cluster, virtual machines can be spawned from images as copy-on-write clones, bringing VM creation down to a matter of seconds.
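That copy-on-write path is the standard RBD snapshot/clone workflow; a minimal sketch of it, run against the image uploaded above (the snapshot and clone names are illustrative, cloning requires an RBD format 2 image, and in a real deployment nova would clone into its own pool rather than images):

# snapshot the image, protect the snapshot, then clone it; the clone shares all
# unmodified blocks with its parent, so it is created almost instantly
rbd -p images snap create 348a90e8-3631-4a66-a45d-590ec6413e7d@snap
rbd -p images snap protect 348a90e8-3631-4a66-a45d-590ec6413e7d@snap
rbd clone images/348a90e8-3631-4a66-a45d-590ec6413e7d@snap images/test_clone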