There are three places where OpenStack can integrate with Ceph block devices: images (Glance), volumes (Cinder), and guest VM disks (Nova). Start by creating the pools:
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128
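As an optional sanity check (not in the original steps), confirm the pools now exist:

# Lists pool IDs and names
ceph osd lspools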
Install the client packages on every host running glance-api, cinder-volume, nova-compute, or cinder-backup:
yum -y install python-rbd ceph
On the Ceph admin node, push the configuration file to the Ceph clients:
ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
If cephx authentication is enabled on the Ceph cluster, create new users for Nova/Cinder and Glance:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
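Optionally, you can inspect the capabilities that were just granted on the admin node, e.g. for client.cinder:

# Prints the user's key and caps
ceph auth get client.cinder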
Copy the keyrings for client.cinder, client.glance, and client.cinder-backup to the appropriate nodes and change their ownership:
ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
Nodes running nova-compute need the keyring file for the nova-compute process:
ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
The key of the client.cinder user also has to be stored in libvirt: the libvirt process needs it to access the cluster while attaching a block device from Cinder.
Create a temporary copy of the key on the nodes running nova-compute:
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
Then add the key to libvirt on the compute nodes and remove the temporary copy:
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created

sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
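An optional quick check that libvirt registered the secret and stored the key value (uses the UUID generated above):

# List the secrets libvirt knows about
sudo virsh secret-list
# Print the stored value (the base64-encoded Ceph key)
sudo virsh secret-get-value --secret 457eb676-33da-42ec-9a8c-9293d545c337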
Save the UUID of the secret; you will need it later when configuring nova-compute.
Edit /etc/glance/glance-api.conf and modify the [glance_store] section:
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
If you want to allow copy-on-write cloning of images, also add the following under the [DEFAULT] section:
show_image_direct_url = True
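Note that Ceph can only make copy-on-write clones of raw images; a qcow2 image (like the one uploaded in the test below) has to be converted first. A minimal, optional sketch (file names follow the test image and are otherwise placeholders):

# Convert qcow2 to raw before uploading so Glance/Ceph can CoW-clone it
qemu-img convert -f qcow2 -O raw /root/centos6.5-cloud.qcow2 /root/centos6.5-cloud.raw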
Restart the Glance API services and run a quick test:
systemctl restart openstack-glance-api.service openstack-glance-registry.service
source admin-openrc.sh
glance image-create --name "centos6_ceph" --file /root/centos6.5-cloud.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress
openstack image list
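If the upload succeeded, the image is stored as an RBD image in the images pool; on the Ceph admin node this can be verified with, for example:

# The image's UUID should be listed in the pool
rbd ls images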
OpenStack needs a driver to interact with Ceph block devices, and it has to be told which pool the block devices live in. Edit /etc/cinder/cinder.conf on the OpenStack node and add the following:
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 43f7430d-cce0-46eb-a0fc-a593e27878c2
To back volumes up into the backups pool, also configure the Ceph backup driver (on the node running cinder-backup):

backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
Restart the cinder-volume service:
systemctl restart openstack-cinder-volume.service
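If you configured the backup driver above, cinder-backup needs a restart as well (assuming the RDO/CentOS service name):

systemctl restart openstack-cinder-backup.service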
Check on the Cinder management node:
[root@controller ~]# cinder-manage service list
Binary            Host            Zone   Status    State   Updated At
cinder-scheduler  controller      nova   enabled   :-)     2016-09-19 12:44:50
cinder-volume     compute2@ceph   nova   enabled   :-)     2016-09-19 12:44:49
cinder-volume     compute1@ceph   nova   enabled   :-)     2016-09-19 12:44:49
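At this point an end-to-end test is possible. A sketch, assuming python-openstackclient and the admin credentials file used earlier; the volume name is a placeholder:

source admin-openrc.sh
# Create a 1 GB test volume backed by the ceph backend
openstack volume create --size 1 test-ceph-vol
# The new volume should appear in the volumes pool as volume-<uuid>
rbd ls volumes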
Edit /etc/nova/nova.conf on every compute node and add the following (on current releases these options belong in the [libvirt] section, without the libvirt_ prefix):
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
disk_cachemodes = "network=writeback"
rbd_user = cinder
rbd_secret_uuid = 43f7430d-cce0-46eb-a0fc-a593e27878c2
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
Note: replace rbd_secret_uuid with the UUID you actually generated earlier.
Finally, restart the nova-compute service:
systemctl restart openstack-nova-compute.service
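To verify that new instances land in the vms pool, boot a test instance and list the pool. A sketch only; the flavor, network ID, and instance name are placeholders for your environment:

# Boot from the image uploaded earlier
openstack server create --flavor m1.small --image centos6_ceph --nic net-id={your-net-id} test-ceph-vm
# The instance's root disk should appear as <instance-uuid>_disk
rbd ls vms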
Troubleshooting:
1. A volume got stuck in the deleting state. Checking /var/log/cinder/volume.log turned up the message [Errno 13] Permission denied: '/var/lock/cinder'. Creating a cinder directory under /var/lock, granting it the right ownership, and restarting the Cinder services let the volume be deleted.
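The fix described above, as commands (the cinder:cinder ownership is an assumption; the log only names the path):

# Recreate the lock directory the log complains about
sudo mkdir -p /var/lock/cinder
sudo chown cinder:cinder /var/lock/cinder
sudo systemctl restart openstack-cinder-volume.service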
2. Booting a VM from Ceph failed when mapping the disk. Investigation showed that cinder-api and the volume services had lost contact with each other, and restarting them over and over did not help. The root cause turned out to be time synchronization: once the clocks were synced, everything worked again.
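A minimal sketch of re-syncing the clock on an affected node, assuming ntpdate is installed and the controller host serves NTP (with chrony, use chronyc instead):

# One-off sync against the controller node
ntpdate controller
# Or check chrony's view of its time sources
chronyc sources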
Service state during the fault and after the fix (the original screenshots are not reproduced here).