While integrating Ceph with OpenStack, rbd and rados could connect to the Ceph cluster successfully, but OpenStack itself could not.
My setup had been connecting to Ceph with the admin user only and no dedicated client users had been created, so the failure was most likely a permissions issue.
Creating properly authorized Ceph users for each service fixes it:
ceph auth get-or-create client.glance mon 'allow *' osd 'allow *' mds 'allow *' -o ceph.client.glance.keyring
ceph auth get-or-create client.cinder mon 'allow *' osd 'allow *' mds 'allow *' -o ceph.client.cinder.keyring
ceph auth get-or-create client.nova mon 'allow *' osd 'allow *' mds 'allow *' -o ceph.client.nova.keyring
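Creating the users only produces the keyring files; each OpenStack service also has to be able to read its own keyring on the node where it runs. A minimal sketch of distributing them (the hostnames glance-node and cinder-node are placeholders for your own controller/storage hosts):

# copy each keyring to the node running that service (hostnames are examples)
scp ceph.client.glance.keyring glance-node:/etc/ceph/
scp ceph.client.cinder.keyring cinder-node:/etc/ceph/
# let the service accounts read their own keyring
ssh glance-node chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh cinder-node chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring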
Another thing to note is that the Nova compute nodes have to be updated as well: libvirt needs the cinder key stored as a secret so it can attach RBD volumes.
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>  <!-- generated with uuidgen; this line can be left out here and added later -->
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(ceph auth get-key client.cinder)
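As an optional sanity check, the secret can be listed and read back to confirm libvirt actually stored the key:

virsh secret-list
virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337
# the value printed above should match the output of:
ceph auth get-key client.cinder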
/etc/glance/glance-api.conf
[DEFAULT]
...
default_store = rbd
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
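After this change glance-api has to be restarted. A simple way to verify the backend works is to upload a test image and confirm its objects show up in the images pool (the service name below is the CentOS/RHEL one, and the image name/file are just examples):

systemctl restart openstack-glance-api
openstack image create "cirros-test" --file cirros.raw --disk-format raw --container-format bare
rbd ls images --id glance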
/etc/cinder/cinder.conf
[DEFAULT]
...
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
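The same kind of check works for Cinder: restart cinder-volume (CentOS/RHEL service name shown), create a small test volume, and it should appear as an RBD image in the volumes pool:

systemctl restart openstack-cinder-volume
openstack volume create --size 1 test-vol
rbd ls volumes --id cinder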
/etc/nova/nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes="network=writeback"
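Finally, nova-compute has to be restarted on every compute node. Once an instance is booted, its ephemeral disk should show up in the vms pool; the listing below uses the cinder user since that is the rbd_user nova was configured with (service name is the CentOS/RHEL one):

systemctl restart openstack-nova-compute
# after booting an instance, its disk appears as <instance-uuid>_disk
rbd ls vms --id cinder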