OpenStack fails to connect to Ceph

Symptom

While integrating OpenStack with Ceph, the rbd and rados command-line tools connect to Ceph successfully, but OpenStack itself fails to connect.

Cause

My configuration connected to Ceph as the admin user and no dedicated users had been created for the OpenStack services, so the failure is most likely a permission restriction.
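To confirm this, you can list the existing cephx users on a monitor node; a minimal check, on the assumption that only the default client.admin entry exists:

# List the cephx users the cluster knows about; if only client.admin shows up,
# the OpenStack services (glance/cinder/nova) have nothing to authenticate with.
ceph auth list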

Solution

Create dedicated, authorized users in Ceph:

ceph auth get-or-create client.glance mon 'allow *' osd 'allow *' mds 'allow *' -o ceph.client.glance.keyring
ceph auth get-or-create client.cinder mon 'allow *' osd 'allow *' mds 'allow *' -o ceph.client.cinder.keyring
ceph auth get-or-create client.nova mon 'allow *' osd 'allow *' mds 'allow *' -o ceph.client.nova.keyring
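The generated keyrings then need to be placed where the services can read them. A minimal sketch, assuming the standard /etc/ceph/ directory and that the glance and cinder service accounts exist on the node; file locations and ownership may differ in your deployment:

# Copy the keyrings onto the nodes running the corresponding services
cp ceph.client.glance.keyring /etc/ceph/
cp ceph.client.cinder.keyring /etc/ceph/
chown glance:glance /etc/ceph/ceph.client.glance.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

# Verify that the new users can actually reach the cluster
ceph -s --id glance
ceph -s --id cinder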

Also note that on the Nova compute node you need to define a libvirt secret:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid> <!-- generated with uuidgen; this line can be omitted and added later -->
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

virsh secret-define --file secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(ceph auth get-key client.cinder)
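To confirm the secret was stored correctly, you can read it back with standard virsh commands; the UUID is the one from secret.xml:

# List defined secrets and read the stored key back
virsh secret-list
virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337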

Notes

  • The secret created by this series of commands is important when configuring nova and cinder: its UUID is the value of rbd_secret_uuid
  • The UUID can be generated in advance with uuidgen, or you can let virsh secret-define generate one for you

Appendix: OpenStack configuration changes

/etc/glance/glance-api.conf

[DEFAULT]
...
default_store = rbd
...
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
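Before glance can use this backend, the images pool must exist and glance-api needs a restart to pick up the change. A minimal sketch; the PG count (128) and the service name openstack-glance-api are assumptions that depend on your cluster size and distribution:

# Create the images pool if it does not already exist (PG count is an example)
ceph osd pool create images 128

# Restart glance so it picks up the new rbd store
systemctl restart openstack-glance-api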

/etc/cinder/cinder.conf

[DEFAULT]
...
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
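A matching sketch for the cinder side; again the PG count and the service name openstack-cinder-volume are assumptions:

# Create the volumes pool if needed and restart cinder-volume
ceph osd pool create volumes 128
systemctl restart openstack-cinder-volume

# Verify that the cinder user can see its pool
rbd ls volumes --id cinder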

/etc/nova/nova.conf

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes="network=writeback"
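On each compute node the cinder keyring and the libvirt secret defined above must be in place before restarting nova-compute. A minimal sketch, assuming the /etc/ceph path and the openstack-nova-compute service name:

# The compute node needs ceph.conf and the keyring for the rbd_user above
cp ceph.client.cinder.keyring /etc/ceph/

# Create the vms pool if it does not already exist (PG count is an example)
ceph osd pool create vms 128

# Restart nova-compute so libvirt attaches disks through the defined secret
systemctl restart openstack-nova-compute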