Configuring Ceph as the storage backend for OpenStack

1. Install ceph-common and make sure the node can connect to the Ceph backend

Add ceph.repo:

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

Install ceph-common:

yum install ceph-common

Copy ceph.conf and ceph.client.admin.keyring from a Ceph node into /etc/ceph/, and add the Ceph node name/IP mappings to /etc/hosts. Then run ceph --help and check whether the output is complete; if the client can reach the cluster, it will list the monitor commands, for example:

... 
Monitor commands:
auth add <entity> {<caps> [<caps>...]}   add auth info for <entity> from input
                                          file, or random key if no input is
                                          given, and/or any caps specified in
                                          the command
auth caps <entity> <caps> [<caps>...]    update caps for <name> from caps
...

This confirms that the machine can connect to the Ceph backend.
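
A minimal sketch of the copy step, assuming the Ceph monitor is reachable over SSH as ceph-node1 (a hypothetical hostname) and that using the admin keyring on this client is acceptable:

# copy the cluster config and admin keyring from a Ceph node
scp ceph-node1:/etc/ceph/ceph.conf /etc/ceph/
scp ceph-node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
# quick connectivity check: prints cluster health and the monitor map
ceph -s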

 

2. Create the Ceph pools and the cinder user from the command line

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
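
Before moving on it is worth confirming that the pools exist; a quick check from the same admin shell:

# list all pools and inspect the placement-group count of one of them
ceph osd lspools
ceph osd pool get volumes pg_num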

Glance uses the images pool, Cinder uses volumes, and Nova uses vms. For simplicity we create a single cinder user that can manage all three pools:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'

Then copy the client.cinder keyring to /etc/ceph, as follows:

[root@ss05 ~]# ceph auth get client.cinder
exported keyring for client.cinder
[client.cinder]
        key = AQCBHc9X7x2NLBAA2nKjpTpUaH5eHv6+FqvZog==
        caps mon = "allow r"
        caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images"

# copy the output above to /etc/ceph/ceph.client.cinder.keyring
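
Instead of copy-pasting the output, the keyring can also be exported to a file on a Ceph node and shipped over; a minimal sketch, assuming the OpenStack node is reachable as openstack-node (hypothetical hostname):

# on a Ceph node: write the keyring to a file, then copy it to the OpenStack node
ceph auth get client.cinder -o /tmp/ceph.client.cinder.keyring
scp /tmp/ceph.client.cinder.keyring openstack-node:/etc/ceph/ceph.client.cinder.keyring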

 

3. Configure the libvirt secret; run this on every node that runs nova-compute

Generate a UUID; it will be used later when configuring OpenStack:

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

Create a secret.xml file:

<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
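
The same file can be produced from the shell; a minimal sketch that reuses the UUID generated above:

UUID=457eb676-33da-42ec-9a8c-9293d545c337   # value from uuidgen above
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF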

Define the secret in libvirt:

virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created

Set the key on the secret:

[root@ss05 ~]# ceph auth print-key client.cinder
AQCBHc9X7x2NLBAA2nKjpTpUaH5eHv6+FqvZog==

[root@ss05 ~]# virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 AQCBHc9X7x2NLBAA2nKjpTpUaH5eHv6+FqvZog==
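
To confirm the secret is in place, list the secrets and read the value back; it should match the key printed by ceph auth print-key:

virsh secret-list
virsh secret-get-value --secret 457eb676-33da-42ec-9a8c-9293d545c337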

 

4. Configure OpenStack to use Ceph as the backend

Configure cinder.conf:

[DEFAULT]
# debug = True
# verbose = True

rpc_backend = rabbit
auth_strategy = keystone
my_ip = *.*.*.*
glance_api_servers = http://controller:9292
enabled_backends = rbd
transport_url = rabbit://openstack:openstack@controller
[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lock/cinder

[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Restart the service:

service openstack-cinder-volume restart
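
A quick way to verify the Cinder backend, assuming admin credentials are loaded in the shell (e.g. via an openrc file):

# create a 1 GB test volume and confirm it appears in the Ceph "volumes" pool
openstack volume create --size 1 test-vol
rbd -p volumes --id cinder ls    # a volume-<uuid> image should be listed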

Configure nova-compute.conf:

[DEFAULT]
compute_driver=libvirt.LibvirtDriver

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Restart the service:

service openstack-nova-compute restart
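
Once an instance is booted on this compute node, its root disk should live in the vms pool; a quick check:

# each RBD-backed instance appears as <instance-uuid>_disk
rbd -p vms --id cinder ls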

Configure glance-api.conf:

[paste_deploy]
flavor = keystone

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = cinder
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

Restart the service:

service openstack-glance-api restart
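
To verify the Glance backend, upload a test image and look for it in the images pool; a minimal sketch, assuming a local file named cirros-0.3.5-x86_64-disk.raw (hypothetical filename; raw is the preferred disk format for an RBD backend):

openstack image create --disk-format raw --container-format bare \
  --file cirros-0.3.5-x86_64-disk.raw cirros-raw
rbd -p images --id cinder ls    # the new image UUID should be listed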

 

5. If the nova-compute log reports that the rbd protocol is not supported, qemu must be rebuilt

See the following link: http://www.cnblogs.com/hurongpu/p/8514002.html
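
A quick way to check whether the installed qemu was built with RBD support before rebuilding (rbd should appear in the supported-formats list):

qemu-img --help | grep rbd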
