Integrating Ceph with OpenStack

1. RBD is used to provide storage for the following data:

(1) images (Glance): stores the images kept in Glance;

(2) volumes (Cinder): stores Cinder volumes, i.e. the disks of instances launched with "create new volume" selected;

(3) vms (Nova): stores the ephemeral disks of instances launched without creating a new volume;

2. Implementation steps:

(1) The client nodes also need the cent user. (For example, in an OpenStack environment with 100+ nodes you would not create the cent user on every node; create it selectively, on the nodes that run the cinder, nova and glance services.)

useradd cent && echo "123" | passwd --stdin cent

echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph

chmod 440 /etc/sudoers.d/ceph
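
In a multi-node deployment, a small shell loop can push the same three commands to just the nodes that need them; a minimal sketch, assuming passwordless ssh from the deploy node and illustrative hostnames:

# run from the deploy node as root; node names are examples, adjust to your environment
for node in controller compute01 storage01; do
  ssh $node "useradd cent && echo '123' | passwd --stdin cent"
  ssh $node "echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph && chmod 440 /etc/sudoers.d/ceph"
done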

(2) On the OpenStack nodes that will use Ceph (e.g. the compute and storage nodes), install the downloaded packages:

yum localinstall ./* -y

Alternatively, install the client packages on every node that needs to access the Ceph cluster:

yum install python-rbd

yum install ceph-common    # Ceph command-line tools

If you installed the packages with the localinstall method above, these two packages are already included in those RPMs.

(3) Run on the deploy node to install Ceph on the OpenStack nodes:

ceph-deploy install controller

ceph-deploy admin controller

(4) Run on the client:

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

(5) Create pools (this only needs to be done on one Ceph node):

Create three pools in the Ceph cluster; they hold the OpenStack platform's images, VM disks and volumes respectively.

ceph osd pool create images 1024

ceph osd pool create vms 1024

ceph osd pool create volumes 1024

List the pools:

ceph osd lspools
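
Optionally, sanity-check the pg_num and replica count of each new pool (the 1024 PGs above assume a reasonably large number of OSDs; tune to your cluster):

for p in images vms volumes; do
  ceph osd pool get $p pg_num
  ceph osd pool get $p size
done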

(6) Create the glance and cinder users in the Ceph cluster (this only needs to be done on one Ceph node):

The Ceph cluster will be used by the OpenStack platform's glance and cinder services.

Create the glance and cinder system users on the deploy node:

useradd glance

useradd cinder

Then grant the Ceph authorizations:
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
The controller, compute and storage nodes already have these two system users.

Nova reuses the cinder user, so no separate Ceph user is created for it.
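
To double-check the capabilities that were just granted:

ceph auth get client.glance
ceph auth get client.cinder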

(7) Generate and copy the keyrings (this only needs to be done on one Ceph node):

ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring

ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring

 

Use scp to copy the keyrings to the other nodes (the Ceph cluster nodes and the OpenStack nodes that will use Ceph, e.g. the compute and storage nodes; this walkthrough targets an all-in-one environment, so copying to the controller node is enough):

[root@yunwei ceph]# ls
ceph.client.admin.keyring  ceph.client.cinder.keyring  ceph.client.glance.keyring  ceph.conf  rbdmap  tmpR3uL7W

[root@yunwei ceph]#
[root@yunwei ceph]# scp ceph.client.glance.keyring ceph.client.cinder.keyring controller:/etc/ceph/

(8) Change the keyring ownership (run on all client nodes):

chown glance:glance /etc/ceph/ceph.client.glance.keyring

chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
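
As a quick check that each identity can reach the cluster (this assumes the keyrings sit at the default path /etc/ceph/ceph.client.<name>.keyring):

rbd ls images --id glance
rbd ls volumes --id cinder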

(9) Configure the libvirt secret (this is only needed on the nova-compute nodes, but do it on every compute node):

uuidgen

940f0485-e206-4b49-b878-dcd0cb9c70a4

In the /etc/ceph/ directory (the directory itself does not matter; /etc/ceph is simply convenient to manage):

cat > secret.xml <<EOF

<secret ephemeral='no' private='no'>

<uuid>940f0485-e206-4b49-b878-dcd0cb9c70a4</uuid>

<usage type='ceph'>

<name>client.cinder secret</name>

</usage>

</secret>

EOF

Copy secret.xml to all compute nodes and run:

virsh secret-define --file secret.xml

ceph auth get-key client.cinder > ./client.cinder.key

virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)

In the end, client.cinder.key and secret.xml are identical on every compute node. Note down the UUID generated earlier: 940f0485-e206-4b49-b878-dcd0cb9c70a4.
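
To confirm the secret was stored correctly on a compute node (the UUID is the one from this example):

virsh secret-list
virsh secret-get-value 940f0485-e206-4b49-b878-dcd0cb9c70a4    # should print the base64-encoded cinder key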

If you hit an error like the following, undefine the stale secret and define it again:

[root@controller ceph]# virsh secret-define --file secret.xml
error: Failed to set attributes from secret.xml
error: internal error: a secret with UUID d448a6ee-60f3-42a3-b6fa-6ec69cab2378 is already defined for use with client.cinder secret

[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 d448a6ee-60f3-42a3-b6fa-6ec69cab2378  ceph client.cinder secret

[root@controller ~]# virsh secret-undefine d448a6ee-60f3-42a3-b6fa-6ec69cab2378
Secret d448a6ee-60f3-42a3-b6fa-6ec69cab2378 deleted

[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------

[root@controller ceph]# virsh secret-define --file secret.xml
Secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 created

[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 940f0485-e206-4b49-b878-dcd0cb9c70a4  ceph client.cinder secret

[root@controller ~]# virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)

(10) Configure Glance; make the following changes on all controller nodes:

vim /etc/glance/glance-api.conf

[DEFAULT]
default_store = rbd
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
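
If you prefer to script the edit rather than change the file by hand, the rbd-related keys can be set with crudini; a minimal sketch, assuming the crudini package is installed:

crudini --set /etc/glance/glance-api.conf glance_store stores rbd
crudini --set /etc/glance/glance-api.conf glance_store default_store rbd
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_chunk_size 8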

Then restart the Glance API service on all controller nodes:

systemctl restart openstack-glance-api.service

systemctl status openstack-glance-api.service

Verify by creating an image:

[root@controller ~]# openstack image create "cirros" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --public
  
[root@controller ~]# rbd ls images

9ce5055e-4217-44b4-a237-e7b577a20dac

If an image ID is listed, the Glance integration is working.
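
Optionally, inspect the stored image object itself (the ID is the one returned in this example):

rbd info images/9ce5055e-4217-44b4-a237-e7b577a20dac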

(11) Configure Cinder:

vim /etc/cinder/cinder.conf

[DEFAULT]
my_ip =    # IP of the current host
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = ceph
state_path = /var/lib/cinder
transport_url = rabbit://openstack:admin@controller
[backend]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
volume_backend_name=ceph

Restart the Cinder services:

# restart on the controller node
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
# restart on the storage node
systemctl restart openstack-cinder-volume.service

systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
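
A quick way to confirm the ceph backend registered is to list the volume services; the cinder-volume host should show a @ceph suffix:

openstack volume service list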

Verify by creating a volume:

[root@controller gfs]# rbd ls volumes

volume-43b7c31d-a773-4604-8e4a-9ed78ec18996
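
The create step itself is not shown above; a minimal sketch using the OpenStack CLI (the volume name is illustrative):

openstack volume create --size 1 test-vol
rbd ls volumes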

(12) Configure Nova:

vim /etc/nova/nova.conf

[DEFAULT]
my_ip =    # IP of the current host
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:admin@controller
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type=qemu
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
novncproxy_base_url = http://172.16.254.63:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]

Restart the Nova services:

# restart on the controller node
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service
# restart on the compute nodes
systemctl restart openstack-nova-compute.service
# restart on the storage node
systemctl restart openstack-nova-compute.service

systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-compute.service

Verify by creating an instance:
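
The original walkthrough stops here; a minimal sketch of the check, assuming an existing flavor, image and network (names and the net-id are illustrative):

openstack server create --flavor m1.tiny --image cirros --nic net-id=<network-id> vm-ceph-test
rbd ls vms    # the instance's ephemeral disk should appear as <instance-uuid>_disk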

 
