Installing and Deploying OpenStack [Liberty] on CentOS 7.4 (Part 2)

Following the previous post, Installing and Deploying OpenStack [Liberty] on CentOS 7.4 (Part 1), this post continues with the remaining components.

Part 1: Adding the Block Storage Service

1. Service overview

The OpenStack Block Storage service provides block storage to instances. Storage allocation and consumption are determined by the block storage driver, or by the drivers in a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and so on. The Block Storage API and scheduler services typically run on the controller node; depending on the driver in use, the volume service can run on the controller node, on compute nodes, or on a standalone storage node.

The Block Storage service (cinder) adds persistent storage to virtual machines. It provides an infrastructure for managing volumes and interacts with the Compute service to provide volumes for instances. The service also enables management of volume snapshots and volume types. It typically consists of the following components:

cinder-api: accepts API requests and routes them to cinder-volume for action.
cinder-volume: interacts directly with the Block Storage service and with processes such as cinder-scheduler; it can also interact with them through a message queue. cinder-volume responds to read and write requests sent to the Block Storage service to maintain state, and can interact with a variety of storage providers through a driver architecture.
cinder-scheduler daemon: selects the optimal storage provider node on which to create a volume; similar to the nova-scheduler component.
cinder-backup daemon: backs up volumes of any type to a backup storage provider; like cinder-volume, it can interact with a variety of storage providers through a driver architecture.
Message queue: routes information between the Block Storage processes.
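The scheduling idea behind cinder-scheduler can be illustrated with a small sketch. This is not cinder's actual code: the backend names and free-capacity figures below are made up, and real filtering and weighing are far richer, but the core of it is "pick the backend best able to host the new volume":

```python
# Toy model of cinder-scheduler's job (NOT cinder's real code):
# given capacity reports from cinder-volume backends, pick the
# backend with the most free space that can fit the new volume.

def pick_backend(backends, size_gb):
    """Return the name of the backend with the most free space >= size_gb."""
    fitting = [b for b in backends if b["free_gb"] >= size_gb]
    if not fitting:
        raise ValueError("no backend can fit a %d GB volume" % size_gb)
    return max(fitting, key=lambda b: b["free_gb"])["name"]

# Hypothetical capacity reports from two storage nodes:
backends = [
    {"name": "block1@lvm", "free_gb": 180},
    {"name": "block2@lvm", "free_gb": 420},
]
print(pick_backend(backends, 50))  # block2@lvm
```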

2. Deployment prerequisites: before installing and configuring the Block Storage service, you must create the database, service credentials, and API endpoints.

[root@controller ~]# mysql -u root -p123456     # create the database and grant access
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> \q
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:           # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user cinder admin    # add the admin role to the cinder user; this command produces no output
[root@controller ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume    # create the cinder and cinderv2 service entities; the Block Storage service requires both
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s    # create the Block Storage API endpoints; each service entity needs its own set
[root@controller ~]# openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
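The `%\(tenant_id\)s` in these URLs is only shell escaping for a literal `%(tenant_id)s`: a Python %-style placeholder stored in the endpoint, which the Identity service fills in with the requesting project's ID when it hands out the service catalog. A minimal sketch of that substitution (the project ID below is made up):

```python
# The endpoint is stored as a %-style template; Keystone substitutes the
# project (tenant) ID per request. The project ID here is a made-up example.
template = "http://controller:8776/v1/%(tenant_id)s"
url = template % {"tenant_id": "a1b2c3d4e5f64708"}
print(url)  # http://controller:8776/v1/a1b2c3d4e5f64708
```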

3. Service installation

Controller node:

[root@controller ~]# yum install -y openstack-cinder python-cinderclient
[root@controller ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf    # edit cinder.conf as follows
[DEFAULT]
rpc_backend = rabbit                  # RabbitMQ message queue access
auth_strategy = keystone              # Identity service access
my_ip = 192.168.1.101                 # IP address of the management interface on the controller node
verbose = True                        # enable verbose logging
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:123456@controller/cinder    # database access
[fc-zone-manager]
[keymgr]
[keystone_authtoken]                  # Identity service access; comment out or remove any other options in this section
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp       # lock path
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]               # RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder    # populate the Block Storage database
[root@controller ~]# grep -A 1 "\[cinder\]" /etc/nova/nova.conf    # configure Compute to use Block Storage: edit /etc/nova/nova.conf and add the following
[cinder]
os_region_name = RegionOne
[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Storage node:

[root@block1 ~]# yum install lvm2 -y
[root@block1 ~]# systemctl enable lvm2-lvmetad.service
[root@block1 ~]# systemctl start lvm2-lvmetad.service
[root@block1 ~]# pvcreate /dev/sdb               # create the LVM physical volume /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@block1 ~]# vgcreate cinder-volumes /dev/sdb    # create the LVM volume group cinder-volumes; the Block Storage service creates logical volumes in this group
  Volume group "cinder-volumes" successfully created
[root@block1 ~]# vim /etc/lvm/lvm.conf           # in the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices
devices {
        filter = [ "a/sda/", "a/sdb/", "r/.*/"]  # if the storage node also uses LVM on the operating system disk, the relevant devices must be added to the filter as well
}
[root@block1 ~]# yum install openstack-cinder targetcli python-oslo-policy -y
[root@block1 ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf    # edit cinder.conf as follows
[DEFAULT]
rpc_backend = rabbit            # RabbitMQ message queue access
auth_strategy = keystone        # Identity service access
my_ip = 192.168.1.103           # IP address of the management network interface on the storage node
enabled_backends = lvm          # enable the LVM backend
glance_host = controller        # location of the Image service
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:123456@controller/cinder    # database access
[fc-zone-manager]
[keymgr]
[keystone_authtoken]            # Identity service access; comment out or remove any other options in this section
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp # lock path
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]         # RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[lvm]                           # configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[root@block1 ~]# systemctl enable openstack-cinder-volume.service target.service
[root@block1 ~]# systemctl start openstack-cinder-volume.service target.service
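The filter line above works on a first-match basis: each element is an accept (`a`) or reject (`r`) rule holding a regular expression, and the first rule that matches a device name decides its fate. A small sketch of that evaluation logic (this mimics, not reuses, LVM's implementation):

```python
import re

def lvm_filter(device, rules):
    """First matching rule wins: 'a' accepts, 'r' rejects (as in lvm.conf).
    Devices matched by no rule are accepted, which is LVM's default."""
    for rule in rules:
        action, pattern = rule[0], rule[2:-1]   # "a/sdb/" -> ("a", "sdb")
        if re.search(pattern, device):
            return action == "a"
    return True

rules = ["a/sda/", "a/sdb/", "r/.*/"]
print(lvm_filter("/dev/sdb", rules))  # True  -- accepted by the sdb rule
print(lvm_filter("/dev/sdc", rules))  # False -- rejected by the r/.*/ rule
```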

Verification:

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# cinder service-list    # list the service components to verify that each process started successfully
+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2014-10-18T01:30:54.000000 |       None      |
| cinder-volume    | block1@lvm | nova | enabled |   up  | 2014-10-18T01:30:57.000000 |       None      |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

Part 2: Adding the Object Storage Service

1. Service overview

The OpenStack Object Storage service (swift) provides object storage and retrieval through a REST API. Before deploying Object Storage, your environment must include at least the Identity service (keystone). Object Storage is a multi-tenant object storage system that scales out massively and manages large amounts of unstructured data at low cost through a RESTful HTTP API. It consists of the following components:

Proxy server (swift-proxy-server): accepts Object Storage API and plain HTTP requests to upload files, modify metadata, and create containers. It can also serve file and container listings to web browsers. To improve performance, the proxy server can use an optional cache, usually deployed as memcache.
Account server (swift-account-server): manages accounts defined with Object Storage.
Container server (swift-container-server): manages the mapping of containers, or folders, within Object Storage.
Object server (swift-object-server): manages the actual objects, i.e. files, on the storage nodes.
Various periodic processes: to handle the tasks of a large data store, replication services ensure consistency and availability across the cluster; other periodic processes include auditors, updaters, and reapers.
WSGI middleware: handles authentication, using the OpenStack Identity service.
swift client: lets users submit commands to the REST API through a command-line client; authorized roles include administrative user, reseller user, and swift user.
swift-init: a script that initializes the building of ring files, takes daemon names as parameters, and offers commands. Documented at http://docs.openstack.org/developer/swift/admin_guide.html#managing-services.
swift-recon: a command-line tool used to retrieve various metrics and telemetry information about a cluster, collected by the swift-recon middleware.
swift-ring-builder: the storage ring build and rebalance utility. Documented at http://docs.openstack.org/developer/swift/admin_guide.html#managing-the-rings.

2. Deployment prerequisites: before configuring the Object Storage service, you must create the service credentials and API endpoints.

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt swift    # create the swift user
User Password:           # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user swift admin    # add the admin role to the swift user
[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store    # create the swift service entity
[root@controller ~]# openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s    # create the Object Storage API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1

3. Service installation

Controller node:

[root@controller ~]# yum install -y openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
[root@controller ~]# vim /etc/swift/proxy-server.conf    # the configuration file may vary between distributions; you may need to add these sections and options rather than modify existing ones!
[DEFAULT]                  # configure the bind port, user, and configuration directory
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]            # enable the appropriate modules
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]         # enable automatic account creation
use = egg:swift#proxy
account_autocreate = true
[filter:keystoneauth]      # configure the operator roles
use = egg:swift#keystoneauth
operator_roles = admin,user
[filter:authtoken]         # configure Identity service access
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 123456
delay_auth_decision = true
[filter:cache]             # configure the memcached location
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
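The `pipeline =` line defines a chain of WSGI middleware filters that every request traverses, left to right, before reaching the `proxy-server` app; each filter may transform the request or short-circuit it (as `healthcheck` does). A simplified sketch of how a paste-style pipeline is composed, using toy callables rather than swift's real interfaces:

```python
# Sketch of paste-style pipeline composition (NOT swift's real code):
# each filter wraps the next callable, so a request passes through the
# filters in the order they are listed before reaching the app.

def proxy_server(request):
    return "200 OK: " + request

def healthcheck(app):
    def middleware(request):
        if request == "/healthcheck":
            return "200 OK: healthy"   # short-circuit, like the real filter
        return app(request)
    return middleware

def make_pipeline(app, filters):
    for f in reversed(filters):        # wrap right-to-left so the first
        app = f(app)                   # listed filter runs first
    return app

app = make_pipeline(proxy_server, [healthcheck])
print(app("/healthcheck"))   # 200 OK: healthy
print(app("/v1/AUTH_abc"))   # 200 OK: /v1/AUTH_abc
```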

Storage nodes (perform these steps on every storage node):

[root@object1 ~]# yum install xfsprogs rsync -y        # install the supporting utility packages
[root@object1 ~]# mkfs.xfs /dev/sdb                    # format the /dev/sdb and /dev/sdc devices as XFS
[root@object1 ~]# mkfs.xfs /dev/sdc
[root@object1 ~]# mkdir -p /srv/node/sdb               # create the mount point directory structure
[root@object1 ~]# mkdir -p /srv/node/sdc
[root@object1 ~]# tail -2 /etc/fstab                   # edit /etc/fstab and add the following
/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@object1 ~]# mount /srv/node/sdb                  # mount the devices
[root@object1 ~]# mount /srv/node/sdc
[root@object1 ~]# cat /etc/rsyncd.conf                 # edit /etc/rsyncd.conf and add the following
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.1.104        # the management network interface of this node
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
[root@object1 ~]# systemctl enable rsyncd.service
[root@object1 ~]# systemctl start rsyncd.service
[root@object1 ~]# yum install openstack-swift-account openstack-swift-container openstack-swift-object -y
[root@object1 ~]# vim /etc/swift/account-server.conf
[DEFAULT]                  # configure the bind IP address, bind port, user, configuration directory, and mount point directory
bind_ip = 192.168.1.104
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]            # enable the appropriate modules
pipeline = healthcheck recon account-server
[filter:recon]             # configure the recon (meters) cache directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]# vim /etc/swift/container-server.conf
[DEFAULT]                  # configure the bind IP address, bind port, user, configuration directory, and mount point directory
bind_ip = 192.168.1.104
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]            # enable the appropriate modules
pipeline = healthcheck recon container-server
[filter:recon]             # configure the recon (meters) cache directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]# vim /etc/swift/object-server.conf
[DEFAULT]                  # configure the bind IP address, bind port, user, configuration directory, and mount point directory
bind_ip = 192.168.1.104
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]            # enable the appropriate modules
pipeline = healthcheck recon object-server
[filter:recon]             # configure the recon (meters) cache directory and lock file directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
[root@object1 ~]# chown -R swift:swift /srv/node
[root@object1 ~]# restorecon -R /srv/node
[root@object1 ~]# mkdir -p /var/cache/swift
[root@object1 ~]# chown -R root:swift /var/cache/swift

Create and distribute the initial rings

Controller node:

[root@controller ~]# cd /etc/swift/
[root@controller swift]# swift-ring-builder account.builder create 10 3 1    # create the account.builder file
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6002 --device sdb --weight 100    # add each storage node device to the ring
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6002 --device sdc --weight 100
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6002 --device sdb --weight 100
[root@controller swift]# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6002 --device sdc --weight 100
[root@controller swift]# swift-ring-builder account.builder    # verify the ring contents
account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:  id region zone      ip address  port  replication ip  replication port  name weight partitions balance meta
           0      1    1   192.168.1.104  6002   192.168.1.104              6002   sdb  100.00        768    0.00
           1      1    1   192.168.1.104  6002   192.168.1.104              6002   sdc  100.00        768    0.00
           2      1    1   192.168.1.105  6002   192.168.1.105              6002   sdb  100.00        768    0.00
           3      1    1   192.168.1.105  6002   192.168.1.105              6002   sdc  100.00        768    0.00
[root@controller swift]# swift-ring-builder account.builder rebalance    # rebalance the ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00
[root@controller swift]# swift-ring-builder container.builder create 10 3 1    # create the container.builder file
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6001 --device sdb --weight 100    # add each storage node device to the ring
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6001 --device sdc --weight 100
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6001 --device sdb --weight 100
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6001 --device sdc --weight 100
[root@controller swift]# swift-ring-builder container.builder    # verify the ring contents
container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:  id region zone      ip address  port  replication ip  replication port  name weight partitions balance meta
           0      1    1   192.168.1.104  6001   192.168.1.104              6001   sdb  100.00        768    0.00
           1      1    1   192.168.1.104  6001   192.168.1.104              6001   sdc  100.00        768    0.00
           2      1    1   192.168.1.105  6001   192.168.1.105              6001   sdb  100.00        768    0.00
           3      1    1   192.168.1.105  6001   192.168.1.105              6001   sdc  100.00        768    0.00
[root@controller swift]# swift-ring-builder container.builder rebalance    # rebalance the ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00
[root@controller swift]# swift-ring-builder object.builder create 10 3 1    # create the object.builder file
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6000 --device sdb --weight 100    # add each storage node device to the ring
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6000 --device sdc --weight 100
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6000 --device sdb --weight 100
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6000 --device sdc --weight 100
[root@controller swift]# swift-ring-builder object.builder    # verify the ring contents
object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:  id region zone      ip address  port  replication ip  replication port  name weight partitions balance meta
           0      1    1   192.168.1.105  6000   192.168.1.105              6000   sdb  100.00        768    0.00
           1      1    1   192.168.1.105  6000   192.168.1.105              6000   sdc  100.00        768    0.00
           2      1    1   192.168.1.104  6000   192.168.1.104              6000   sdb  100.00        768    0.00
           3      1    1   192.168.1.104  6000   192.168.1.104              6000   sdc  100.00        768    0.00
[root@controller swift]# swift-ring-builder object.builder rebalance    # rebalance the ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00
[root@controller swift]# scp account.ring.gz container.ring.gz object.ring.gz 192.168.1.104:/etc/swift/    # copy account.ring.gz, container.ring.gz, and object.ring.gz to the /etc/swift directory on every storage node and on any additional node running the proxy service
[root@controller swift]# scp account.ring.gz container.ring.gz object.ring.gz 192.168.1.105:/etc/swift/
[root@controller swift]# vim /etc/swift/swift.conf    # edit /etc/swift/swift.conf as follows
[swift-hash]               # configure the hash path prefix and suffix for your environment; keep these values secret and do not change or lose them
swift_hash_path_suffix = 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
swift_hash_path_prefix = 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
[storage-policy:0]         # configure the default storage policy
name = Policy-0
default = yes
[root@controller swift]# chown -R root:swift /etc/swift
[root@controller swift]# systemctl enable openstack-swift-proxy.service memcached.service    # on the controller node and on any other node running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start at boot
[root@controller swift]# systemctl start openstack-swift-proxy.service memcached.service
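The `create 10 3 1` arguments mean 2^10 = 1024 partitions, 3 replicas, and a minimum of 1 hour between moves of a given partition. Conceptually, the ring maps the MD5 hash of each (salted) object path onto a partition by keeping the top `part_power` bits; the partition then maps to the devices added above. A sketch of the partition step, which simplifies swift's real implementation; the hash prefix/suffix values here are placeholders:

```python
import hashlib

PART_POWER = 10            # matches "swift-ring-builder ... create 10 3 1"
PART_COUNT = 2 ** PART_POWER

def partition_for(path, prefix="pre", suffix="suf"):
    """Map an /account/container/object path to one of 1024 partitions,
    in the spirit of swift's ring: md5 of the prefixed+suffixed path,
    keeping the top part_power bits of the first 4 bytes."""
    digest = hashlib.md5((prefix + path + suffix).encode()).digest()
    top4 = int.from_bytes(digest[:4], "big")
    return top4 >> (32 - PART_POWER)

p = partition_for("/AUTH_test/container1/demo-openrc.sh")
print(p)  # a stable value in [0, 1024)
```

This is why the hash path prefix and suffix must never change after deployment: changing them would remap every object to a different partition.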

Storage nodes:

[root@object1 ~]# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 ~]# systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 ~]# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
[root@object1 ~]# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 ~]# systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 ~]# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

驗證操做:

Controller node:

[root@controller swift]# cd
[root@controller ~]# echo "export OS_AUTH_VERSION=3" | tee -a admin-openrc.sh demo-openrc.sh    # configure the Object Storage client to use version 3 of the Identity API
[root@controller ~]# swift stat    # show the service status
        Account: AUTH_444fce5db34546a7907af45df36d6e99
     Containers: 0
        Objects: 0
          Bytes: 0
X-Put-Timestamp: 1518798659.41272
    X-Timestamp: 1518798659.41272
     X-Trans-Id: tx304f1ed71c194b1f90dd2-005a870740
   Content-Type: text/plain; charset=utf-8
[root@controller ~]# swift upload container1 demo-openrc.sh    # upload a test file
demo-openrc.sh
[root@controller ~]# swift list    # list containers
container1
[root@controller ~]# swift download container1 demo-openrc.sh    # download the test file
demo-openrc.sh [auth 0.295s, headers 0.339s, total 0.339s, 0.005 MB/s]

Part 3: Adding the Orchestration Service

1. Service overview

The Orchestration service provides template-based orchestration for describing a cloud application by running OpenStack API calls that generate running cloud applications. The software integrates other OpenStack core components into a one-file template system. The templates allow you to create most OpenStack resource types, such as instances, floating IPs, volumes, security groups, and users. The service also provides advanced functionality, such as instance high availability, instance auto-scaling, and nested stacks. This enables the OpenStack core projects to serve a large user base, and lets deployers integrate with the Orchestration service directly or through custom plug-ins. The Orchestration service consists of the following components:

heat command-line client: a CLI that communicates with heat-api to run the AWS CloudFormation API; developers can also use the Orchestration REST API directly.
heat-api component: an OpenStack-native REST API that sends API requests to heat-engine over remote procedure calls (RPC).
heat-api-cfn component: an AWS Query API that is compatible with AWS CloudFormation and sends API requests to heat-engine over RPC.
heat-engine: launches templates and provides events back to the API consumer.
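As a concrete illustration, a minimal HOT (Heat Orchestration Template) that boots a single server might look like the sketch below; the image and flavor names are assumptions and must exist in your environment:

```yaml
heat_template_version: 2015-04-30

description: Minimal example stack that boots one server

parameters:
  image:
    type: string
    default: cirros        # assumed image name; pick one from your Image service
  flavor:
    type: string
    default: m1.tiny       # assumed flavor name

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }

outputs:
  server_name:
    description: Name of the created server
    value: { get_attr: [server, name] }
```

Once the service configured below is running, such a template could be launched with `heat stack-create -f server.yaml teststack`.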

2. Deployment prerequisites: before installing and configuring the Orchestration service, you must create the database, service credentials, and API endpoints. Orchestration also requires additional information in the Identity service.

On the controller node:

[root@controller ~]# mysql -u root -p123456    # create the database and grant access
MariaDB [(none)]> CREATE DATABASE heat;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> \q
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt heat    # create the heat user
User Password:           # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user heat admin    # add the admin role to the heat user
[root@controller ~]# openstack service create --name heat --description "Orchestration" orchestration    # create the heat and heat-cfn service entities
[root@controller ~]# openstack service create --name heat-cfn --description "Orchestration" cloudformation
[root@controller ~]# openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s    # create the Orchestration API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
[root@controller ~]# openstack domain create --description "Stack projects and users" heat    # create the heat domain that contains projects and users for stacks
[root@controller ~]# openstack user create --domain heat --password-prompt heat_domain_admin    # create the heat_domain_admin user in the heat domain to manage its projects and users
User Password:           # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --domain heat --user heat_domain_admin admin    # add the admin role to heat_domain_admin in the heat domain, enabling administrative stack management privileges
[root@controller ~]# openstack role create heat_stack_owner    # create the heat_stack_owner role
[root@controller ~]# openstack role add --project demo --user demo heat_stack_owner    # add the heat_stack_owner role to the demo project and user, enabling stack management by the demo user
[root@controller ~]# openstack role create heat_stack_user    # create the heat_stack_user role; Orchestration automatically assigns it to users it creates during stack deployment. By default this role restricts API operations; to avoid conflicts, do not also add the heat_stack_owner role to such users.

3. Service deployment

Controller node:

[root@controller ~]# yum install -y openstack-heat-api openstack-heat-api-cfn openstack-heat-engine python-heatclient
[root@controller ~]# vim /etc/heat/heat.conf    # edit /etc/heat/heat.conf as follows
[database]
connection = mysql://heat:123456@controller/heat    # database access
[DEFAULT]
rpc_backend = rabbit                                # RabbitMQ message queue access
heat_metadata_server_url = http://controller:8000   # metadata and wait condition URLs
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin              # stack domain and administrative credentials
stack_domain_admin_password = 123456
stack_user_domain_name = heat
verbose = True                                      # enable verbose logging
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken]       # configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = heat
password = 123456
[trustee]                  # configure Identity service access
auth_plugin = password
auth_url = http://controller:35357
username = heat
password = 123456
user_domain_id = default
[clients_keystone]         # configure Identity service access
auth_uri = http://controller:5000
[ec2authtoken]             # configure Identity service access
auth_uri = http://controller:5000/v3
[root@controller ~]# su -s /bin/sh -c "heat-manage db_sync" heat    # populate the Orchestration database
[root@controller ~]# systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
[root@controller ~]# systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service

驗證操做

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# heat service-list    # the output should show four heat-engine components on the controller node
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| hostname   | binary      | engine_id                            | host       | topic  | updated_at                 | status |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| controller | heat-engine | 0d26b5d3-ec8a-44ad-9003-b2be72ccfaa7 | controller | engine | 2017-02-16T11:59:41.000000 | up     |
| controller | heat-engine | 587b87e2-9e91-4cac-a8b2-53f51898a9c5 | controller | engine | 2017-02-16T11:59:41.000000 | up     |
| controller | heat-engine | 8891e45b-beda-49b2-bfc7-29642f072eac | controller | engine | 2017-02-16T11:59:41.000000 | up     |
| controller | heat-engine | b0ef7bbb-cfb9-4000-a214-db9049b12a25 | controller | engine | 2017-02-16T11:59:41.000000 | up     |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+

Part 4: Adding the Telemetry Service

1. Service overview

The Telemetry service provides the following functions:
  1. Efficient polling of metering data about the relevant OpenStack services.
  2. Collecting event and metering data by monitoring notifications sent from the services.
  3. Publishing the collected data to various targets, including data stores and message queues.

The Telemetry service consists of the following components:

Compute agent (ceilometer-agent-compute): runs on each compute node and polls for resource utilization statistics. Other agent types may appear in the future, but for now the community focuses on the compute agent.
Central agent (ceilometer-agent-central): runs on a central management server to poll for utilization statistics of resources not tied to instances or compute nodes; multiple agents can be started to scale the service horizontally.
Notification agent: runs on one or more central management servers and consumes messages from the message queue(s) to build event and metering data.
Collector (ceilometer-collector, responsible for persisting received data): runs on one or more central management servers and dispatches collected telemetry data to a data store or external consumer without modification.
API server (ceilometer-api): runs on one or more central management servers and provides data access from the data store.

When collected metering or event data breaks the defined rules, the alarming service triggers alarms. It consists of the following components:

API server (aodh-api): runs on one or more central management servers and provides access to the alarm information stored in the data store.
Alarm evaluator (aodh-evaluator): runs on one or more central management servers and determines when alarms fire because the associated statistic trend crosses a threshold over a sliding time window.
Notification listener (aodh-listener): runs on a central management server and detects when an alarm should fire; alarms are generated according to rules predefined against events, which are captured by the Telemetry service's notification agents.
Alarm notifier (aodh-notifier): runs on one or more central management servers and allows alarms to be set for groups of instances based on evaluated thresholds.

These services communicate over the OpenStack messaging bus; only the collector and the API server have access to the data store.

2. Deployment prerequisites: before installing and configuring the Telemetry service, you must create a database, service credentials, and API endpoints. Unlike the other services, however, Telemetry uses a NoSQL database.

Controller node:

[root@controller ~]# yum install -y mongodb-server mongodb
[root@controller ~]# vim /etc/mongod.conf    # edit /etc/mongod.conf and modify or add the following
bind_ip = 192.168.1.101
smallfiles = true    # by default, MongoDB creates several 1 GB journal files under /var/lib/mongodb/journal; set smallfiles to reduce each journal file to 128 MB and cap total journal space at 512 MB
[root@controller ~]# systemctl enable mongod.service
[root@controller ~]# systemctl start mongod.service
[root@controller ~]# mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.createUser({user: "ceilometer",pwd: "123456",roles: [ "readWrite", "dbAdmin" ]})'    # create the ceilometer database
MongoDB shell version: 2.6.12
connecting to: controller:27017/test
Successfully added user: { "user" : "ceilometer", "roles" : [ "readWrite", "dbAdmin" ] }
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt ceilometer    # create the ceilometer user
User Password:           # password: 123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user ceilometer admin    # add the admin role to the ceilometer user
[root@controller ~]# openstack service create --name ceilometer --description "Telemetry" metering    # create the ceilometer service entity
[root@controller ~]# openstack endpoint create --region RegionOne metering public http://controller:8777    # create the Telemetry API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne metering internal http://controller:8777
[root@controller ~]# openstack endpoint create --region RegionOne metering admin http://controller:8777

3. Service deployment

Controller node:

[root@controller ~]# yum install openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm python-ceilometerclient -y
[root@controller ~]# vim /etc/ceilometer/ceilometer.conf    # edit /etc/ceilometer/ceilometer.conf and modify or add the following
[DEFAULT]
rpc_backend = rabbit       # RabbitMQ message queue access
auth_strategy = keystone   # Identity service access
verbose = True
[oslo_messaging_rabbit]    # RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken]       # Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = 123456
[service_credentials]      # service credentials
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = 123456
os_endpoint_type = internalURL
os_region_name = RegionOne
[root@controller ~]# systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
[root@controller ~]# systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service

4. Enable Image service meters

[root@controller ~]# vim /etc/glance/glance-api.conf    # edit both /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf and modify or add the following
[DEFAULT]                  # configure notifications and RabbitMQ message queue access
notification_driver = messagingv2
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[root@controller ~]# systemctl restart openstack-glance-api.service openstack-glance-registry.service    # restart the Image service

5. Enable Compute service meters

Perform these steps on the compute node:

[root@compute1 ~]# yum install -y openstack-ceilometer-compute python-ceilometerclient python-pecan
[root@compute1 ~]# vim /etc/ceilometer/ceilometer.conf #edit /etc/ceilometer/ceilometer.conf, adding or modifying the following
[DEFAULT]
rpc_backend = rabbit #configure RabbitMQ message queue access
auth_strategy = keystone #configure access to the identity service
verbose = True
[oslo_messaging_rabbit] #configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken] #configure access to the identity service
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = 123456
[service_credentials] #configure the service credentials
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = 123456
os_endpoint_type = internalURL
os_region_name = RegionOne
[root@compute1 ~]# vim /etc/nova/nova.conf #edit /etc/nova/nova.conf, adding or modifying the following
[DEFAULT]
instance_usage_audit = True #configure notifications
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
[root@compute1 ~]# systemctl enable openstack-ceilometer-compute.service #start the agent and enable it at boot
[root@compute1 ~]# systemctl start openstack-ceilometer-compute.service
[root@compute1 ~]# systemctl restart openstack-nova-compute.service #restart the compute service
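After hand-editing nova.conf it is easy to miss one of the four notification options, and the compute agent then silently produces no usage samples. A small validation sketch using Python's standard configparser (not an official tool; the required values mirror the walkthrough above):

```python
import configparser

# Notification options the walkthrough requires in /etc/nova/nova.conf.
REQUIRED = {
    ("DEFAULT", "instance_usage_audit"): "True",
    ("DEFAULT", "instance_usage_audit_period"): "hour",
    ("DEFAULT", "notify_on_state_change"): "vm_and_task_state",
    ("DEFAULT", "notification_driver"): "messagingv2",
}

def missing_options(path, required=REQUIRED):
    """Return the (section, key) pairs that are absent or have the wrong value."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    missing = []
    for (section, key), want in required.items():
        if cfg.get(section, key, fallback=None) != want:
            missing.append((section, key))
    return missing

# Usage: missing_options("/etc/nova/nova.conf") should return [] once
# the edits above are in place.
```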

6. Enable block storage metering

Perform these steps on both the controller node and the block storage node.

[root@controller ~]# vim /etc/cinder/cinder.conf #edit /etc/cinder/cinder.conf, adding or modifying the following
[DEFAULT]
notification_driver = messagingv2
[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service #restart the block storage services on the controller node
On the storage node:
[root@block1 ~]# systemctl restart openstack-cinder-volume.service #restart the block storage service on the storage node

7. Enable object storage metering

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack role create ResellerAdmin #the telemetry service requires the ResellerAdmin role
[root@controller ~]# openstack role add --project service --user ceilometer ResellerAdmin
[root@controller ~]# yum install -y python-ceilometermiddleware
[root@controller ~]# vim /etc/swift/proxy-server.conf #edit /etc/swift/proxy-server.conf, adding or modifying the following
[filter:keystoneauth]
operator_roles = admin, user, ResellerAdmin #add the ResellerAdmin role
[pipeline:main] #add ceilometer to the pipeline
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging ceilometer proxy-server
[filter:ceilometer] #configure notifications
paste.filter_factory = ceilometermiddleware.swift:filter_factory
control_exchange = swift
url = rabbit://openstack:123456@controller:5672/
driver = messagingv2
topic = notifications
log_level = WARN
[root@controller ~]# systemctl restart openstack-swift-proxy.service #restart the object storage proxy service
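Getting the pipeline order right matters: the request-handling app is always the last element of a paste pipeline, so ceilometer must be inserted immediately before the terminal proxy-server entry. A minimal sketch of that insertion logic (plain string manipulation over the pipeline value shown above, not a swift API):

```python
def add_ceilometer(pipeline):
    """Insert the ceilometer filter just before the terminal proxy-server app."""
    parts = pipeline.split()
    if "ceilometer" in parts:
        return pipeline  # already present; nothing to do
    # The app that finally handles the request is the last pipeline element,
    # so new filters go in front of it.
    parts.insert(len(parts) - 1, "ceilometer")
    return " ".join(parts)

# The pipeline as it looks before this walkthrough's edit.
original = ("catch_errors gatekeeper healthcheck proxy-logging cache "
            "container_sync bulk ratelimit authtoken keystoneauth "
            "container-quotas account-quotas slo dlo versioned_writes "
            "proxy-logging proxy-server")
```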

8. Verify operation

Perform these steps on the controller node.

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# ceilometer meter-list | grep image #list available meters, filtered to the image service
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| Name                            | Type       | Unit      | Resource ID                                                           | User ID                          | Project ID                       |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| image                           | gauge      | image     | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | None                             | b1d045eb3d62421592616d56a69c4de3 |
| image.size                      | gauge      | B         | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | None                             | 
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
[root@controller ~]# glance image-list | grep 'cirros' | awk '{ print $2 }' #get the ID of the CirrOS image
68259f9f-c5c1-4975-9323-cef301cedb2b
[root@controller ~]# glance image-download 68259f9f-c5c1-4975-9323-cef301cedb2b > /tmp/cirros.img #download the CirrOS image from the image service
[root@controller ~]# ceilometer meter-list | grep image #list the meters again to validate that the download was metered
| image                           | gauge      | image     | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.download                  | delta      | B         | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.serve                     | delta      | B         | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.size                      | gauge      | B         | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
[root@controller ~]# ceilometer statistics -m image.download -p 60 #read usage statistics from the image.download meter
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| Period | Period Start               | Period End                 | Max        | Min        | Avg        | Sum        | Count | Duration | Duration Start             | Duration End               |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| 60     | 2018-02-16T12:47:46.351000 | 2018-02-16T12:48:46.351000 | 13287936.0 | 13287936.0 | 13287936.0 | 13287936.0 | 1     | 0.0      | 2018-02-16T12:48:23.052000 | 2018-02-16T12:48:23.052000 |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
[root@controller ~]# ll /tmp/cirros.img #check that the size of the downloaded image file matches the metered usage
-rw-r--r-- 1 root root 13287936 Feb 16 20:48 /tmp/cirros.img
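The final check simply compares the byte count in the Sum column of the image.download statistics (13287936) with the size of the file on disk. A tiny sketch of that comparison (the path and expected sum come from the session above):

```python
import os

def download_matches_meter(path, meter_sum):
    """True when the downloaded file's size equals the metered byte count."""
    return os.path.getsize(path) == int(meter_sum)

# In the session above:
#   download_matches_meter("/tmp/cirros.img", 13287936)
# should be True if telemetry recorded the download correctly.
```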