Deploying OpenStack Ocata on CentOS 7

1. Preface

This article records my own lab run-through. There were a few pitfalls along the way, so I am writing them down for later reference.

For space reasons some content (mostly configuration) is omitted; it is all in the official documentation. If you are actually installing and deploying, follow the official guide rather than this article.

Official install guide for the Ocata release:

https://docs.openstack.org/ocata/zh_CN/install-guide-rdo/common/conventions.html

Related Alibaba Cloud link:

https://www.aliyun.com/product/ecs?source=5176.11533457&userCode=kv73ipbs&type=copy

2. Environment

Following the official guide, deploying OpenStack on CentOS 7 needs only a controller node and a compute node; the network node functions are installed together with the controller node.

Version: openstack-ocata

2.1 Conventions

/etc/hosts

 

# controller

192.168.2.19 controller

# compute1

192.168.2.21 compute1

# block1

192.168.2.21 block1

# object1

192.168.2.21 object1

# object2

192.168.2.21 object2

 

Both the controller node and the compute node need two network interfaces: one for the management network and one for the external network. The interfaces are configured as follows:

 

Node         Network       IP address      Netmask          Default gateway
Controller   Management    192.168.2.19    255.255.255.0    192.168.2.1
Controller   External      10.1.12.10      255.255.255.0    10.1.12.1
Compute      Management    192.168.2.21    255.255.255.0    192.168.2.1
Compute      External      10.1.12.11      255.255.255.0    10.1.12.1

2.2 Disable SELinux and the firewall

Disable SELinux permanently: edit /etc/selinux/config and set

SELINUX=disabled

Disable it temporarily: setenforce 0

The permanent edit can also be done in one line:

sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

Disable the firewall permanently:

systemctl disable firewalld.service

Disable it temporarily:

systemctl stop firewalld.service

2.3 Time zone

timedatectl set-timezone Asia/Shanghai

2.4 Password table

Password name                          Description
Database password (no variable used)   Root password for the database (empty)
ADMIN_PASS                             Password of user admin (111111)
CINDER_DBPASS                          Database password for the Block Storage service
CINDER_PASS                            Password of Block Storage service user cinder
DASH_DBPASS                            Database password for the Dashboard
DEMO_PASS                              Password of user demo
GLANCE_DBPASS                          Database password for the Image service (glance)
GLANCE_PASS                            Password of Image service user glance (111111)
KEYSTONE_DBPASS                        Database password of the Identity service (keystone)
METADATA_SECRET                        Secret for the metadata proxy (111111)
NEUTRON_DBPASS                         Database password for the Networking service (neutron)
NEUTRON_PASS                           Password of Networking service user neutron (111111)
NOVA_DBPASS                            Database password for the Compute service (nova)
NOVA_PASS                              Password of Compute service user nova (111111)
PLACEMENT_PASS                         Password of the Placement service user placement (111111)
RABBIT_PASS                            Password of RabbitMQ user openstack (rabbit)

2.5 Local yum repository

rpm.tar.gz is made by tarring up the yum cache directory /var/cache/yum/$basearch/$releasever (here /var/cache/yum/x86_64/7); unpacking it elsewhere gives you the packages for a local yum repository. Don't forget to enable caching (keepcache=1) in /etc/yum.conf first.
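One way to wire the unpacked cache up as a repository is a .repo file like the following. This is only a sketch: the /opt/rpm path and the repo id are hypothetical, and it assumes `createrepo` has been run over the unpacked directory to generate repodata.

```ini
# /etc/yum.repos.d/local.repo -- hypothetical repo definition for the unpacked rpm.tar.gz
[local-ocata]
name=Local OpenStack Ocata package cache
baseurl=file:///opt/rpm
gpgcheck=0
enabled=1
```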

2.6 OpenStack packages

Enable the OpenStack repository:

yum install centos-release-openstack-ocata -y

yum upgrade -y

yum install python-openstackclient -y

2.7 SQL database

2.7.1 Install and configure components

yum install mariadb mariadb-server python2-PyMySQL -y

cat >/etc/my.cnf.d/openstack.cnf<<eof
[mysqld]
bind-address = 192.168.2.19
#
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
eof

systemctl enable mariadb.service
systemctl start mariadb.service
mysql_secure_installation

2.8 Message queue

The message queue runs on the controller node.

2.8.1 Install and configure components

yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

rabbitmqctl add_user openstack rabbit
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

2.9 Memcached

OpenStack services use Memcached to cache Identity service tokens.

The memcached service typically runs on the controller node.

For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

2.9.1 Install and configure components

yum install memcached python-memcached -y
sed -i 's#OPTIONS="-l 127.0.0.1,::1"#OPTIONS="-l 127.0.0.1,::1,controller"#g' /etc/sysconfig/memcached
systemctl enable memcached.service
systemctl start memcached.service

3. Identity service

3.1 Install and configure

3.1.1 Prerequisites

mysql -u root -proot
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
exit

Generate a random value to use as the administration token in the initial configuration:

openssl rand -hex 10

66ef83a4b21cebde6996

3.1.2 Install and configure components

yum install openstack-keystone httpd mod_wsgi -y

vim /etc/keystone/keystone.conf
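The keystone.conf edits are omitted above. Per the official Ocata guide, and reusing the database password granted in the prerequisites, they amount to roughly the following (a sketch; verify against the guide):

```ini
# /etc/keystone/keystone.conf -- sketch of the settings the official guide changes
[database]
# 'keystone' matches the password used in the GRANT statements above
connection = mysql+pymysql://keystone:keystone@controller/keystone

[token]
provider = fernet
```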

su -s  /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password 111111 \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

3.1.3 Configure the Apache HTTP server

echo ServerName controller >> /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

3.1.4 Finalize the installation

systemctl enable httpd.service
systemctl restart httpd.service

(VM snapshot 6 taken at this point.)

export OS_USERNAME=admin
export OS_PASSWORD=111111
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

..........................................................................................................
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

Use the token-based block below with caution, or you will get errors (as tested, the export block above is sufficient on its own):
export OS_TOKEN=66ef83a4b21cebde6996
export OS_USERNAME=admin
export OS_PASSWORD=111111
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
..........................................................................................................

3.2 Create a domain, projects, users, and roles

openstack project create --domain default \
  --description "Service Project" service 
openstack project create --domain default \
  --description "Demo Project" demo
openstack user create --domain default \
  --password-prompt demo
111111
111111
openstack role create user
openstack role add --project demo --user demo user
sed -i 's#admin_token_auth# #g' /etc/keystone/keystone-paste.ini
unset OS_AUTH_URL OS_PASSWORD
--------------------------------------------------------------------
As the admin user, request an authentication token:
openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue
--------------------------------------------------------------------
As the ``demo`` user, request an authentication token:
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue

3.3 Create OpenStack client environment scripts

3.3.1 Create the scripts

cat >admin-openrc<<eof
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=111111
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
eof

cat > demo-openrc<<eof
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=111111
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
eof

3.3.2 Use the scripts

. admin-openrc
openstack token issue

4. Image service

mysql -uroot -proot
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
exit

openstack user create --domain default --password-prompt glance
111111
111111
openstack role add --project service --user glance admin
openstack service create --name glance \
  --description "OpenStack Image" image
openstack endpoint create --region RegionOne \
  image public http://controller:9292
openstack endpoint create --region RegionOne \
  image internal http://controller:9292
openstack endpoint create --region RegionOne \
  image admin http://controller:9292

4.1 Install and configure components

yum install openstack-glance -y

Configuration files:

/etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:glance@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 111111

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
##############################################
/etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:glance@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 111111

[paste_deploy]
flavor = keystone

Populate the Image service database:

su -s /bin/sh -c "glance-manage db_sync" glance

Finalize the installation

systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
systemctl start openstack-glance-api.service \
  openstack-glance-registry.service

4.2 Verify operation

Verify the Image service using `CirrOS`, a small Linux image that helps you test your OpenStack deployment.

 

. admin-openrc
yum install wget -y
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

Confirm the upload of the image and validate its attributes:

openstack image list

5. Compute service

5.1 Install and configure the controller node

5.1.1 Prerequisites

mysql -uroot -proot
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'nova';
exit
. admin-openrc
openstack user create --domain default \
  --password-prompt nova
111111
111111
openstack role add --project service --user nova admin
openstack service create --name nova \
  --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
openstack user create --domain default --password-prompt placement
111111
111111
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

5.1.2 Install and configure components

yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y
/etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:rabbit@controller
my_ip = 192.168.2.19
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[database]
connection = mysql+pymysql://nova:nova@controller/nova

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 111111

[vnc]
enabled = true
# ...
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip

[glance]
# ...
api_servers=http://controller:9292

[oslo_concurrency]
# ...
lock_path=/var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 111111
Add the following to /etc/httpd/conf.d/00-nova-placement-api.conf:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
systemctl restart httpd.service
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
This prints the following warnings, which can be ignored:
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
nova-manage cell_v2 list_cells

5.1.3 Finalize the installation

systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

5.2 Install and configure a compute node

The compute service log is /var/log/nova/nova-compute.log.

5.2.1 Install and configure components

yum install openstack-nova-compute
/etc/nova/nova.conf
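The nova.conf edits for the compute node are omitted above. Below is a sketch based on the official Ocata guide, reusing this article's passwords and IPs; verify each value against the guide. The qemu virt_type is an assumption about this virtualized lab.

```ini
# /etc/nova/nova.conf on compute1 -- sketch of the official guide's edits
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:rabbit@controller
my_ip = 192.168.2.21
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 111111

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 111111

[libvirt]
# use qemu when `egrep -c '(vmx|svm)' /proc/cpuinfo` returns 0, kvm otherwise
virt_type = qemu
```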

5.2.2 Finalize the installation

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service &

Important: run the following commands on the controller node.

. admin-openrc
openstack hypervisor list
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

This produces output like the following:

Found 2 cell mappings.

Skipping cell0 since it does not contain hosts.

Getting compute nodes from cell 'cell1': e46118d6-f516-4249-8e11-559f1a2602be

Found 1 computes in cell: e46118d6-f516-4249-8e11-559f1a2602be

Checking host mapping for compute host 'compute1': 9460baa8-d770-4841-9934-2e02df2b1ec9

Creating host mapping for compute host 'compute1': 9460baa8-d770-4841-9934-2e02df2b1ec9
To have new compute hosts discovered automatically, add an interval to /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300

5.3 Verify operation

Run these commands on the controller node.

. admin-openrc
openstack compute service list
openstack catalog list
openstack image list
nova-status upgrade check

6. Networking service

6.1 Install and configure the controller node

6.1.1 Prerequisites

mysql -uroot -proot
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'neutron';
exit
. admin-openrc
openstack user create --domain default --password-prompt neutron
111111
111111

openstack role add --project service --user neutron admin
Create the ``neutron`` service entity:
openstack service create --name neutron \
  --description "OpenStack Networking" network
Create the Networking service API endpoints:
openstack endpoint create --region RegionOne \
  network public http://controller:9696
openstack endpoint create --region RegionOne \
  network internal http://controller:9696
openstack endpoint create --region RegionOne \
  network admin http://controller:9696

6.1.2 Configure networking options

Networking option 1: provider networks

Networking option 2: self-service networks (the option used in this deployment)

Install the components

yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables -y

Configure the server component

/etc/neutron/neutron.conf

[database]
# ...
connection = mysql+pymysql://neutron:neutron@controller/neutron

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:rabbit@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 111111

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in

vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
# ...
flat_networks = provider
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true

Configure the Linux bridge agent

/etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = true
local_ip = 192.168.2.19
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the layer-3 agent

/etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge

Configure the DHCP agent

/etc/neutron/dhcp_agent.ini
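The dhcp_agent.ini edits are omitted above; per the official Ocata guide they amount to the following (a sketch, to be verified against the guide):

```ini
# /etc/neutron/dhcp_agent.ini -- sketch of the official guide's edits
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```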

6.1.3 Configure the metadata agent

/etc/neutron/metadata_agent.ini

A metadata proxy shared secret must be set here. This is where it is first defined, so simply choose one; in this deployment it is 111111.
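With that secret, the metadata_agent.ini edits from the official guide come down to (a sketch):

```ini
# /etc/neutron/metadata_agent.ini -- sketch of the official guide's edits
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = 111111
```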

6.1.4 Configure the Compute service to use the Networking service

/etc/nova/nova.conf
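The nova.conf edits are omitted above. Per the official guide, the controller's [neutron] section, reusing this article's passwords and metadata secret, is roughly:

```ini
# /etc/nova/nova.conf on the controller -- sketch of the [neutron] section
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111
service_metadata_proxy = true
metadata_proxy_shared_secret = 111111
```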

6.1.5 Finalize the installation

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service

For both networking options (that is, whether you chose option 1 or option 2):

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
For networking option 2, also enable the layer-3 service and configure it to start at boot:
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

6.2 Install and configure a compute node

6.2.1 Install the components

yum install openstack-neutron-linuxbridge ebtables ipset -y

6.2.2 Configure the common component

Configuration of the Networking common component includes the authentication mechanism, message queue, and plug-in.

/etc/neutron/neutron.conf
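The compute node's neutron.conf edits are omitted above. A sketch per the official guide follows; note that, unlike the controller, the compute node needs no [database] connection, since it does not access the database directly.

```ini
# /etc/neutron/neutron.conf on compute1 -- sketch of the official guide's edits
[DEFAULT]
transport_url = rabbit://openstack:rabbit@controller
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```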

6.2.3 Configure networking options

Choose the same networking option that you chose on the controller node.

Networking option 1: provider networks

Networking option 2: self-service networks (the option used in this deployment)

Configure the Linux bridge agent

/etc/neutron/plugins/ml2/linuxbridge_agent.ini
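A sketch of the compute node's Linux bridge agent settings, mirroring the controller's but with the compute node's management IP; the ens33 interface name is this lab's, so substitute your own:

```ini
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini on compute1 -- sketch
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = true
local_ip = 192.168.2.21
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```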

6.2.4 Configure the Compute service to use the Networking service

/etc/nova/nova.conf
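On the compute node the [neutron] section of nova.conf is the same as the controller's except that the two metadata-proxy options are omitted (a sketch per the official guide):

```ini
# /etc/nova/nova.conf on compute1 -- sketch of the [neutron] section
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111
```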

6.2.5 Finalize the installation

Restart the Compute service:

systemctl restart openstack-nova-compute.service

Enable the Linux bridge agent to start at boot and start it:

systemctl enable neutron-linuxbridge-agent.service

systemctl start neutron-linuxbridge-agent.service

6.3 Verify operation

Note: run these commands on the controller node.

. admin-openrc
openstack extension list --network

Use the verification steps that match the networking option you chose; here that is option 2, self-service networks.

The output should list four agents on the controller node and one agent on each compute node.

openstack network agent list

7. Dashboard

7.1 Install and configure

This section describes how to install and configure the Dashboard on the controller node.

7.2 Install and configure components

yum install openstack-dashboard -y
/etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['horizon.example.com', 'localhost','192.168.2.19']
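Besides the two settings above, the official guide adjusts several more values in local_settings. The file is Python, so these are plain assignments; the following is a sketch of the guide's edits, to be verified against the official documentation:

```python
# Fragment of /etc/openstack-dashboard/local_settings (sketch of the guide's edits)
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Cache sessions through the memcached instance on the controller
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
```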

7.3 Finalize the installation

systemctl restart httpd.service memcached.service

Browse to http://192.168.2.19/dashboard
Log in as admin / 111111.

Troubleshooting:
The Dashboard page fails to load after installation.

[root@controller ~]# cd /var/log/httpd/
[root@controller httpd]# less error_log
[Wed Aug 15 04:55:22.431328 2018] [core:error] [pid 109774] [client 192.168.2.1:9918] Script timed out before returning headers: django.wsgi
[Wed Aug 15 04:56:15.073662 2018] [core:error] [pid 109701] [client 192.168.2.1:9748] End of script output before headers: django.wsgi
Edit /etc/httpd/conf.d/openstack-dashboard.conf and add one line below WSGISocketPrefix run/wsgi:
WSGIApplicationGroup %{GLOBAL}

Save and exit, then restart the httpd service.

 

8. Block Storage service

This section describes how to install and configure the Block Storage service (cinder) on the controller node.

The service requires at least one additional storage node to provide volumes to instances.

8.1 Install and configure the controller node

mysql -u root -proot
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
exit
openstack user create --domain default --password-prompt cinder
111111
111111
openstack role add --project service --user cinder admin
Note: the Block Storage service requires two service entities.
openstack service create --name cinderv2   --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3   --description "OpenStack Block Storage" volumev3

Note: the Block Storage service requires an endpoint for each service entity.

openstack endpoint create --region RegionOne   volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne   volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne   volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne   volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne   volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne   volumev3 admin http://controller:8776/v3/%\(project_id\)s

8.1.1 Install and configure components

yum install openstack-cinder -y
/etc/cinder/cinder.conf

[database]
# ...
connection = mysql+pymysql://cinder:cinder@controller/cinder
[DEFAULT]
# ...
transport_url = rabbit://openstack:rabbit@controller
auth_strategy = keystone
my_ip = 192.168.2.19

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 111111

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

Populate the Block Storage database:

su -s /bin/sh -c "cinder-manage db sync" cinder

8.1.2 Configure Compute to use Block Storage

vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

8.1.3 Finalize the installation

systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

8.2 Install and configure a storage node

8.2.1 Prerequisites

Note: perform these steps on the storage node.

[root@compute1 ~]#
yum install lvm2 -y
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

 

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

Only instances can access the Block Storage volume group, but the underlying operating system (here CentOS) manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes.

If projects use LVM on their volumes, the scanning tool detects these volumes and tries to cache them, which can cause a variety of problems with both the underlying operating system and project volumes. You must therefore reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit /etc/lvm/lvm.conf and complete the following:

vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sdb/", "r/.*/"]
Or, if sda is also an LVM volume:
        filter = [ "a/sda/", "a/sdb/", "r/.*/"]

[PS]

On this CentOS 7 setup, creating the PV failed by default (reason still to be confirmed). Workaround below.

Default behavior:

[root@compute1 ~]# pvcreate /dev/sdb    
  Device /dev/sdb excluded by a filter.

Workaround:

[root@compute1 ~]# dd if=/dev/urandom of=/dev/sdb bs=512 count=64
64+0 records in
64+0 records out
32768 bytes (33 kB) copied, 0.00760562 s, 4.3 MB/s
[root@compute1 ~]# pvcreate /dev/sdb                             
  Physical volume "/dev/sdb" successfully created.

Further reading: http://www.voidcn.com/article/p-uxhrkuzs-bsd.html

8.2.2 Install and configure components

yum install openstack-cinder targetcli python-keystone
vim /etc/cinder/cinder.conf
[database]
# ...
connection = mysql+pymysql://cinder:cinder@controller/cinder
[DEFAULT]
# ...
transport_url = rabbit://openstack:rabbit@controller
auth_strategy = keystone
my_ip=192.168.2.21
enabled_backends = lvm
glance_api_servers = http://controller:9292

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 111111

If the ``[lvm]`` section does not exist, create it:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

8.2.3 Finalize the installation

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

8.3 Verify operation

Note: run these commands on the controller node.

 

. admin-openrc
openstack volume service list

9. Other services

9.1 Bare Metal service (ironic)

The Bare Metal service is a collection of components that provides support to manage and provision physical machines.

9.2 Container Infrastructure Management service (magnum)

The Container Infrastructure Management service (magnum) is an OpenStack API service that makes container orchestration engines (COEs) such as Docker Swarm, Kubernetes, and Mesos available as first-class resources in OpenStack.

9.3 Database service (trove)

The Database service (trove) provides cloud provisioning functionality for database engines.

9.4 DNS service (designate)

The DNS service (designate) provides cloud provisioning functionality for DNS Zones and Recordsets.

9.5 Key Manager service (barbican)

The Key Manager service provides a RESTful API for the secure storage of secret data such as passphrases, encryption keys, and X.509 certificates.

9.6 Messaging service (zaqar)

The Messaging service allows developers to share data between distributed application components to perform different tasks, without losing messages or requiring each component to always be available.

9.7 Object Storage service (swift)

The Object Storage service (swift) provides access to storage and retrieval of objects via a REST API.

9.8 Orchestration service (heat)

The Orchestration service (heat) uses a Heat Orchestration Template (HOT) to create and manage cloud resources.

9.9 Shared File Systems service (manila)

The Shared File Systems service (manila) provides coordinated access to shared or distributed file systems.

9.10 Telemetry Alarming service (aodh)

The Telemetry Alarming service triggers alarms when collected metering or event data matches predefined rules.

9.11 Telemetry Data Collection service (ceilometer)

The Telemetry Data Collection service provides the following functions:

  • Efficiently polls metering data related to OpenStack services.
  • Collects event and metering data by monitoring notifications sent from services.
  • Publishes collected data to various targets, including data stores and message queues.

10. Launch an instance

Warning: you must create the provider network before creating a private project network (see launch-instance-networks-provider in the official guide).

10.1 Create the provider network

10.1.1 Create the network

. admin-openrc
neutron net-create --shared --provider:physical_network provider \
  --provider:network_type flat provider

10.1.2 Create a subnet

neutron subnet-create --name provider --allocation-pool start=10.2.2.178,end=10.2.2.190 \
 --disable-dhcp --gateway 10.2.2.1 provider 10.2.2.0/24

10.2 Create the self-service network

On the controller node, source the demo credentials to gain access to user-only CLI commands:

. demo-openrc
openstack network create selfservice
openstack subnet create --network selfservice \
  --dns-nameserver 8.8.4.4 --gateway 172.16.1.1 \
  --subnet-range 172.16.1.0/24 selfservice

10.3 Create the router

. admin-openrc
. demo-openrc
openstack router create router
neutron router-interface-add router selfservice
neutron router-gateway-set router provider

10.4 Verify operation

. admin-openrc
ip netns
neutron router-port-list router

...

For the remaining steps, see the official guide: https://docs.openstack.org/ocata/zh_CN/install-guide-rdo/common/conventions.html
