OpenStack Ocata Multi-Node Distributed Deployment

1 Installation Environment

1.1 Installation Image Version

A minimal installation is recommended; this guide uses CentOS-7-x86_64-Minimal-1511.

1.2 Network Planning

This guide uses one controller node (controller3), one compute node (compute11), and one storage node (cinder); all passwords are pass123456. Additional compute nodes are configured in essentially the same way, but each compute node's hostname and IP must be unique.

Each node has two NICs: one on the 192.168.32.0/24 segment with external network access, and one on the 172.16.1.0/24 segment used as the internal management network.

NIC configuration depends on your environment; look up the configuration procedure for virtual or physical machines as needed.

The IP addresses of the nodes configured in this guide are as follows:

Node name     Provider network   Self-service network
controller3   192.168.32.134     172.16.1.136
compute11     192.168.32.129     172.16.1.130
cinder        192.168.32.139     172.16.1.138

2 Prerequisites

2.1 Configure a domestic yum mirror

On all nodes:

# yum install -y wget
# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# wget -P /etc/yum.repos.d/ http://mirrors.163.com/.help/CentOS7-Base-163.repo
# yum clean all
# yum makecache

2.2 Install common tools

On all nodes:

# yum install -y vim net-tools epel-release python-pip

2.3 Disable SELinux

On all nodes:

Edit the /etc/selinux/config file:

SELINUX=disabled
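The change takes effect after a reboot. As an optional shortcut (standard CentOS tooling, not part of the original steps), you can also switch SELinux to permissive mode immediately and confirm the runtime state:

# setenforce 0
# getenforce
Permissive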

2.4 Edit hosts and set hostnames

On all nodes:

Edit /etc/hosts:

# controller3
192.168.32.134 controller3
# compute11
192.168.32.129 compute11
# cinder
192.168.32.139 cinder

Set the hostname: on each host, replace servername with the corresponding node name (controller3, compute11, or cinder):

hostnamectl set-hostname servername
systemctl restart systemd-hostnamed

Verification: from each node, ping every hostname to confirm connectivity.
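For example, from controller3 (using the names defined in /etc/hosts above):

$ ping -c 4 compute11
$ ping -c 4 cinder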

3 OpenStack Environment

3.1 NTP

  • Install and configure

On the controller node:

# yum install -y chrony

Edit /etc/chrony.conf and add:

allow 192.168.32.0/24

Start the NTP service and enable it to start at boot:

# systemctl enable chronyd.service
# systemctl start chronyd.service

On all nodes other than the controller:

# yum install -y chrony

Edit /etc/chrony.conf, comment out all other server entries, and add:

server controller3 iburst
  • Start the service and enable it at boot

Change the time zone:

# timedatectl set-timezone Asia/Shanghai

Start the NTP service and enable it to start at boot:

# systemctl enable chronyd.service
# systemctl start chronyd.service

Verification: run chronyc sources on all nodes; an asterisk (*) in the MS column indicates that the node is synchronized with the corresponding Name/IP address.
If the time is not synchronized, restart the service:

# systemctl restart chronyd.service

3.2 Enable the OpenStack repositories

On all nodes:

# yum install -y centos-release-openstack-ocata
# yum install -y https://rdoproject.org/repos/rdo-release.rpm
# yum install -y python-openstackclient

3.3 Database

On the controller node:

# yum install -y mariadb mariadb-server python2-PyMySQL

Create and edit the /etc/my.cnf.d/openstack.cnf file, commenting out the bind-address line:

[mysqld]
#bind-address = 127.0.0.1

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the database service and enable it at boot:

# systemctl enable mariadb.service
# systemctl start mariadb.service

Run the database security script to set a password for the database root user (the initial root password is empty):

mysql_secure_installation
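As a quick sanity check (not part of the original steps), confirm the new root password works by logging in and querying the server version:

$ mysql -u root -p -e "SELECT VERSION();"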

3.4 Message Queue

On the controller node:

# yum install -y rabbitmq-server

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

# rabbitmqctl add_user openstack pass123456
Creating user "openstack" ...

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
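Optionally, confirm the user and its permissions with the standard rabbitmqctl listing subcommands:

# rabbitmqctl list_users
# rabbitmqctl list_permissions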

3.5 Memcached (token caching)

On the controller node:

# yum install -y memcached python-memcached

Edit the /etc/sysconfig/memcached file:

OPTIONS="-l 127.0.0.1,::1,controller3"

Start the memcached service and enable it at boot:

# systemctl enable memcached.service
# systemctl start memcached.service
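Since net-tools was installed in section 2.2, you can optionally check that memcached is listening on port 11211:

# netstat -tlnp | grep 11211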

4 Identity Service

On the controller node:

4.1 Prerequisites

First, create a database for the Identity service. Log in to the database as the root user:

$ mysql -u root -p

Create the database and grant the user privileges:

MariaDB [(none)]> CREATE DATABASE keystone;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller3' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'pass123456';

MariaDB [(none)]> exit

4.2 Install and configure components

# yum install -y openstack-keystone httpd mod_wsgi

Edit the /etc/keystone/keystone.conf configuration file.
Configure database access:

[database]
# ...
connection = mysql+pymysql://keystone:pass123456@controller3/keystone

Configure the Fernet token provider:

[token]
# ...
provider = fernet

Initialize the Identity service database and the Fernet key repositories:

# su -s /bin/sh -c "keystone-manage db_sync" keystone

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service:

# keystone-manage bootstrap --bootstrap-password pass123456 \
  --bootstrap-admin-url http://controller3:35357/v3/ \
  --bootstrap-internal-url http://controller3:5000/v3/ \
  --bootstrap-public-url http://controller3:5000/v3/ \
  --bootstrap-region-id RegionOne

4.3 Configure the Apache HTTP server

Edit /etc/httpd/conf/httpd.conf and set ServerName to the controller node:

ServerName controller3

Create a symlink:

# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

4.4 Finalize the installation

Start the Apache HTTP service and enable it at boot:

# systemctl enable httpd.service
# systemctl start httpd.service

4.5 Create OpenStack client environment scripts

The Identity service is driven by a combination of environment variables and command options. For efficiency and convenience, create client environment scripts for the admin and demo projects and users; these scripts load the appropriate credentials for client operations.

Create and edit the admin-openrc file, adding the following:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=pass123456
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://controller3:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Create and edit the demo-openrc file, adding the following:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=pass123456
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://controller3:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Source the admin credentials script, . admin-openrc, to load the environment variables.

4.6 Create a domain, projects, users, and roles

This guide uses a service project that contains a unique user for each service you add. Create the service project:

$ openstack project create --domain default \
  --description "Service Project" service

Regular (non-admin) tasks should use an unprivileged project and user. As an example, this guide creates the demo project and user:

$ openstack project create --domain default \
  --description "Demo Project" demo

Note: do not repeat this step when creating additional users for this project.

Create the demo user and the user role:

$ openstack user create --domain default \
  --password-prompt demo
User Password:
Repeat User Password:

$ openstack role create user

Add the user role to the demo user in the demo project:

$ openstack role add --project demo --user demo user
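To double-check the assignment, you can list the role assignments for the demo user; the --names flag is assumed to be supported by this client version:

$ openstack role assignment list --user demo --project demo --names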

4.7 驗證操做

For security reasons, disable the temporary admin token authentication mechanism.

Edit the /etc/keystone/keystone-paste.ini file and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.

As the admin user, request an authentication token:

$ openstack --os-auth-url http://controller3:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue

As the demo user, request an authentication token:

$ openstack --os-auth-url http://controller3:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue

With the credentials loaded, request an authentication token:

$ openstack token issue

+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-02-12T20:44:35.659723Z                                     |
| id         | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
|            | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
|            | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E       |
| project_id | 343d245e850143a096806dfaefa9afdc                                |
| user_id    | ac3377633149401296f6c0d92d79dc16                                |
+------------+-----------------------------------------------------------------+

5 Image Service

On the controller node:

5.1 Prerequisites

Before installing and configuring the Image service, you must create the database, service credentials, and API endpoints.

5.1.1 Database

Connect to the database server as root, create the glance database, and grant appropriate privileges:

$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'pass123456';

MariaDB [(none)]> exit

5.1.2 Service credentials

$ . admin-openrc

$ openstack user create --domain default --password-prompt glance

User Password:
Repeat User Password:

$ openstack role add --project service --user glance admin

$ openstack service create --name glance \
  --description "OpenStack Image" image

5.1.3 API endpoints

$ openstack endpoint create --region RegionOne \
  image public http://controller3:9292
  
$ openstack endpoint create --region RegionOne \
  image internal http://controller3:9292
 
$ openstack endpoint create --region RegionOne \
  image admin http://controller3:9292

5.2 Install and configure components

Install the package:

# yum install -y openstack-glance

Edit the /etc/glance/glance-api.conf file:

[database]
# ...
connection = mysql+pymysql://glance:pass123456@controller3/glance

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = pass123456

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Note: comment out or remove any other options in the [keystone_authtoken] section.

Edit the /etc/glance/glance-registry.conf file:

[database]
# ...
connection = mysql+pymysql://glance:pass123456@controller3/glance

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = pass123456

[paste_deploy]
# ...
flavor = keystone

Populate the Image service database:

# su -s /bin/sh -c "glance-manage db_sync" glance

5.3 Finalize the installation

Start the Image services and enable them at boot:

# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service

5.4 驗證操做

Verify operation using CirrOS, a small Linux image, to test the OpenStack deployment.

$ . admin-openrc

$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

$ openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
  
$ openstack image list

+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+

6 Compute Service

6.1 Install and configure the controller node

On the controller node:

6.1.1 Prerequisites

Before installing and configuring the Compute service, you must create the databases, service credentials, and API endpoints.

  • Databases

Connect to the database server as root, create the following databases, and grant appropriate privileges:

$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'pass123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'pass123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'pass123456';

MariaDB [(none)]> exit
  • Service credentials

Compute service credentials:

$ openstack user create --domain default --password-prompt nova

User Password:
Repeat User Password:

$ openstack role add --project service --user nova admin

$ openstack service create --name nova \
  --description "OpenStack Compute" compute

Placement service credentials:

$ openstack user create --domain default --password-prompt placement

User Password:
Repeat User Password:

$ openstack role add --project service --user placement admin

$ openstack service create --name placement --description "Placement API" placement
  • API endpoints

Compute service API endpoints:

$ openstack endpoint create --region RegionOne \
  compute public http://controller3:8774/v2.1

$ openstack endpoint create --region RegionOne \
  compute internal http://controller3:8774/v2.1
  
$ openstack endpoint create --region RegionOne \
  compute admin http://controller3:8774/v2.1

Placement API endpoints:

$ openstack endpoint create --region RegionOne placement public http://controller3:8778

$ openstack endpoint create --region RegionOne placement internal http://controller3:8778

$ openstack endpoint create --region RegionOne placement admin http://controller3:8778

6.1.2 Install and configure components

Install the packages:

# yum install -y openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api

Edit the /etc/nova/nova.conf file:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:pass123456@controller3

my_ip = 172.16.1.136

use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
# ...
connection = mysql+pymysql://nova:pass123456@controller3/nova_api

[database]
# ...
connection = mysql+pymysql://nova:pass123456@controller3/nova

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = pass123456

[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
# ...
api_servers = http://controller3:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller3:35357/v3
username = placement
password = pass123456

Edit the /etc/httpd/conf.d/00-nova-placement-api.conf file and add:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart the httpd service:

# systemctl restart httpd
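As an optional check, the Placement API should now answer HTTP requests; an unauthenticated GET of the root URL is expected to return a small JSON version document rather than a 403 error:

$ curl http://controller3:8778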

Populate the nova-api database:

# su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650

Populate the nova database (warning messages can be ignored):

# su -s /bin/sh -c "nova-manage db sync" nova

Verify that the nova cell0 and cell1 cells are registered correctly:

# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name  | UUID                                 |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+

6.1.3 Finalize the installation

Start the Compute services and enable them at boot:

# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

6.2 Install and configure the compute node

On all compute nodes:

6.2.1 Install and configure components

Install the package:

# yum install -y openstack-nova-compute

Edit the /etc/nova/nova.conf file:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:pass123456@controller3

my_ip = 172.16.1.130

use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = pass123456

[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller3:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller3:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller3:35357/v3
username = placement
password = pass123456

6.2.2 Finalize the installation

Check whether your compute node supports hardware virtualization:

$ egrep -c '(vmx|svm)' /proc/cpuinfo

If the command returns a value of one or greater, no extra configuration is needed. Otherwise, you must configure libvirt to use QEMU instead of KVM.

Edit the /etc/nova/nova.conf file:

[libvirt]
# ...
virt_type = qemu

Start the Compute service and its dependencies, and enable them at boot:

# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

6.2.3 Add compute nodes to the cell database

Note: run the following commands on the controller node.

Confirm which compute hosts exist in the database:

$ . admin-openrc

$ openstack hypervisor list
+----+---------------------+-----------------+-----------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP   | State |
+----+---------------------+-----------------+-----------+-------+
|  1 | compute1            | QEMU            | 10.0.0.31 | up    |
+----+---------------------+-----------------+-----------+-------+

Discover the compute hosts:

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3

Note: whenever you add a new compute node, you must run nova-manage cell_v2 discover_hosts on the controller node to register it, or set the following in /etc/nova/nova.conf:

[scheduler]
discover_hosts_in_cells_interval = 300

6.3 驗證操做

On the controller node:

$ . admin-openrc

$ openstack compute service list

+----+--------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary             | Host       | Zone     | Status  | State | Updated At                 |
+----+--------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth   | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
|  2 | nova-scheduler     | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
|  3 | nova-conductor     | controller | internal | enabled | up    | 2016-02-09T23:11:16.000000 |
|  4 | nova-compute       | compute1   | nova     | enabled | up    | 2016-02-09T23:11:20.000000 |
+----+--------------------+------------+----------+---------+-------+----------------------------+

$ openstack catalog list

+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| keystone  | identity  | RegionOne                               |
|           |           |   public: http://controller:5000/v3/    |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/  |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/    |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+

$ openstack image list

+--------------------------------------+-------------+-------------+
| ID                                   | Name        | Status      |
+--------------------------------------+-------------+-------------+
| 9a76d9f9-9620-4f2e-8c69-6c5691fae163 | cirros      | active      |
+--------------------------------------+-------------+-------------+

# nova-status upgrade check

+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+

7 Networking Service

7.1 Install and configure the controller node

On the controller node:

7.1.1 Prerequisites

Before configuring the OpenStack Networking service, you must create the database, service credentials, and API endpoints.

  • Database

Connect to the database server as root, create the neutron database, and grant appropriate privileges:

$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE neutron;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'pass123456';
  
MariaDB [(none)]> exit
  • Service credentials

Create the neutron service credentials and service entity:

$ . admin-openrc

$ openstack user create --domain default --password-prompt neutron

User Password:
Repeat User Password:

$ openstack role add --project service --user neutron admin

$ openstack service create --name neutron \
  --description "OpenStack Networking" network
  • API endpoints

Create the Networking service API endpoints:

$ openstack endpoint create --region RegionOne \
  network public http://controller3:9696

$ openstack endpoint create --region RegionOne \
  network internal http://controller3:9696

$ openstack endpoint create --region RegionOne \
  network admin http://controller3:9696

7.1.2 Configure networking options

This guide uses the self-service networks option.

  • Install the components
# yum install -y openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
  • Configure the server component

Edit the /etc/neutron/neutron.conf configuration file:

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true

transport_url = rabbit://openstack:pass123456@controller3

auth_strategy = keystone

notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
# ...
connection = mysql+pymysql://neutron:pass123456@controller3/neutron

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = pass123456

[nova]
# ...
auth_url = http://controller3:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = pass123456

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
  • Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini configuration file:

[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
# ...
flat_networks = provider

[ml2_type_vxlan]
# ...
vni_ranges = 1:1000

[securitygroup]
# ...
enable_ipset = true

Warning: after you configure the ML2 plug-in, removing values from the type_drivers option can lead to database inconsistency.

  • Configure the Linux bridge agent

The Linux bridge agent builds layer-2 virtual networking for instances and handles security group rules.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini configuration file:

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = 172.16.1.136
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Replace PROVIDER_INTERFACE_NAME with the name of the underlying physical provider network interface.

172.16.1.136 is the management network IP address of the controller node.

  • Configure the layer-3 agent

Edit the /etc/neutron/l3_agent.ini configuration file:

[DEFAULT]
# ...
interface_driver = linuxbridge
  • Configure the DHCP agent

The DHCP agent provides DHCP services for virtual networks.

Edit the /etc/neutron/dhcp_agent.ini configuration file:

[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

7.1.3 Configure the metadata agent

Edit the /etc/neutron/metadata_agent.ini configuration file:

[DEFAULT]
# ...
nova_metadata_ip = controller3
metadata_proxy_shared_secret = pass123456

7.1.4 Configure the Compute service to use the Networking service

Edit the /etc/nova/nova.conf configuration file:

[neutron]
# ...
url = http://controller3:9696
auth_url = http://controller3:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = pass123456
service_metadata_proxy = true
metadata_proxy_shared_secret = pass123456

7.1.5 Finalize the installation

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  
# systemctl restart openstack-nova-api.service

# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
  
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service

7.2 Install and configure the compute node

On the compute node:

7.2.1 Install the components

# yum install -y openstack-neutron-linuxbridge ebtables ipset

7.2.2 Configure the common component

Configuration of the Networking common components includes the authentication mechanism, message queue, and plug-in.

Edit the /etc/neutron/neutron.conf configuration file:

In the [database] section, comment out any connection options, because compute nodes do not access the database directly.

[DEFAULT]
# ...
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = pass123456

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

7.2.3 Configure networking options

To match the controller node, this guide uses the self-service networks option here as well.

7.2.3.1 Configure the Linux bridge agent

The Linux bridge agent builds layer-2 virtual networking for instances and handles security group rules.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini configuration file:

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = 172.16.1.130
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Replace PROVIDER_INTERFACE_NAME with the name of the underlying physical provider network interface.

172.16.1.130 is the management network IP address of the compute node.

7.2.4 Configure the Compute service to use the Networking service

Edit the /etc/nova/nova.conf configuration file:

[neutron]
# ...
url = http://controller3:9696
auth_url = http://controller3:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = pass123456

7.2.5 Finalize the installation

Restart the Compute service, then start the Linux bridge agent and enable it at boot:

# systemctl restart openstack-nova-compute.service

# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service

7.3 驗證操做

On the controller node:

$ . admin-openrc

$ openstack extension list --network

$ openstack network agent list

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | None              | True  | UP    | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent          |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

8 Dashboard

On the controller node:

8.1 Install and configure components

Install the package:

# yum install -y openstack-dashboard

Edit the /etc/openstack-dashboard/local_settings configuration file:

OPENSTACK_HOST = "controller3"

ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller3:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

8.2 Finalize the installation

Restart the web server and the session storage service:

# systemctl restart httpd.service memcached.service

8.3 驗證操做

Open http://192.168.32.134/dashboard in a browser to access the dashboard.

Log in using the admin or demo user credentials and the default domain.

9 Block Storage

9.1 Install and configure the controller node

On the controller node:

9.1.1 Prerequisites

  • Database
$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller3' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit
  • Service credentials
$ openstack user create --domain default --password-prompt cinder

User Password:
Repeat User Password:

$ openstack role add --project service --user cinder admin

$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
  
$ openstack service create --name cinderv3 \
  --description "OpenStack Block Storage" volumev3
  • API endpoints
$ openstack endpoint create --region RegionOne \
  volumev2 public http://controller3:8776/v2/%\(project_id\)s

$ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller3:8776/v2/%\(project_id\)s
  
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller3:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev3 public http://controller3:8776/v3/%\(project_id\)s
  
$ openstack endpoint create --region RegionOne \
  volumev3 internal http://controller3:8776/v3/%\(project_id\)s
  
$ openstack endpoint create --region RegionOne \
  volumev3 admin http://controller3:8776/v3/%\(project_id\)s

9.1.2 Install and configure components

  • Install the package
# yum install -y openstack-cinder
  • Configure the service components
    Edit the /etc/cinder/cinder.conf configuration file:
[DEFAULT]
# ...
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone
my_ip = 172.16.1.136


[database]
# ...
connection = mysql+pymysql://cinder:pass123456@controller3/cinder

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = pass123456

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
  • Populate the database
# su -s /bin/sh -c "cinder-manage db sync" cinder

9.1.3 Configure the Compute service to use Block Storage

Edit the /etc/nova/nova.conf configuration file:

[cinder]
os_region_name = RegionOne

9.1.4 Finalize the installation

# systemctl restart openstack-nova-api.service
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

9.2 Install and configure the storage node

On the storage node:

9.2.1 Prerequisites

  • Install the packages the storage service depends on
# yum install lvm2

# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
  • Create the LVM physical volume and volume group
# pvcreate /dev/sdb

# vgcreate cinder-volumes /dev/sdb
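The upstream installation guide also suggests restricting LVM scanning so the host does not scan volumes that instances create on /dev/sdb. A minimal sketch for the devices section of /etc/lvm/lvm.conf, assuming the operating system disk is /dev/sda and the volume disk is /dev/sdb (adjust to your disk layout):

devices {
    filter = [ "a/sda/", "a/sdb/", "r/.*/"]
}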

9.2.2 Install and configure components

  • Install the packages
# yum install openstack-cinder targetcli python-keystone
  • Configure the service components

Edit the /etc/cinder/cinder.conf configuration file, replacing MANAGEMENT_INTERFACE_IP_ADDRESS with the storage node's management network IP (172.16.1.138 in this guide):

[DEFAULT]
# ...
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
enabled_backends = lvm
glance_api_servers = http://controller3:9292

[database]
# ...
connection = mysql+pymysql://cinder:pass123456@controller3/cinder

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = pass123456

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

9.2.3 Finalize the installation

# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service

9.3 驗證操做

$ . admin-openrc

$ openstack volume service list

+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2016-09-30T02:27:41.000000 |
| cinder-volume    | block@lvm  | nova | enabled | up    | 2016-09-30T02:27:46.000000 |
+------------------+------------+------+---------+-------+----------------------------+
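As a further check, you can create and list a small test volume with the standard client commands:

$ . demo-openrc
$ openstack volume create --size 1 volume1
$ openstack volume list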