Lab environment
OS: CentOS-7-x86_64-DVD-1804
Virtualization platform: VMware
hostname ip role
node1.heleicool.cn 172.16.175.11 management node
node2.heleicool.cn 172.16.175.12 compute node
Environment setup
Install the necessary packages:
yum install -y vim net-tools wget telnet
Configure /etc/hosts on both nodes:
172.16.175.11 node1.heleicool.cn
172.16.175.12 node2.heleicool.cn
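Both nodes must resolve these names identically. As a quick sanity check, here is a short standard-library sketch (the `host_ip` helper and the file path are illustrative, not part of the installation):

```python
def host_ip(path, name):
    """Return the IP mapped to `name` in a hosts(5)-format file, or None."""
    with open(path) as f:
        for line in f:
            fields = line.split("#", 1)[0].split()  # drop comments, tokenize
            if len(fields) >= 2 and name in fields[1:]:
                return fields[0]
    return None

if __name__ == "__main__":
    for name, ip in [("node1.heleicool.cn", "172.16.175.11"),
                     ("node2.heleicool.cn", "172.16.175.12")]:
        print(name, "ok" if host_ip("/etc/hosts", name) == ip else "MISSING")
```

Running it on each node reports MISSING for any entry that was forgotten.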
Configure /etc/resolv.conf on both nodes:
nameserver 8.8.8.8
Disable the firewall:
systemctl disable firewalld
systemctl stop firewalld
Disable SELinux (this step can probably be skipped):
setenforce 0
vim /etc/selinux/config
SELINUX=disabled
Install the OpenStack packages
Install the matching EPEL repository:
yum install centos-release-openstack-rocky -y
Install the OpenStack client:
yum install python-openstackclient -y
RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage security policies for OpenStack services:
yum install openstack-selinux -y
Install the database
Install the packages:
yum install mariadb mariadb-server python2-PyMySQL -y
Create and edit /etc/my.cnf.d/openstack.cnf:
[mysqld]
bind-address = 172.16.175.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the database:
systemctl enable mariadb.service
systemctl start mariadb.service
Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:
mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current password for the root user. If you've just installed MariaDB, and you haven't set the root password yet, the password will be blank, so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB root user without the proper authorisation.

Set root password? [Y/n] y # set a root password?
New password: # enter the root password twice
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone to log into MariaDB without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment.

Remove anonymous users? [Y/n] y # remove the anonymous users?
 ... Success!

Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y # disallow remote root login?
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can access. This is also intended only for testing, and should be removed before moving into a production environment.

Remove test database and access to it? [Y/n] y # drop the test database?
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far will take effect immediately.

Reload privilege tables now? [Y/n] y # reload the privilege tables
 ... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB installation should now be secure.

Thanks for using MariaDB!
Install the message queue
Install RabbitMQ:
yum install rabbitmq-server -y
Start RabbitMQ:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Add the openstack user:
# The username here is openstack, and so is the password.
rabbitmqctl add_user openstack openstack
Grant the openstack user read and write permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Install Memcached
Install the packages:
yum install memcached python-memcached -y
Edit /etc/sysconfig/memcached and update the configuration:
OPTIONS="-l 127.0.0.1,::1,172.16.175.11"
Start Memcached:
systemctl enable memcached.service
systemctl start memcached.service
The listening ports so far are as follows:
# rabbitmq port
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 1690/beam
# mariadb-server port
tcp 0 0 172.16.175.11:3306 0.0.0.0:* LISTEN 1506/mysqld
# memcached ports
tcp 0 0 172.16.175.11:11211 0.0.0.0:* LISTEN 2236/memcached
tcp 0 0 127.0.0.1:11211 0.0.0.0:* LISTEN 2236/memcached
tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 766/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1050/master
tcp6 0 0 :::5672 :::* LISTEN 1690/beam
tcp6 0 0 ::1:11211 :::* LISTEN 2236/memcached
tcp6 0 0 :::22 :::* LISTEN 766/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1050/master
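The listener list can also be probed programmatically. A minimal sketch, assuming this lab's management-node address; adjust the host/port pairs for your own layout:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # The management-node services listed above.
    checks = {
        "rabbitmq": ("172.16.175.11", 5672),
        "mariadb": ("172.16.175.11", 3306),
        "memcached": ("172.16.175.11", 11211),
    }
    for name, (host, port) in checks.items():
        print(name, "up" if port_open(host, port) else "DOWN")
```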
Install the OpenStack services
Install the Keystone service
Configure the Keystone database:
Use the database client to connect to the database server as the root user:
mysql -u root -p
Create the keystone database and grant appropriate access to it:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
Install and configure Keystone
Run the following command to install the packages:
yum install openstack-keystone httpd mod_wsgi -y
Edit /etc/keystone/keystone.conf and complete the following actions:
[database]
connection = mysql+pymysql://keystone:keystone@172.16.175.11/keystone
[token]
provider = fernet
Populate the Identity service database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
# verify the database tables
mysql -ukeystone -pkeystone -e "use keystone; show tables;"
Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service:
# ADMIN_PASS is the administrative user's password; this command sets it (admin here).
keystone-manage bootstrap --bootstrap-password admin \
--bootstrap-admin-url http://172.16.175.11:5000/v3/ \
--bootstrap-internal-url http://172.16.175.11:5000/v3/ \
--bootstrap-public-url http://172.16.175.11:5000/v3/ \
--bootstrap-region-id RegionOne
Configure the Apache HTTP server
Edit /etc/httpd/conf/httpd.conf:
ServerName 172.16.175.11
Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the services
Start the Apache HTTP service and configure it to start when the system boots:
systemctl enable httpd.service
systemctl start httpd.service
Configure the administrative account:
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
Create a domain, projects, users, and roles
Although a "default" domain already exists from the keystone-manage bootstrap step in this guide, the formal way to create a new domain is:
# openstack domain create --description "An Example Domain" example
Using the default domain, create the service project, which is used for services:
openstack project create --domain default \
--description "Service Project" service
Create the myproject project: regular (non-admin) tasks should use an unprivileged project and user.
openstack project create --domain default \
--description "Demo Project" myproject
Create the myuser user:
# you will be prompted for a password
openstack user create --domain default \
--password-prompt myuser
Create the myrole role:
openstack role create myrole
Add myuser to the myproject project and give it the myrole role:
openstack role add --project myproject --user myuser myrole
Verify the users
Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
unset OS_AUTH_URL OS_PASSWORD
As the admin user, request an authentication token:
# you will be prompted for the admin password
openstack --os-auth-url http://172.16.175.11:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
As the myuser user, request an authentication token:
# you will be prompted for the myuser password
openstack --os-auth-url http://172.16.175.11:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
Create OpenStack client environment scripts
The openstack client interacts with the Identity service either through command-line arguments or through environment variables. To work more efficiently, create environment scripts:
Create the admin user environment script, admin-openstack.sh:
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
Create the myuser environment script, demo-openstack.sh:
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Use a script:
source admin-openstack.sh
openstack token issue
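Before sourcing a script, it can help to confirm it exports every variable the client needs. A small illustrative check (`missing_vars` is a name made up here; the required-variable list is the common subset of the two scripts above):

```python
import re

# Variables that both environment scripts above export
# (the common subset of admin-openstack.sh and demo-openstack.sh).
REQUIRED = {"OS_USERNAME", "OS_PASSWORD", "OS_PROJECT_NAME",
            "OS_USER_DOMAIN_NAME", "OS_AUTH_URL", "OS_IDENTITY_API_VERSION"}

def missing_vars(lines):
    """Return the required OS_* variables not exported in the given lines."""
    exported = set()
    for line in lines:
        m = re.match(r"\s*export\s+([A-Z_][A-Z0-9_]*)=", line)
        if m:
            exported.add(m.group(1))
    return sorted(REQUIRED - exported)

# With a real script: missing_vars(open("admin-openstack.sh"))
print(missing_vars(["export OS_USERNAME=myuser", "export OS_PASSWORD=myuser"]))
```

An empty list means the script defines everything the client needs.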
Install the Glance service
Configure the glance database:
Log in to the database as the root user:
mysql -u root -p
Create the glance database and grant access:
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
Create the glance service credentials, using the admin user:
source admin-openstack.sh
Create the glance user:
# you will be prompted for the glance user's password; mine is glance
openstack user create --domain default --password-prompt glance
Add the glance user to the service project and give it the admin role:
openstack role add --project service --user glance admin
Create the glance service entity:
openstack service create --name glance \
--description "OpenStack Image" image
Create the Image service API endpoints:
openstack endpoint create --region RegionOne image public http://172.16.175.11:9292
openstack endpoint create --region RegionOne image internal http://172.16.175.11:9292
openstack endpoint create --region RegionOne image admin http://172.16.175.11:9292
Install and configure Glance
Install the packages:
yum install openstack-glance -y
Edit /etc/glance/glance-api.conf and complete the following actions:
# configure database access:
[database]
connection = mysql+pymysql://glance:glance@172.16.175.11/glance
# configure Identity service access:
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
# configure the local file system store and the location of image files:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit /etc/glance/glance-registry.conf and complete the following actions:
# configure database access:
[database]
connection = mysql+pymysql://glance:glance@172.16.175.11/glance
# configure Identity service access:
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
Populate the Image service database and verify it:
su -s /bin/sh -c "glance-manage db_sync" glance
mysql -uglance -pglance -e "use glance; show tables;"
Start the services:
systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
systemctl start openstack-glance-api.service \
openstack-glance-registry.service
Verify the service
Source the admin credentials to gain access to admin-only CLI commands:
source admin-openstack.sh
Download the source image:
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it:
# make sure cirros-0.4.0-x86_64-disk.img is in the current directory
openstack image create "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
Confirm the upload and verify the image attributes:
openstack image list
Install the Nova service
Install Nova on the control node
Create the nova database entries:
mysql -u root -p
Create the nova_api, nova, nova_cell0, and placement databases:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
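The CREATE/GRANT statements above all follow one pattern (one user per database, with the username doubling as the password in this lab). When scripting the database setup, they can be generated; `grant_statements` is an illustrative helper, not an OpenStack tool:

```python
def grant_statements(databases, host_ips=("localhost", "%")):
    """Generate CREATE DATABASE and GRANT statements for {db: (user, password)}."""
    stmts = []
    for db, (user, password) in databases.items():
        stmts.append(f"CREATE DATABASE {db};")
        for host in host_ips:
            stmts.append(
                f"GRANT ALL PRIVILEGES ON {db}.* TO '{user}'@'{host}' "
                f"IDENTIFIED BY '{password}';")
    return stmts

# The four Nova-related databases from this section.
dbs = {
    "nova_api": ("nova", "nova"),
    "nova": ("nova", "nova"),
    "nova_cell0": ("nova", "nova"),
    "placement": ("placement", "placement"),
}
print("\n".join(grant_statements(dbs)))
```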
Use the admin credentials:
source admin-openstack.sh
Create the nova user:
openstack user create --domain default --password-prompt nova
Add the admin role to the nova user:
openstack role add --project service --user nova admin
Create the nova service entity:
openstack service create --name nova --description "OpenStack Compute" compute
Create the Compute API service endpoints:
openstack endpoint create --region RegionOne compute public http://172.16.175.11:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://172.16.175.11:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://172.16.175.11:8774/v2.1
Create the placement user:
# you will be prompted for a password; mine is placement
openstack user create --domain default --password-prompt placement
Add the placement user to the service project with the admin role:
openstack role add --project service --user placement admin
Create the placement service entity:
openstack service create --name placement --description "Placement API" placement
Create the Placement API service endpoints:
openstack endpoint create --region RegionOne placement public http://172.16.175.11:8778
openstack endpoint create --region RegionOne placement internal http://172.16.175.11:8778
openstack endpoint create --region RegionOne placement admin http://172.16.175.11:8778
Install Nova
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api -y
Edit /etc/nova/nova.conf and complete the following actions:
# enable only the compute and metadata APIs
[DEFAULT]
enabled_apis = osapi_compute,metadata
# configure database access
[api_database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova_api
[database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova
[placement_database]
connection = mysql+pymysql://placement:placement@172.16.175.11/placement
# configure RabbitMQ message queue access
[DEFAULT]
transport_url = rabbit://openstack:openstack@172.16.175.11
# configure Identity service access
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://172.16.175.11:5000/v3
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
# enable support for the Networking service
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
# configure the VNC proxy to use the management interface IP address of the controller node
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = 172.16.175.11
# configure the location of the Image service API
[glance]
api_servers = http://172.16.175.11:9292
# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
# configure the Placement API
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://172.16.175.11:5000/v3
username = placement
password = placement
Enable access to the Placement API by adding the following to /etc/httpd/conf.d/00-nova-placement-api.conf:
Append to the end of the configuration file:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Restart the httpd service:
systemctl restart httpd
Populate the nova-api and placement databases:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify that cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Verify the databases:
mysql -unova -pnova -e "use nova ; show tables;"
mysql -unova -pnova -e "use nova_api ; show tables;"
mysql -unova -pnova -e "use nova_cell0 ; show tables;"
mysql -uplacement -pplacement -e "use placement ; show tables;"
Start the Nova control node services:
systemctl enable openstack-nova-api.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service
Install Nova on the compute node
Install the packages:
yum install openstack-nova-compute -y
Edit /etc/nova/nova.conf and complete the following actions:
# Start from the control node configuration and modify it. Delete the following options, which configure database access.
[api_database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova_api
[database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova
[placement_database]
connection = mysql+pymysql://placement:placement@172.16.175.11/placement
# add the following:
[vnc]
# change to the compute node's IP
server_proxyclient_address = 172.16.175.12
novncproxy_base_url = http://172.16.175.11:6080/vnc_auto.html
Determine whether your compute node supports hardware acceleration for virtual machines:
egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which usually requires no additional configuration.
If this command returns zero, your compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section of /etc/nova/nova.conf as follows:
[libvirt]
# ...
virt_type = kvm
# Although the command returned a value greater than 1 here, setting this to kvm kept instances from starting; changing it to qemu fixed it. Advice from anyone who knows why is welcome.
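The decision described above can be scripted. A minimal sketch that reads the CPU flags and prints a suggested virt_type; as noted, a non-zero flag count does not guarantee KVM actually works in every (e.g. nested-virtualization) setup, so treat the output as a starting point:

```python
import re

def suggested_virt_type(cpuinfo_text):
    """Return 'kvm' if vmx (Intel) or svm (AMD) appears in the flags, else 'qemu'."""
    return "kvm" if re.search(r"\b(vmx|svm)\b", cpuinfo_text) else "qemu"

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        print(suggested_virt_type(f.read()))
```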
Start the Nova compute node services:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Add the compute node to the cell database (run on the management node):
source admin-openstack.sh
# confirm the host is in the database
openstack compute service list --service nova-compute
# discover the compute hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
When adding new compute nodes, you must run the discovery command on the controller node to register them. Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
驗證操做
source admin-openstack.sh
# list the service components to verify successful launch and registration of each process: state should be up
openstack compute service list
# list the API endpoints in the Identity service to verify connectivity with the Identity service
openstack catalog list
# list images in the Image service to verify connectivity with the Image service:
openstack image list
# check whether the cells and the Placement API are working successfully:
nova-status upgrade check
Note: when checking with openstack compute service list, the official documentation shows one more running service than here; just start it.
That service is the console remote-connection authentication server; without it, VNC remote login does not work.
systemctl enable openstack-nova-consoleauth
systemctl start openstack-nova-consoleauth
Install the Neutron service
Install Neutron on the control node
Create the database entries for the neutron service:
mysql -uroot -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Create the neutron administrative user:
openstack user create --domain default --password-prompt neutron
Add the neutron user to the service project and give it the admin role:
openstack role add --project service --user neutron admin
Create the neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
Create the Networking service API endpoints:
openstack endpoint create --region RegionOne network public http://172.16.175.11:9696
openstack endpoint create --region RegionOne network internal http://172.16.175.11:9696
openstack endpoint create --region RegionOne network admin http://172.16.175.11:9696
Configure networking options
You can deploy the Networking service using one of two architectures, represented by option 1 (Provider) and option 2 (Self-service).
Option 1 deploys the simplest architecture, which only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only the admin or other privileged users can manage provider networks.
Provider networks
Install the plug-ins:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure the server component
Edit /etc/neutron/neutron.conf and complete the following actions:
[DEFAULT]
# enable the Modular Layer 2 (ML2) plug-in and disable other plug-ins
core_plugin = ml2
service_plugins =
# notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:openstack@172.16.175.11
auth_strategy = keystone
[database]
# configure database access
connection = mysql+pymysql://neutron:neutron@172.16.175.11/neutron
[keystone_authtoken]
# configure Identity service access
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
# configure Networking to notify Compute of network topology changes
[nova]
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following actions:
[ml2]
# enable flat and VLAN networks
type_drivers = flat,vlan
# disable self-service networks
tenant_network_types =
# enable the Linux bridge mechanism
mechanism_drivers = linuxbridge
# enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]
# configure the provider virtual network as a flat network
flat_networks = provider
[securitygroup]
# enable ipset to increase the efficiency of security group rules
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:
[linux_bridge]
# map the provider virtual network to the provider physical network interface; here eth-0 is the mapped NIC
physical_interface_mappings = provider:eth-0
[vxlan]
# disable VXLAN overlay networks
enable_vxlan = false
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the changes:
sysctl -p
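A quick offline way to confirm a configuration fragment sets both keys: the helper below (illustrative, not part of the installation) parses sysctl.conf-style text and checks the two bridge-nf values:

```python
def bridge_nf_enabled(sysctl_text):
    """True if both bridge-nf-call keys are set to 1 in sysctl.conf-style text."""
    wanted = {"net.bridge.bridge-nf-call-iptables",
              "net.bridge.bridge-nf-call-ip6tables"}
    seen = {}
    for line in sysctl_text.splitlines():
        line = line.split("#", 1)[0]  # strip comments
        if "=" in line:
            key, _, value = line.partition("=")
            seen[key.strip()] = value.strip()
    return all(seen.get(k) == "1" for k in wanted)

sample = """net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
"""
print(bridge_nf_enabled(sample))  # → True
```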
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit /etc/neutron/dhcp_agent.ini and complete the following actions:
[DEFAULT]
# configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network:
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Self-service networks
Install the components:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure the server components
Edit /etc/neutron/neutron.conf and complete the following actions:
[DEFAULT]
# enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:openstack@172.16.175.11
auth_strategy = keystone
# notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
# configure database access
connection = mysql+pymysql://neutron:neutron@172.16.175.11/neutron
[keystone_authtoken]
# configure Identity service access
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
# configure Networking to notify Compute of network topology changes
[nova]
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following actions:
[ml2]
# enable flat, VLAN, and VXLAN networks
type_drivers = flat,vlan,vxlan
# enable VXLAN self-service networks
tenant_network_types = vxlan
# enable the Linux bridge and layer-2 population mechanisms
mechanism_drivers = linuxbridge,l2population
# enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]
# configure the provider virtual network as a flat network
flat_networks = provider
[ml2_type_vxlan]
# configure the VXLAN network identifier range for self-service networks
vni_ranges = 1:1000
[securitygroup]
# enable ipset to increase the efficiency of security group rules
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:
[linux_bridge]
# map the provider virtual network to the provider physical network interface; here eth0 is the mapped NIC
physical_interface_mappings = provider:eth0
[vxlan]
# enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population
enable_vxlan = true
local_ip = 172.16.175.11
l2_population = true
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the changes:
sysctl -p
Configure the layer-3 agent
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
Edit /etc/neutron/l3_agent.ini and complete the following actions:
[DEFAULT]
# configure the Linux bridge interface driver and the external network bridge
interface_driver = linuxbridge
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit /etc/neutron/dhcp_agent.ini and complete the following actions:
[DEFAULT]
# configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can access metadata over the network
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent
The metadata agent provides configuration information to instances.
Edit /etc/neutron/metadata_agent.ini and complete the following actions:
[DEFAULT]
# configure the metadata host and the shared secret
nova_metadata_host = controller
metadata_proxy_shared_secret = heleicool
# heleicool is the shared secret for communication between neutron and nova
Configure the Compute service (nova) to use the Networking service
Edit /etc/nova/nova.conf and perform the following actions:
[neutron]
# configure access parameters, enable the metadata proxy, and configure the secret:
url = http://172.16.175.11:9696
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = heleicool
Finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it with the following command:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database; this requires neutron.conf and ml2_conf.ini:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service, because its configuration file was modified:
systemctl restart openstack-nova-api.service
Start the Networking services and configure them to start when the system boots:
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
Install Neutron on the compute node
Install the components:
yum install openstack-neutron-linuxbridge ebtables ipset -y
Configure the common components
The Networking common component configuration includes the authentication mechanism, the message queue, and the plug-in.
Edit /etc/neutron/neutron.conf and complete the following actions:
Comment out any connection options, because compute nodes do not directly access the database.
[DEFAULT]
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:openstack@172.16.175.11
# configure Identity service access
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
# configure the lock path
lock_path = /var/lib/neutron/tmp
Configure networking options
Choose the same networking option you chose for the controller node and configure the services specific to it.
Provider networks
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:
[linux_bridge]
# map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:eth0
[vxlan]
# disable VXLAN overlay networks
enable_vxlan = false
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the changes:
sysctl -p
Self-service networks
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following actions:
[linux_bridge]
# map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:eth0
[vxlan]
# enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the changes:
sysctl -p
Configure the Compute (nova) service to use the Networking service
Edit /etc/nova/nova.conf and complete the following actions:
[neutron]
# ...
url = http://172.16.175.11:9696
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Finalize the installation
Restart the Compute service:
systemctl restart openstack-nova-compute.service
Start the Linux bridge agent and configure it to start when the system boots:
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
驗證操做
Provider networks
列出驗證成功鏈接neutron的代理
openstack network agent list
Self-service networks
List the agents to verify a successful connection to neutron:
# there should be four agents: Metadata agent, Linux bridge agent, L3 agent, and DHCP agent
openstack network agent list
Launch an instance
Once all of the services above are working, you can create and launch a virtual machine.
Create virtual networks
First create a virtual network, configured according to the networking option you chose when configuring Neutron.
Provider networks
Create the network:
source admin-openstack.sh
openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat public
# --share allows all projects to use the virtual network
# --external defines the virtual network as external; to create an internal network, use --internal. The default is internal.
# --provider-physical-network provider matches the flat_networks value configured in ml2_conf.ini.
# --provider-network-type flat selects the flat network type; public is the network name
Create a subnet on the network:
openstack subnet create --network public \
--allocation-pool start=172.16.175.100,end=172.16.175.250 \
--dns-nameserver 172.16.175.2 --gateway 172.16.175.2 \
--subnet-range 172.16.175.0/24 public
# --subnet-range is the subnet providing the IPs, in CIDR notation
# start and end define the range of IPs to allocate to instances
# --dns-nameserver specifies the IP address of the DNS resolver
# --gateway is the gateway address
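Before creating the subnet, the numbers can be sanity-checked offline. A short sketch using the standard-library ipaddress module with this lab's values; `pool_in_subnet` is an illustrative helper:

```python
import ipaddress

def pool_in_subnet(cidr, start, end, gateway):
    """Check that an allocation pool and gateway fit sensibly inside a subnet."""
    net = ipaddress.ip_network(cidr)
    lo, hi, gw = (ipaddress.ip_address(a) for a in (start, end, gateway))
    return (lo in net and hi in net and gw in net
            and lo <= hi
            and not (lo <= gw <= hi))  # gateway must stay out of the pool

# Values from the subnet create command above.
print(pool_in_subnet("172.16.175.0/24",
                     "172.16.175.100", "172.16.175.250",
                     "172.16.175.2"))  # → True
```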
Self-service networks
Create the self-service network:
source admin-openstack.sh
openstack network create selfservice
Create a subnet on the network:
openstack subnet create --network selfservice \
--dns-nameserver 8.8.8.8 --gateway 192.168.1.1 \
--subnet-range 192.168.1.0/24 selfservice
Create a router:
source demo-openstack.sh
openstack router create router
Add the self-service network subnet as an interface on the router:
openstack router add subnet router selfservice
Set a gateway on the provider network on the router:
openstack router set router --external-gateway public
驗證操做
列出網絡命名空間。您應該看到一個qrouter名稱空間和兩個 qdhcp名稱空間
source demo-openstack.sh
ip netns
List the ports on the router to determine the gateway IP address on the provider network:
openstack port list --router router
Create a flavor
# create a flavor named m1.nano that allocates 1 vCPU, 64 MB RAM, and a 1 GB disk
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Configure a key pair
# generate a key file
ssh-keygen -q -N ""
# create a key pair named mykey in OpenStack
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# list the key pairs
openstack keypair list
Add security group rules
By default, the default security group applies to all instances.
# allow ICMP
openstack security group rule create --proto icmp default
# allow port 22
openstack security group rule create --proto tcp --dst-port 22 default
Launch an instance
Provider networks
Determine instance options
List the available flavors:
source demo-openstack.sh
openstack flavor list
List the available images:
openstack image list
List the available networks:
openstack network list
List the available security groups:
openstack security group list
Launch the instance:
openstack server create --flavor m1.nano --image cirros \
--nic net-id=PROVIDER_NET_ID --security-group default \
--key-name mykey provider-instance
# PROVIDER_NET_ID is the ID of the public network. If your environment contains only one network, you can omit the --nic option, because OpenStack automatically chooses the only available network.
Check the status of the instance:
openstack server list
Access the instance using the virtual console:
openstack console url show provider-instance
Self-service networks
Determine instance options
List the available flavors:
source demo-openstack.sh
openstack flavor list
List the available images:
openstack image list
List the available networks:
openstack network list
List the available security groups:
openstack security group list
Launch the instance:
# replace SELFSERVICE_NET_ID with the ID of the selfservice network
openstack server create --flavor m1.nano --image cirros \
--nic net-id=SELFSERVICE_NET_ID --security-group default \
--key-name mykey selfservice-instance
Check the status of the instance:
openstack server list
Access the instance using the virtual console:
openstack console url show selfservice-instance
Install the Horizon service
The Horizon service depends on the Apache HTTP service and Memcached. I installed it on the control node, so those services are already in place; if you deploy Horizon separately, you need to install them as well.
Install and configure the components
Install the package:
yum install openstack-dashboard -y
Edit /etc/openstack-dashboard/local_settings and complete the following actions:
# configure the dashboard to use OpenStack services on the controller node
OPENSTACK_HOST = "172.16.175.11"
# configure the list of hosts allowed to access the dashboard
ALLOWED_HOSTS = ['*', 'two.example.com']
# configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '172.16.175.11:11211',
}
}
# enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# configure the API versions
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
# configure Default as the default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# configure the default role for users created via the dashboard (myrole here)
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "myrole"
# if you chose networking option 1, disable support for layer-3 networking services
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_***': False,
'enable_fip_topology_check': False,
}
# configure the time zone
TIME_ZONE = "Asia/Shanghai"
If /etc/httpd/conf.d/openstack-dashboard.conf does not include the following line, add it:
WSGIApplicationGroup %{GLOBAL}
Finalize the installation
Restart the web server and the memcached session storage service:
systemctl restart httpd.service memcached.service
Done