OpenStack's Four Core Service Components and OpenStack Environment Setup

(Figure: OpenStack virtual machine creation flowchart)

I. The four core OpenStack services and their component functions

1. Key concepts of the Keystone identity service

1) User:

A user who uses OpenStack.

2) Role:

Adding a user to a role grants that user the role's operating permissions.

3) Tenant:

A collection of resources owned by a person, project, or organization. One tenant can contain many users, and permissions can be divided among those users to control how they use the tenant's resources.

4) Token:

A credential or passphrase. After authentication, Keystone returns a token to the browser, allowing password-free access for a period of time. This is similar to cookie-based session persistence, but differs from a cookie: a cookie records the browser's login information and cannot assign user access permissions, while a token carries the user's authentication information and is tied to the user's permissions.

2. Components and functions of the Glance image service

1) glance-api

Receives requests to delete, upload, and read images.

2) glance-registry

Handles interaction with the MySQL database, storing and retrieving image metadata. Two tables in the database store image information: the image table and the image-property table.
The image table stores the image file's format, size, and similar attributes.
The image-property table stores customized image metadata.

3) image-store

The interface for saving and reading images; it is only an interface.

4) Note

The glance service does not require a message queue, but it must be configured with Keystone authentication and a database.

3. Components and functions of the Nova compute service (one of the earliest OpenStack components)

1) nova-api

Receives and responds to external requests, and forwards them to the other service components through the message queue.

2) nova-compute

Creates virtual machines by calling the KVM module through libvirt. Nova is split across controller and compute nodes, and Nova components communicate with each other through the message queue.

3) nova-scheduler

Schedules which physical host a new virtual machine is created on.

4) nova-placement-api

Tracks provider inventory and usage, such as compute-node resource pool consumption and IP allocation, and works with the scheduler to place virtual machines on physical hosts.

5) nova-conductor

Middleware for the compute nodes' database access: when nova-compute needs to read or update instance information in the database, it does not access the database directly but goes through the conductor. In larger clusters the conductor should be scaled out horizontally, but not onto the compute nodes.

6) nova-novncproxy

The VNC proxy, used to display the virtual machine's console.

7) nova-metadata-api

Receives metadata requests from virtual machines.

4. Components and functions of the Neutron networking service (formerly nova-network, renamed neutron)

1) Two network types: self-service and provider

Self-service network: users can create their own networks and connect to external networks through virtual routers; this type is rarely used.
Provider network: virtual machine networks are bridged onto the physical host network and must be in the same network segment as the physical hosts; most deployments choose this type.

2) neutron-server

Exposes the OpenStack networking API, receives requests, and invokes plugins to handle them.

3) plugin

Processes requests received by neutron-server, maintains logical network state, and calls agents to carry out the requests.

4) neutron-linuxbridge-agent

Handles plugin requests and ensures the network provider implements the various network functions.

5) Message queue

neutron-server, the agents, and the plugins communicate and invoke one another through the message queue.

6) Network provider

The virtual or physical network device that provides the network, for example linux-bridge or a physical switch that supports the Neutron service. Its networks, subnets, ports, routes, and related information are stored in the database.

II. Environment preparation (all nodes run CentOS 7.2)

1. controll-node (controller node)

1) NICs

eth0: 192.168.1.10/24
eth1: 192.168.23.100/24

2) Required packages:

python-openstackclient        # OpenStack client
python2-PyMySQL               # MySQL connector for Python
mariadb                       # MySQL client, used for database connection tests
python-memcached              # memcached connector for Python
openstack-keystone            # identity service
httpd
mod_wsgi                      # WSGI module for httpd
openstack-glance              # image service
openstack-nova-api            # receives and responds to external requests
openstack-nova-conductor
openstack-nova-console
openstack-nova-novncproxy
openstack-nova-scheduler
openstack-nova-placement-api
openstack-neutron
openstack-neutron-ml2         # ML2 (layer-2) plugin
openstack-neutron-linuxbridge
ebtables
openstack-dashboard

2. compute-node (compute node)

1) NICs

eth0: 192.168.1.10/24
eth1: 192.168.23.201/24

2) Required packages:

python-openstackclient
openstack-nova-compute
openstack-neutron-linuxbridge
ebtables
ipset

3. mysql-node (database node)

1) NIC

eth0: 192.168.1.41/24

2) Required packages:

python-openstackclient
mariadb
mariadb-server
rabbitmq-server
memcached

4. Features that must be disabled or enabled before the lab, so the experiment is not blocked

1) Disable the firewall.

2) Disable NetworkManager: otherwise NIC bonding and bridging will not take effect.

3) Disable SELinux: it can cause network connectivity problems.

4) Enable chrony time synchronization so all nodes keep the same time; otherwise the controller node may fail to find the compute nodes and report errors that block the experiment.
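The four preparation steps above can be collected into one checklist. A minimal sketch of the usual CentOS 7 commands follows; here they are only printed, not executed, since they must be run as root on each node (and the chrony package is assumed to be installed):

```shell
# Print the per-node preparation commands (run them manually as root).
cmds=$(cat <<'EOF'
systemctl disable --now firewalld NetworkManager
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl enable --now chronyd
EOF
)
printf '%s\n' "$cmds"
```

`setenforce 0` only disables SELinux until the next boot; the `sed` line makes it permanent.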

III. OpenStack deployment (the Ocata release)

1. Prepare the OpenStack repository and install some packages

1) Install the OpenStack repository on both controll-node and compute-node:

~]#yum install centos-release-openstack-ocata -y

2) Install the OpenStack client on all nodes:

~]#yum install python-openstackclient -y

3) Install the database connector packages on controll-node:

Package for connecting to memcached:
~]#yum install python-memcached -y
Package for connecting to MySQL:
~]#yum install python2-PyMySQL -y
Install the MySQL client for remote connection testing:
~]#yum install mariadb -y

4) Install MySQL, the message queue, memcached, etc. on mysql-node

Install MySQL:
~]#yum install mariadb mariadb-server -y
Edit the MySQL configuration file:
~]#vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.1.41
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
……
~]#vim /etc/my.cnf
……
[mysqld]
bind-address = 192.168.1.41
……
Start the MySQL service:
~]#systemctl enable mariadb.service && systemctl start mariadb.service
Run the secure-installation script to remove anonymous users and passwordless logins and secure the database:
~]#mysql_secure_installation
Install the RabbitMQ message queue:
~]#yum install rabbitmq-server -y
~]#systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
~]#rabbitmqctl add_user openstack openstack  #add an openstack user
~]#rabbitmqctl set_permissions openstack ".*" ".*" ".*"  #grant configure, write, and read permissions
Install memcached:
~]#yum install memcached -y
~]#vim /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,192.168.1.41"
~]#systemctl enable memcached.service && systemctl start memcached.service
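As a quick sanity check that an option such as bind-address really carries the intended value in the [mysqld] section, the file can be parsed with standard shell tools. This is an illustrative sketch run against a temporary copy of the file, not part of the official procedure:

```shell
# Write a sample openstack.cnf (contents from the steps above) and
# extract the bind-address value from its [mysqld] section with awk.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[mysqld]
bind-address = 192.168.1.41
max_connections = 4096
EOF
addr=$(awk -F' *= *' '/^\[mysqld\]/{s=1; next} /^\[/{s=0} s && $1=="bind-address"{print $2}' "$tmp")
echo "bind-address is $addr"
rm -f "$tmp"
```

The same one-liner works against the real /etc/my.cnf.d/openstack.cnf on mysql-node.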

2. Deploying the Keystone identity service

1) On mysql-node, create the keystone database and an authorized keystone user

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

2) On controll-node, install the identity service packages and edit the configuration file

~]#yum install openstack-keystone httpd mod_wsgi -y
~]#vim /etc/keystone/keystone.conf
[database]
# ...
connection = mysql+pymysql://keystone:keystone@192.168.1.41/keystone  #database to connect to
[token]
# ...
provider = fernet

3) Run the Keystone initialization commands on controll-node

~]#su -s /bin/sh -c "keystone-manage db_sync" keystone  #sync the schema into the database
~]#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone  #initialize the fernet keys
~]#keystone-manage credential_setup --keystone-user keystone --keystone-group keystone  #initialize the credential keys
Bootstrap the Keystone service:
~]#keystone-manage bootstrap --bootstrap-password keystone --bootstrap-admin-url http://192.168.23.100:35357/v3/ --bootstrap-internal-url http://192.168.23.100:5000/v3/ --bootstrap-public-url http://192.168.23.100:5000/v3/ --bootstrap-region-id RegionOne
Edit the httpd configuration file:
~]#vim /etc/httpd/conf/httpd.conf
ServerName 192.168.23.100
Create the symlink:
~]#ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the httpd service:
~]#systemctl enable httpd.service && systemctl start httpd.service
Export the OpenStack administrator account, password, and related variables:
~]#export OS_USERNAME=admin
~]#export OS_PASSWORD=keystone
~]#export OS_PROJECT_NAME=admin
~]#export OS_USER_DOMAIN_NAME=Default
~]#export OS_PROJECT_DOMAIN_NAME=Default
~]#export OS_AUTH_URL=http://192.168.23.100:35357/v3
~]#export OS_IDENTITY_API_VERSION=3

4) Create projects, users, and roles on controll-node

Create a service project:
~]#openstack project create --domain default --description "Service Project" service
Create a demo project:
~]#openstack project create --domain default --description "Demo Project" demo
Create a demo user:
~]#openstack user create --domain default --password-prompt demo
Create a user role:
~]#openstack role create user
Add the demo user to the demo project and grant it the user role:
~]#openstack role add --project demo --user demo user
Edit keystone-paste.ini and remove the admin_token_auth option from the sections below, disabling the temporary token mechanism for security:
~]#vim /etc/keystone/keystone-paste.ini
Remove 'admin_token_auth' from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
Unset the temporary password and auth URL variables:
~]#unset OS_AUTH_URL OS_PASSWORD
Request an authentication token as the admin user (you will be prompted for the admin password):
~]#openstack --os-auth-url http://192.168.23.100:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Request an authentication token as the demo user (you will be prompted for the demo password):
~]#openstack --os-auth-url http://192.168.23.100:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue

5) Create client environment scripts on controll-node for later access to the services

Admin environment script:
~]#vim /data/admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_AUTH_URL=http://192.168.23.100:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
~]#chmod +x /data/admin-openrc
Demo environment script:
~]#vim /data/demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.23.100:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
~]#chmod +x /data/demo-openrc
Source the admin script and test access:
~]#. admin-openrc
~]#openstack token issue
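A typo in an openrc file usually surfaces only later as a confusing authentication error, so it can help to check in a subshell that sourcing the script actually sets the expected variables. A minimal sketch, using a throwaway rc file with a few of the variables from above:

```shell
# Build a throwaway rc file mirroring admin-openrc, source it in a
# subshell, and confirm the key variables are non-empty.
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_USERNAME=admin
export OS_AUTH_URL=http://192.168.23.100:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF
result=$(
  . "$rc"
  missing=0
  for v in OS_USERNAME OS_AUTH_URL OS_IDENTITY_API_VERSION; do
    [ -n "${!v}" ] || { missing=1; echo "unset: $v"; }
  done
  [ "$missing" -eq 0 ] && echo "rc file OK"
)
echo "$result"
rm -f "$rc"
```

The subshell keeps the temporary variables from leaking into your real session; point `rc` at /data/admin-openrc to check the actual file.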

3. Deploying the Glance image service

1) On mysql-node, create the glance database and an authorized glance user:

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

2) Run the Glance-related commands on controll-node

Source the admin script to run commands as admin:
~]#. admin-openrc
Create the glance user:
~]#openstack user create --domain default --password-prompt glance
Grant the glance user the admin role:
~]#openstack role add --project service --user glance admin
Create a service named glance of type image:
~]#openstack service create --name glance --description "OpenStack Image" image
Create the endpoints:
~]#openstack endpoint create --region RegionOne image public http://192.168.23.100:9292  #public endpoint
~]#openstack endpoint create --region RegionOne image internal http://192.168.23.100:9292  #internal endpoint
~]#openstack endpoint create --region RegionOne image admin http://192.168.23.100:9292  #admin endpoint

3) On controll-node, install the Glance packages and edit the Glance configuration files

~]#yum install openstack-glance -y
~]#vim /etc/glance/glance-api.conf
[database]
# ...
connection = mysql+pymysql://glance:glance@192.168.1.41/glance  #database to connect to

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211  #memcached server
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance  #glance service user
password = glance  #glance service user's password

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/  #path where image files are stored

~]#vim /etc/glance/glance-registry.conf
[database]
# ...
connection = mysql+pymysql://glance:glance@192.168.1.41/glance  #glance database

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
# ...
flavor = keystone
Populate the glance tables in the MySQL glance database:
~]#su -s /bin/sh -c "glance-manage db_sync" glance
Start all the Glance services:
~]#systemctl enable openstack-glance-api.service openstack-glance-registry.service
~]#systemctl start openstack-glance-api.service openstack-glance-registry.service
Source the admin script:
~]#. admin-openrc
Download the cirros image:
~]#wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
Load the image into the Glance service:
~]#openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Check that the image was added successfully:
~]#openstack image list

4. Deploying the Nova service

1) On mysql-node, create the Nova databases and an authorized nova user

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY 'nova123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY 'nova123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY 'nova123456';

2) Run the Nova-related commands on controll-node

Source the admin script:
~]#. admin-openrc
Create the nova user:
~]#openstack user create --domain default --password-prompt nova
Grant the nova user the admin role:
~]#openstack role add --project service --user nova admin
Create a service named nova of type compute:
~]#openstack service create --name nova --description "OpenStack Compute" compute
Create the endpoints:
~]#openstack endpoint create --region RegionOne compute public http://192.168.23.100:8774/v2.1  #public endpoint
~]#openstack endpoint create --region RegionOne compute internal http://192.168.23.100:8774/v2.1  #internal endpoint
~]#openstack endpoint create --region RegionOne compute admin http://192.168.23.100:8774/v2.1  #admin endpoint
Create the placement user:
~]#openstack user create --domain default --password-prompt placement
Grant the placement user the admin role:
~]#openstack role add --project service --user placement admin
Create a service named placement of type placement:
~]#openstack service create --name placement --description "Placement API" placement
Create the endpoints:
~]#openstack endpoint create --region RegionOne placement public http://192.168.23.100:8778  #public endpoint
~]#openstack endpoint create --region RegionOne placement internal http://192.168.23.100:8778  #internal endpoint
~]#openstack endpoint create --region RegionOne placement admin http://192.168.23.100:8778  #admin endpoint

3) On controll-node, install the Nova packages and edit the Nova configuration file

~]#yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

~]#vim /etc/nova/nova.conf
[api_database]
# ...
connection = mysql+pymysql://nova:nova123456@192.168.1.41/nova_api  #nova_api database

[database]
# ...
connection = mysql+pymysql://nova:nova123456@192.168.1.41/nova  #nova database

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@192.168.1.41  #message queue
my_ip = 192.168.23.100  #management IP of the controller node (optional)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver  #disable Nova's own firewall driver

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211  #memcached server
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova  #nova service user
password = nova123456  #nova service user's password

[vnc]
enabled = true
# ...
vncserver_listen = $my_ip  #VNC listen address
vncserver_proxyclient_address = $my_ip  #VNC proxy address

[glance]
# ...
api_servers = http://192.168.23.100:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp  #lock directory

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.23.100:35357/v3
username = placement
password = placement

~]#vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
     <IfVersion >= 2.4>
            Require all granted
     </IfVersion>
     <IfVersion < 2.4>
            Order allow,deny
            Allow from all
     </IfVersion>
</Directory>
Restart the httpd service:
~]#systemctl restart httpd

4) After configuration, on controll-node populate the databases and start all the Nova services

Populate the Nova tables into the corresponding MySQL databases:
~]#su -s /bin/sh -c "nova-manage api_db sync" nova
~]#su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
~]#su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
~]#su -s /bin/sh -c "nova-manage db sync" nova
List the cells:
~]#nova-manage cell_v2 list_cells
Start all the Nova services:
~]#systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
~]#systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

5) Deploy the Nova service on compute-node

Install the Nova packages on the compute node and edit the configuration file:
~]#yum install openstack-nova-compute -y
~]#vim /etc/nova/nova.conf
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@192.168.1.41
my_ip = 192.168.23.201  #compute node IP

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123456

[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.23.100:6080/vnc_auto.html

[glance]
# ...
api_servers = http://192.168.23.100:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.23.100:35357/v3
username = placement
password = placement
Check whether your compute node supports hardware-accelerated virtualization; a result of 1 or greater means it does:
~]#egrep -c '(vmx|svm)' /proc/cpuinfo
If your compute node does not support hardware acceleration, set the virt_type option in the [libvirt] section:
~]#vim /etc/nova/nova.conf
[libvirt]
# ...
virt_type = qemu
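The check above can be folded into a small script that picks the virt_type value automatically. This is an illustrative sketch (it assumes a Linux /proc/cpuinfo), not part of the official procedure:

```shell
# Count vmx (Intel) / svm (AMD) CPU flags; 1 or more means hardware
# acceleration is available and kvm can be used, otherwise fall back to qemu.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
if [ "${count:-0}" -ge 1 ]; then
    virt_type=kvm
else
    virt_type=qemu
fi
echo "virt_type = $virt_type"
```

The printed line is exactly what would go under [libvirt] in /etc/nova/nova.conf.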

6) After completing the configuration above, start the Nova services on compute-node

~]#systemctl enable libvirtd.service openstack-nova-compute.service
~]#systemctl start libvirtd.service openstack-nova-compute.service
Note:
If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message "AMQP server on controller:5672 is unreachable" likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart the nova-compute service on the compute node.
Source the admin script:
~]#. admin-openrc
List the hypervisors:
~]#openstack hypervisor list
Discover compute hosts and add them to the cell database:
~]#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
When you add a new compute node, run nova-manage cell_v2 discover_hosts on the controller node again, or set a discovery interval:
~]#vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300  #set a suitable interval (seconds)

7) After configuration, verify the Nova services on controll-node

~]#. admin-openrc
List all compute services:
~]#openstack compute service list
~]#openstack catalog list
~]#openstack image list
Check Nova's upgrade status:
~]#nova-status upgrade check

5. Deploying the Neutron networking service

1) On mysql-node, create the neutron database and an authorized neutron user

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

2) Run the Neutron-related commands on controll-node

Source the admin script to run commands as admin:
~]#. admin-openrc
Create the neutron user:
~]#openstack user create --domain default --password-prompt neutron
Grant the neutron user the admin role:
~]#openstack role add --project service --user neutron admin
Create a service named neutron of type network:
~]#openstack service create --name neutron --description "OpenStack Networking" network
Create the endpoints:
~]#openstack endpoint create --region RegionOne network public http://192.168.23.100:9696  #public endpoint
~]#openstack endpoint create --region RegionOne network internal http://192.168.23.100:9696  #internal endpoint
~]#openstack endpoint create --region RegionOne network admin http://192.168.23.100:9696  #admin endpoint

3) On controll-node, install the Neutron packages and edit the configuration files (using the provider network type as the example)

~]#yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

~]#vim /etc/neutron/neutron.conf
[database]
# ...
connection = mysql+pymysql://neutron:neutron@192.168.1.41/neutron

[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:openstack@192.168.1.41
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
# ...
auth_url = http://192.168.23.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

~]#vim /etc/neutron/plugins/ml2/ml2_conf.ini  #ML2 (Modular Layer 2) plugin configuration
[ml2]
# ...
type_drivers = flat,vlan  #enable the flat (bridged) and vlan network types
tenant_network_types =  #empty disables user-created (self-service) networks
mechanism_drivers = linuxbridge  #use the linuxbridge mechanism driver
extension_drivers = port_security  #enable the port security extension driver

[ml2_type_flat]
# ...
flat_networks = provider  #name of the virtual (provider) network

[securitygroup]
# ...
enable_ipset = true

~]#vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1  #map the virtual network to physical NIC eth1; the name provider must match the flat_networks value above

[vxlan]
enable_vxlan = false  #disable VXLAN, preventing users from creating their own networks

[securitygroup]  #security groups restrict access rules from external hosts
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
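The physical_interface_mappings value is a comma-separated list of network:interface pairs. A quick shell sketch (using the names from this example) shows how the two halves split, which is handy when verifying with `ip link show` that the mapped NIC actually exists:

```shell
# Split each "network:interface" pair of a mapping string.
mappings="provider:eth1"
for pair in ${mappings//,/ }; do
    net="${pair%%:*}"
    iface="${pair##*:}"
    echo "network '$net' -> interface '$iface'"
done
```

The network half must match flat_networks in ml2_conf.ini; the interface half must be a real NIC on the node.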

~]#vim /etc/neutron/dhcp_agent.ini  #DHCP agent
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

~]#vim /etc/neutron/metadata_agent.ini
[DEFAULT]
# ...
nova_metadata_ip = 192.168.23.100
metadata_proxy_shared_secret = 123456

~]#vim /etc/nova/nova.conf
[neutron]
# ...
url = http://192.168.23.100:9696
auth_url = http://192.168.23.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
……

4) After configuration, on controll-node initialize the Neutron database and start all the Neutron services

Symlink the ML2 configuration as the plugin configuration:
~]#ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the Neutron tables into the neutron database in MySQL:
~]#su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova-api service:
~]#systemctl restart openstack-nova-api.service
Start all the Neutron services:
~]#systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
~]#systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

The provider network mode works at layer 2 and does not need this layer-3 service; it is only needed in self-service network mode:
~]#systemctl enable neutron-l3-agent.service && systemctl start neutron-l3-agent.service

5) Deploy the Neutron service on compute-node

Install the Neutron packages on the compute node and edit the configuration files:
~]#yum install openstack-neutron-linuxbridge ebtables ipset -y
~]#vim /etc/neutron/neutron.conf
Comment out any connection options under [database].

[DEFAULT]
# ...
transport_url = rabbit://openstack:openstack@192.168.1.41
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Configure the same networking options as on the controller node:
~]#vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1  #the virtual network name provider must match the controller node

[vxlan]
enable_vxlan = false  #likewise disable VXLAN

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
~]#vim /etc/nova/nova.conf
[neutron]
# ...
url = http://192.168.23.100:9696
auth_url = http://192.168.23.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the nova-compute service:
~]#systemctl restart openstack-nova-compute.service
Start the Neutron agent on the compute node:
~]#systemctl enable neutron-linuxbridge-agent.service && systemctl start neutron-linuxbridge-agent.service

6. Deploying the dashboard web interface

1) On controll-node, install the dashboard package and edit its configuration file

~]#yum install openstack-dashboard -y
~]#vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.23.100"  #the controller host
ALLOWED_HOSTS = ['*']  #allow all hosts to access the dashboard

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
        'default': {
                 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                 'LOCATION': '192.168.1.41:11211',
        }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "image": 2,
        "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {
        'enable_router': False,
        'enable_quotas': False,
        'enable_ipv6': False,
        'enable_distributed_router': False,
        'enable_ha_router': False,
        'enable_lb': False,
        'enable_firewall': False,
        'enable_vpn': False,
        'enable_fip_topology_check': False,
}

TIME_ZONE = "Asia/Shanghai"  #time zone
Restart the httpd and memcached services:
~]#systemctl restart httpd.service memcached.service
Test access to the dashboard page in a browser.

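If the dashboard page fails to load after the restart, a syntax error in local_settings is a common cause, since the file is plain Python. A byte-compile check catches typos; this sketch uses a temporary fragment with the same syntax as the settings above (in practice you would compile /etc/openstack-dashboard/local_settings itself):

```shell
# Write a fragment mirroring the settings above and byte-compile it;
# any Python syntax error would make py_compile fail here.
py=$(command -v python3 || command -v python)
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
OPENSTACK_HOST = "192.168.23.100"
ALLOWED_HOSTS = ['*']
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
EOF
if "$py" -m py_compile "$tmp"; then
    status="settings OK"
else
    status="syntax error"
fi
echo "$status"
rm -f "$tmp"
```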

7. Creating a virtual network

1) Source the admin script

~]#. admin-openrc

2) Create a shared external provider network, with physical network name provider, network type flat, and network name also provider

~]#openstack network create  --share --external --provider-physical-network provider --provider-network-type flat provider

3) On the provider network, create a subnet also named provider, with allocation pool 192.168.23.10-192.168.23.99 and subnet range 192.168.23.0/24

~]#openstack subnet create --network provider --allocation-pool start=192.168.23.10,end=192.168.23.99 --dns-nameserver 192.168.23.1 --gateway 192.168.23.1 --subnet-range 192.168.23.0/24 provider
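As a quick check on the pool above, the number of assignable addresses follows directly from the last octets of the start and end addresses:

```shell
# Allocation pool 192.168.23.10 - 192.168.23.99: count the addresses.
start=10  # last octet of the pool start
end=99    # last octet of the pool end
pool_size=$(( end - start + 1 ))
echo "$pool_size addresses in the allocation pool"
```

So at most 90 instances can receive an address from this pool at once.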

8. Two ways to create a virtual machine instance

Method 1: create it in the web UI

1) As admin, first create a flavor (instance type).
2) Go to the flavors page.
3) Click "Create Flavor".
4) Fill in the flavor's virtual CPU count, memory, disk size, and other details, then click "Create Flavor" again.
5) Review the newly created flavor.
6) Switch to the project view and open the instances page.
7) Click "Launch Instance".
8) Enter the name of the virtual machine to create.
9) Select the cirros image as the boot source.
10) Select the flavor created above, then click the launch button at the bottom right.
11) The instance is now being created; once it finishes spawning, creation is complete.

Method 2: create the virtual machine from the command line on the controller node

1) The create command

~]#openstack server create --flavor m1.nano --image cirros   --nic net-id=06a17cc8-57b8-4823-a6df-28da24061d8a --security-group default test-vm

2) Option notes

server create  #create a virtual machine instance
--flavor  #the flavor, i.e. the vCPU count, memory size, disk size, and other sizing of the VM
--nic net-id=  #the ID of the network to attach the VM to
--image  #the image name
--security-group default  #apply the default security group
test-vm  #the name of the new virtual machine
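Since the --nic net-id value must be a network UUID (obtained from `openstack network list`), a quick shape check can catch copy-paste mistakes. A sketch using the ID from the command above:

```shell
# Validate that a net-id string has the 8-4-4-4-12 hex UUID shape.
net_id="06a17cc8-57b8-4823-a6df-28da24061d8a"  # example ID from the command above
if [[ "$net_id" =~ ^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$ ]]; then
    check="valid UUID"
else
    check="not a UUID"
fi
echo "$check"
```

A well-formed UUID does not guarantee the network exists; it only rules out truncated or mangled IDs before the server create call.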