Week 26 Micro-Position Assignment

1. Describe the relationship between OpenStack, KVM, QEMU-KVM, libvirt, and Xen.
KVM is the lowest-level hypervisor: a Linux kernel module that virtualizes the CPU (and memory). On its own it lacks network and peripheral I/O emulation, so we cannot use it directly. QEMU-KVM is a complete emulator built on top of KVM; it supplies the missing network and I/O support. OpenStack does not control qemu-kvm directly; it uses a library called libvirt to control it indirectly. libvirt provides a cross-hypervisor abstraction: besides QEMU it can manage VMware, VirtualBox, Xen, and others. For the sake of this hypervisor portability, OpenStack talks only to libvirt rather than to qemu-kvm directly. libvirt also offers higher-level features, such as storage pool/volume management.
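A small illustration of this layering (a hedged sketch, assuming libvirt and qemu-kvm are installed): the virsh client talks to libvirtd, which in turn drives QEMU/KVM through its qemu driver.

[root@controller ~]# virsh --connect qemu:///system version    # reports the libvirt and hypervisor (QEMU/KVM) versions
[root@controller ~]# virsh --connect qemu:///system list --all # every domain libvirt manages via QEMU/KVM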

2. Build an OpenStack Mitaka cloud host
Current OpenStack components and their functions
Dashboard (Horizon):
Provides a web interface for interacting with the OpenStack services, such as launching instances, configuring IPs, and setting up access control.

Compute (Nova):
Manages the compute side of the whole environment: spawning instances on request, scheduling, and reclaiming virtual machines. It is the core component, the one that actually does the work.

Networking (Neutron):
Provides network services and connects the other services together. Gives users an API to define networks and attach resources to them. Supports multiple network vendors and emerging technologies such as VXLAN.

Object Storage (Swift): Stores and retrieves arbitrary unstructured data objects through a RESTful API, with high fault tolerance for data replication and scale-out. It is not used by mounting file directories; instead it writes objects and files across multiple drives to ensure data integrity throughout the server cluster.

Block Storage (Cinder):
Provides persistent block storage; its pluggable architecture simplifies creating and managing storage devices.

Identity (Keystone):
Provides authentication and authorization for the OpenStack services, and an access endpoint catalog for all of them.

Image service (Glance):
Provides storage and retrieval of virtual disk images, used by Compute when provisioning instances.

Telemetry (Ceilometer):
An extensible service providing monitoring, metering, billing, and statistics.

Orchestration (Heat):
Orchestrates services by composing templates.

Database service (Trove):
Provides scalable and reliable cloud database services for both relational and non-relational databases.

Data processing (Sahara):
OpenStack's big-data project; the integration of OpenStack with Hadoop.

[OpenStack environment preparation]

Software: VMware Workstation 11

Operating system: CentOS 7, kernel 3.10.0-327.el7.x86_64

Two machines: one as the control node (controller), one as the compute node (compute).

Basic setup:

  1. Give each machine two NICs: one facing outward to provide services, which needs Internet access (bridged or NAT mode), and one for the internal network that OpenStack uses for its own traffic (host-only mode in VMware).

  2. Set the matching hostname on each machine and add hostname-to-IP mappings in /etc/hosts (see the sketch after this list).

  3. Set up time synchronization, either locally or against a time server; CentOS 7 switched to chrony. Enable it at boot once adjusted.

  4. Note: for convenience, turn off the firewall and SELinux entirely to keep the network unobstructed.
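A minimal sketch of steps 2 to 4, using the hostnames and internal IPs (192.168.10.200 for controller, 192.168.10.201 for compute) that appear throughout this guide:

# /etc/hosts on both nodes
192.168.10.200 controller
192.168.10.201 compute

# time synchronization (CentOS 7 ships chrony)
[root@controller ~]# yum install -y chrony
[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# systemctl start chronyd.service

# disable the firewall and SELinux (lab convenience only)
[root@controller ~]# systemctl stop firewalld && systemctl disable firewalld
[root@controller ~]# setenforce 0    # also set SELINUX=disabled in /etc/selinux/config to persist across reboots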

At this point the basic environment configuration is done.
[OpenStack base installation]

CentOS carries OpenStack packages in its default repositories, so the latest release can be installed directly with yum; there is no need to hunt for source packages.

[root@ceshiapp_2 ~]# yum install centos-release-openstack-mitaka.noarch -y

Install the OpenStack client.

[root@ceshiapp_2 ~]# yum install -y python-openstackclient

OpenStack ships an openstack-selinux package to manage SELinux security policy between the OpenStack services. For a smooth run of this lab it is skipped here; in practice, decide whether to install it based on your requirements.

Note: unless otherwise stated, the installation steps below are all executed on the controller.

[Database and NoSQL installation]

[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL

# Install the database with yum; it will hold the databases created for each service later. Port: 3306
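MariaDB has to be running before the security wizard below can connect, so enable and start it first:

[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service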

[root@controller ~]# mysql_secure_installation  # security setup wizard: sets the root password, etc.

[root@controller my.cnf.d]# vim openstack.cnf

[mysqld]

bind-address= 0.0.0.0 # this address allows remote access from any host.

default-storage-engine= innodb

innodb_file_per_table

max_connections= 4096

collation-server= utf8_general_ci

character-set-server= utf8

# Edit the config file to set the character set and storage engine; adjust the rest as needed.

The yum-installed database is MariaDB, which is MySQL-compatible; you could also compile and install from source, a process not covered in detail here.

MariaDB [(none)]> grant all on *.* to root@'%' identified by "root";

First add a root user that may log in remotely.

[root@controller my.cnf.d]# yum install -y mongodb-server mongodb

Install the MongoDB database (it backs the Telemetry data store in Mitaka). Port: 27017
[root@controller my.cnf.d]# vim /etc/mongod.conf

bind_ip = 192.168.10.200 # bind the listening IP to the control node.

......

[root@controller my.cnf.d]# systemctl enable mongod.service

Created symlink from /etc/systemd/system/multi-user.target.wants/mongod.service to /usr/lib/systemd/system/mongod.service.

[root@controller my.cnf.d]# systemctl start mongod.service

Enable at boot and start.

[root@controller my.cnf.d]# yum install memcached python-memcached

[root@controller my.cnf.d]# systemctl enable memcached.service

Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.

[root@controller my.cnf.d]# systemctl start memcached.service

Install memcached and enable it at boot. Port: 11211. memcached is used to cache keystone tokens.
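A quick liveness check (a sketch; assumes the nc utility from the nmap-ncat package is available):

[root@controller ~]# echo stats | nc -w 1 127.0.0.1 11211 | head -3   # STAT lines mean memcached is answering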

[RabbitMQ message queue installation and configuration]

To keep OpenStack loosely coupled and its architecture flat, the message queue plays the role of traffic hub for the whole deployment, so a message queue is indispensable to OpenStack. It does not have to be RabbitMQ; other products work too.

[root@controller my.cnf.d]# yum install -y rabbitmq-server

[root@controller my.cnf.d]# systemctl enable rabbitmq-server.service

Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.

[root@controller my.cnf.d]# systemctl start rabbitmq-server.service

[root@controller my.cnf.d]# rabbitmqctl add_user openstack openstack
Creating user "openstack" ...

[root@controller my.cnf.d]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Setting permissions for user "openstack" in vhost "/" ...

# Install RabbitMQ, create the user, and grant it configure/write/read permissions.
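The account and its permissions can be confirmed with the standard rabbitmqctl queries:

[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions -p /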

[Identity service — keystone]

Next comes OpenStack's first service, the Identity service, keystone. It provides the essential authentication services and the service catalog for the project. Every other OpenStack service must register with and be validated by keystone after installation, after which keystone can track all OpenStack services installed on the network.

Before installing keystone, prepare its database environment.

[root@controller ~]# mysql -uroot -p

MariaDB [(none)]> CREATE DATABASE keystone;

Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant all on keystone.* to keystone@localhost identified by 'keystone';
Query OK, 0 rows affected (0.09 sec)

MariaDB [(none)]> grant all on keystone.* to keystone@'%' identified by 'keystone';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;

Query OK, 0 rows affected (0.05 sec)

# Generate the admin token

[root@controller ~]# openssl rand -hex 10

bc6aec97cc5e93a009b1

Install keystone, httpd, and mod_wsgi.

WSGI stands for Web Server Gateway Interface, a Python-defined interface for web servers and frameworks; it sits between the web application and the web server.

[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi

Edit the keystone configuration file.

[root@controller ~]# vim /etc/keystone/keystone.conf

[DEFAULT]
admin_token= bc6aec97cc5e93a009b1 # the admin token generated above

[database]

......

connection= mysql+pymysql://keystone:keystone@controller/keystone

[token]

......

provider= fernet # controls how tokens are created, validated, and revoked

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone # sync the keystone database

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone # generate fernet keys
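The fernet key repository can be inspected to confirm the setup (assuming the default location):

[root@controller ~]# ls /etc/keystone/fernet-keys/   # should list keys 0 (staged) and 1 (primary)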

[root@controller ~]# vim /etc/httpd/conf/httpd.conf  # edit the config file
ServerName controller
Next, configure the WSGI virtual hosts.

[root@controller ~]# vim /etc/httpd/conf.d/wsgi-keystone.conf

Listen 5000

Listen 35357

<VirtualHost *:5000>

WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-public

WSGIScriptAlias / /usr/bin/keystone-wsgi-public

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

ErrorLogFormat "%{cu}t %M"

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>

    Require all granted

</Directory>

</VirtualHost>

<VirtualHost *:35357>

WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

ErrorLogFormat "%{cu}t %M"

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>

    Require all granted

</Directory>

</VirtualHost>

Enable at boot.

[root@controller ~]# systemctl enable httpd.service

[root@controller ~]# systemctl start httpd.service

Export the admin token generated above.

[root@controller ~]# export OS_TOKEN=bc6aec97cc5e93a009b1

Configure the endpoint URL.

[root@controller ~]# export OS_URL=http://controller:35357/v3

Set the Identity API version.

[root@controller ~]# export OS_IDENTITY_API_VERSION=3

Create the identity service entry.

[root@controller ~]# openstack service create --name keystone --description "OpenStack Identity" identity

The Identity service manages the API endpoints of every service in the OpenStack deployment; the services locate one another through these catalog entries. For each service it provides three endpoint variants: admin, internal, and public. The admin endpoint by default allows managing users and tenants; internal and public do not. Because an OpenStack host generally has two NICs, one facing outward and one for internal traffic, the public API is what lets users manage their own cloud, the admin API generally serves administrators, and the internal API is for the OpenStack services talking among themselves. A service therefore usually registers several endpoints, offering different levels of access to different kinds of users.

[root@controller ~]# openstack endpoint create --region RegionOne identity public http://controller:5000/v3

[root@controller ~]# openstack endpoint create --region RegionOne identity internal http://controller:5000/v3

[root@controller ~]# openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
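With OS_TOKEN and OS_URL still exported, the new catalog entries can be sanity-checked on the spot:

[root@controller ~]# openstack endpoint list   # should show the public, internal, and admin identity endpoints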

Next, create the default domain, project (tenant), role, and user, and bind them together.
[root@controller ~]# openstack domain create --description "Default Domain" default

[root@controller ~]# openstack project create --domain default \

--description "Admin Project" admin

[root@controller ~]# openstack user create --domain default --password-prompt admin

User Password:

Repeat User Password:  # password: admin

[root@controller ~]# openstack role add --project admin --user admin admin

Do the same binding for the service and demo projects, users, and roles.

[root@controller ~]# openstack project create --domain default --description "Service Project" service

[root@controller ~]# openstack project create --domain default \

--description "Demo Project" demo

[root@controller ~]# openstack user create --domain default --password-prompt demo

User Password:

Repeat User Password:  # demo

[root@controller ~]# openstack role add --project demo --user demo user

Test that the Identity service installation works.

1. Remove admin_token_auth from these three pipeline sections of keystone-paste.ini:

[root@controller ~]# vim /etc/keystone/keystone-paste.ini

[pipeline:public_api]

......  # admin_token_auth removed from the pipeline line

[pipeline:admin_api]

......

[pipeline:api_v3]

......

2. Unset the OS_TOKEN and OS_URL variables set earlier:

unset OS_TOKEN OS_URL

Then run the verification; if no errors appear, it worked.

[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \

--os-project-domain-name default --os-user-domain-name default \

--os-project-name admin --os-username admin token issue

Password:  # admin

Verify the demo user:

[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \

--os-project-domain-name default --os-user-domain-name default \

--os-project-name demo --os-username demo token issue

Password:  # demo

Finally, create the client environment script (admin-openrc):

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=admin

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

If you are working across multiple servers, a script like this makes loading the admin API credentials convenient; its purpose is purely convenience.
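A matching script for the demo user can be kept alongside it (a sketch following the same pattern; the demo password was set above):

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2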

[Image service — glance]

As before, create the database and its user first.

[root@controller ~]# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;

Query OK, 1 row affected (0.04 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \

->  IDENTIFIED BY 'glance';

Query OK, 0 rows affected (0.28 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \

->  IDENTIFIED BY 'glance';

Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> flush privileges;

Query OK, 0 rows affected (0.08 sec)

[root@controller ~]# . admin-openrc  # Note: on any other server you must source this script to load the admin API credentials

Next, create the glance user and its bindings.

[root@controller ~]# openstack user create --domain default --password-prompt glance

User Password:

Repeat User Password:  # glance

Add the admin role to the glance user in the service project.

[root@controller ~]# openstack role add --project service --user glance admin

[root@controller ~]# openstack service create --name glance \

--description "OpenStack Image" image

Create the three endpoints for the image service.

[root@controller ~]# openstack endpoint create --region RegionOne \

image public http://controller:9292

[root@controller ~]# openstack endpoint create --region RegionOne \

image internal http://controller:9292

[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292

Install openstack-glance and edit its configuration files.

[root@controller ~]# yum install -y openstack-glance

[root@controller ~]# vim /etc/glance/glance-api.conf

[database]

......

connection= mysql+pymysql://glance:glance@controller/glance

[keystone_authtoken]

...

auth_uri= http://controller:5000

auth_url= http://controller:35357

memcached_servers= controller:11211

auth_type= password

project_domain_name= default

user_domain_name= default

project_name= service

username= glance

password= glance

......

[paste_deploy]

...

flavor= keystone

[glance_store]

...

stores= file,http

default_store= file

filesystem_store_datadir= /var/lib/glance/images/

[root@controller ~]# vim /etc/glance/glance-registry.conf

[database]

......

connection= mysql+pymysql://glance:glance@controller/glance

[keystone_authtoken]

...

auth_uri= http://controller:5000

auth_url= http://controller:35357

memcached_servers= controller:11211

auth_type= password

project_domain_name= default

user_domain_name= default

project_name= service

username= glance

password= glance

......

[paste_deploy]

...

flavor= keystone

Sync the database. The output warnings can be ignored.

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future.

/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1056: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade

expire_on_commit=expire_on_commit, _conf=conf)

/usr/lib/python2.7/site-packages/pymysql/cursors.py:146: Warning: Duplicate index 'ix_image_properties_image_id_name' defined on the table 'glance.image_properties'. This is deprecated and will be disallowed in a future release.

result = self._query(query)

Enable at boot and start.

[root@controller ~]# systemctl enable openstack-glance-api.service \

openstack-glance-registry.service

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.

[root@controller ~]# systemctl start openstack-glance-api.service \

openstack-glance-registry.service

Download an image, register it, and then list to confirm success.

[root@controller ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

[root@controller ~]# openstack image create "cirros" \

--file cirros-0.3.4-x86_64-disk.img \

--disk-format qcow2 --container-format bare\

--public

The image is active; success.

[root@controller ~]# openstack image list

+--------------------------------------+--------+--------+

| ID                                   | Name   | Status |

+--------------------------------------+--------+--------+

| 7f715e8d-6f29-4e78-ab6e-d3b973d20cf7 | cirros | active |

+--------------------------------------+--------+--------+

[Compute service — nova]

As before, create databases for the compute service; nova needs two.

MariaDB [(none)]> CREATE DATABASE nova_api; CREATE DATABASE nova;

Query OK, 1 row affected (0.09 sec)

Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \

->  IDENTIFIED BY 'nova';

Query OK, 0 rows affected (0.15 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';

Query OK, 0 rows affected (0.00 sec)

Create the nova user and bind the admin role.

[root@controller ~]# . admin-openrc

[root@controller ~]# openstack user create --domain default --password-prompt nova

User Password:

Repeat User Password:  # nova

[root@controller ~]# openstack role add --project service --user nova admin

Create the nova compute service.

[root@controller ~]# openstack service create --name nova \

--description "OpenStack Compute" compute

Create nova's API endpoints.

[root@controller ~]# openstack endpoint create --region RegionOne \

compute public http://controller:8774/v2.1/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

Install the nova components.

[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor \

openstack-nova-console openstack-nova-novncproxy \

openstack-nova-scheduler

Edit the configuration file; the changes are numerous, so compare carefully.

[root@controller ~]# vim /etc/nova/nova.conf

[DEFAULT]

......

enabled_apis= osapi_compute,metadata

rpc_backend=rabbit

auth_strategy=keystone

my_ip=192.168.10.200 # set the internal IP

use_neutron=true # enable support for the Networking service

firewall_driver=nova.virt.libvirt.firewall.NoopFirewallDriver

[api_database]

connection=mysql+pymysql://nova:nova@controller/nova_api

[database]

......

connection=mysql+pymysql://nova:nova@controller/nova

[oslo_messaging_rabbit]

......

rabbit_host= controller

rabbit_userid= openstack

rabbit_password= openstack # the password created earlier.

[keystone_authtoken]

......

auth_uri= http://controller:5000

auth_url= http://controller:35357

memcached_servers= controller:11211

auth_type= password

project_domain_name= default

user_domain_name= default

project_name= service

username= nova

password= nova # the user password created earlier

[vnc]

......

vncserver_listen= $my_ip

vncserver_proxyclient_address= $my_ip

[glance]

.......

api_servers= http://controller:9292

[oslo_concurrency]

......

lock_path= /var/lib/nova/tmp

Sync the databases; ignore the output.

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

Enable at boot and start.

systemctl enable openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

Configure the compute node.

[root@compute ~]# yum install -y centos-release-openstack-mitaka  # without this repo, installing openstack-nova-compute directly fails

[root@compute ~]# yum install -y openstack-nova-compute

Edit the configuration file.

[root@compute ~]# vim /etc/nova/nova.conf

[DEFAULT]

......

rpc_backend= rabbit

auth_strategy= keystone

my_ip=192.168.10.201

firewall_driver=nova.virt.libvirt.firewall.NoopFirewallDriver

use_neutron=true

[glance]

......

api_servers=http://controller:9292

[keystone_authtoken]

......

auth_uri= http://controller:5000

auth_url= http://controller:35357

memcached_servers= controller:11211

auth_type= password

project_domain_name= default

user_domain_name= default

project_name= service

username= nova

password= nova

[libvirt]

......

virt_type=qemu # if egrep -c '(vmx|svm)' /proc/cpuinfo prints 0, there is no hardware virtualization and this must be set to qemu

[oslo_concurrency]

lock_path=/var/lib/nova/tmp

[oslo_messaging_rabbit]

......

rabbit_host= controller

rabbit_userid= openstack

rabbit_password= openstack

[vnc]

......

enabled=true

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=$my_ip

novncproxy_base_url=http://controller:6080/vnc_auto.html
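The virt_type choice above hinges on hardware support; the check referenced in the comment can be run directly on the compute node (vmx is Intel VT-x, svm is AMD-V):

[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 => keep virt_type=qemu; 1 or more => virt_type=kvm also works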

Then enable at boot and start.

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service

Verify the result on the compute node; four services up means all is well.

[root@compute ~]# . admin-openrc

[root@compute ~]# openstack compute service list

+----+------------------+------------+----------+---------+-------+----------------------------+

| Id |Binary | Host | Zone | Status | State | Updated At |

+----+------------------+------------+----------+---------+-------+----------------------------+

| 1  | nova-conductor   | controller | internal | enabled | up    | 2016-08-16T14:30:42.000000 |

| 2  | nova-scheduler   | controller | internal | enabled | up    | 2016-08-16T14:30:33.000000 |

| 3  | nova-consoleauth | controller | internal | enabled | up    | 2016-08-16T14:30:39.000000 |

| 6  | nova-compute     | compute    | nova     | enabled | up    | 2016-08-16T14:30:33.000000 |

+----+------------------+------------+----------+---------+-------+----------------------------+

[Network service — neutron]

Neutron's main job is networking for the virtual environment. It was not always called neutron: it began as Quantum and was renamed because that name was already registered. Networking is the most complex function in OpenStack and the most tedious to configure; the fact that the official documentation devotes a whole separate networking guide to it says as much. It touches everything from L1 to L7, and all OpenStack services are tied together over the network. Like glance, neutron itself provides little functionality directly; most of its features come from plugins, apart from DHCP and the L3 agent, as the configuration steps below will show.

Neutron offers two deployment models: provider networks, where the operator supplies the (external) network, and self-service networks, which are private (internal) networks. The second model is a superset of the first, so it is the usual choice: external traffic uses the provider network, internal traffic uses self-service networks. This is also why an OpenStack deployment generally uses two NICs.

Configure the database and grants on the controller.

[root@controller ~]# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE neutron;

Query OK, 1 row affected (0.09 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';

Query OK, 0 rows affected (0.36 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;

Query OK, 0 rows affected (0.09 sec)

Create the neutron user and bind the admin role to it.

[root@controller ~]# openstack user create --domain default --password-prompt neutron

User Password:

Repeat User Password:  # neutron

[root@controller ~]# openstack role add --project service --user neutron admin

Create the neutron network service and its three API endpoints.

[root@controller ~]# openstack service create --name neutron \

--description "OpenStack Networking" network

[root@controller ~]# openstack endpoint create --region RegionOne \

network public http://controller:9696

[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696

[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696

Network configuration: install the components.

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Edit the neutron configuration file.

[root@controller ~]# vim /etc/neutron/neutron.conf

[DEFAULT]

core_plugin= ml2

service_plugins= router

allow_overlapping_ips= True

auth_strategy=keystone

rpc_backend=rabbit

notify_nova_on_port_status_changes= True

notify_nova_on_port_data_changes= True

[database]

......

connection= mysql+pymysql://neutron:neutron@controller/neutron

[keystone_authtoken]

......

auth_uri= http://controller:5000

auth_url= http://controller:35357

memcached_servers= controller:11211

auth_type= password

project_domain_name= default

user_domain_name= default

project_name= service

username= neutron

password= neutron

[nova]

......

auth_url= http://controller:35357

auth_type= password

project_domain_name= default

user_domain_name= default

region_name= RegionOne

project_name= service

username= nova

password= nova

[oslo_concurrency]

......

lock_path= /var/lib/neutron/tmp

Edit the ML2 configuration file.

[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

......

type_drivers= flat,vlan,vxlan

tenant_network_types= vxlan

mechanism_drivers= linuxbridge,l2population

extension_drivers= port_security

[ml2_type_flat]

......

flat_networks= provider

[ml2_type_vxlan]

...

vni_ranges= 1:1000

[securitygroup]

...

enable_ipset= True

Configure the bridge agent.

[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

......

physical_interface_mappings= provider:eno33554984 # the internal NIC

[vxlan]

......

enable_vxlan= true

local_ip= 192.168.10.200 # controller IP

l2_population= true

[securitygroup]

......

enable_security_group= true

firewall_driver=neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit the Layer 3 agent configuration file.

[root@controller ~]# vim /etc/neutron/l3_agent.ini

[DEFAULT]

......

interface_driver= neutron.agent.linux.interface.BridgeInterfaceDriver

external_network_bridge=

Edit the DHCP agent configuration file.

[root@controller ~]# vim /etc/neutron/dhcp_agent.ini

[DEFAULT]

......

interface_driver= neutron.agent.linux.interface.BridgeInterfaceDriver

dhcp_driver= neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

Edit the metadata agent configuration file.

vim /etc/neutron/metadata_agent.ini

[DEFAULT]

......

nova_metadata_ip= controller

metadata_proxy_shared_secret= meta # a shared secret for the metadata proxy; it will be used again below.

Set the neutron parameters in the nova configuration file.

[root@controller ~]# vim /etc/nova/nova.conf

[neutron]

......

url =http://controller:9696

auth_url= http://controller:35357

auth_type= password

project_domain_name= default

user_domain_name= default

region_name= RegionOne

project_name= service

username= neutron

password= neutron

service_metadata_proxy= True

metadata_proxy_shared_secret= meta

Create the plugin symlink.

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database.

[root@controller neutron]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \

--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

OK in the output means success.

Restart the nova-api service.

[root@controller neutron]# systemctl restart openstack-nova-api.service

Enable at boot and start the neutron services.

# systemctl enable neutron-server.service \

neutron-linuxbridge-agent.service neutron-dhcp-agent.service \

neutron-metadata-agent.service

# systemctl enable neutron-l3-agent.service

# systemctl start neutron-server.service \

neutron-linuxbridge-agent.service neutron-dhcp-agent.service \

neutron-metadata-agent.service

# systemctl start neutron-l3-agent.service

With the control node done, configure the network service on the compute node.

[root@compute ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset

Edit the neutron configuration file.

[root@compute ~]# vim /etc/neutron/neutron.conf

[DEFAULT]

rpc_backend= rabbit

auth_strategy= keystone

[keystone_authtoken]

......

auth_uri= http://controller:5000

auth_url= http://controller:35357

memcached_servers= controller:11211

auth_type= password

project_domain_name= default

user_domain_name= default

project_name= service

username= neutron

password= neutron

[oslo_concurrency]

.....

lock_path= /var/lib/neutron/tmp

[oslo_messaging_rabbit]

rabbit_host= controller

rabbit_userid= openstack

rabbit_password= openstack

Edit the Linux bridge agent configuration file.

[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

......

physical_interface_mappings= provider:eno33554984

[securitygroup]

.....

firewall_driver= neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

enable_security_group= true

[vxlan]

......

enable_vxlan= true

local_ip= 192.168.10.201 # internal IP

l2_population= true

Update the neutron settings in nova's configuration.

[root@compute ~]# vim /etc/nova/nova.conf

[neutron]

......

url =http://controller:9696

auth_url= http://controller:35357

auth_type= password

project_domain_name= default

user_domain_name= default

region_name= RegionOne

project_name= service

username= neutron

password= neutron

Restart the compute service.

# systemctl restart openstack-nova-compute.service

Enable the neutron agent at boot and start it.

# systemctl enable neutron-linuxbridge-agent.service

# systemctl start neutron-linuxbridge-agent.service

Verify the neutron services; five smiley faces means everything is OK.

[root@controller ~]# neutron agent-list

+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |

+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

| 7476f2bf-dced-44d6-8aa0-2aab951b1961 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |

| 82319909-a4b2-47d9-ae42-288282ad3972 | Linux bridge agent | compute    |                   | :-)   | True           | neutron-linuxbridge-agent |

| b465f93f-90c3-4cde-949c-47c205a74f65 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |

| e662d628-327b-4695-92e1-76da38806b05 | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |

| fa84105f-d497-4b64-9a53-3b1f074e459e | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |

+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

[root@compute ~]# neutron ext-list

+---------------------------+-----------------------------------------------+

| alias                     | name                                          |

+---------------------------+-----------------------------------------------+

| default-subnetpools       | Default Subnetpools                           |

| network-ip-availability   | Network IP Availability                       |

| network_availability_zone | Network Availability Zone                     |

| auto-allocated-topology   | Auto Allocated Topology Services              |

| ext-gw-mode               | Neutron L3 Configurable external gateway mode |

| binding                   | Port Binding                                  |

| agent                     | agent                                         |

| subnet_allocation         | Subnet Allocation                             |

| l3_agent_scheduler        | L3 Agent Scheduler                            |

| tag                       | Tag support                                   |

| external-net              | Neutron external network                      |

| net-mtu

This completes the deployment.
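Since the goal of this section was a working Mitaka cloud host, here is a hedged sketch of the remaining steps; the flat provider network, the 192.168.10.0/24 range, the gateway, and names like provider, m1.nano, and demo-instance are illustrative assumptions to adapt to your own environment:

[root@controller ~]# . admin-openrc

[root@controller ~]# neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider

[root@controller ~]# neutron subnet-create --name provider-sub --gateway 192.168.10.1 provider 192.168.10.0/24

[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

[root@controller ~]# openstack server create --flavor m1.nano --image cirros --nic net-id=PROVIDER_NET_ID demo-instance   # PROVIDER_NET_ID: the id column from neutron net-list

[root@controller ~]# openstack server list   # ACTIVE status means the cloud host is running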
