OpenStack Mitaka Installation

http://egon09.blog.51cto.com/9161406/1839667

 

Preface:

  Deploying OpenStack is actually simple, but that simplicity rests on a solid theoretical foundation. I have always believed that practice must be guided by theory. The build guides scattered around the web can all produce a basic private cloud, but have you noticed how much of their configuration is redundant? Why the repetition? Which settings actually belong where, and with what values? Many of those authors do not know themselves. Here I want to set the record straight: redundant configuration comes from not understanding what each option actually does.

If anything is unclear, you can email me: egonlin4573@gmail.com

 

Introduction: this walkthrough is a basic three-node deployment; a clustered setup will be written up later, time permitting.

I: Networks (no Cinder node in this lab):

 1. Management network: 172.16.209.0/24

 2. Data network: 1.1.1.0/24

 

II: Operating system: CentOS Linux release 7.2.1511 (Core)

 

III: Kernel: 3.10.0-327.el7.x86_64

 

IV: OpenStack version: Mitaka

 

Deployment diagram:

 

 

 

OpenStack Mitaka Deployment

Conventions:

0. The configuration below modifies or adds the relevant entries in the existing configuration files.

1. When editing configuration, never append a comment after an option on the same line; put comments on a line above or below it.

2. Always append options directly after the section header; do not edit the original commented-out lines in place.

 

 

PART1: Environment preparation

I:

1. Give every machine a static IP, add hosts-file entries on every machine, set each machine's hostname, and disable firewalld and SELinux.

/etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.209.115 controller01

172.16.209.117 compute01

172.16.209.119 network02

 

network02 has three NICs; the other two machines have two each.
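
For reference, a minimal sketch of the hostname/firewalld/SELinux steps on CentOS 7 (run on each node, substituting its own hostname; setenforce only disables SELinux until the next reboot, the sed makes it permanent):

hostnamectl set-hostname controller01
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config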

 

2. Configure a yum repo on every machine (optional; you can also use the default CentOS repos):

[mitaka]

name=mitaka repo

baseurl=http://172.16.209.100/mitaka-rpms/

enabled=1

gpgcheck=0

 

3. On every machine:

yum makecache && yum install vim net-tools -y && yum update -y

 

4. Deploy the time service

 

All nodes:

yum install chrony -y

Controller node:

Edit the configuration:

/etc/chrony.conf

server ntp.staging.kycloud.lan iburst

allow 172.16.209.0/24

(the allow line opens NTP access to the management-network subnet)

 

Start the service:

systemctl enable chronyd.service

systemctl start chronyd.service

 

 

Remaining nodes:

Edit the configuration:

/etc/chrony.conf

server 172.16.209.115 iburst

(point server at the controller's management IP)

 

Start the service:

systemctl enable chronyd.service

systemctl start chronyd.service

 

 

 

If the time zone is not Asia/Shanghai, change it:

# timedatectl set-local-rtc 1 # keep the hardware clock in local time (use 0 for UTC)

# timedatectl set-timezone Asia/Shanghai # set the system time zone to Shanghai

In fact, setting aside per-distribution differences, at a lower level changing the time zone is simpler than you might think:

# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

 

 

 

Verify:

Run on every machine:

chronyc sources

A * in the MS column means synchronization succeeded (it may take a few minutes; do not continue until time is synchronized).
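
A healthy entry looks roughly like this (illustrative output; the name, stratum, and offsets will differ):

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 172.16.209.115                3   6    17    43    -30us[ -45us] +/-  166ms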

 

II: Obtain the packages

If you use the custom repo above, the following CentOS and Red Hat steps can be skipped.

#Run on all nodes

CentOS:

yum install yum-plugin-priorities -y #enforce repo priorities so packages are not silently replaced from other repos

yum install centos-release-openstack-mitaka -y #run this step if you are not using my custom yum repo

Red Hat:

yum install yum-plugin-priorities -y

yum install https://rdoproject.org/repos/rdo-release.rpm -y

On Red Hat systems, remove the EPEL repo.

 

#Run on all nodes

 

yum upgrade

yum install python-openstackclient -y

yum install openstack-selinux -y

 

III: Deploy the MariaDB database

Controller node:

yum install mariadb mariadb-server python2-PyMySQL -y

 

Edit:

/etc/my.cnf.d/openstack.cnf

 

[mysqld]

#controller management-network IP
bind-address = 172.16.209.115

default-storage-engine = innodb

innodb_file_per_table

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

 

Start the service, then secure the installation:

systemctl enable mariadb.service

systemctl start mariadb.service

mysql_secure_installation

 

 

IV: Deploy MongoDB for the Telemetry service

Controller node:

yum install mongodb-server mongodb -y

 

Edit /etc/mongod.conf:

#controller management-network IP
bind_ip = 172.16.209.115

smallfiles = true

 

Start the service:

systemctl enable mongod.service

systemctl start mongod.service

 

 

V: Deploy the RabbitMQ message queue

Controller node:

yum install rabbitmq-server -y

 

Start the service:

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service

 

Create a RabbitMQ user and password:

rabbitmqctl add_user openstack che001

 

rabbitmqctl delete_user guest

 

Grant the new openstack user full permissions:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

 

Enable the management web UI:

rabbitmq-plugins enable rabbitmq_management

(verify at http://172.16.209.115:15672/ with user openstack, password che001)
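
You can also sanity-check the user and its permissions from the shell with standard rabbitmqctl subcommands:

rabbitmqctl list_users
rabbitmqctl list_permissions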

 

VI: Deploy the memcached cache (it caches tokens for the Keystone service)

Controller node:

yum install memcached python-memcached -y

cat /etc/sysconfig/memcached

PORT="11211"
USER="memcached"
MAXCONN="10240"
CACHESIZE="64"
#OPTIONS="-l 127.0.0.1,::1"
OPTIONS="-l 0.0.0.0"

 

Start the service:

systemctl enable memcached.service

systemctl start memcached.service
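
A quick check that memcached is listening on all interfaces (ss ships with iproute on CentOS 7):

ss -tnlp | grep 11211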

 

PART2: Deploying the Keystone identity service

 

I: Install and configure the service

1. Create the database and user

mysql -u root -p

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \

  IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \

  IDENTIFIED BY 'che001';

flush privileges;

 

2.yum install openstack-keystone httpd mod_wsgi -y

 

3. Edit /etc/keystone/keystone.conf

 

[DEFAULT]

admin_token = che001

#Recommended: generate the token with: openssl rand -hex 10

#admin_token is set manually here only to bootstrap the deployment: Keystone is not up yet, so normal authentication cannot work. Once Keystone is deployed, the admin_token authentication method will be removed.

 

[database]

connection = mysql+pymysql://keystone:che001@controller01/keystone

 

[token]

provider = fernet

#Token providers: UUID, PKI, PKIZ, or Fernet; see http://blog.csdn.net/miss_yang_cloud/article/details/49633719

 

 

4. Sync the schema to the database

su -s /bin/sh -c "keystone-manage db_sync" keystone
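
Optionally confirm that the tables were created (credentials as configured above):

mysql -u keystone -pche001 -h controller01 -e 'SHOW TABLES' keystone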

 

5. Initialize the Fernet keys

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

 

6. Configure the Apache service

Edit /etc/httpd/conf/httpd.conf:

ServerName controller01

 

Edit /etc/httpd/conf.d/wsgi-keystone.conf

Add the following configuration:

Listen 5000

Listen 35357

 

<VirtualHost *:5000>

    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

    WSGIProcessGroup keystone-public

    WSGIScriptAlias / /usr/bin/keystone-wsgi-public

    WSGIApplicationGroup %{GLOBAL}

    WSGIPassAuthorization On

    ErrorLogFormat "%{cu}t %M"

    ErrorLog /var/log/httpd/keystone-error.log

    CustomLog /var/log/httpd/keystone-access.log combined

 

    <Directory /usr/bin>

        Require all granted

    </Directory>

</VirtualHost>

 

<VirtualHost *:35357>

    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

    WSGIProcessGroup keystone-admin

    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin

    WSGIApplicationGroup %{GLOBAL}

    WSGIPassAuthorization On

    ErrorLogFormat "%{cu}t %M"

    ErrorLog /var/log/httpd/keystone-error.log

    CustomLog /var/log/httpd/keystone-access.log combined

 

    <Directory /usr/bin>

        Require all granted

    </Directory>

</VirtualHost>

 

7. Start the service:

systemctl enable httpd.service

systemctl start httpd.service

 

II: Create the service entity and API endpoints

 

1. First set the administrator environment variables, which grant the permissions for the creation steps below:

export OS_TOKEN=che001

#must match admin_token in /etc/keystone/keystone.conf above

export OS_URL=http://controller01:35357/v3

export OS_IDENTITY_API_VERSION=3

 

2. With the permissions granted above, create the identity service entity (the catalog service):

openstack service create --name keystone --description "OpenStack Identity" identity

#If you hit a 500 error, ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option, drop --description "OpenStack Identity"

3. For the service entity just created, create its three API endpoints:

 

openstack endpoint create --region RegionOne \

  identity public http://controller01:5000/v3

  

openstack endpoint create --region RegionOne \

  identity internal http://controller01:5000/v3

  

openstack endpoint create --region RegionOne \

  identity admin http://controller01:35357/v3
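
At this point the catalog can be inspected with the standard client commands (still using the temporary token credentials exported above):

openstack service list
openstack endpoint list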

  

III: Create a domain, project (tenant), user, and role, and associate the four

Create a common domain:

openstack domain create --description "Default Domain" default

 

Administrator: admin

openstack project create --domain default \

  --description "Admin Project" admin

  

openstack user create --domain default \

  --password-prompt admin

 

openstack role create admin

 

openstack role add --project admin --user admin admin

 

Regular user: demo

openstack project create --domain default \

  --description "Demo Project" demo

  

openstack user create --domain default \

  --password-prompt demo

 

openstack role create user

 

openstack role add --project demo --user demo user

 

Create the shared service project for the services that follow.

Explanation: every service deployed later needs four Keystone operations: 1. create a project, 2. create a user, 3. create a role, 4. associate them.

All later services share the single service project and all use the admin role, so in practice only operations 2 and 4 remain for each subsequent service.

openstack project create --domain default \

  --description "Service Project" service

  

  

IV: Verification:

Edit /etc/keystone/keystone-paste.ini

In the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections,

remove: admin_token_auth

With Keystone deployed, tokens can now be issued via username/password authentication, so the manually configured admin_token is no longer needed.

 

unset OS_TOKEN OS_URL

 

openstack --os-auth-url http://controller01:35357/v3 \

  --os-project-domain-name default --os-user-domain-name default \

  --os-project-name admin --os-username admin token issue

Password: (enter the password set for admin via openstack user create --domain default --password-prompt admin)

 

 

V: Create client environment scripts

Administrator: admin-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=che001

export OS_AUTH_URL=http://controller01:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

 

Regular user demo: demo-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=che001

export OS_AUTH_URL=http://controller01:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

 

Result:

source admin-openrc 

[root@controller01 ~]# openstack token issue

 

 

PART3: Deploying the image service

I: Install and configure the service

1. Create the database and user

mysql -u root -p

CREATE DATABASE glance;

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \

  IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \

  IDENTIFIED BY 'che001';

flush privileges;

  

2. Keystone operations:

As noted above: every subsequent service goes into the single service project; for each service, create a user, give it the admin role, and associate them.

. admin-openrc

openstack user create --domain default --password-prompt glance

 

openstack role add --project service --user glance admin

 

Create the service entity:

openstack service create --name glance \

  --description "OpenStack Image" image

  

Create the endpoints:

openstack endpoint create --region RegionOne \

  image public http://controller01:9292

  

 

openstack endpoint create --region RegionOne \

  image internal http://controller01:9292

 

openstack endpoint create --region RegionOne \

  image admin http://controller01:9292

 

3. Install the packages

yum install openstack-glance -y

 

4. Edit the configuration:

Edit /etc/glance/glance-api.conf

 

[database]

#This database connection is used to create the schema during db_sync; without it the tables cannot be generated
#Leaving [database] unset in glance-api does not affect VM creation, but it breaks the metadata-definitions API
#otherwise the log shows: ERROR glance.api.v2.metadef_namespaces

connection = mysql+pymysql://glance:che001@controller01/glance

 

[keystone_authtoken]

auth_url = http://controller01:5000

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = che001

 

[paste_deploy]

flavor = keystone

 

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

 

Edit /etc/glance/glance-registry.conf

 

[database]

#This database connection is what glance-registry uses to retrieve image metadata

connection = mysql+pymysql://glance:che001@controller01/glance

 

 

Create the image directory:

mkdir /var/lib/glance/images/

chown glance. /var/lib/glance/images/

 

Sync the database (this prints some deprecation warnings mentioning 'future'; ignore them):

su -s /bin/sh -c "glance-manage db_sync" glance

 

Start the services:

systemctl enable openstack-glance-api.service \

  openstack-glance-registry.service

systemctl start openstack-glance-api.service \

  openstack-glance-registry.service

  

  

II: Verification:

. admin-openrc

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

(local mirror: wget http://172.16.209.100/cirros-0.3.4-x86_64-disk.img)

 

openstack image create "cirros" \

  --file cirros-0.3.4-x86_64-disk.img \

  --disk-format qcow2 --container-format bare \

  --public

  

openstack image list
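
Expected output (illustrative; the ID will differ):

+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+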

 

 

PART4: Deploying the Compute service

I: Controller node configuration

1. Create the databases and user

CREATE DATABASE nova_api;

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \

  IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \

  IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \

  IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \

  IDENTIFIED BY 'che001';

  

flush privileges;

 

2. Keystone operations

 

. admin-openrc

openstack user create --domain default \

  --password-prompt nova

openstack role add --project service --user nova admin

openstack service create --name nova \

  --description "OpenStack Compute" compute

  

openstack endpoint create --region RegionOne \

  compute public http://controller01:8774/v2.1/%\(tenant_id\)s

  

openstack endpoint create --region RegionOne \

  compute internal http://controller01:8774/v2.1/%\(tenant_id\)s

  

openstack endpoint create --region RegionOne \

  compute admin http://controller01:8774/v2.1/%\(tenant_id\)s

  

  

3. Install the packages:

yum install openstack-nova-api openstack-nova-conductor \

  openstack-nova-console openstack-nova-novncproxy \

  openstack-nova-scheduler -y

 

4. Edit the configuration:

Edit /etc/nova/nova.conf

 

[DEFAULT]

enabled_apis = osapi_compute,metadata

rpc_backend = rabbit

auth_strategy = keystone

#management-network IP below

my_ip = 172.16.209.115

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

 

[api_database]

connection = mysql+pymysql://nova:che001@controller01/nova_api

 

[database]

connection = mysql+pymysql://nova:che001@controller01/nova

 

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

 

[keystone_authtoken]

auth_url = http://controller01:5000

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = che001

 

 

[vnc]

#management-network IP below

vncserver_listen = 172.16.209.115

#management-network IP below

vncserver_proxyclient_address = 172.16.209.115

 

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

 

5. Sync the databases (ignore the 'future' deprecation warnings):

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage db sync" nova

 

6. Start the services

systemctl enable openstack-nova-api.service \

  openstack-nova-consoleauth.service openstack-nova-scheduler.service \

  openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \

  openstack-nova-consoleauth.service openstack-nova-scheduler.service \

  openstack-nova-conductor.service openstack-nova-novncproxy.service

  

II: Compute node configuration

 

1. Install the packages:

yum install openstack-nova-compute libvirt-daemon-lxc -y

 

2. Edit the configuration:

Edit /etc/nova/nova.conf

 

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

#compute node management-network IP

my_ip = 172.16.209.117

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

 

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

 

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

#compute node management-network IP

vncserver_proxyclient_address = 172.16.209.117

#controller node management-network IP

novncproxy_base_url = http://172.16.209.115:6080/vnc_auto.html

 

[glance]

api_servers = http://controller01:9292

 

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

 

3. If deploying nova-compute on a machine without hardware virtualization support, first check:

egrep -c '(vmx|svm)' /proc/cpuinfo

If the result is 0, edit /etc/nova/nova.conf:

[libvirt]

virt_type = qemu

 

4. Start the services

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service

 

III: Verification

On the controller node:

[root@controller01 ~]# source admin-openrc

[root@controller01 ~]# openstack compute service list

+----+------------------+--------------+----------+---------+-------+----------------------------+

| Id | Binary           | Host         | Zone     | Status  | State | Updated At                 |

+----+------------------+--------------+----------+---------+-------+----------------------------+

|  1 | nova-consoleauth | controller01 | internal | enabled | up    | 2016-08-17T08:51:37.000000 |

|  2 | nova-conductor   | controller01 | internal | enabled | up    | 2016-08-17T08:51:29.000000 |

|  8 | nova-scheduler   | controller01 | internal | enabled | up    | 2016-08-17T08:51:38.000000 |

| 12 | nova-compute     | compute01    | nova     | enabled | up    | 2016-08-17T08:51:30.000000 |
+----+------------------+--------------+----------+---------+-------+----------------------------+

 

 

PART5: Deploying the networking service

I: Controller node configuration

1. Create the database and user

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \

  IDENTIFIED BY 'che001';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \

  IDENTIFIED BY 'che001';

flush privileges;

 

2. Keystone operations

. admin-openrc

 

openstack user create --domain default --password-prompt neutron

 

openstack role add --project service --user neutron admin

 

openstack service create --name neutron \

  --description "OpenStack Networking" network

 

openstack endpoint create --region RegionOne \

  network public http://controller01:9696

  

openstack endpoint create --region RegionOne \

  network internal http://controller01:9696

  

openstack endpoint create --region RegionOne \

  network admin http://controller01:9696

 

 

3. Install the packages

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which  -y

 

4. Configure the server component

Edit /etc/neutron/neutron.conf and complete the following:

[DEFAULT]

core_plugin = ml2

service_plugins = router

#the next option enables overlapping IP addresses

allow_overlapping_ips = True

rpc_backend = rabbit

auth_strategy = keystone

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

 

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

 

[database]

connection = mysql+pymysql://neutron:che001@controller01/neutron

 

[keystone_authtoken]

auth_url = http://controller01:5000

memcached_servers = controller01:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = che001

 

[nova]

auth_url = http://controller01:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = che001

 

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

 

Edit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan,vxlan,gre

tenant_network_types = vxlan

mechanism_drivers = openvswitch,l2population

extension_drivers = port_security

 

[ml2_type_flat]

flat_networks = provider

 

[ml2_type_vxlan]

vni_ranges = 1:1000

 

[securitygroup]

enable_ipset = True

 

 

Edit /etc/nova/nova.conf:

[neutron]

url = http://controller01:9696

auth_url = http://controller01:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = che001

service_metadata_proxy = True

metadata_proxy_shared_secret = che001

 

5. Create the plugin symlink

 

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 

6. Sync the database (ignore the 'future' deprecation warnings):

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \

 --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

 

7. Restart the Nova API service

systemctl restart openstack-nova-api.service

 

8. Start the Neutron service

systemctl enable neutron-server.service

systemctl start neutron-server.service

 

II: Network node configuration

 

1. Edit /etc/sysctl.conf

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

 

2. Apply the changes immediately:

sysctl -p

 

3. Install the packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

 

4. Configure the components

Edit /etc/neutron/neutron.conf

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

rpc_backend = rabbit

auth_strategy = keystone

 

 

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

 

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

 

5. Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]

#the IP below is the network node's data-network IP

local_ip=1.1.1.119

bridge_mappings=external:br-ex

 

[agent]

tunnel_types=gre,vxlan

l2_population=True

prevent_arp_spoofing=True

 

 

6. Configure the L3 agent. Edit /etc/neutron/l3_agent.ini:

[DEFAULT]

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver

external_network_bridge=br-ex

 

7. Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:

 

[DEFAULT]

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata=True

 

8. Configure the metadata agent. Edit /etc/neutron/metadata_agent.ini:

[DEFAULT]

nova_metadata_ip=controller01

metadata_proxy_shared_secret=che001

 

9. Start the services

On the network node:

systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \

neutron-dhcp-agent.service neutron-metadata-agent.service

 

systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \

neutron-dhcp-agent.service neutron-metadata-agent.service

 

 

 

10. Create the external bridge

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth2

(br-ex and eth2 need no IP configured; with three NICs you can skip what follows.)
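
To confirm the bridge and its port, list the Open vSwitch topology:

ovs-vsctl show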

 

Note: if NICs are limited and you want to use the network node's management NIC as the physical NIC bound to br-ex, remove the IP from the management NIC and create a br-ex config file that takes over the original management IP:

ovs-vsctl add-br br-ex

[root@network01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno16777736

DEVICE="eno16777736"

TYPE=Ethernet

ONBOOT="yes"

BOOTPROTO="none"

[root@network01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex 

DEVICE=br-ex

TYPE=Ethernet

ONBOOT="yes"

BOOTPROTO="none"

#MAC address of eno16777736

HWADDR=bc:ee:7b:78:7b:a7

IPADDR=172.16.209.10

GATEWAY=172.16.209.1

NETMASK=255.255.255.0

DNS1=202.106.0.20

DNS2=8.8.8.8

NM_CONTROLLED=no #changes take effect when the network service is restarted/reloaded, not immediately

 

systemctl restart network

ovs-vsctl add-port br-ex eno16777736

 

 

III: Compute node configuration

1. Edit /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

 

2.sysctl -p

 

3.yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

 

4. Edit /etc/neutron/neutron.conf

 

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

 

 

[oslo_messaging_rabbit]

rabbit_host = controller01

rabbit_userid = openstack

rabbit_password = che001

 

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

 

5. Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]

#the IP below is the compute node's data-network IP

local_ip = 1.1.1.117

#bridge_mappings = vlan:br-vlan

[agent]

tunnel_types = gre,vxlan

l2_population = True

prevent_arp_spoofing = True

 

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = True

 

6. Edit /etc/nova/nova.conf

 

[neutron]

url = http://controller01:9696

auth_url = http://controller01:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = che001

 

7. Start the services

systemctl enable neutron-openvswitch-agent.service

systemctl start neutron-openvswitch-agent.service

systemctl restart openstack-nova-compute.service
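
Back on the controller, confirm that all agents registered (neutron agent-list is the Mitaka-era client command; live agents show :-) in the alive column):

. admin-openrc
neutron agent-list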

 

 

 

 

PART6: Deploying the Dashboard

On the controller node:

1. Install the package

yum install openstack-dashboard -y

 

2. Configure /etc/openstack-dashboard/local_settings

 

 

OPENSTACK_HOST = "controller01"

 

ALLOWED_HOSTS = ['*', ]

 

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

 

CACHES = {

    'default': {

         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

         'LOCATION': 'controller01:11211',

    }

}

 

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

 

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

 

OPENSTACK_API_VERSIONS = {

    "identity": 3,

    "image": 2,

    "volume": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

TIME_ZONE = "UTC"

 

3. Start the services

systemctl enable httpd.service memcached.service

systemctl restart httpd.service memcached.service

 

 

4. Verify:

http://172.16.209.115/dashboard

 

 

 

 

 

Summary:

  1. Only the API layer talks to Keystone, so don't scatter Keystone settings everywhere.

  2. When an instance is built, nova-compute is what calls the other services' APIs, so there is nothing to configure for those calls on the controller node.

  3. ML2 is Neutron's core plugin; it only needs to be configured on the controller node.

  4. The network node only needs its agents configured.

  5. Each component's API does more than accept requests; among other things, it validates them. The controller's nova.conf needs Neutron's API and credentials because nova boot must validate the network the user submits; the controller's neutron.conf needs Nova's API and credentials because deleting a network port means asking nova-api whether an instance is still using it; the compute node's nova.conf needs [neutron] because nova-compute asks neutron-server to create ports. "Port" here means a port on the virtual switch.

  6. Still unsure why? If none of this makes sense, study the OpenStack inter-component communication mechanisms and the instance-creation workflow, or come take my course; most blog posts don't teach the real thing.

     

Network troubleshooting:

On the network node:

[root@network02 ~]# ip netns show

qdhcp-e63ab886-0835-450f-9d88-7ea781636eb8

qdhcp-b25baebb-0a54-4f59-82f3-88374387b1ec

qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83

[root@network02 ~]# ip netns exec qrouter-ff2ddb48-86f7-4b49-8bf4-0335e8dbaa83 bash

[root@network02 ~]# ping -c2 www.baidu.com

PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.

64 bytes from 61.135.169.125: icmp_seq=1 ttl=52 time=33.5 ms

64 bytes from 61.135.169.125: icmp_seq=2 ttl=52 time=25.9 ms

 

 

If the ping fails, exit the namespace and rebuild the bridges:

ovs-vsctl del-br br-ex

ovs-vsctl del-br br-int

ovs-vsctl del-br br-tun

ovs-vsctl add-br br-int

ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex eth0

systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service \

neutron-dhcp-agent.service neutron-metadata-agent.service

 

Inspect virtual devices:

brctl show

(usable on both the network node and compute nodes)

 

Basic OpenStack operations

My environment:

 

Creating a network as the admin user

Admin / System / Networks / Create Network

 

If the provider network is a flat network, the physical-network name must match the bridge_mappings value configured earlier (e.g. external); if you named it something else, enter that name instead.

 

(For reference, from step 5 of the network-node configuration, /etc/neutron/plugins/ml2/openvswitch_agent.ini:)

[ovs]

#the IP below is the network node's data-network IP

local_ip=1.1.1.119

bridge_mappings=external:br-ex
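
A hedged CLI equivalent of creating such a flat external network (Mitaka-era neutron client syntax; the names ext-net/ext-subnet and the allocation pool are illustrative, not from the original text):

neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net 172.16.209.0/24 --name ext-subnet --disable-dhcp --gateway 172.16.209.1 --allocation-pool start=172.16.209.200,end=172.16.209.220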

 

 

 

 

As the regular user demo:

Create a tenant network

Create a router

Create an instance attached to the demo-net network

Click vm1 and connect to its console

Log in with user cirros, password cubswin:)

ping an external domain or address to check connectivity

 

 View the router

 

Bind a floating IP so that external hosts can reach the instance
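
A sketch of the same operation from the CLI (legacy neutron/nova client commands of that era; ext-net, vm1, and the address are illustrative assumptions):

neutron floatingip-create ext-net
#associate the address returned above with the instance
nova floating-ip-associate vm1 172.16.209.201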

 

 

Allow external SSH access to the instance
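
In Horizon this is a security-group rule; a minimal CLI sketch with the legacy nova client (default is the assumed group name, swap in ssh-sec if you created one):

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0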

 

By default, tenants' networks cannot communicate with each other; to allow it, the admin must mark the networks as shared and route between them with a router.

Create a project ops and two users ops1 and ops2; create a group opsgroup, add the two users to it, and add opsgroup to the ops project.

Logged in as ops1, create a network ops with subnet 172.16.10.0/24.

As demo, create a network demo-sub2 with subnet 172.16.1.0/24.

As admin, create a router core-router, mark the demo-sub2 and ops networks as shared, and in the network topology connect both networks to core-router.

As demo, create instance vm2 attached to demo-sub2 and in the ssh-sec security group; assume its IP is 172.16.1.3.

As ops1, create instance vm_ops1 attached to the ops network and log in; assume its DHCP IP is 172.16.10.3.

ping 172.16.1.3

ssh cirros@172.16.1.3 to check it works

 
