Preface: read this before building
This document walks through building a distributed Ocata-release OpenStack (1 controller + N compute + 1 cinder).
The OpenStack release is Ocata.
While building, follow the configuration exactly as written; if you are not familiar with a step, do not add extra configuration or settings of your own!
Following this document gives you a basic OpenStack environment for learning. Never use it in production! A production environment requires rigorous testing plus additional advanced configuration!
Copyright belongs to DevOps運維; selling, copying, or redistributing it without permission is prohibited!
While reading, note that the parts in red are important hints; the other colored fonts and parameters also deserve extra attention!
Some commands are long and wrap across lines; do not type only half of one. Every command is prefixed with #.
Part 1: Environment preparation
1. Prerequisites
Install VMware Workstation 12.5.0 and create three virtual machines, each with at least 4 CPU cores and 4 GB of memory.
Controller and Compute node configuration:
CPU:4c
MEM:4G
Disk:200G
Network: 3 NICs (eth0 eth1 eth2; the first NIC is the external NIC, the second is the admin NIC, the third is the tunnel NIC)
Cinder node configuration:
CPU:4c
MEM:4G
Disk: 200G + 50G (the 50G can be resized to suit your needs)
Network: 2 NICs (eth0 eth1; the first NIC is the external NIC, the second is the admin NIC; the cinder node does not need a tunnel NIC)
Install CentOS 7.2 (minimal install; do not yum update to 7.3! Under 7.3, Ocata still has an iPXE boot problem when starting instances), then disable the firewall and SELinux:
# systemctl stop firewalld.service
# systemctl disable firewalld.service
Install some basic tools. Because the system was installed minimally, commands such as ifconfig and vim are missing; run the following to install them:
# yum install net-tools wget vim ntpdate bash-completion -y
2. Change the hostname
# hostnamectl set-hostname controller
On a compute node, run:
# hostnamectl set-hostname compute1
On the cinder node, run:
# hostnamectl set-hostname cinder
Then configure /etc/hosts on every node as follows:
10.1.1.150 controller
10.1.1.151 compute1
10.1.1.152 cinder
3. Synchronize the system time with NTP
# ntpdate cn.pool.ntp.org
Then run the date command to check whether the time synchronized successfully.
Note: this step matters. OpenStack is a distributed architecture, and the nodes must not drift apart in time!
On a freshly installed CentOS system the clock is often out of step with the current Beijing time, so you must run this command!
Also add the command to the boot sequence:
# echo "ntpdate cn.pool.ntp.org" >> /etc/rc.d/rc.local
# chmod +x /etc/rc.d/rc.local
4. Configure IPs; network plan
Network plan:
external : 9.110.187.0/24
admin mgt : 10.1.1.0/24
tunnel:10.2.2.0/24
storage: 10.3.3.0/24 (not used in our environment; you would need it if you integrated Ceph)
On the controller VM, configure the first NIC (external) with IP 9.110.187.150,
the second NIC (admin) with IP 10.1.1.150,
and the third NIC (tunnel) with IP 10.2.2.150.
On the compute1 VM, configure the first NIC (external) with IP 9.110.187.151,
the second NIC (admin) with IP 10.1.1.151,
and the third NIC (tunnel) with IP 10.2.2.151.
On the cinder VM, configure the first NIC (external) with IP 9.110.187.152
and the second NIC (admin) with IP 10.1.1.152 (as noted above, the cinder node does not need a tunnel NIC).
The three networks explained:
1. external: this network connects to the outside world. For users to reach the virtual machines in your OpenStack environment, one segment must face outward, and users access the VMs through it. In a public cloud this range is usually public IP space (if it were not, how would users reach your VMs?).
2. admin mgt: this segment is the management network. The OpenStack components need a network over which to talk to each other, to the database, and to the Message Queue; that is what this segment is for. Put simply, it is the IP range OpenStack itself uses.
3. tunnel: the tunnel network. When OpenStack uses GRE or VXLAN mode, it needs a tunnel network; tunneling replaces switched links with point-to-point encapsulation, and in OpenStack this is the path that carries the virtual machines' network traffic.
You can of course put all three networks on one segment, but only for a test or learning environment; a real production environment must keep them separate. For self-study we usually use VMware Workstation, and a newly created VM has only one NIC by default; some students freeze at that point, seeing three networks and only one NIC. So after creating each VM, add two more NICs and build the learning environment the way production would require.
In production the three networks must be separated, and environments with distributed storage add yet another network, the storage segment. Separating the networks brings traffic isolation, security, and freedom from mutual interference. Think about it: if everything sat on one segment, and user access to the VMs shared your OpenStack management segment, that would be far too insecure.
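As a sanity check for the plan above, the four planes and the per-node addresses can be verified with a short script. A minimal sketch using the standard library (the subnets are exactly the ones listed in this plan):

```python
import ipaddress

# Network planes from the plan above (storage is unused in this guide).
PLANES = {
    "external": ipaddress.ip_network("9.110.187.0/24"),
    "admin":    ipaddress.ip_network("10.1.1.0/24"),
    "tunnel":   ipaddress.ip_network("10.2.2.0/24"),
    "storage":  ipaddress.ip_network("10.3.3.0/24"),
}

def plane_of(ip: str) -> str:
    """Return which plane an address belongs to, or 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for name, net in PLANES.items():
        if addr in net:
            return name
    return "unknown"

# The controller's three NICs land on three different planes:
for ip in ("9.110.187.150", "10.1.1.150", "10.2.2.150"):
    print(ip, "->", plane_of(ip))
```

Running a check like this before touching any config file catches typos such as an admin IP accidentally written into the tunnel range.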
5. Build an internal OpenStack package repository
For how to build the internal repository, see the video.
Part 2: Install MariaDB
1. Install the MariaDB database
# yum install -y MariaDB-server MariaDB-client
2. Configure MariaDB
# vim /etc/my.cnf.d/mariadb-openstack.cnf
Add the following in the [mysqld] section:
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
bind-address = 10.1.1.150
3. Start the database and enable it at boot
# systemctl enable mariadb.service
# systemctl restart mariadb.service
# systemctl status mariadb.service
# systemctl list-unit-files |grep mariadb.service
4. Secure MariaDB and set a password
# mysql_secure_installation
Press Enter first, then Y to set the MySQL password, then keep pressing y to finish.
Note: throughout this document the password is devops; feel free to change it.
Part 3: Install RabbitMQ
1. Install erlang on every node
# yum install -y erlang
2. Install RabbitMQ on every node
# yum install -y rabbitmq-server
3. Start rabbitmq on every node and enable it at boot
# systemctl enable rabbitmq-server.service
# systemctl restart rabbitmq-server.service
# systemctl status rabbitmq-server.service
# systemctl list-unit-files |grep rabbitmq-server.service
4. Create the openstack user. Replace the password with a suitable one of your own (this document uses devops throughout).
# rabbitmqctl add_user openstack devops
5. Grant the openstack user permissions
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# rabbitmqctl set_user_tags openstack administrator
# rabbitmqctl list_users
6. Check the listening port; RabbitMQ listens on port 5672
# netstat -ntlp |grep 5672
7. List the RabbitMQ plugins
# /usr/lib/rabbitmq/bin/rabbitmq-plugins list
8. Enable the relevant RabbitMQ plugins
# /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management mochiweb webmachine rabbitmq_web_dispatch amqp_client rabbitmq_management_agent
After enabling the plugins, restart the rabbitmq service:
# systemctl restart rabbitmq-server
In a browser, open http://9.110.187.150:15672 (default username/password: guest/guest).
This interface gives a very direct view of RabbitMQ's health and load.
9. Check the RabbitMQ status
Log in at http://9.110.187.150:15672 in a browser with openstack/devops; you can also view the status information there.
Part 4: Install and configure Keystone
1. Create the keystone database
CREATE DATABASE keystone;
2. Create the keystone database user (and root grant) and assign privileges
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'devops';
Note: replace devops with your own database password.
3. Install keystone and memcached
# yum -y install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached openstack-utils
4. Start the memcached service and enable it at boot
# systemctl enable memcached.service
# systemctl restart memcached.service
# systemctl status memcached.service
5. Configure the /etc/keystone/keystone.conf file
# cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
# >/etc/keystone/keystone.conf
# openstack-config --set /etc/keystone/keystone.conf DEFAULT transport_url rabbit://openstack:devops@controller
# openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:devops@controller/keystone
# openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
# openstack-config --set /etc/keystone/keystone.conf cache enabled true
# openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller:11211
# openstack-config --set /etc/keystone/keystone.conf memcache servers controller:11211
# openstack-config --set /etc/keystone/keystone.conf token expiration 3600
# openstack-config --set /etc/keystone/keystone.conf token provider fernet
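The transport_url and connection values written above both follow standard URL syntax (AMQP broker and database DSN respectively), so a quick way to convince yourself which part is the user, password, host, and database is to parse them. A small illustration with the standard library:

```python
from urllib.parse import urlsplit

# The two URLs written into keystone.conf above.
transport_url = "rabbit://openstack:devops@controller"
db_connection = "mysql://keystone:devops@controller/keystone"

t = urlsplit(transport_url)
d = urlsplit(db_connection)

# scheme = driver, userinfo = credentials, host = the "controller" alias
# from /etc/hosts, path = database name (DB URL only).
print("MQ:", t.scheme, t.username, t.password, t.hostname)
print("DB:", d.scheme, d.username, d.hostname, d.path)
```

If authentication against RabbitMQ or MariaDB fails later, mis-typed credentials inside these URLs are the first thing to check.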
6. Configure the httpd.conf file and the memcached file
# sed -i "s/#ServerName www.example.com:80/ServerName controller/" /etc/httpd/conf/httpd.conf
# sed -i 's/OPTIONS*.*/OPTIONS="-l 127.0.0.1,::1,10.1.1.150"/' /etc/sysconfig/memcached
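The second sed line replaces memcached's OPTIONS line so that it listens on localhost and the controller's admin IP. For illustration, the same substitution can be reproduced with Python's re module (a sketch; the real file is /etc/sysconfig/memcached):

```python
import re

# What the sed one-liner above does: any line starting with OPTIONS is
# rewritten to bind memcached to loopback plus the admin-network address.
before = 'OPTIONS=""'
after = re.sub(r'OPTIONS*.*', 'OPTIONS="-l 127.0.0.1,::1,10.1.1.150"', before)
print(after)
```

Binding to the admin IP (rather than 0.0.0.0) keeps the token cache off the external network.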
7. Hook keystone up to httpd
# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
8. Synchronize the database
# su -s /bin/sh -c "keystone-manage db_sync" keystone
9. Initialize fernet keys
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
10. Start httpd and enable it at boot
# systemctl enable httpd.service
# systemctl restart httpd.service
# systemctl status httpd.service
# systemctl list-unit-files |grep httpd.service
11. Create the admin user and role
# keystone-manage bootstrap \
--bootstrap-password devops \
--bootstrap-username admin \
--bootstrap-project-name admin \
--bootstrap-role-name admin \
--bootstrap-service-name keystone \
--bootstrap-region-id RegionOne \
--bootstrap-admin-url http://controller:35357/v3 \
--bootstrap-internal-url http://controller:35357/v3 \
--bootstrap-public-url http://controller:5000/v3
Verify:
# openstack project list --os-username admin --os-project-name admin --os-user-domain-id default --os-project-domain-id default --os-identity-api-version 3 --os-auth-url http://controller:5000 --os-password devops
12. Create the admin user's environment variables: create the file /root/admin-openrc with the following contents:
# vim /root/admin-openrc
Add the following:
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PASSWORD=devops
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_URL=http://controller:35357/v3
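These export lines simply populate environment variables that the openstack CLI reads on startup; sourcing the file before each session is what makes the later commands authenticate as admin. A hypothetical parser, for illustration only, showing how "export KEY=value" lines map to an environment dictionary:

```python
# Illustrative subset of the admin-openrc file above.
OPENRC = """\
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PASSWORD=devops
export OS_AUTH_URL=http://controller:35357/v3
"""

def parse_openrc(text: str) -> dict:
    """Hypothetical helper: turn 'export KEY=value' lines into a dict."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("export ") and "=" in line:
            key, _, value = line[len("export "):].partition("=")
            env[key] = value
    return env

env = parse_openrc(OPENRC)
print(env["OS_AUTH_URL"])
```

In the real shell, `source /root/admin-openrc` does this work, and `env | grep OS_` shows what the CLI will see.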
13. Create the service project
# source /root/admin-openrc
# openstack project create --domain default --description "Service Project" service
14. Create the demo project
# openstack project create --domain default --description "Demo Project" demo
15. Create the demo user
# openstack user create --domain default demo --password devops
Note: devops is the demo user's password.
16. Create the user role and assign the demo user to it
# openstack role create user
# openstack role add --project demo --user demo user
17. Verify keystone
# unset OS_TOKEN OS_URL
# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue --os-password devops
# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue --os-password devops
Part 5: Install and configure Glance
1. Create the glance database
CREATE DATABASE glance;
2. Create the database user and grant privileges
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'devops';
3. Create the glance user and grant it the admin role
# source /root/admin-openrc
# openstack user create --domain default glance --password devops
# openstack role add --project service --user glance admin
4. Create the image service
# openstack service create --name glance --description "OpenStack Image service" image
5. Create the glance endpoints
# openstack endpoint create --region RegionOne image public http://controller:9292
# openstack endpoint create --region RegionOne image internal http://controller:9292
# openstack endpoint create --region RegionOne image admin http://controller:9292
6. Install the glance rpm packages
# yum install openstack-glance -y
7. Edit the glance configuration file /etc/glance/glance-api.conf
Note: change the passwords shown in red to your own.
# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
# >/etc/glance/glance-api.conf
# openstack-config --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:devops@controller
# openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:devops@controller/glance
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password devops
# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
# openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
# openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
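Each `openstack-config --set FILE SECTION KEY VALUE` call above is just an INI edit: it writes `KEY = VALUE` under `[SECTION]` in the file. As an illustration, here is roughly what the three glance_store calls leave behind, sketched with Python's configparser (the real file is /etc/glance/glance-api.conf):

```python
import configparser
import io

# Reproduce the [glance_store] section written by the three commands above.
cfg = configparser.ConfigParser()
cfg["glance_store"] = {
    "stores": "file,http",
    "default_store": "file",
    "filesystem_store_datadir": "/var/lib/glance/images/",
}

buf = io.StringIO()
cfg.write(buf)
print(buf.getvalue())
```

Knowing this equivalence helps when debugging: you can always open the .conf file and read the resulting sections directly.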
8. Edit the glance configuration file /etc/glance/glance-registry.conf:
# cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
# >/etc/glance/glance-registry.conf
# openstack-config --set /etc/glance/glance-registry.conf DEFAULT transport_url rabbit://openstack:devops@controller
# openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:devops@controller/glance
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password devops
# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
9. Synchronize the glance database
# su -s /bin/sh -c "glance-manage db_sync" glance
10. Start glance and enable it at boot
# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl restart openstack-glance-api.service openstack-glance-registry.service
# systemctl status openstack-glance-api.service openstack-glance-registry.service
12. Download a test image file
# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
13. Upload the image to glance
# source /root/admin-openrc
# glance image-create --name "cirros-0.3.4-x86_64" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
If you have prepared your own system image (a CentOS qcow2 image, say), you can upload it with the same command, e.g.:
# glance image-create --name "CentOS7.1-x86_64" --file CentOS_7.1.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress
List the images:
# glance image-list
Part 6: Install and configure Nova
1. Create the nova databases
CREATE DATABASE nova;
CREATE DATABASE nova_api;
CREATE DATABASE nova_cell0;
2. Create the database user and grant privileges
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller' IDENTIFIED BY 'devops';
FLUSH PRIVILEGES;
Note: to list the grants: SELECT DISTINCT CONCAT('User: ''',user,'''@''',host,''';') AS query FROM mysql.user;
To revoke a previously issued grant: REVOKE ALL PRIVILEGES ON *.* FROM 'root'@'controller';
3. Create the nova user and grant it the admin role
# source /root/admin-openrc
# openstack user create --domain default nova --password devops
# openstack role add --project service --user nova admin
4. Create the compute service
# openstack service create --name nova --description "OpenStack Compute" compute
5. Create the nova endpoints
# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
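The `%\(tenant_id\)s` in the URLs above looks odd, but the backslashes only stop the shell from interpreting the parentheses: what Keystone actually stores is the literal template `%(tenant_id)s`, which is Python %-style named substitution, filled in with the project ID at request time. A small illustration (the project ID below is hypothetical):

```python
# The literal endpoint template stored by the command above.
template = "http://controller:8774/v2.1/%(tenant_id)s"

# At request time the client substitutes the caller's project ID.
project_id = "8c3b4d0f2a5e4c8e9d17"  # hypothetical, for illustration only
url = template % {"tenant_id": project_id}
print(url)
```

So every project gets its own compute API URL without registering per-project endpoints.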
6. Install the nova packages
# yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-cert openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
7. Edit the nova configuration file /etc/nova/nova.conf
# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
# >/etc/nova/nova.conf
# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.150
# openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:devops@controller
# openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:devops@controller/nova
# openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:devops@controller/nova_api
# openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval -1
# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
# openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
# openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
# openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
# openstack-config --set /etc/nova/nova.conf keystone_authtoken password devops
# openstack-config --set /etc/nova/nova.conf keystone_authtoken service_token_roles_required True
# openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 10.1.1.150
# openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.150
# openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
# openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
Note: on the other nodes, remember to substitute the IPs and passwords (the places marked in red and green in this document).
8. Set up cells
This introduction to cells combines two articles: 九州雲's《Ocata組件Nova Cell V2 詳解》 and 有云's《引入Cells功能最核心要解決的問題就是OpenStack集羣的擴展性》:
OpenStack's control-plane performance bottlenecks are the Message Queue and the Database, the Message Queue above all: as compute nodes are added, performance degrades, because every resource and interface in OpenStack communicates through the message queue. Testing has shown that at a cluster size of 200 nodes, a message may take over ten seconds to get a response. Cells were introduced to address this and make OpenStack clusters scalable.
Synchronize the nova databases:
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
Map cell_v2 onto the nova_cell0 database created earlier:
# nova-manage cell_v2 map_cell0 --database_connection mysql+pymysql://root:devops@controller/nova_cell0
Create a regular cell named cell1; this cell will contain the compute nodes (note it points at the main nova database, not nova_cell0):
# nova-manage cell_v2 create_cell --verbose --name cell1 --database_connection mysql+pymysql://root:devops@controller/nova --transport-url rabbit://openstack:devops@controller:5672/
Check that the deployment is healthy:
# nova-status upgrade check
Create and map cell0, and map the existing compute hosts and instances into the cells:
# nova-manage cell_v2 simple_cell_setup
List the cells that have been created:
# nova-manage cell_v2 list_cells --verbose
Note: when new compute nodes are added, run the following command to discover them and add them to a cell:
# nova-manage cell_v2 discover_hosts
Alternatively, set discover_hosts_in_cells_interval to a positive number of seconds under the [scheduler] section of nova.conf on the controller to have new hosts discovered automatically at that interval (the default of -1 disables periodic discovery).
9. Install placement
Starting with Ocata, placement must be installed and configured to take part in nova scheduling; without it, virtual machines cannot be created!
# yum install -y openstack-nova-placement-api
Create the placement user and the placement service:
# openstack user create --domain default placement --password devops
# openstack role add --project service --user placement admin
# openstack service create --name placement --description "OpenStack Placement" placement
Create the placement endpoints:
# openstack endpoint create --region RegionOne placement public http://controller:8778
# openstack endpoint create --region RegionOne placement admin http://controller:8778
# openstack endpoint create --region RegionOne placement internal http://controller:8778
Wire placement into nova.conf:
# openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:35357
# openstack-config --set /etc/nova/nova.conf placement memcached_servers controller:11211
# openstack-config --set /etc/nova/nova.conf placement auth_type password
# openstack-config --set /etc/nova/nova.conf placement project_domain_name default
# openstack-config --set /etc/nova/nova.conf placement user_domain_name default
# openstack-config --set /etc/nova/nova.conf placement project_name service
# openstack-config --set /etc/nova/nova.conf placement username nova
# openstack-config --set /etc/nova/nova.conf placement password devops
# openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
Configure the 00-nova-placement-api.conf file; if you skip this step, creating a virtual machine will fail with an access-forbidden error on the placement resources.
# cd /etc/httpd/conf.d/
# cp 00-nova-placement-api.conf 00-nova-placement-api.conf.bak
# >00-nova-placement-api.conf
# vim 00-nova-placement-api.conf
Add the following:
Listen 8778
<VirtualHost *:8778>
WSGIProcessGroup nova-placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
WSGIScriptAlias / /usr/bin/nova-placement-api
<Directory "/">
Order allow,deny
Allow from all
Require all granted
</Directory>
<IfVersion >= 2.4>
ErrorLogFormat "%M"
</IfVersion>
ErrorLog /var/log/nova/nova-placement-api.log
</VirtualHost>
Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
SetHandler wsgi-script
Options +ExecCGI
WSGIProcessGroup nova-placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>
Restart the httpd service:
# systemctl restart httpd
Check that the configuration succeeded:
# nova-status upgrade check
10. Enable the nova services at boot
# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Start the nova services:
# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Check the nova services:
# systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl list-unit-files |grep openstack-nova-*
11. Verify the nova services
# unset OS_TOKEN OS_URL
# source /root/admin-openrc
# nova service-list
# openstack endpoint list
Check that both commands produce correct output.
Part 7: Install and configure Neutron
1. Create the neutron database
CREATE DATABASE neutron;
2. Create the database user and grant privileges
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'devops';
3. Create the neutron user and grant it the admin role
# source /root/admin-openrc
# openstack user create --domain default neutron --password devops
# openstack role add --project service --user neutron admin
4. Create the network service
# openstack service create --name neutron --description "OpenStack Networking" network
5. Create the endpoints
# openstack endpoint create --region RegionOne network public http://controller:9696
# openstack endpoint create --region RegionOne network internal http://controller:9696
# openstack endpoint create --region RegionOne network admin http://controller:9696
6. Install the neutron packages
# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
7. Edit the neutron configuration file /etc/neutron/neutron.conf
# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# >/etc/neutron/neutron.conf
# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
# openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:devops@controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password devops
# openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:devops@controller/neutron
# openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
# openstack-config --set /etc/neutron/neutron.conf nova auth_type password
# openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
# openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
# openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
# openstack-config --set /etc/neutron/neutron.conf nova project_name service
# openstack-config --set /etc/neutron/neutron.conf nova username nova
# openstack-config --set /etc/neutron/neutron.conf nova password devops
# openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
8. Configure /etc/neutron/plugins/ml2/ml2_conf.ini
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 path_mtu 1500
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
9. Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini DEFAULT debug false
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eno16777736
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.150
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Note: eno16777736 is the NIC connected to the external network; whatever name you write here should generally be a NIC that can reach the outside. If it is not the external NIC, the VMs will be isolated from the outside world.
local_ip defines the tunnel network. Under VXLAN the path is: vm-linuxbridge->vxlan ------tun-----vxlan->linuxbridge-vm
10. Configure /etc/neutron/l3_agent.ini
# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT debug false
11. Configure /etc/neutron/dhcp_agent.ini
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT verbose True
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT debug false
12. Reconfigure /etc/nova/nova.conf; this step lets the compute side use the neutron network
# openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
# openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
# openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
# openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
# openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
# openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
# openstack-config --set /etc/nova/nova.conf neutron project_name service
# openstack-config --set /etc/nova/nova.conf neutron username neutron
# openstack-config --set /etc/nova/nova.conf neutron password devops
# openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
# openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret devops
13. Write dhcp-option-force=26,1450 into /etc/neutron/dnsmasq-neutron.conf
# echo "dhcp-option-force=26,1450" >/etc/neutron/dnsmasq-neutron.conf
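Why 1450? DHCP option 26 is the interface MTU, and VXLAN encapsulation adds roughly 50 bytes of headers inside the physical 1500-byte frame, so instances must be told to use a smaller MTU or their tunneled packets get fragmented or dropped. The arithmetic, as a sketch (the usual IPv4 accounting; other breakdowns exist):

```python
# VXLAN overhead inside a standard 1500-byte physical MTU:
# outer IPv4 header + outer UDP header + VXLAN header + inner Ethernet header.
PHYSICAL_MTU = 1500
VXLAN_OVERHEAD = 20 + 8 + 8 + 14  # = 50 bytes

instance_mtu = PHYSICAL_MTU - VXLAN_OVERHEAD
print(f"dhcp-option-force=26,{instance_mtu}")
```

If your physical network supports jumbo frames, you can raise the instance MTU accordingly instead of lowering it.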
14. Configure /etc/neutron/metadata_agent.ini
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret devops
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_workers 4
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT debug false
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_protocol http
15. Create a symlink
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
16. Synchronize the database
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
17. Restart the nova service, since nova.conf was just changed
# systemctl restart openstack-nova-api.service
# systemctl status openstack-nova-api.service
18. Restart the neutron services and enable them at boot
# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
19. Start neutron-l3-agent.service and enable it at boot
# systemctl enable neutron-l3-agent.service
# systemctl restart neutron-l3-agent.service
# systemctl status neutron-l3-agent.service
20. Verify
# source /root/admin-openrc
# neutron ext-list
# neutron agent-list
21. Create a VXLAN-mode network so that virtual machines can reach the outside
a. First source the environment variables:
# source /root/admin-openrc
b. Create a flat-mode public network named provider. Note: this is the outbound network, and it must be flat mode.
# neutron --debug net-create --shared provider --router:external True --provider:network_type flat --provider:physical_network provider
After this step, in the web UI mark the provider network as shared and external. Once created, the result looks like this:
c. Create a subnet of the provider network named provider-sub, on the 9.110.187.0/24 segment with an allocation range of .50-.90 (this range is typically used for the VMs' floating IPs), DNS set to 8.8.8.8, and gateway 9.110.187.2:
# neutron subnet-create provider 9.110.187.0/24 --name provider-sub --allocation-pool start=9.110.187.50,end=9.110.187.90 --dns-nameserver 8.8.8.8 --gateway 9.110.187.2
d. Create a private network named private, in vxlan mode:
# neutron net-create private --provider:network_type vxlan --router:external False --shared
e. Create a subnet of the private network named private-subnet on the 192.168.1.0/24 segment; this is the segment from which the virtual machines get their private IP addresses:
# neutron subnet-create private --name private-subnet --gateway 192.168.1.1 192.168.1.0/24
If your company's private cloud serves different lines of business, say administration, sales, and engineering, you can create three private networks with distinct names:
# neutron net-create private-office --provider:network_type vxlan --router:external False --shared
# neutron subnet-create private-office --name office-net --gateway 192.168.2.1 192.168.2.0/24
# neutron net-create private-sale --provider:network_type vxlan --router:external False --shared
# neutron subnet-create private-sale --name sale-net --gateway 192.168.3.1 192.168.3.0/24
# neutron net-create private-technology --provider:network_type vxlan --router:external False --shared
# neutron subnet-create private-technology --name technology-net --gateway 192.168.4.1 192.168.4.0/24
f. Create a router; we do this in the web UI.
Click Project --> Network --> Routers --> Create Router.
Name the router whatever you like; I use "router" here. Set Admin State to UP and choose "provider" as the external network.
After clicking "Create Router", a message confirms the router was created successfully.
Then click "Interfaces" --> "Add Interface".
Add an interface connected to the private network by selecting "private: 192.168.1.0/24".
After "Add Interface" succeeds, the two interfaces are initially in the down state; refresh after a moment and they show running (note: they must reach the running state, otherwise VM traffic will not get out).
22. Check the network services
# neutron agent-list
Confirm that every agent shows the smiley face :-)
Part 8: Install the Dashboard
1. Install the dashboard packages
# yum install openstack-dashboard -y
2. Edit the configuration file /etc/openstack-dashboard/local_settings
# vim /etc/openstack-dashboard/local_settings
You can also simply overwrite the file with the local_settings file I provide (to reduce the chance of mistakes, please use my local_settings file as the replacement).
三、啓動dashboard服務並設置開機啓動
# systemctl restart httpd.service memcached.service
# systemctl status httpd.service memcached.service
At this point the Controller node is complete. Open Firefox and browse to http://9.110.187.150/dashboard/ to reach the OpenStack dashboard!
9. Install and configure Cinder
1. Create the database and user, and grant privileges (run these in the MySQL shell)
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'devops';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'devops';
2. Create the cinder user and grant it the admin role
# source /root/admin-openrc
# openstack user create --domain default cinder --password devops
# openstack role add --project service --user cinder admin
3. Create the volume services
# openstack service create --name cinder --description "OpenStack Block Storage" volume
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
4. Create the endpoints
# openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
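The six commands above differ only in service name, API version, and interface, so a loop can generate them. This is just a dry-run sketch (it echoes the commands; pipe the output to sh to actually run them):

```shell
# Generate the six cinder endpoint-create commands: two services x three interfaces
gen_endpoint_cmds() {
  for pair in volume:v1 volumev2:v2; do
    svc=${pair%%:*}; ver=${pair##*:}
    for iface in public internal admin; do
      echo "openstack endpoint create --region RegionOne $svc $iface http://controller:8776/$ver/%(tenant_id)s"
    done
  done
}
gen_endpoint_cmds
```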
5. Install the cinder packages
# yum install openstack-cinder -y
6. Edit the cinder configuration file
# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
# >/etc/cinder/cinder.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.150
# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:devops@controller
# openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:devops@controller/cinder
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password devops
# openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
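The openstack-config calls above just write key = value pairs into sections of an ini file. For reference, the [keystone_authtoken] block they produce looks like this (written to a throwaway file here so you can diff it against your real /etc/cinder/cinder.conf):

```shell
# Stand-in copy of the section the commands above create (values from this guide)
cat > /tmp/cinder-authtoken.sample <<'EOF'
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = devops
EOF
# Quick sanity count of the key = value lines
grep -c ' = ' /tmp/cinder-authtoken.sample
```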
7. Sync the cinder database (on the controller)
# su -s /bin/sh -c "cinder-manage db sync" cinder
8. On the controller, start the cinder services and enable them at boot
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
9. Set up the Cinder node. Here we need the extra disk (/dev/sdb) added earlier for cinder's storage backend (note: this step runs on the cinder node)
# yum install lvm2 -y
10. Start the lvm2 service and enable it at boot (note: this step runs on the cinder node)
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
# systemctl status lvm2-lvmetad.service
11. Create the LVM physical volume and volume group; /dev/sdb is the extra disk (note: this step runs on the cinder node)
# fdisk -l
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
12. Edit the storage node's lvm.conf (note: this step runs on the cinder node)
# vim /etc/lvm/lvm.conf
In the devices section (around line 130), add: filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
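If you prefer to script the edit instead of using vim, a sed one-liner can append the filter line right after `devices {`. Sketched here against a throwaway stand-in file; back up the real /etc/lvm/lvm.conf before trying it there:

```shell
# Tiny stand-in for lvm.conf so the edit can be shown safely
cat > /tmp/lvm.conf.sample <<'EOF'
devices {
    dir = "/dev"
}
EOF
# Append the filter (accept sda and sdb, reject all other devices) after "devices {"
sed -i '/^devices {/a\filter = [ "a/sda/", "a/sdb/", "r/.*/" ]' /tmp/lvm.conf.sample
grep 'filter' /tmp/lvm.conf.sample
```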
Then restart the lvm2 service:
# systemctl restart lvm2-lvmetad.service
# systemctl status lvm2-lvmetad.service
13. Install openstack-cinder and targetcli (note: this step runs on the cinder node)
# yum install openstack-cinder openstack-utils targetcli python-keystone ntpdate -y
14. Edit the cinder configuration file (note: this step runs on the cinder node)
# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
# >/etc/cinder/cinder.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT debug False
# openstack-config --set /etc/cinder/cinder.conf DEFAULT verbose True
# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.152
# openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2
# openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v1_api True
# openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v2_api True
# openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v3_api True
# openstack-config --set /etc/cinder/cinder.conf DEFAULT storage_availability_zone nova
# openstack-config --set /etc/cinder/cinder.conf DEFAULT default_availability_zone nova
# openstack-config --set /etc/cinder/cinder.conf DEFAULT os_region_name RegionOne
# openstack-config --set /etc/cinder/cinder.conf DEFAULT api_paste_config /etc/cinder/api-paste.ini
# openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:devops@controller
# openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:devops@controller/cinder
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password devops
# openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
# openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
# openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
# openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
# openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
15. Start openstack-cinder-volume and target and enable them at boot (note: this step runs on the cinder node)
# systemctl enable openstack-cinder-volume.service target.service
# systemctl restart openstack-cinder-volume.service target.service
# systemctl status openstack-cinder-volume.service target.service
16. Verify that the cinder services are up
# source /root/admin-openrc
# cinder service-list
Compute node deployment
Part 1: Install the required packages
# yum install openstack-selinux python-openstackclient yum-plugin-priorities openstack-nova-compute openstack-utils ntpdate -y
1. Configure nova.conf
# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
# >/etc/nova/nova.conf
# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.151
# openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:devops@controller
# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
# openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
# openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
# openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
# openstack-config --set /etc/nova/nova.conf keystone_authtoken password devops
# openstack-config --set /etc/nova/nova.conf placement auth_uri http://controller:5000
# openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:35357
# openstack-config --set /etc/nova/nova.conf placement memcached_servers controller:11211
# openstack-config --set /etc/nova/nova.conf placement auth_type password
# openstack-config --set /etc/nova/nova.conf placement project_domain_name default
# openstack-config --set /etc/nova/nova.conf placement user_domain_name default
# openstack-config --set /etc/nova/nova.conf placement project_name service
# openstack-config --set /etc/nova/nova.conf placement username nova
# openstack-config --set /etc/nova/nova.conf placement password devops
# openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
# openstack-config --set /etc/nova/nova.conf vnc enabled True
# openstack-config --set /etc/nova/nova.conf vnc keymap en-us
# openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
# openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.151
# openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://9.110.187.150:6080/vnc_auto.html
# openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
# openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
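virt_type is qemu here because these are nested VMs inside VMware Workstation. On bare metal with VT-x/AMD-V you would normally use kvm instead; checking the CPU flags tells you which applies (the vmx/svm rule is standard, the snippet itself is just a convenience sketch):

```shell
# kvm needs hardware virtualization (vmx for Intel, svm for AMD); otherwise fall back to qemu
if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
  virt_type=kvm
else
  virt_type=qemu
fi
echo "virt_type = $virt_type"
```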
2. Enable libvirtd.service and openstack-nova-compute.service at boot and start them
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl restart libvirtd.service openstack-nova-compute.service
# systemctl status libvirtd.service openstack-nova-compute.service
3. Verify from the controller
# source /root/admin-openrc
# openstack compute service list
Part 2: Install Neutron
1. Install the required packages
# yum install openstack-neutron-linuxbridge ebtables ipset -y
2. Configure neutron.conf
# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# >/etc/neutron/neutron.conf
# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/neutron/neutron.conf DEFAULT advertise_mtu True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT control_exchange neutron
# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:devops@controller
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password devops
# openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
3. Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eno16777736
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.151
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Note: the interface name after provider: must be replaced with the actual name of the NIC attached to the provider (external) network on your host; in my VM it is eno16777736. The vxlan local_ip above (10.2.2.151) already points at the tunnel NIC.
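Interface names vary between machines (eno16777736, ens33, eth0, ...). Before filling in physical_interface_mappings, list what your NICs are actually called:

```shell
# Every network interface known to the kernel appears under /sys/class/net
ls /sys/class/net
```

Match the names against the IPs shown by `ip addr` (or ifconfig) to pick the right one.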
4. Configure nova.conf
# openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
# openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
# openstack-config --set /etc/nova/nova.conf neutron auth_type password
# openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
# openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
# openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
# openstack-config --set /etc/nova/nova.conf neutron project_name service
# openstack-config --set /etc/nova/nova.conf neutron username neutron
# openstack-config --set /etc/nova/nova.conf neutron password devops
5. Restart and enable the related services
# systemctl restart libvirtd.service openstack-nova-compute.service
# systemctl enable neutron-linuxbridge-agent.service
# systemctl restart neutron-linuxbridge-agent.service
# systemctl status libvirtd.service openstack-nova-compute.service neutron-linuxbridge-agent.service
Part 3: Using Cinder from the compute node
1. For the compute node to use cinder, add this to its nova configuration file (note: this step runs on the compute node)
# openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
# systemctl restart openstack-nova-compute.service
2. Then restart the nova API service on the controller
# systemctl restart openstack-nova-api.service
# systemctl status openstack-nova-api.service
Part 4: Verify on the controller
# source /root/admin-openrc
# neutron agent-list
# nova-manage cell_v2 discover_hosts
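nova-manage cell_v2 discover_hosts must be re-run every time a compute node joins. Optionally (an assumption about your preference, not a required step in this guide), the scheduler can do this automatically on an interval via a standard Ocata option:

```shell
# Have nova-scheduler discover new compute hosts every 300 seconds (run on the controller)
# openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300
# systemctl restart openstack-nova-scheduler.service
```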
At this point the Compute node is complete; run nova host-list to see the newly joined compute1 node.
To add another compute node, simply repeat this Compute node deployment section, remembering to change the hostname and IP addresses.
Appendix: flavor creation commands
# openstack flavor create m1.tiny --id 1 --ram 512 --disk 1 --vcpus 1
# openstack flavor create m1.small --id 2 --ram 2048 --disk 20 --vcpus 1
# openstack flavor create m1.medium --id 3 --ram 4096 --disk 40 --vcpus 2
# openstack flavor create m1.large --id 4 --ram 8192 --disk 80 --vcpus 4
# openstack flavor create m1.xlarge --id 5 --ram 16384 --disk 160 --vcpus 8
# openstack flavor list
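The five flavor commands are table-driven; the sketch below echoes them from a small table as a dry run (pipe the output to sh to actually create them):

```shell
# Generate the five flavor-create commands from a name/id/ram/disk/vcpus table
gen_flavor_cmds() {
  while read name id ram disk vcpus; do
    echo "openstack flavor create m1.$name --id $id --ram $ram --disk $disk --vcpus $vcpus"
  done <<'EOF'
tiny 1 512 1 1
small 2 2048 20 1
medium 3 4096 40 2
large 4 8192 80 4
xlarge 5 16384 160 8
EOF
}
gen_flavor_cmds
```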
If you are interested, you can contact me about my OpenStack video course.