Building a Three-Node OpenStack Mitaka Deployment

Preface:

  Cloud computing platforms are now very popular and quite mature; well-known examples include Alibaba Cloud and Baidu Cloud. In this post we build our own cloud platform based on OpenStack.

Note:

  This deployment uses three nodes (there is no separate storage node; all storage is local).

 

I. Networks:

  1. Management network: 192.168.1.0/24

  2. Data network: 1.1.1.0/24

 

II. Operating system

  CentOS Linux release 7.3.1611 (Core)

III. Kernel

  3.10.0-514.el7.x86_64

 

IV. Version information:

  OpenStack version: Mitaka

Note:

  When editing configuration files, never append a comment to the end of a configuration line; put comments on a line of their own, above or below.

  New settings must be appended right after the section header they belong to; do not modify the entries that are already there.
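  For example, a purely illustrative fragment (the option name is simply one used later in this guide; only the comment placement matters):

    [DEFAULT]
    # correct: the comment is on its own line
    my_ip = 192.168.1.142
    my_ip = 192.168.1.142  # wrong: an inline comment here may be read as part of the value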

 

Result diagram: (image omitted)

This post covers setting up the base environment; follow-up operations inside the cloud will be covered in later posts.

 

Environment preparation:

  Add hosts file entries on all three hosts, set a hostname on each machine, disable firewalld and SELinux, and configure static IPs.

  Add two NICs to the compute node and two NICs to the network node.
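  A minimal sketch of these steps, assuming the hostnames controller01, compute01 and network01 and the management IPs used later in this guide (192.168.1.142, 192.168.1.141 and 192.168.1.140):

    # /etc/hosts (identical on all three nodes)
    192.168.1.142 controller01
    192.168.1.141 compute01
    192.168.1.140 network01

    # on each node, set its own hostname, e.g. on the controller:
    hostnamectl set-hostname controller01

    # disable firewalld and SELinux on every node
    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config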

  

  Custom yum repository

    Run on all nodes:

    yum makecache && yum install vim net-tools -y && yum update -y

      Disable yum automatic updates:

    Edit /etc/yum/yum-cron.conf and change download_updates = yes to no.

  

  Alternatively, you can use the network yum repository:

  Install the network yum repository (run on all nodes):

    yum install centos-release-openstack-mitaka -y

     Rebuild the yum cache and update the system (see the command below).
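    A sketch of that step, reusing the same command shown for the local repository above:

    yum makecache && yum update -y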

 

  Pre-install packages (run on all nodes):

    yum install python-openstackclient -y

    yum install openstack-selinux -y

 

  Deploy the time service (run on all nodes):

    yum install chrony -y 

 

Controller node:

    Edit the configuration:

    /etc/chrony.conf

    server ntp.staging.kycloud.lan iburst

    allow 192.168.1.0/24

 

Start the service:

    systemctl enable chronyd.service

    systemctl start chronyd.service

 

Other nodes:

  

    Edit the configuration:

    /etc/chrony.conf

    server 192.168.1.142 iburst

 

Start the service:

    systemctl enable chronyd.service

    systemctl start chronyd.service

 

Verification:

    Run on every machine:

    chronyc sources

    A * in the S column means synchronization succeeded (it may take a few minutes; make sure the clocks are in sync before continuing).

 

Operations on the controller node

 

Install the database

  yum install mariadb mariadb-server python2-PyMySQL -y

Edit /etc/my.cnf.d/openstack.cnf:

[mysqld]
# controller node management IP
bind-address = 192.168.1.142
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the service and enable it at boot:
systemctl enable mariadb.service
systemctl start mariadb.service

  

Install MongoDB

  yum install mongodb-server mongodb -y

 

 

Edit /etc/mongod.conf:

bind_ip = 192.168.1.142
smallfiles = true

 

Save and exit, then start the service and enable it at boot:
systemctl enable mongod.service
systemctl start mongod.service

 

Deploy the message queue
  yum install rabbitmq-server -y

 

Start the service and enable it at boot:

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Create a RabbitMQ user and password (all passwords in this guide are lhc001):

rabbitmqctl add_user openstack lhc001

Grant permissions to the new openstack user:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

 

Install memcached to cache tokens
  yum install memcached python-memcached -y

Start the service and enable it at boot:
systemctl enable memcached.service
systemctl start memcached.service

 

Deploy the keystone service

Database operations:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'lhc001';
# needed for remote logins, otherwise later operations will report errors
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

Install keystone
  yum install openstack-keystone httpd mod_wsgi -y

 

Edit /etc/keystone/keystone.conf:

[DEFAULT]
admin_token = lhc001

[database]
connection = mysql+pymysql://keystone:lhc001@controller01/keystone

[token]
provider = fernet

 

Sync the changes into the database:
  su -s /bin/sh -c "keystone-manage db_sync" keystone

 

Initialize the fernet keys:
  keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

 

Configure the Apache service

Edit /etc/httpd/conf/httpd.conf:

ServerName controller01

 

Create a symbolic link from /usr/share/keystone/wsgi-keystone.conf into /etc/httpd/conf.d/:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/wsgi-keystone.conf

 

Restart httpd:

systemctl restart httpd

 

 

Create the service entity and API endpoints

Configure the administrator environment variables, which grant the permissions needed for the steps that follow:

export OS_TOKEN=lhc001
export OS_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3

 

Using the permissions from the previous step, create the identity service entity (the catalog service):

openstack service create --name keystone \
  --description "OpenStack Identity" identity

 

For the service entity just created, create the three API endpoints used to access it:

openstack endpoint create --region RegionOne \
  identity public http://controller01:5000/v3

openstack endpoint create --region RegionOne \
  identity internal http://controller01:5000/v3

openstack endpoint create --region RegionOne \
  identity admin http://controller01:35357/v3
    

 

 

 

 Create a domain, projects (tenants), users and roles, and associate the four with each other

 Create a common domain:

openstack domain create --description "Default Domain" default    
Administrator (admin):

openstack project create --domain default \
  --description "Admin Project" admin
openstack user create --domain default \
  --password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin

Regular user (demo):

openstack project create --domain default \
  --description "Demo Project" demo
openstack user create --domain default \
  --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user

Create a single service project shared by the services deployed later:

openstack project create --domain default \
  --description "Service Project" service

 

 

Verify the associations
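A minimal way to check the associations, assuming the OS_TOKEN and OS_URL variables exported earlier are still set in the shell:

openstack project list
openstack user list
openstack role list

The admin and demo users, the admin and user roles, and the admin, demo and service projects created above should all appear.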

 

驗證操做

編輯:/etc/keystone/keystone-paste.ini 在[pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] 三個地方 移走:admin_token_auth 

 

Create client environment scripts

Administrator: admin-openrc

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=lhc001
export OS_AUTH_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

 

Regular user demo: demo-openrc

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=lhc001
export OS_AUTH_URL=http://controller01:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

 

 

Log out of the console and log back in, then run:
source admin-openrc
openstack token issue

 

Deploy the image service (glance)

Database operations:

mysql -u root -p

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'lhc001';
# same as the keystone database operations above
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

 

keystone認證操做:
上面提到過:全部後續項目的部署都統一放到一個租戶service裏,而後須要爲每一個項目創建用戶,建管理員角色,創建關聯

openstack user create --domain default --password-prompt glance

 

 

Associate the role

openstack role add --project service --user glance admin
Create the service entity:

openstack service create --name glance \
  --description "OpenStack Image" image

Create the endpoints:

openstack endpoint create --region RegionOne \
  image public http://controller01:9292

openstack endpoint create --region RegionOne \
  image internal http://controller01:9292

openstack endpoint create --region RegionOne \
  image admin http://controller01:9292

 

 

Install the glance packages
  yum install openstack-glance -y

This environment uses local storage throughout. Whatever storage you use, it must be created before glance is started; otherwise glance will not find it. That by itself does not produce an error at startup, but some later operations will fail, so save yourself the trouble and create the directory up front:

mkdir /var/lib/glance/images/
chown glance. /var/lib/glance/images/

 

Edit /etc/glance/glance-api.conf:
 [database]
 connection = mysql+pymysql://glance:lhc001@controller01/glance
[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = lhc001

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

 

Edit /etc/glance/glance-registry.conf

The database configured in the registry is used to look up image metadata:
[database]
connection = mysql+pymysql://glance:lhc001@controller01/glance

In earlier releases glance-registry had to be configured the same way as glance-api. That is no longer necessary here: adding this single database connection to glance-registry is enough; nothing else is needed.

 

 

Sync the database (the output it produces is not an error):

su -s /bin/sh -c "glance-manage db_sync" glance

 

Start the services and enable them at boot:

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

 

 

驗證操做

查看openstack image list 輸出爲空 而後執行鏡像上傳 openstack image create "cirros" \ --file cirros-0.3.4-x86_64-disk.img \ --disk-format qcow2 --container-format bare \ --public
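The cirros-0.3.4-x86_64-disk.img file has to be present in the current directory; if it is not, it can be fetched first from the usual cirros download site (this download step is assumed here, not part of the original list of commands):

    wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

After the upload, openstack image list should show the cirros image with an active status.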

 

 Deploy the compute service (nova)

Deploying any component involves creating a user in keystone and databases on the same MariaDB server; in other words, all components share one database service.

Compute database operations:

CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller01' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

 

Keystone-related operations

 

Create a nova user (enter the password when prompted):

openstack user create --domain default \
  --password-prompt nova

Associate the user with the role and project:

openstack role add --project service --user nova admin

Create the service entity:

openstack service create --name nova \
  --description "OpenStack Compute" compute

 

 

Create the three endpoints:

openstack endpoint create --region RegionOne \
  compute public http://controller01:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  compute internal http://controller01:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  compute admin http://controller01:8774/v2.1/%\(tenant_id\)s

 

 

Install the packages:

yum install openstack-nova-api openstack-nova-conductor \

  openstack-nova-console openstack-nova-novncproxy \

  openstack-nova-scheduler -y

Edit /etc/nova/nova.conf:

[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
# management IP of the controller node
my_ip = 192.168.1.142
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:lhc001@controller01/nova_api

[database]
connection = mysql+pymysql://nova:lhc001@controller01/nova

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = lhc001

[vnc]
# management IP
vncserver_listen = 192.168.1.142
# management IP
vncserver_proxyclient_address = 192.168.1.142

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

 

 

Sync the databases (they print output; that is not an error):

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

 

 

Start the services and enable them at boot:

systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

 Next we move on to configuring the compute node; the controller node is set aside for now.

 

Configure the compute node

Install the packages:
yum install openstack-nova-compute libvirt-daemon-lxc -y

Modify the configuration:
Edit /etc/nova/nova.conf:

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
# compute node management network IP
my_ip = 192.168.1.141
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
# compute node management network IP
vncserver_proxyclient_address = 192.168.1.141
# the base URL uses the controller node's management network IP
novncproxy_base_url = http://192.168.1.142:6080/vnc_auto.html

[glance]
api_servers = http://controller01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Start the services:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

 

 

 驗證操做:

 控制節點測試
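A minimal check, assuming admin-openrc is sourced on the controller node; the compute node's nova-compute service should appear alongside the controller's nova services, all in the up state:

    source admin-openrc
    openstack compute service list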

 

Deploy the networking service (neutron)

Still on the controller node, perform the database operations:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

 

Keystone operations for neutron

Create the user and associate it:

openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin

 

 Create the service entity and its three endpoints

openstack service create --name neutron \
  --description "OpenStack Networking" network

openstack endpoint create --region RegionOne \
  network public http://controller01:9696

openstack endpoint create --region RegionOne \
  network internal http://controller01:9696

openstack endpoint create --region RegionOne \
  network admin http://controller01:9696

 

 Install the neutron components

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which  -y

 

Configure the service components
Edit the /etc/neutron/neutron.conf file:

[DEFAULT]
core_plugin = ml2
service_plugins = router
# enable overlapping IP addresses
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[database]
connection = mysql+pymysql://neutron:lhc001@controller01/neutron

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = lhc001

[nova]
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = lhc001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

 Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:

[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True

 

 Edit the /etc/nova/nova.conf file:

[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = lhc001
service_metadata_proxy = True

 

 

Create the link:
  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 Sync the database (it will print output; that is not an error):

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
 --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

 

 Restart nova:

   systemctl restart openstack-nova-api.service

Start neutron and enable it at boot:
  systemctl enable neutron-server.service
  systemctl start neutron-server.service

 

Configure the network node

Edit /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

 

 

Run the following command so the settings take effect immediately:
  sysctl -p

 

Install the packages:
  yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

Configure the components
Edit the /etc/neutron/neutron.conf file:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

 Edit the /etc/neutron/plugins/ml2/openvswitch_agent.ini file:

[ovs]
# data network IP of the network node
local_ip=1.1.1.119
bridge_mappings=external:br-ex

[agent]
tunnel_types=gre,vxlan
#l2_population=True
prevent_arp_spoofing=True

 

 Configure the L3 agent. Edit the /etc/neutron/l3_agent.ini file:

[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge=br-ex

 

Configure the DHCP agent. Edit the /etc/neutron/dhcp_agent.ini file:

[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata=True

 

Configure the metadata agent. Edit the /etc/neutron/metadata_agent.ini file:

[DEFAULT]
nova_metadata_ip=controller01
metadata_proxy_shared_secret=lhc001

 

Start the services
On the network node:

systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service

Note: if you check the status of these services now, the OVS agent (neutron-openvswitch-agent) will report a startup failure. Don't worry about it; the data network IP and the bridge referenced in the configuration above have not been created yet, so its log simply says it cannot find them. They will be created shortly.

 

Create the data network IP on the network node

[root@network01 network-scripts]# cat ifcfg-ens37
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=1.1.1.119
NETMASK=255.255.255.0
NAME="ens37"
DEVICE="ens37"
ONBOOT="yes"

 

Do the same on the compute node; it also needs a data network IP:

cd /etc/sysconfig/network-scripts
cp ifcfg-ens33 ifcfg-ens37

[root@compute01 network-scripts]# cat ifcfg-ens37
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=1.1.1.117
NETMASK=255.255.255.0
NAME="ens37"
DEVICE="ens37"
ONBOOT="yes"

 

Restart the network service on both nodes:

  systemctl restart network

Ping between the two nodes to test connectivity.
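A quick check over the data network, assuming the IPs configured above:

    # from the network node
    ping -c 3 1.1.1.117

    # from the compute node
    ping -c 3 1.1.1.119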

 

Back on the network node, create the bridge

Create the br-ex bridge and bind it to the external NIC. Three NICs were added before this experiment, but this machine only has one IP that can reach the outside, so only two networks are actually usable here: a data network and a management network. In a real environment you should definitely use three NICs.

cp ifcfg-ens33 ifcfg-br-ex

[root@network01 network-scripts]# cat ifcfg-br-ex
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=192.168.1.140
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
NAME="br-ex"
DEVICE="br-ex"
ONBOOT="yes"
# this must be added, otherwise the bridge will fail to come up
NM_CONTROLLED=no

 

Likewise, now that the bridge has been created, the IP address has to be removed from ens33:

[root@network01 network-scripts]# cat ifcfg-ens33
TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="e82bc9e4-d28e-4363-aab4-89fda28da938"
DEVICE="ens33"
ONBOOT="yes"
# add this on the physical NIC as well
NM_CONTROLLED=no
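If the br-ex bridge has not yet been created inside Open vSwitch itself, it also needs to be added there and the external NIC attached to it before restarting the network. A minimal sketch, assuming the bridge and interface names used above (this step is not spelled out in the original commands):

    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex ens33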

 

Restart the network service on the network node:

  systemctl restart network

Now check the status of the OVS agent again (systemctl status neutron-openvswitch-agent.service); this time the service will be running.

 

That wraps up the network node for now.

 

Configure the compute node (networking)

Edit /etc/sysctl.conf:

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Apply the settings:

sysctl -p

 

 

Install OVS and the other neutron components on the compute node:
  yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

 

Edit the /etc/neutron/neutron.conf file:

[DEFAULT]
rpc_backend = rabbit
#auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

 

Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
# data network IP of the compute node
local_ip = 1.1.1.117

[agent]
tunnel_types = gre,vxlan
l2_population = True
arp_responder = True
prevent_arp_spoofing = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

 

 

Edit /etc/nova/nova.conf:

[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = lhc001

 

Start the OVS agent, enable it at boot, and restart nova:

systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
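At this point the neutron agents can be checked from the controller node; a minimal sketch, assuming admin-openrc is sourced there. The DHCP, L3 and metadata agents on the network node and the Open vSwitch agents on both the network and compute nodes should all report as alive:

    source admin-openrc
    neutron agent-list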

 

 

OK, the rest is much simpler: deploy the dashboard service on the controller node.

 

Operations on the controller node

Install the packages:
  yum install openstack-dashboard -y

 

Configure /etc/openstack-dashboard/local_settings:

OPENSTACK_HOST = "controller01"
ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
# SESSION_ENGINE = 'django.contrib.sessions.backends.file'
# The SESSION_ENGINE option is not in the file by default; append it at the end.
# With the cache backend above, logging in on the login page may fail; if it does,
# switch to the commented file backend and the problem goes away.

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller01:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "image": 2,
    "volume": 2,
    "compute": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "UTC"

 

Restart the services:
  systemctl enable httpd.service memcached.service
  systemctl restart httpd.service memcached.service

 

Test in a browser:

 http://192.168.1.142/dashboard/

 
