Deploying the OpenStack architecture on CentOS 7

Introduction

OpenStack is an open-source cloud computing management platform: a collection of open-source software projects jointly developed and launched by NASA and Rackspace, released under the Apache license.
OpenStack provides scalable, elastic cloud computing services for private and public clouds. The project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and built on open standards.
OpenStack covers networking, virtualization, operating systems, servers, and more. It is a cloud platform under active development; by maturity and importance its projects are divided into core, incubated, supporting, and related projects, each with its own committee and project technical lead. No project's status is fixed: an incubated project can be promoted to a core project as it matures and gains importance.
Core components
1. Compute (Nova): a set of controllers that manages the whole lifecycle of virtual machine instances for individual users or groups and provisions virtual services on demand. It handles creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying instances, and defines instance specifications such as CPU and memory.
2. Object Storage (Swift): a system that provides object storage in large, scalable deployments through built-in redundancy and fault tolerance. It stores and retrieves files, can serve as the image store for Glance, and can back volume backups for Cinder.
3. Image Service (Glance): a lookup and retrieval system for virtual machine images that supports multiple image formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK) and provides functions to create and upload images, delete them, and edit their basic metadata.
4. Identity Service (Keystone): provides authentication, service rules, and service tokens for the other OpenStack services, and manages Domains, Projects, Users, Groups, and Roles.
5. Networking (Neutron): provides network virtualization for the cloud and network connectivity for the other OpenStack services. Its API lets users define Networks, Subnets, and Routers and configure DHCP, DNS, load balancing, and L3 services. It supports GRE and VLAN networks, and its plug-in architecture supports many mainstream network vendors and technologies such as Open vSwitch.
6. Block Storage (Cinder): provides stable block storage for running instances. Its plug-in driver architecture simplifies creating and managing block devices, such as creating and deleting volumes and attaching and detaching them from instances.
7. Dashboard (Horizon): the web management portal for the OpenStack services, which simplifies operations such as launching instances, assigning IP addresses, and configuring access control.
8. Metering (Ceilometer): collects almost every event that occurs inside OpenStack and provides the data for billing, monitoring, and other services.
9. Orchestration (Heat): provides template-driven orchestration for automatically deploying the cloud infrastructure software environment (compute, storage, and network resources).
10. Database Service (Trove): provides scalable and reliable relational and non-relational database engines in an OpenStack environment.

Preparation

Prepare three CentOS 7 virtual machines: two of them with two network interfaces (one NAT, one host-only) and two with additional disks. On each node, configure the IP address and hostname, synchronize the system time, disable the firewall and SELinux, and add the IP-to-hostname mappings.

IP hostname
ens33 (NAT): 192.168.29.145  ens37 (host-only): 192.168.31.135  controller
ens33 (NAT): 192.168.29.146  ens37 (host-only): 192.168.31.136  computer
ens33 (NAT): 192.168.29.147  storager
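
The IP-to-hostname mapping can live in /etc/hosts on all three nodes; a sketch using the NAT addresses above:

```
192.168.29.145 controller
192.168.29.146 computer
192.168.29.147 storager
```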

Installing the services

Install the EPEL repository

[root@controller ~]# yum install epel-release -y
[root@computer ~]# yum install epel-release -y

Install the OpenStack Queens repository

[root@controller ~]# yum install -y centos-release-openstack-queens

[root@computer ~]# yum install -y centos-release-openstack-queens

Install the OpenStack client and openstack-selinux

[root@controller ~]# yum install python-openstackclient openstack-selinux -y

[root@computer ~]# yum install python-openstackclient openstack-selinux -y

Install the MySQL database and memcached

[root@controller ~]# yum install mysql-server mysql memcached python2-PyMySQL -y

Install the message queue service

[root@controller ~]# yum install -y rabbitmq-server

Install the Keystone service

[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y

Install the Glance service

[root@controller ~]# yum install openstack-glance -y

Install the Nova services on controller

[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

Install the Nova service on computer

[root@computer ~]# yum install openstack-nova-compute -y

Install the Neutron services on controller

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Install the Neutron agent on computer

[root@computer ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

Install the Dashboard component

[root@controller ~]# yum install openstack-dashboard -y

Install the Swift proxy service on controller

[root@controller ~]# yum install openstack-swift openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware

Install the Swift storage services on computer and storager

[root@computer ~]# yum install openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object -y
[root@storager ~]# yum install openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object -y

Configure the message queue service

Start the service

[root@controller ~]# systemctl start rabbitmq-server.service
[root@controller ~]# systemctl enable rabbitmq-server.service

Add a user

[root@controller ~]# rabbitmqctl add_user openstack openstack

Grant permissions (the three ".*" patterns grant configure, write, and read access, in that order)

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Configure the memcached service

Edit the configuration file

[root@controller ~]# vi /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"

Start the service

[root@controller ~]# systemctl start memcached.service
[root@controller ~]# systemctl enable memcached.service

Configure the database service

Edit the configuration file

[root@controller ~]# vi /etc/my.cnf
default-time_zone='+8:00'
bind-address = 192.168.29.145
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the service

[root@controller ~]# systemctl start mysqld
[root@controller ~]# systemctl enable mysqld

Create the databases

mysql> create database keystone;
mysql> create database glance;
mysql> create database nova;
mysql> create database nova_api;
mysql> create database nova_cell0;
mysql> create database neutron;

Grant privileges to the service users

mysql> grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'your_password';
mysql> grant all privileges on keystone.* to 'keystone'@'%' identified by 'your_password';

mysql> grant all privileges on glance.* to 'glance'@'localhost' identified by 'your_password';
mysql> grant all privileges on glance.* to 'glance'@'%' identified by 'your_password';

mysql> grant all privileges on nova.* to 'nova'@'localhost' identified by 'your_password';
mysql> grant all privileges on nova.* to 'nova'@'%' identified by 'your_password';

mysql> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'your_password';
mysql> grant all privileges on nova_api.* to 'nova'@'%' identified by 'your_password';

mysql> grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'your_password';
mysql> grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'your_password';

mysql> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'your_password';
mysql> grant all privileges on neutron.* to 'neutron'@'%' identified by 'your_password';

mysql> flush privileges;
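
The same GRANT pattern repeats for every database, so the statements can also be generated in a loop instead of typed by hand. A sketch, assuming the same 'your_password' placeholder used above:

```shell
# Emit the GRANT statements for each service database; nova_api and
# nova_cell0 map back to the single 'nova' user.
for db in keystone glance nova nova_api nova_cell0 neutron; do
  user=${db%%_*}                       # strip _api/_cell0 to get the user name
  for host in localhost '%'; do
    printf "grant all privileges on %s.* to '%s'@'%s' identified by 'your_password';\n" \
      "$db" "$user" "$host"
  done
done
```

The output can be piped straight into `mysql -u root -p`.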

Configure the Keystone service

Edit the configuration file

[root@controller ~]# vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:your_password@controller/keystone
[token]
provider = fernet

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage bootstrap --bootstrap-password openstack --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Configure the httpd service

# Edit the configuration file
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
ServerName controller

# Create a symbolic link
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

# Start the service
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd

Create the admin environment script

[root@controller ~]# vi admin-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Verify the environment variables

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack token issue
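
The openrc file is nothing more than a set of exported environment variables that the openstack client reads. A self-contained sketch of the mechanism, using a throwaway copy under /tmp:

```shell
# Write a minimal openrc and confirm the variables reach the environment.
cat > /tmp/admin-openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
EOF
. /tmp/admin-openrc            # 'source' the file into the current shell
echo "$OS_USERNAME $OS_AUTH_URL"
```

Because the variables are set with `export`, they must be sourced (`. file` or `source file`), not executed as a script, or they will not survive in the calling shell.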

Create the service project

[root@controller ~]# openstack project create --domain default --description "Service Project" service

Create the demo project

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo

Create the demo user

[root@controller ~]# openstack user create --domain default --password-prompt demo

Create the user role

[root@controller ~]# openstack role create user

Add the user role to the demo project and user

[root@controller ~]# openstack role add --project demo --user demo user

Create the demo environment script

[root@controller ~]# vi demo-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Configure the Glance service

Create and configure the glance user

[root@controller ~]# openstack user create --domain default --password-prompt glance
[root@controller ~]# openstack role add --project service --user glance admin

Create the glance service entity

[root@controller ~]# openstack service create --name glance  --description "OpenStack Image" image

Create the glance service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne  image admin http://controller:9292

Edit the configuration files

[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:your_password@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[root@controller ~]# vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:your_password@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

Start the services

[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service

Upload an image

[root@controller ~]# glance image-create --name Centos7 --disk-format qcow2 --container-format bare --progress < CentOS-7-x86_64-GenericCloud-1907.qcow2

# List images
[root@controller ~]# openstack image list
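
Glance stores whatever it is given, so it is worth confirming the file really is qcow2 before uploading; qcow2 images begin with the four magic bytes 'Q' 'F' 'I' 0xfb. A self-contained sketch (it fabricates a stand-in header rather than reading the real image file):

```shell
# qcow2 files start with the magic bytes 'Q' 'F' 'I' 0xfb (\373 in octal).
printf 'QFI\373' > /tmp/header-demo.img   # stand-in for a real image header
magic=$(head -c 3 /tmp/header-demo.img)
if [ "$magic" = "QFI" ]; then echo "looks like qcow2"; fi
```

In practice `qemu-img info <file>` is the standard way to inspect an image's format.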

Configure the Nova service on controller

Create and configure the nova user

[root@controller ~]# openstack user create --domain default --password-prompt nova
[root@controller ~]# openstack role add --project service --user nova admin

Create the nova service entity

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

Create the nova service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Create and configure the placement user

[root@controller ~]# openstack user create --domain default --password-prompt placement
[root@controller ~]# openstack role add --project service --user placement admin

Create the placement service entity

[root@controller ~]# openstack service create --name placement --description "Placement API" placement

Create the placement service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

Edit the configuration files

[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.29.145
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:openstack@controller

[api_database]
connection = mysql+pymysql://nova:your_password@controller/nova_api

[database]
connection = mysql+pymysql://nova:your_password@controller/nova

[api]
auth_strategy = keystone 

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[root@controller ~]# vi /etc/httpd/conf.d/00-nova-placement-api.conf
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion >= 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
</VirtualHost>

Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>

Restart the httpd service

[root@controller ~]# systemctl restart httpd

Synchronize the databases

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

Verify

[root@controller ~]# nova-manage cell_v2 list_cells

Start the services

[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl enable openstack-nova-api.service  openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Configure the Nova service on computer

Edit the configuration file

[root@computer ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.29.146
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
allow_resize_to_same_host = True

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[libvirt]
virt_type = qemu
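
`virt_type = qemu` forces full software emulation, which is the safe choice when the compute node is itself a virtual machine. On hardware that exposes VT-x or AMD-V, `kvm` is much faster. A quick check of the CPU flags decides between the two:

```shell
# If the CPU flags advertise vmx (Intel) or svm (AMD), kvm acceleration
# is available; otherwise fall back to plain qemu emulation.
if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
  echo "virt_type = kvm"
else
  echo "virt_type = qemu"
fi
```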

Start the services

[root@computer ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@computer ~]# systemctl enable libvirtd.service openstack-nova-compute.service

Register computer in the cell database on controller

# List the nova-compute nodes
[root@controller ~]# openstack compute service list --service nova-compute

# Register the host in the cell database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
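
Instead of re-running discover_hosts every time a compute node is added, nova can discover new hosts periodically. A sketch of the relevant option in the controller's nova.conf (the 300-second interval is an assumed value, not from this guide):

```
[scheduler]
discover_hosts_in_cells_interval = 300
```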

Configure the Neutron service on controller

Create and configure the neutron user

[root@controller ~]# openstack user create --domain default --password-prompt neutron
[root@controller ~]# openstack role add --project service --user neutron admin

Create the neutron service entity

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

Create the neutron service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network admin http://controller:9696

Edit the configuration files

[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:your_password@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37

[vxlan]
enable_vxlan = true
local_ip = 192.168.31.135
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@controller ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
[root@controller ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000

Create a symbolic link

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Start the services

# Restart the nova-api service
[root@controller ~]# systemctl restart openstack-nova-api.service

[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Configure the Neutron service on computer

Edit the configuration files

[root@computer ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[root@computer ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37

[vxlan]
enable_vxlan = true
local_ip = 192.168.31.136
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@computer ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Start the services

# Restart the nova-compute service
[root@computer ~]# systemctl stop openstack-nova-compute.service
[root@computer ~]# systemctl start openstack-nova-compute.service
# Note: a plain systemctl restart here may produce errors

[root@computer ~]# systemctl start neutron-linuxbridge-agent.service
[root@computer ~]# systemctl enable neutron-linuxbridge-agent.service

Verify

[root@controller ~]# openstack network agent list

# Check the log
[root@computer ~]# tail /var/log/nova/nova-compute.log

Configure the Dashboard component

Edit the configuration files

[root@controller ~]# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', 'two.example.com']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}

Restart the services

[root@controller ~]# systemctl restart httpd.service memcached.service

Access the web UI
Open http://ip/dashboard in a browser (substitute the controller's IP address for ip)

Configure the Swift service on computer and storager

Add and format the disks

# computer node
[root@computer ~]# parted -a optimal --script /dev/sdc -- mktable gpt
[root@computer ~]# parted -a optimal --script /dev/sdc -- mkpart primary xfs 0% 100%
[root@computer ~]# mkfs.xfs -f /dev/sdc1

# storager node
[root@storager ~]# parted -a optimal --script /dev/sdc -- mktable gpt
[root@storager ~]# parted -a optimal --script /dev/sdc -- mkpart primary xfs 0% 100%
[root@storager ~]# mkfs.xfs -f /dev/sdc1

Mount the disks

# computer node
[root@computer ~]# mkdir -p /srv/node/sdc1
[root@computer ~]# vi /etc/fstab
/dev/sdc1       /srv/node/sdc1      xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@computer ~]# mount /srv/node/sdc1/
# storager node
[root@storager ~]# mkdir -p /srv/node/sdc1
[root@storager ~]# vi /etc/fstab
/dev/sdc1       /srv/node/sdc1      xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@storager ~]# mount /srv/node/sdc1/

Configure the rsyncd service

[root@computer ~]# vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.29.146

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
[root@storager ~]# vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.29.147

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

Edit the configuration files

[root@computer ~]# vi /etc/swift/account-server.conf
[root@storager ~]# vi /etc/swift/account-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6202
workers = 2
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = account-server
[root@computer ~]# vi /etc/swift/container-server.conf
[root@storager ~]# vi /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6201
workers = 2
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = container-server
[root@computer ~]# vi /etc/swift/object-server.conf
[root@storager ~]# vi /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6200
workers = 3
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = object-server

Set ownership

[root@computer ~]# chown -R swift:swift /srv/node/
[root@storager ~]# chown -R swift:swift /srv/node/

Start the services

[root@computer ~]# systemctl start  openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service rsyncd.service
[root@computer ~]# systemctl enable  openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@storager ~]# systemctl start  openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service rsyncd.service
[root@storager ~]# systemctl enable  openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

Configure the Swift service on controller

Create and configure the swift user

[root@controller ~]# openstack user create --password-prompt swift
[root@controller ~]# openstack role add --project service --user swift admin

Create the swift service entity

[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store

Create the swift service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne object-store  public http://controller:8080/v1/AUTH_%\(tenant_id\)s 
[root@controller ~]# openstack endpoint create --region RegionOne object-store internal  http://controller:8080/v1/AUTH_%\(tenant_id\)s 
[root@controller ~]# openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1

Edit the configuration files

[root@controller ~]# vi /etc/swift/proxy-server.conf
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 8080
workers = 2
user = swift
swift_dir = /etc/swift
[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin,SwiftOperator,user
cache = swift.cache
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = swift
delay_auth_decision = true
[root@controller ~]# vi /etc/swift/swift.conf 
[swift-hash]
# use a literal random string here; backticks are not expanded inside an ini file
swift_hash_path_suffix = your_random_suffix
[storage-policy:0]
name = Policy-0
default = yes
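
The hash-path suffix must end up in swift.conf as a literal string (and must never change afterwards, since it salts the ring's object hashes). One way to generate a suitable value to paste in, using /dev/urandom so the command does not block waiting for entropy:

```shell
# 8 random bytes, printed as 16 hex characters; paste the result into
# swift.conf as the value of swift_hash_path_suffix.
od -An -tx8 -N8 /dev/urandom | tr -d ' \n'
```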

Create the rings

[root@controller ~]# cd /etc/swift
# Account ring
# Create the ring
[root@controller ~]# swift-ring-builder account.builder create 18 2 1
Parameter meanings:
    18: the ring is divided into 2^18 partitions (the partition power)
    2: each object is stored as two replicas
    1: a partition may be moved again only after one hour (min_part_hours)
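
How those 2^18 partitions are used: swift takes the md5 of an object's path (salted with swift_hash_path_suffix), reads the first 4 bytes as an integer, and keeps only the top part_power bits as the partition number. A simplified sketch of that mapping (the salting is omitted; the real logic lives in swift.common.ring):

```shell
# Map an object path to one of 2^18 partitions, roughly as swift does:
# md5 the path, take the first 4 hash bytes, keep the top 18 bits.
part_power=18
path="/AUTH_test/demo_container/cirros.img"
hash32=$(printf '%s' "$path" | md5sum | cut -c1-8)   # first 4 bytes, as hex
part=$(( 0x$hash32 >> (32 - part_power) ))
echo "partition $part of $((1 << part_power))"
```

The same hash always lands on the same partition, which is why the suffix must stay constant and identical across all nodes.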

# Add the storage nodes to the ring
[root@controller ~]# swift-ring-builder account.builder add r1z1-192.168.29.146:6202/sdc1 100
[root@controller ~]# swift-ring-builder account.builder add r1z1-192.168.29.147:6202/sdc1 100

# Rebalance the ring
[root@controller ~]# swift-ring-builder account.builder rebalance
# Container ring
# Create the ring
[root@controller ~]# swift-ring-builder container.builder create 18 2 1

# Add the storage nodes to the ring
[root@controller ~]# swift-ring-builder container.builder add r1z1-192.168.29.146:6201/sdc1 100
[root@controller ~]# swift-ring-builder container.builder add r1z1-192.168.29.147:6201/sdc1 100

# Rebalance the ring
[root@controller ~]# swift-ring-builder container.builder rebalance
# Object ring
# Create the ring
[root@controller ~]# swift-ring-builder object.builder create 18 2 1

# Add the storage nodes to the ring
[root@controller ~]# swift-ring-builder object.builder add r1z1-192.168.29.146:6200/sdc1 100
[root@controller ~]# swift-ring-builder object.builder add r1z1-192.168.29.147:6200/sdc1 100

# Rebalance the ring
[root@controller ~]# swift-ring-builder object.builder rebalance

Distribute the configuration files

[root@controller ~]# scp account.ring.gz container.ring.gz object.ring.gz swift.conf 192.168.29.146:/etc/swift/
[root@controller ~]# scp account.ring.gz container.ring.gz object.ring.gz swift.conf 192.168.29.147:/etc/swift

Set directory permissions

[root@controller ~]# chown -R swift:swift /etc/swift/

Restart the services

[root@controller ~]# systemctl restart memcached.service 
[root@controller ~]# systemctl start openstack-swift-proxy.service

Restart services on the storage nodes

[root@computer ~]# swift-init all start
[root@storager ~]# swift-init all start

Verify the status

[root@controller ~]# swift stat
                        Account: AUTH_97e5c629da9944c5ad960e5c171dac68
                     Containers: 1
                        Objects: 1
                          Bytes: 13287936
Containers in policy "policy-0": 1
   Objects in policy "policy-0": 1
     Bytes in policy "policy-0": 13287936
         X-Openstack-Request-Id: tx19a4687c644645708525e-005f30ef14
                    X-Timestamp: 1597039034.68479
                     X-Trans-Id: tx19a4687c644645708525e-005f30ef14
                   Content-Type: application/json; charset=utf-8
                  Accept-Ranges: bytes

Upload a file

[root@controller ~]# swift upload demo_container cirros-0.3.4-x86_64-disk.img

List files

[root@controller ~]# swift list
demo_container

Deploying a cloud instance

For the steps to deploy a cloud instance, see: http://www.javashuo.com/article/p-nwfrkbdv-kc.html
