
# OpenStack Manual Installation Guide (Icehouse)

Original article: https://github.com/yongluo2013/osf-openstack-training/blob/master/installation/openstack-icehouse-for-centos65.md

Author: 羅勇 (Luo Yong), cloud computing engineer and agile development practitioner

## Deployment Architecture

To better illustrate how the OpenStack components are deployed in a distributed fashion, and how their logical network configurations differ, this lab does not use an all-in-one deployment. Instead, the components are deployed separately across multiple nodes, which makes later study and experimentation easier.

(architecture diagram)

## Network Topology

(network topology diagram)

## Environment Preparation

This lab uses VirtualBox for Windows as the virtualization platform to simulate the physical networks and physical servers. To deploy to a real physical environment, these steps can be replaced directly with the corresponding configuration on physical machines; the principle is the same.

VirtualBox download: https://www.virtualbox.org/wiki/Downloads

### Virtual Networks

Create three virtual networks, Net0, Net1 and Net2, configured in VirtualBox as follows.

Net0:
	Network name: VirtualBox host-only Ethernet Adapter#2
	Purpose: admin/management network
	IP block: 10.20.0.0/24
	DHCP: disabled
	Linux device: eth0

Net1:
	Network name: VirtualBox host-only Ethernet Adapter#3
	Purpose: public network
	IP block: 172.16.0.0/24
	DHCP: disabled
	Linux device: eth1

Net2:
	Network name: VirtualBox host-only Ethernet Adapter#4
	Purpose: storage/private network
	IP block: 192.168.4.0/24
	DHCP: disabled
	Linux device: eth2
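On hosts where the VBoxManage CLI is available, the three host-only networks can also be created from the command line. A sketch: the interface names vboxnet0..vboxnet2 (as created on Linux/macOS) and the host-side .1 addresses are assumptions; on Windows the adapters appear under the names listed in the table above.

```shell
# Create three host-only interfaces; VirtualBox names them sequentially
# (vboxnet0..vboxnet2 on Linux/macOS -- an assumption, adjust to your host).
VBoxManage hostonlyif create    # -> Net0
VBoxManage hostonlyif create    # -> Net1
VBoxManage hostonlyif create    # -> Net2

# Give the host an address in each block, and make sure DHCP stays off.
VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.20.0.1   --netmask 255.255.255.0
VBoxManage hostonlyif ipconfig vboxnet1 --ip 172.16.0.1  --netmask 255.255.255.0
VBoxManage hostonlyif ipconfig vboxnet2 --ip 192.168.4.1 --netmask 255.255.255.0
VBoxManage dhcpserver remove --ifname vboxnet0 2>/dev/null
VBoxManage dhcpserver remove --ifname vboxnet1 2>/dev/null
VBoxManage dhcpserver remove --ifname vboxnet2 2>/dev/null
```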

### Virtual Machines

Create three virtual machines, VM0, VM1 and VM2, configured as follows.

VM0:
	Name: controller0
	vCPU: 1
	Memory: 1G
	Disk: 30G
	Networks: net0

VM1:
	Name: network0
	vCPU: 1
	Memory: 1G
	Disk: 30G
	Networks: net0, net1, net2

VM2:
	Name: compute0
	vCPU: 2
	Memory: 2G
	Disk: 30G
	Networks: net0, net2
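The virtual machines can likewise be scripted. A minimal sketch for controller0 (the OS type, disk file name and the NIC-to-network wiring are assumptions following the tables above):

```shell
# Create and register the controller0 VM (sketch; adjust names and paths).
VBoxManage createvm --name controller0 --ostype RedHat_64 --register
VBoxManage modifyvm controller0 --cpus 1 --memory 1024
# Attach NIC 1 to the management host-only network (Net0).
VBoxManage modifyvm controller0 --nic1 hostonly --hostonlyadapter1 vboxnet0
# Create and attach a 30 GB disk (size is given in MB).
VBoxManage createhd --filename controller0.vdi --size 30720
VBoxManage storagectl controller0 --name SATA --add sata
VBoxManage storageattach controller0 --storagectl SATA --port 0 --device 0 \
    --type hdd --medium controller0.vdi
```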

### Network Settings

controller0
     eth0: 10.20.0.10   (management network)
     eth1: (disabled)
     eth2: (disabled)

network0
     eth0: 10.20.0.20   (management network)
     eth1: 172.16.0.20  (public/external network)
     eth2: 192.168.4.20 (private network)

compute0
     eth0: 10.20.0.30   (management network)
     eth1: (disabled)
     eth2: 192.168.4.30 (private network)

compute1  (optional)
     eth0: 10.20.0.31   (management network)
     eth1: (disabled)
     eth2: 192.168.4.31 (private network)

### Operating System Preparation

This lab uses the Linux distribution CentOS 6.5 x86_64. When installing the operating system, choose the "Basic" package set; after installation, the following YUM repositories must also be configured.

ISO download: http://mirrors.163.com/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-bin-DVD1.iso

EPEL repository: http://dl.fedoraproject.org/pub/epel/6/x86_64/

RDO repository: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/

To configure the repositories automatically, run the following commands. Once the repositories are installed, update all RPM packages; because the update upgrades the kernel, reboot the operating system afterwards.

yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

The first command may report an error. Following http://blog.csdn.net/sissiyinxi/article/details/7595617, it can be resolved like this:

a. Open the repo file under /etc/yum.repos.d/, in this case /etc/yum.repos.d/rdo-release.repo.

b. Change enabled=1 to enabled=0 in the failing repo section, which disables that repository. Do not do this if no error occurs. It also helps to check in a browser whether the repository URL actually exists, e.g. http://mirror.centos.org/centos/7/cloud/x86_64/openstack-ocata/.

yum update -y
reboot

Now the installation and configuration can begin!

### Common Configuration (all nodes)

The following commands must be executed on every node.

Edit the hosts file:

vi /etc/hosts

127.0.0.1    localhost
::1          localhost 
10.20.0.10   controller0 
10.20.0.20   network0
10.20.0.30   compute0
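Instead of editing interactively, the entries can be appended with a heredoc. A sketch, writing to a scratch file here (point HOSTS_FILE at /etc/hosts on a real node):

```shell
# Append the node entries non-interactively. HOSTS_FILE defaults to a
# scratch copy so the sketch is safe to run anywhere.
HOSTS_FILE="${HOSTS_FILE:-./hosts.scratch}"
cat >> "$HOSTS_FILE" <<'EOF'
10.20.0.10   controller0
10.20.0.20   network0
10.20.0.30   compute0
EOF
grep controller0 "$HOSTS_FILE"
```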

Disable SELinux:

vi /etc/selinux/config
SELINUX=disabled
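The same change can be scripted with sed. A sketch operating on a scratch copy (use /etc/selinux/config on a real node); note that `setenforce 0` would additionally disable SELinux for the running system without a reboot:

```shell
# Flip SELINUX=... to disabled in place. SELINUX_CONF defaults to a
# scratch stand-in for /etc/selinux/config so the sketch runs anywhere.
SELINUX_CONF="${SELINUX_CONF:-./selinux.scratch}"
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$SELINUX_CONF"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$SELINUX_CONF"
grep '^SELINUX=' "$SELINUX_CONF"
# On a real node, also turn it off immediately (no reboot needed):
# setenforce 0
```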

Install the NTP service:

yum install ntp -y
service ntpd start
chkconfig ntpd on  # start at boot

Edit the NTP configuration to synchronize from controller0 (on every node except controller0):

vi /etc/ntp.conf

server 10.20.0.10

Synchronize immediately and check that time synchronization is configured correctly (on every node except controller0):

ntpdate -u 10.20.0.10
service ntpd restart
ntpq -p

Flush the firewall rules:

vi /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT

Restart the firewall and check that the rules took effect:

service iptables restart
iptables -L

Install openstack-utils, which allows configuration files to be modified directly from the command line:

yum install -y openstack-utils

### Base Service Installation and Configuration (controller0 node)

The base services are NTP, a MySQL database and an AMQP message broker; this lab uses MySQL and Qpid as the implementations of the latter two.

Edit the NTP configuration to synchronize from the local clock (127.127.1.0):

vi /etc/ntp.conf
server 127.127.1.0
fudge  127.127.1.0 stratum 10

Restart the NTP service:

service ntpd restart

Install MySQL:

yum install -y mysql mysql-server MySQL-python

Edit the MySQL configuration:

vi /etc/my.cnf
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Start the MySQL service:

service mysqld start
chkconfig mysqld on

Interactively set the MySQL root password; use "openstack":

mysql_secure_installation

Install the Qpid message broker, configured so that clients do not need to authenticate:

yum install -y qpid-cpp-server

vi /etc/qpidd.conf
auth=no

After changing the configuration, restart the Qpid daemon:

service qpidd start
chkconfig qpidd on

## Controller Node Installation (controller0)

Set the hostname:

vi /etc/sysconfig/network
HOSTNAME=controller0

Configure the network interface:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.10
NETMASK=255.255.255.0

After editing the network configuration file, restart the network service:

service network restart

### Keystone Installation and Configuration

Install the Keystone packages:

yum install openstack-keystone python-keystoneclient -y

Generate an admin token for Keystone:

ADMIN_TOKEN=$(openssl rand -hex 10)
echo $ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN

Configure the database connection:

openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:openstack@controller0/keystone
openstack-config --set /etc/keystone/keystone.conf DEFAULT debug True
openstack-config --set /etc/keystone/keystone.conf DEFAULT verbose True

Configure Keystone to use PKI tokens:

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl

Create the Keystone database and user:

mysql -uroot -popenstack -e "CREATE DATABASE keystone;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller0' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'openstack';"
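The same CREATE/GRANT pattern recurs for every service database (Glance, Nova and Neutron below). A small hypothetical helper that emits the SQL can cut down on typos; pipe its output into mysql on the controller:

```shell
# Print the CREATE DATABASE / GRANT statements for one OpenStack service.
# Usage on a real node:  db_sql keystone openstack | mysql -uroot -popenstack
db_sql() {
    local svc="$1" pass="$2"
    cat <<EOF
CREATE DATABASE ${svc};
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'controller0' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'%' IDENTIFIED BY '${pass}';
EOF
}
db_sql keystone openstack
```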

Initialize the Keystone database:

su -s /bin/sh -c "keystone-manage db_sync"

Alternatively, the openstack-db tool can initialize the database directly:

openstack-db --init --service keystone --password openstack

Start the Keystone service:

service openstack-keystone start
chkconfig openstack-keystone on

Set the authentication variables:

export OS_SERVICE_TOKEN=`echo $ADMIN_TOKEN`
export OS_SERVICE_ENDPOINT=http://controller0:35357/v2.0

Create the admin tenant and the tenant used by the system services:

keystone tenant-create --name=admin --description="Admin Tenant"
keystone tenant-create --name=service --description="Service Tenant"

Create the admin user:

keystone user-create --name=admin --pass=admin --email=admin@example.com

Create the admin role:

keystone role-create --name=admin

Assign the admin role to the admin user:

keystone user-role-add --user=admin --tenant=admin --role=admin

Create the service entry for Keystone:

keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"

Associate the Keystone service with its endpoint:

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://controller0:5000/v2.0 \
--internalurl=http://controller0:5000/v2.0 \
--adminurl=http://controller0:35357/v2.0

Verify that Keystone is installed correctly.

Unset the token variables set earlier; otherwise they interfere with authenticating as the newly created user.

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

First verify from the command line:

keystone --os-username=admin --os-password=admin --os-auth-url=http://controller0:35357/v2.0 token-get
keystone --os-username=admin --os-password=admin --os-tenant-name=admin --os-auth-url=http://controller0:35357/v2.0 token-get

Then authenticate with environment variables, saving the credentials to a file:

vi ~/keystonerc

export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller0:35357/v2.0

Source the file to make it take effect:

source keystonerc
keystone token-get

Keystone installation is complete.

### Glance Installation and Configuration

Install the Glance packages:

yum install openstack-glance python-glanceclient -y

Configure the Glance database connection:

openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_connection mysql://glance:openstack@controller0/glance
openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_connection mysql://glance:openstack@controller0/glance

Initialize the Glance database:

openstack-db --init --service glance --password openstack

Create the glance user:

keystone user-create --name=glance --pass=glance --email=glance@example.com

and grant it the admin role in the service tenant:

keystone user-role-add --user=glance --tenant=service --role=admin

Create the glance service:

keystone service-create --name=glance --type=image --description="Glance Image Service"

Create the Glance endpoint in Keystone:

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}')  \
--publicurl=http://controller0:9292 \
--internalurl=http://controller0:9292 \
--adminurl=http://controller0:9292

Use openstack-utils to edit the glance-api and glance-registry configuration files:

openstack-config --set /etc/glance/glance-api.conf DEFAULT debug True
openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-registry.conf DEFAULT debug True
openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glance
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
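The keystone_authtoken block above is repeated almost verbatim for Nova and Neutron later in this guide. A hypothetical helper that prints the openstack-config calls for any config file keeps the seven shared settings in one place (pipe its output to sh to apply them):

```shell
# Print the recurring keystone_authtoken settings for a given config file,
# service user and password. Apply with:  authtoken_cmds <conf> <user> <pw> | sh
authtoken_cmds() {
    local conf="$1" user="$2" pass="$3"
    for kv in "auth_uri http://controller0:5000" \
              "auth_host controller0" \
              "auth_protocol http" \
              "auth_port 35357" \
              "admin_tenant_name service" \
              "admin_user $user" \
              "admin_password $pass"; do
        echo "openstack-config --set $conf keystone_authtoken $kv"
    done
}
authtoken_cmds /etc/glance/glance-api.conf glance glance
```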

Start the two Glance services:

service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on

Download the CirrOS image and upload it to verify that Glance works:

wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
glance image-create --progress --name="CirrOS 0.3.1" --disk-format=qcow2  --container-format=ovf --is-public=true < cirros-0.3.1-x86_64-disk.img

List the image just uploaded:

glance image-list

If the image information is displayed, the installation succeeded.

### Nova Installation and Configuration

Install the Nova controller packages:

yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

Create the nova user in Keystone and grant it the admin role in the service tenant:

keystone user-create --name=nova --pass=nova --email=nova@example.com
keystone user-role-add --user=nova --tenant=service --role=admin

Register the service in Keystone:

keystone service-create --name=nova --type=compute --description="Nova Compute Service"

Register the endpoint in Keystone:

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}')  \
--publicurl=http://controller0:8774/v2/%\(tenant_id\)s \
--internalurl=http://controller0:8774/v2/%\(tenant_id\)s \
--adminurl=http://controller0:8774/v2/%\(tenant_id\)s

Configure the Nova MySQL connection:

openstack-config --set /etc/nova/nova.conf database connection mysql://nova:openstack@controller0/nova

Initialize the database:

openstack-db --init --service nova --password openstack

Configure nova.conf:

openstack-config --set /etc/nova/nova.conf DEFAULT debug True
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid 
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller0

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.20.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 10.20.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.20.0.10

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova

Add the Keystone credentials to api-paste.ini:

openstack-config --set /etc/nova/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host controller0
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name service
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password nova

Start the services:

service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start

Enable them at boot:

chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on

Check that the services are running correctly:

nova-manage service list

[root@controller0 ~]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth controller0                          internal         enabled    :-)   2013-11-12 11:14:56
nova-cert        controller0                          internal         enabled    :-)   2013-11-12 11:14:56
nova-scheduler   controller0                          internal         enabled    :-)   2013-11-12 11:14:56
nova-conductor   controller0                          internal         enabled    :-)   2013-11-12 11:14:56

Check the processes:

[root@controller0 ~]# ps -ef|grep nova
nova      7240     1  1 23:11 ?        00:00:02 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      7252     1  1 23:11 ?        00:00:01 /usr/bin/python /usr/bin/nova-cert --logfile /var/log/nova/cert.log
nova      7264     1  1 23:11 ?        00:00:01 /usr/bin/python /usr/bin/nova-consoleauth --logfile /var/log/nova/consoleauth.log
nova      7276     1  1 23:11 ?        00:00:01 /usr/bin/python /usr/bin/nova-scheduler --logfile /var/log/nova/scheduler.log
nova      7288     1  1 23:11 ?        00:00:01 /usr/bin/python /usr/bin/nova-conductor --logfile /var/log/nova/conductor.log
nova      7300     1  0 23:11 ?        00:00:00 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/
nova      7336  7240  0 23:11 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      7351  7240  0 23:11 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      7352  7240  0 23:11 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

### Neutron Server Installation and Configuration

Install the Neutron server packages:

yum install -y openstack-neutron openstack-neutron-ml2 python-neutronclient

Create the Neutron user, role assignment, service and endpoint in Keystone:

keystone user-create --name neutron --pass neutron --email neutron@example.com

keystone user-role-add --user neutron --tenant service --role admin

keystone service-create --name neutron --type network --description "OpenStack Networking"

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller0:9696 \
--adminurl http://controller0:9696 \
--internalurl http://controller0:9696

Create the Neutron database in MySQL:

mysql -uroot -popenstack -e "CREATE DATABASE neutron;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller0' IDENTIFIED BY 'openstack';"

Configure the MySQL connection:

openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:openstack@controller0/neutron

Configure Neutron Keystone authentication:

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron

Configure Neutron to use Qpid:

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller0


Configure Neutron to notify Nova of port status and data changes:

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller0:8774/v2

openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller0:35357/v2.0

Configure the Neutron ML2 plugin to use Open vSwitch:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True

Configure Nova to use Neutron as its network service:

openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller0:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller0:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_SECRET

Restart the Nova services on the controller:

service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart

Start the Neutron server:

service neutron-server start
chkconfig neutron-server on

## Network Node Installation (network0 node)

Set the hostname:

vi /etc/sysconfig/network
HOSTNAME=network0

Configure the network interfaces:

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.20
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.20
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.20
NETMASK=255.255.255.0

After editing the network configuration files, restart the network service:

service network restart

Install the Neutron packages:

yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

Enable IP forwarding:

vi /etc/sysctl.conf 
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Apply immediately:

sysctl -p

Configure Neutron Keystone authentication:

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron

Configure Qpid:

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller0

Configure Neutron to use ML2 + Open vSwitch + GRE:

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.4.20
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent

Configure the L3 agent:

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True

Configure the DHCP agent:

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True

Configure the metadata agent:

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller0:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET

Start Open vSwitch:

service openvswitch start
chkconfig openvswitch on

Create the integration and external bridges, and attach eth1 to the external bridge:

ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1

Edit the eth1 and br-ex network configuration:

vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes 

vi /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex
TYPE=Bridge
ONBOOT=no
BOOTPROTO=none

Restart the network service:

service network restart

Assign an IP address to br-ex:

ip link set br-ex up
sudo ip addr add 172.16.0.20/24 dev br-ex

Start the Neutron services:

service neutron-openvswitch-agent start
service neutron-l3-agent start
service neutron-dhcp-agent start
service neutron-metadata-agent start

chkconfig neutron-openvswitch-agent on
chkconfig neutron-l3-agent on
chkconfig neutron-dhcp-agent on
chkconfig neutron-metadata-agent on

## Compute Node Installation (compute0 node)

Set the hostname:

vi /etc/sysconfig/network
HOSTNAME=compute0

Configure the network interfaces:

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.30
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.30
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.30
NETMASK=255.255.255.0

After editing the network configuration files, restart the network service:

service network restart

Install the Nova compute package:

yum install -y openstack-nova-compute

Configure Nova:

openstack-config --set /etc/nova/nova.conf database connection mysql://nova:openstack@controller0/nova

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller0

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.20.0.30
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.20.0.30
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller0:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu

openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller0

Start the compute node services:

service libvirtd start
service messagebus start
service openstack-nova-compute start

chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on

On the controller node, check that the compute service has started:

nova-manage service list

The compute node service now appears in the list:

[root@controller0 ~]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth controller0                          internal         enabled    :-)   2014-07-19 09:04:18
nova-cert        controller0                          internal         enabled    :-)   2014-07-19 09:04:19
nova-conductor   controller0                          internal         enabled    :-)   2014-07-19 09:04:20
nova-scheduler   controller0                          internal         enabled    :-)   2014-07-19 09:04:20
nova-compute     compute0                             nova             enabled    :-)   2014-07-19 09:04:19

Install the Neutron ML2 plugin and Open vSwitch agent:

yum install -y openstack-neutron-ml2 openstack-neutron-openvswitch

Configure Neutron Keystone authentication:

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron

Configure Neutron to use Qpid:

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller0

Configure Neutron to use ML2 with Open vSwitch and GRE:

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.4.30
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent

Configure Nova to use Neutron for network services:

openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller0:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller0:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_SECRET

service openvswitch start
chkconfig openvswitch on
ovs-vsctl add-br br-int

service openstack-nova-compute restart
service neutron-openvswitch-agent start
chkconfig neutron-openvswitch-agent on

Check that the agents started correctly

neutron agent-list

If they started correctly, the output looks like this:

[root@controller0 ~]# neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+
| id                                   | agent_type         | host     | alive | admin_state_up |
+--------------------------------------+--------------------+----------+-------+----------------+
| 2c5318db-6bc2-4d09-b728-bbdd677b1e72 | L3 agent           | network0 | :-)   | True           |
| 4a79ff75-6205-46d0-aec1-37f55a8d87ce | Open vSwitch agent | network0 | :-)   | True           |
| 5a5bd885-4173-4515-98d1-0edc0fdbf556 | Open vSwitch agent | compute0 | :-)   | True           |
| 5c9218ce-0ebd-494a-b897-5e2df0763837 | DHCP agent         | network0 | :-)   | True           |
| 76f2069f-ba84-4c36-bfc0-3c129d49cbb1 | Metadata agent     | network0 | :-)   | True           |
+--------------------------------------+--------------------+----------+-------+----------------+

##Create the initial networks

Create the external network

neutron net-create ext-net --shared --router:external=True

Add a subnet to the external network

neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=172.16.0.100,end=172.16.0.200 \
--disable-dhcp --gateway 172.16.0.1 172.16.0.0/24

建立住戶網絡

首先建立demo用戶、租戶已經分配角色關係

keystone user-create --name=demo --pass=demo --email=demo@example.com
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo

建立租戶網絡demo-net

neutron net-create demo-net

爲租戶網絡添加subnet

neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 192.168.1.0/24

爲租戶網絡建立路由,並鏈接到外部網絡

neutron router-create demo-router

將demo-net 鏈接到路由器

neutron router-interface-add demo-router $(neutron net-show demo-net|awk '/ subnets / { print $4 }')

設置demo-router 默認網關

neutron router-gateway-set demo-router ext-net

啓動一個instance

nova boot --flavor m1.tiny --image $(nova image-list|awk '/ CirrOS / { print $2 }') --nic net-id=$(neutron net-list|awk '/ demo-net / { print $2 }') --security-group default demo-instance1

##Dashboard 安裝

安裝Dashboard 相關包

yum install memcached python-memcached mod_wsgi openstack-dashboard

配置mencached

vi /etc/openstack-dashboard/local_settings 

CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}

配置Keystone hostname

vi /etc/openstack-dashboard/local_settings 
OPENSTACK_HOST = "controller0"

啓動Dashboard 相關服務

service httpd start
service memcached start
chkconfig httpd on
chkconfig memcached on

打開瀏覽器驗證,用戶名:admin 密碼:admin

http://10.20.0.10/dashboard

##Cinder 安裝

###Cinder controller 安裝

先在controller0 節點安裝 cinder api

yum install openstack-cinder -y

配置cinder數據庫鏈接

openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:openstack@controller0/cinder

初始化數據庫

mysql -uroot -popenstack -e  "CREATE DATABASE cinder;""
mysql -uroot -popenstack -e  "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller0' IDENTIFIED BY 'openstack';"

su -s /bin/sh -c "cinder-manage db sync" cinder

或者用openstack-db 工具初始化數據庫

openstack-db --init --service cinder --password openstack

在Keystone中建立cinder 系統用戶

keystone user-create --name=cinder --pass=cinder --email=cinder@example.com
keystone user-role-add --user=cinder --tenant=service --role=admin

在Keystone註冊一個cinder 的 service

keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"

建立一個 cinder 的 endpoint

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ volume / {print $2}') \
--publicurl=http://controller0:8776/v1/%\(tenant_id\)s \
--internalurl=http://controller0:8776/v1/%\(tenant_id\)s \
--adminurl=http://controller0:8776/v1/%\(tenant_id\)s

在Keystone註冊一個cinderv2 的 service

keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"

建立一個 cinderv2 的 endpoint

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl=http://controller0:8776/v2/%\(tenant_id\)s \
--internalurl=http://controller0:8776/v2/%\(tenant_id\)s \
--adminurl=http://controller0:8776/v2/%\(tenant_id\)s

配置cinder Keystone認證

openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder

配置qpid

openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller0

啓動cinder controller 相關服務

service openstack-cinder-api start
service openstack-cinder-scheduler start
chkconfig openstack-cinder-api on
chkconfig openstack-cinder-scheduler on

###Cinder block storage 節點安裝

執行下面的操做以前,固然別忘了須要安裝公共部份內容!(好比ntp,hosts 等)

開始配置以前,Cinder0 建立一個新磁盤,用於block 的分配

/dev/sdb

主機名設置

vi /etc/sysconfig/network
HOSTNAME=cinder0

網卡配置

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.40
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.40
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.40
NETMASK=255.255.255.0

網絡配置文件修改完後重啓網絡服務

serice network restart

##網絡拓撲

include-cinder

安裝Cinder 相關包

yum install -y openstack-cinder scsi-target-utils

建立 LVM physical and logic 卷,做爲cinder 塊存儲的實現

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

Add a filter entry to the devices section in the /etc/lvm/lvm.conf file to keep LVM from scanning devices used by virtual machines

添加一個過濾器保證 虛擬機能掃描到LVM

vi /etc/lvm/lvm.conf

devices {
...
filter = [ "a/sda1/", "a/sdb/", "r/.*/"]
...
}

配置Keystone 認證

openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/cinder/cinder.conf keystone_authtokenauth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder

配置qpid

openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller0

配置數據庫鏈接

openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:openstack@controller0/cinder

配置Glance server

openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller0

配置cinder-volume 的 my_ip , 這個ip決定了存儲數據跑在哪網卡上

openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.4.40

配置 iSCSI target 服務發現 Block Storage volumes

vi /etc/tgt/targets.conf
include /etc/cinder/volumes/*

啓動cinder-volume 服務

service openstack-cinder-volume start
service tgtd start
chkconfig openstack-cinder-volume on
chkconfig tgtd on

##Swift 安裝

###安裝存儲節點

在執行下面的操做以前,固然別忘了須要安裝公共部份內容哦!(好比ntp,hosts 等)

在開始配置以前,爲Swift0 建立一個新磁盤,用於Swift 數據的存儲,好比:

/dev/sdb

磁盤建立好後,啓動OS爲新磁盤分區

fdisk /dev/sdb
mkfs.xfs /dev/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1
chown -R swift:swift /srv/node

主機名設置

vi /etc/sysconfig/network
HOSTNAME=swift0

網卡配置

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.50
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.50
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.50
NETMASK=255.255.255.0

網絡配置文件修改完後重啓網絡服務

serice network restart

##網絡拓撲

這裏省去Cinder 節點部分

include-swift

安裝swift storage 節點相關的包

yum install -y openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd

配置object,container ,account 的配置文件

openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip 10.20.0.50
openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip 10.20.0.50
openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip 10.20.0.50

在rsynd 配置文件中配置要同步的文件目錄

vi /etc/rsyncd.conf

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.4.50

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock


vi /etc/xinetd.d/rsync
disable = no

service xinetd start
chkconfig xinetd on

Create the swift recon cache directory and set its permissions:

mkdir -p /var/swift/recon
chown -R swift:swift /var/swift/recon

###安裝swift-proxy 服務

爲swift 在Keystone 中建立一個用戶

keystone user-create --name=swift --pass=swift --email=swift@example.com

爲swift 用戶添加root 用戶

keystone user-role-add --user=swift --tenant=service --role=admin

爲swift 添加一個對象存儲服務

keystone service-create --name=swift --type=object-store --description="OpenStack Object Storage"

爲swift 添加endpoint

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ object-store / {print $2}') \
--publicurl='http://controller0:8080/v1/AUTH_%(tenant_id)s' \
--internalurl='http://controller0:8080/v1/AUTH_%(tenant_id)s' \
--adminurl=http://controller0:8080

安裝swift-proxy 相關軟件包

yum install -y openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token

添加配置文件,完成配置後再copy 該文件到每一個storage 節點

openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix xrfuniounenqjnw
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix fLIbertYgibbitZ

scp /etc/swift/swift.conf root@10.20.0.50:/etc/swift/

修改memcached 默認監聽ip地址

vi /etc/sysconfig/memcached
OPTIONS="-l 10.20.0.10"

啓動mencached

service memcached restart
chkconfig memcached on

修改過proxy server配置

vi /etc/swift/proxy-server.conf

openstack-config --set /etc/swift/proxy-server.conf filter:keystone operator_roles Member,admin,swiftoperator

openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_host controller0
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_port 35357 
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_user swift
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_tenant_name service
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_password swift
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken delay_auth_decision true

構建ring 文件

cd /etc/swift
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1

swift-ring-builder account.builder add z1-10.20.0.50:6002R10.20.0.50:6005/sdb1 100
swift-ring-builder container.builder add z1-10.20.0.50:6001R10.20.0.50:6004/sdb1 100
swift-ring-builder object.builder add z1-10.20.0.50:6000R10.20.0.50:6003/sdb1 100

swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance

拷貝ring 文件到storage 節點

scp *ring.gz root@10.20.0.50:/etc/swift/

修改proxy server 和storage 節點Swift 配置文件的權限

ssh root@10.20.0.50 "chown -R swift:swift /etc/swift"
chown -R swift:swift /etc/swift

在controller0 上啓動proxy service

service openstack-swift-proxy start
chkconfig openstack-swift-proxy on

在Swift0 上 啓動storage 服務

service openstack-swift-object start
service openstack-swift-object-replicator start
service openstack-swift-object-updater start 
service openstack-swift-object-auditor start

service openstack-swift-container start
service openstack-swift-container-replicator start
service openstack-swift-container-updater start
service openstack-swift-container-auditor start

service openstack-swift-account start
service openstack-swift-account-replicator start
service openstack-swift-account-reaper start
service openstack-swift-account-auditor start

設置開機啓動

chkconfig openstack-swift-object on
chkconfig openstack-swift-object-replicator on
chkconfig openstack-swift-object-updater on 
chkconfig openstack-swift-object-auditor on

chkconfig openstack-swift-container on
chkconfig openstack-swift-container-replicator on
chkconfig openstack-swift-container-updater on
chkconfig openstack-swift-container-auditor on

chkconfig openstack-swift-account on
chkconfig openstack-swift-account-replicator on
chkconfig openstack-swift-account-reaper on
chkconfig openstack-swift-account-auditor on

在controller 節點驗證 Swift 安裝

swift stat

上傳兩個文件測試

swift upload myfiles test.txt
swift upload myfiles test2.txt

下載剛上傳的文件

swift download myfiles

##擴展一個新的swift storage 節點

在開始配置以前,爲Swift0 建立一個新磁盤,用於Swift 數據的存儲,好比:

/dev/sdb

磁盤建立好後,啓動OS爲新磁盤分區

fdisk /dev/sdb
mkfs.xfs /dev/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1
chown -R swift:swift /srv/node

主機名設置

vi /etc/sysconfig/network
HOSTNAME=swift1

網卡配置

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.51
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.51
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.51
NETMASK=255.255.255.0

網絡配置文件修改完後重啓網絡服務

serice network restart

yum install -y openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd

配置object,container ,account 的配置文件

openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip 10.20.0.51
openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip 10.20.0.51
openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip 10.20.0.51

在rsynd 配置文件中配置要同步的文件目錄

vi /etc/rsyncd.conf

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.4.51

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock


vi /etc/xinetd.d/rsync
disable = no

service xinetd start
chkconfig xinetd on

Create the swift recon cache directory and set its permissions:

mkdir -p /var/swift/recon
chown -R swift:swift /var/swift/recon

從新平衡存儲

swift-ring-builder account.builder add z1-10.20.0.51:6002R10.20.0.51:6005/sdb1 100
swift-ring-builder container.builder add z1-10.20.0.51:6001R10.20.0.51:6004/sdb1 100
swift-ring-builder object.builder add z1-10.20.0.51:6000R10.20.0.51:6003/sdb1 100

swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance

到此,OpenStack 核心組件全部節點安裝完畢!

相關文章
相關標籤/搜索