Introduction to OpenStack
OpenStack (IaaS, Infrastructure as a Service) is an open-source project that gives users a platform for deploying and operating cloud computing. A cloud managed by OpenStack makes full use of the underlying hardware resources and is easy to scale and manage. OpenStack consists of multiple projects providing different services, covering networking, compute, storage, virtualization, and more. The core projects are as follows:
Identity Service: Keystone (code name). The authentication platform; it provides identity authentication and service tokens to the other services. It also maintains a service catalog: every service must be registered in Keystone, which makes it the access entry point for all of OpenStack.
Image Service: Glance (code name). Provides image retrieval; images are uploaded, deleted, and edited through Glance.
Compute: Nova. Provides compute and control functions, scheduling user requests and handling all management of virtual machines.
Block Storage: Cinder. Provides persistent block storage for virtual machines.
Object Storage: Swift. Provides distributed object storage and can serve as the image store for Glance. It is rarely used in practice; GlusterFS is the more common choice.
Network: Neutron. Provides network virtualization for the cloud and supplies network connectivity to the other OpenStack services.
Dashboard: Horizon. Provides a web interface for managing the OpenStack services.
.......
The process of launching a VM instance in OpenStack (Launching a VM):
Deploying OpenStack
The OpenStack version used here is Icehouse. The deployment below follows the official documentation at http://docs.openstack.org/icehouse/install-guide/install/yum/content/.
The deployment covers only the core OpenStack services: Identity Service, Image Service, Compute, Block Storage, Network, and Dashboard.
Lab Topology
Lab environment:
Controller Node: eth0 (192.168.7.11), eth1 (192.168.1.11)
Network Node: eth0 (192.168.7.12), eth1 (172.16.0.12), eth2 (attached to the virtual switch and connected to the external network)
Block Storage Node: eth0 (192.168.7.13)
Compute Node: eth0 (192.168.7.14), eth1 (172.16.0.14)
Each node has one interface (eth0) serving as the internal management interface for OpenStack-internal communication; these interfaces sit on the 192.168.7.0/24 network. The 172.16.0.0/16 network mainly carries data between the Network Node and the Compute Nodes; traffic from the Compute Nodes to the outside world goes through the virtual router on the Network Node and then out over the 192.168.1.0/24 network. The Controller Node connects to the external network so its APIs can be exposed to users on the Internet.
The distribution of OpenStack services across the nodes is shown in the figure below (image from the official site):
During deployment each node needs to install packages, so an extra node is added as a router connecting the 192.168.7.0/24 and 192.168.1.0/24 networks. The Compute Node and Block Storage Node then reach the Internet simply by pointing their gateway at this router.
Router Deployment
[root@router ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
.......
IPADDR=192.168.7.10
NETMASK=255.255.255.0
[root@router ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
.......
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1    #do not forget the gateway
Enable IP forwarding
[root@router ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@router ~]# sysctl -p
Start iptables and configure SNAT
[root@router ~]# iptables -t nat -A POSTROUTING -s 192.168.7.0/24 -j SNAT --to-source 192.168.1.10
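This rule lives only in the running kernel; assuming the stock CentOS 6 iptables init script, a quick way to keep it across reboots:

###save the current rules and load them at boot###
[root@router ~]# service iptables save
[root@router ~]# chkconfig iptables on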
Also deploy a time server on the router
[root@router ~]# vim /etc/ntp.conf
........
server cn.pool.ntp.org
server 0.cn.pool.ntp.org
........
fudge 127.127.1.0 stratum 10
[root@router ~]# service ntpd start
[root@router ~]# chkconfig ntpd on
Before deploying services on the nodes, configure the network environment: make sure every node synchronizes its time to the router node, and set each node's hostname and hosts file so that the nodes can resolve one another. The hosts file looks like this (the NTP client setup is sketched right after it):
......
192.168.7.10    router      router.xiaoxiao.com
192.168.7.11    controller  controller.xiaoxiao.com
192.168.7.12    network1    netowrk1.xiaoxiao.com
192.168.7.13    cinder1     cinder1.xiaoxiao.com
192.168.7.14    computer1   computer1.xiaoxiao.com
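For the client side of the time sync, a minimal sketch (assuming ntpd and the ntpdate utility from the base repos; shown on the controller and repeated on every node) is to point the node at the router and verify:

###sync the node to the router###
[root@controller ~]# vim /etc/ntp.conf
server 192.168.7.10
[root@controller ~]# service ntpd start
[root@controller ~]# chkconfig ntpd on
###one-shot check that the router answers NTP###
[root@controller ~]# ntpdate -u 192.168.7.10
[root@controller ~]# ntpq -p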
Identity Service Deployment
Configure the Icehouse and EPEL repositories; EPEL provides the required RPMs and the various third-party Python packages they depend on.
[root@www ~]# wget https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
[root@www ~]# yum install rdo-release-icehouse-4.noarch.rpm
Because Icehouse is an older release, the baseurl in the repo file needs to be adjusted:
[root@www yum.repos.d]# vim rdo-release.repo
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=http://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/
.....
Install the MySQL database (the MariaDB Galera build); the database provides data storage for the various services.
[root@controller ~]# yum install mariadb-galera-server
Create the data directory and edit the configuration file:
[root@controller ~]# mkdir -p /data/mydata
[root@controller ~]# chown -R mysql.mysql /data
[root@controller ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mydata
....
default-storage-engine = innodb
innodb_file_per_table = ON
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
skip_name_resolve = ON
After initializing the database, start the mysqld service:
[root@controller ~]# mysql_install_db --datadir=/data/mydata/ --user=mysql
[root@controller ~]# service mysqld start
Starting mysqld:                                           [  OK  ]
[root@controller ~]# chkconfig mysqld on
MySQL ships with anonymous accounts. Run mysql_secure_installation to set a root password for the database, remove the anonymous accounts, and flush privileges; simply run the command and follow the prompts.
Install the packages required by the Identity service, create the keystone database and matching user in MySQL, and import the initial data.
[root@controller ~]# yum install openstack-utils openstack-keystone python-keystoneclient
........
[root@controller ~]# mysql -u root -ppassword
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY 'keystone';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY 'keystone';
mysql> flush privileges;
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
The same can be done with the openstack-db command (from the openstack-utils package): openstack-db --init --service keystone --pass keystone.
The Identity service stores its data in MySQL; point it at the database in its configuration file.
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf \
> database connection mysql://keystone:keystone@controller/keystone
Configure the keystone admin token
To manage keystone as the admin user, the keystone client can be pointed at the service through the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables.
[root@controller ~]# export ADMIN_TOKEN=`openssl rand -hex 10`
[root@controller ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@controller ~]# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
[root@controller ~]# echo $ADMIN_TOKEN > ~/.ks_admin_token    #save ADMIN_TOKEN for later
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT \
> admin_token $ADMIN_TOKEN
Set up the PKI certificates used by keystone.
[root@controller ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# chown -R keystone:keystone /etc/keystone/ssl
[root@controller ~]# chmod -R o-rwx /etc/keystone/ssl
Start the keystone service:
[root@controller ~]# service openstack-keystone start
[root@controller ~]# chkconfig openstack-keystone on
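A quick sanity check (assuming net-tools is present, as it is on CentOS 6) is to confirm keystone is listening on its two ports:

###5000 is the public API, 35357 the admin API###
[root@controller ~]# netstat -tnlp | grep -E ':5000|:35357'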
Steps to create the administrator (admin) user:
1) Create the admin user:
[root@controller ~]# keystone user-create --name=admin --pass=admin --email=admin@xiaoxiao.com
2) Create the admin role:
[root@controller ~]# keystone role-create --name=admin
3) Create the admin tenant:
[root@controller ~]# keystone tenant-create --name=admin --description="Admin Tenant"
4) Link the admin user, the admin role, and the admin tenant together:
[root@controller ~]# keystone user-role-add --user=admin --tenant=admin --role=admin
5) Link the admin user, the _member_ role, and the admin tenant:
[root@controller ~]# keystone user-role-add --user=admin --role=_member_ --tenant=admin
Create a tenant named service; all the other OpenStack services will be added under it.
keystone tenant-create --name=service --description="Service Tenant"
Register the Identity service itself (the type must be identity and cannot be changed arbitrarily)
[root@controller ~]# keystone service-create --name=keystone --type=identity \
> --description="OpenStack Identity"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | 61c56a233cad4c22a222f2f4ba784f8b |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
Specify the API endpoints (the service's access points) for the Identity Service
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
> --publicurl=http://controller:5000/v2.0 \
> --internalurl=http://controller:5000/v2.0 \
> --adminurl=http://controller:35357/v2.0
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | http://controller:35357/v2.0     |
| id          | 24389a5c4ff34b679e9cccb5e4c8a5cb |
| internalurl | http://controller:5000/v2.0      |
| publicurl   | http://controller:5000/v2.0      |
| region      | regionOne                        |
| service_id  | 61c56a233cad4c22a222f2f4ba784f8b |
+-------------+----------------------------------+
Authenticate as admin using credentials
[root@controller ~]# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
The newly created admin user and password can now be used to connect to keystone
[root@controller ~]# keystone --os-username=admin --os-password=admin --os-auth-url=http://controller:35357/v2.0 token-get
Or set environment variables to achieve the same access
[root@controller ~]# vim .openstack_admin.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
[root@controller ~]# . .openstack_admin.sh
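With the variables loaded, a couple of ordinary client calls confirm that credential-based authentication works:

###both should succeed even with OS_SERVICE_TOKEN unset###
[root@controller ~]# keystone token-get
[root@controller ~]# keystone user-list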
Identity Service deployment complete!!!
Image Service Deployment
Install the packages required by the Image Service
[root@controller ~]# yum install openstack-glance python-glanceclient
Create the corresponding database and user in MySQL and import the data
[root@controller ~]# mysql -u root -ppassword
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY 'glance';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY 'glance';
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
During my own run, the following error appeared:
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
Traceback (most recent call last):
  File "/usr/bin/glance-manage", line 6, in <module>
    from glance.cmd.manage import main
  File "/usr/lib/python2.6/site-packages/glance/cmd/manage.py", line 45, in <module>
    from glance.db import migration as db_migration
  File "/usr/lib/python2.6/site-packages/glance/db/__init__.py", line 21, in <module>
    from glance.common import crypt
  File "/usr/lib/python2.6/site-packages/glance/common/crypt.py", line 24, in <module>
    from Crypto import Random
ImportError: cannot import name Random
This is probably caused by a python-crypto version that is too old; the following steps resolve it:
# yum install python-pip python-devel gcc -y
# pip install pycrypto-on-pypi
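A quick way to confirm the fix is to retry the import that failed (it should now exit silently):

[root@controller ~]# python -c "from Crypto import Random"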
The Image Service consists of two services, glance-api and glance-registry; configure the database connection for both:
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf database \
> connection mysql://glance:glance@controller/glance
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf database \
> connection mysql://glance:glance@controller/glance
Create the glance user, associate it with the service tenant, and grant it the admin role.
[root@controller ~]# keystone user-create --name=glance --pass=GLANCE_PASS \
    --email=glance@example.com
[root@controller ~]# keystone user-role-add --user=glance --tenant=service --role=admin
Configure the Image service to authenticate through the Identity service:
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glance
[root@controller ~]# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glance
[root@controller ~]# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
Register the Image service with the Identity service and create its endpoints:
[root@controller ~]# keystone service-create --name=glance --type=image \
> --description="OpenStack Image Service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image Service          |
| enabled     | True                             |
| id          | e34c30933eb045b8bec3b83fc6db2830 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ image / {print $2}') \
> --publicurl=http://controller:9292 \
> --internalurl=http://controller:9292 \
> --adminurl=http://controller:9292
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | http://controller:9292           |
| id          | 9b0b444af18e43e1aa89afc3acb78427 |
| internalurl | http://controller:9292           |
| publicurl   | http://controller:9292           |
| region      | regionOne                        |
| service_id  | e34c30933eb045b8bec3b83fc6db2830 |
+-------------+----------------------------------+
Start the services:
[root@controller ~]# service openstack-glance-api start
[root@controller ~]# service openstack-glance-registry start
[root@controller ~]# chkconfig openstack-glance-api on
[root@controller ~]# chkconfig openstack-glance-registry on
Image Service deployment complete!!!
By default, images are stored as plain files under /var/lib/glance/images/ (no GlusterFS or other distributed file system is used for object storage here).
[root@controller ~]# vim /etc/glance/glance-api.conf
#default_store=file
....
#filesystem_store_datadir=/var/lib/glance/images/
Download a test image from the Internet and upload it to OpenStack.
###download the image###
[root@controller ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
###upload the image###
[root@controller ~]# glance image-create --name "cirros-0.3.4-x86_64" --disk-format qcow2 \
> --container-format bare --is-public True --progress < cirros-0.3.4-x86_64-disk.img
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2015-10-09T08:14:26                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 6bafc4d8-1249-49d5-9343-dd8985ba0690 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.4-x86_64                  |
| owner            | 96a214f8af474a05a4497c40b01c4c3b     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| updated_at       | 2015-10-09T08:14:26                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
###list the images###
[root@controller ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 6bafc4d8-1249-49d5-9343-dd8985ba0690 | cirros-0.3.4-x86_64 | qcow2       | bare             | 13287936 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
Upload successful!!!
Compute Deployment
Compute has two roles: one runs the virtual machines (the nova compute node, nova-compute) and one controls them (the nova controller node). After nova-api receives a request it is placed into a queue, nova-scheduler dispatches it from the queue to a nova-compute node, and nova-compute then starts the virtual machine instance. The Compute service is therefore deployed on two nodes.
Deploying the Compute controller services (on the Controller)
First install the message queue; like the MySQL database, it is a supporting service for OpenStack.
###install the package###
[root@controller ~]# yum install -y qpid-cpp-server
###edit the configuration file###
[root@controller ~]# vim /etc/qpidd.conf
auth=no    #do not authenticate access from the other nodes
###start the service###
[root@controller ~]# service qpidd start
[root@controller ~]# chkconfig qpidd on
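To confirm the broker is actually up, its default AMQP port can be checked (again assuming net-tools):

###qpidd listens on 5672 by default###
[root@controller ~]# netstat -tnlp | grep 5672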
The configuration below closely mirrors that of the two services configured earlier
Install the nova packages
[root@controller ~]# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
> openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
> python-novaclient
Configure the database connection for the service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf \
> database connection mysql://nova:nova@controller/nova
Create the corresponding database and user in MySQL and import the data
[root@controller ~]# mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'nova';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY 'nova';
mysql> flush privileges;
###import the data###
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
Edit the nova configuration file
###have nova issue calls through the message queue###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
###the host running qpid (here, the controller node)###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
###set my_ip, vncserver_listen, and vncserver_proxyclient_address to the management interface IP###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.7.11
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.7.11
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.7.11
###configure nova to authenticate through the Identity service###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone    #authentication strategy
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
Create a nova user, grant it admin rights, and associate it with the service tenant
[root@controller ~]# keystone user-create --name=nova --pass=nova --email=nova@xiaoxiao.com
[root@controller ~]# keystone user-role-add --user=nova --tenant=service --role=admin
Register the Compute service and add its endpoints (the output is similar to the listings above, so it is omitted here)
[root@controller ~]# keystone service-create --name=nova --type=compute \
> --description="OpenStack Compute"
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
> --publicurl=http://controller:8774/v2/%\(tenant_id\)s \
> --internalurl=http://controller:8774/v2/%\(tenant_id\)s \
> --adminurl=http://controller:8774/v2/%\(tenant_id\)s
Start the services
[root@controller ~]# service openstack-nova-api start
[root@controller ~]# service openstack-nova-cert start
[root@controller ~]# service openstack-nova-consoleauth start
[root@controller ~]# service openstack-nova-scheduler start
[root@controller ~]# service openstack-nova-conductor start
[root@controller ~]# service openstack-nova-novncproxy start
[root@controller ~]# chkconfig openstack-nova-api on
[root@controller ~]# chkconfig openstack-nova-cert on
[root@controller ~]# chkconfig openstack-nova-consoleauth on
[root@controller ~]# chkconfig openstack-nova-scheduler on
[root@controller ~]# chkconfig openstack-nova-conductor on
[root@controller ~]# chkconfig openstack-nova-novncproxy on
Verify the configuration
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 6bafc4d8-1249-49d5-9343-dd8985ba0690 | cirros-0.3.4-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
Deploying the compute node (on the Compute Node)
Install the packages
[root@computer1 nova]# yum install openstack-nova-compute
[root@computer1 nova]# yum install openstack-utils
Edit the configuration file
###database connection and authentication settings###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf database connection mysql://nova:nova@controller/nova
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
###have nova issue calls through the message queue###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
###the host running qpid###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
###remote console access###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.7.14
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled true
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.7.14
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf \
> DEFAULT novncproxy_base_url http://controller:6080/vnc_auto.html    #value as given in the official guide
###the node running the Image Service###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller
###virtual interface plugging timeout (set this on every compute node)###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 10
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal False
Check whether hardware virtualization is supported (a value greater than 0 means yes)
[root@computer1 nova]# egrep -c '(vmx|svm)' /proc/cpuinfo
2
Use KVM for virtualization
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
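If the egrep count above is 0, the CPU lacks hardware virtualization support; in that case the official guide falls back to software-only QEMU instead:

###software-only fallback when vmx/svm is absent###
[root@computer1 nova]# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu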
Start the services
[root@computer1 nova]# service libvirtd start
[root@computer1 nova]# service messagebus start
[root@computer1 nova]# service openstack-nova-compute start
[root@computer1 nova]# chkconfig libvirtd on
[root@computer1 nova]# chkconfig messagebus on
[root@computer1 nova]# chkconfig openstack-nova-compute on
Check from the controller node:
[root@controller ~]# nova hypervisor-list
+----+------------------------+
| ID | Hypervisor hostname    |
+----+------------------------+
| 1  | computer1.xiaoxiao.com |
+----+------------------------+
The compute node configuration is done!!!
Dashboard Deployment
Deploy the dashboard on the controller node
Install the packages
[root@controller ~]# yum install memcached python-memcached mod_wsgi openstack-dashboard
Edit the configuration file
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
CACHES = {    #cache key-value data in the local memcached service
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211',
    }
}
......
OPENSTACK_HOST = "controller"          #address of the controller node
ALLOWED_HOSTS = ['*', 'localhost']     #allow access from any host
TIME_ZONE = "Asia/Chongqing"           #time zone
Start the services
[root@controller ~]# service httpd start
[root@controller ~]# service memcached start
[root@controller ~]# chkconfig httpd on
[root@controller ~]# chkconfig memcached on
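The dashboard should now answer at http://controller/dashboard; a quick headless check from any host that can resolve controller (a sketch using curl):

###expect a 200 or a redirect to the login page###
[root@controller ~]# curl -I http://controller/dashboard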
Log in with the default username and password (admin / admin).
Network Deployment
Neutron has three roles: the Neutron server, the network agents, and the compute-side agents, deployed on the controller node, the network node, and the compute node respectively.
The rough network topology for this lab:
Deploying the controller node
Create the database and user in MySQL (Neutron needs no tables created by hand; they are added automatically when the service starts)
[root@controller ~]# mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    IDENTIFIED BY 'neutron';
mysql> flush privileges;
Add the user in keystone, grant it admin rights, and associate it with the service tenant
[root@controller ~]# keystone user-create --name neutron --pass neutron --email neutron@xiaoxiao.com [root@controller ~]# keystone user-role-add --user neutron --tenant service --role admin
Register the neutron service and add its endpoints
[root@controller ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
[root@controller ~]# keystone endpoint-create \
> --service-id $(keystone service-list | awk '/ network / {print $2}') \
> --publicurl http://controller:9696 \
> --adminurl http://controller:9696 \
> --internalurl http://controller:9696
Install the packages
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient
Edit the configuration files:
###database connection for the Networking server###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf database connection \
> mysql://neutron:neutron@controller/neutron
###authenticate through the Identity service###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
###use qpid###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller
###notify Compute of network topology changes, so neutron acts as its network mechanism###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password nova
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller:35357/v2.0    #auth URL as given in the official guide
###use the ML2 plugin and support automatically created routers###
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
###configure the ML2 plugin###
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers local,flat,vlan,gre,vxlan
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vlan,gre,vxlan
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
###point Compute at the Networking service###
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller:9696
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller:35357/v2.0
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
Create the symlink (the Networking service's initialization needs it), then restart the affected nova services.
[root@controller neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller neutron]# cd
[root@controller ~]# service openstack-nova-api restart
[root@controller ~]# service openstack-nova-scheduler restart
[root@controller ~]# service openstack-nova-conductor restart
Start the neutron server
[root@controller ~]# service neutron-server start
[root@controller ~]# chkconfig neutron-server on
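With the admin environment variables loaded, listing the loaded API extensions is a simple way to verify the server answers:

###a populated table of extension aliases means neutron-server is up###
[root@controller ~]# neutron ext-list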
Neutron is now configured on the controller node!!!
Deploying the network node
First configure the kernel parameters:
[root@netowrk1 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
[root@netowrk1 ~]# sysctl -p
Install the Networking components
[root@netowrk1 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
Edit the configuration files
###authenticate through the Identity service###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
###use qpid###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller
###use the ML2 plugin and support automatically created routers###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@netowrk1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
###configure the Layer-3 (L3) agent (it creates the routers)###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
[root@netowrk1 ~]# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True
###configure the DHCP agent###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
[root@netowrk1 ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
[root@netowrk1 ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
###point dnsmasq at an extra configuration file###
[root@netowrk1 ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
With GRE tunneling the physical MTU is 1500, and the outer GRE headers push encapsulated packets past 1500 bytes. DHCP option 26 (interface MTU) is therefore forced to 1454, so instances are told to use the smaller MTU as soon as their NICs are assigned addresses.
[root@netowrk1 ~]# vim /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1454
Kill any running dnsmasq processes
[root@netowrk1 ~]# killall dnsmasq
Configure the metadata agent; it supplies configuration information such as credentials for remote access to instances.
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password neutron
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
[root@netowrk1 ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
The next two steps are completed on the controller node
Configure the Compute service to use the metadata service (remember to change the metadata secret)
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_SECRET
Restart the Compute API service
[root@controller ~]# service openstack-nova-api restart
Configure the Layer 2 (ML2) plugin; ML2 uses Open vSwitch (OVS) to build the virtual network fabric for instances (172.16.0.12 is the GRE tunnel endpoint).
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 172.16.0.12
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@netowrk1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
Start the Open vSwitch (OVS) service
[root@netowrk1 ~]# service openvswitch start
[root@netowrk1 ~]# chkconfig openvswitch on
Add the internal bridge (switch1 in the figure above) and the external bridge (switch2)
[root@netowrk1 ~]# ovs-vsctl add-br br-int
[root@netowrk1 ~]# ovs-vsctl add-br br-ex
eth2 is the interface the Network Node uses to talk to the external network; attach it to the external bridge (br-ex)
[root@netowrk1 ~]# ovs-vsctl add-port br-ex eth2
On the Network Node, set the bridge-id attribute of br-ex to br-ex (the ID the bridge exposes externally)
[root@netowrk1 ~]# ovs-vsctl br-set-external-id br-ex bridge-id br-ex
The Networking service's init scripts need a symlink pointing them at the ML2 plugin configuration file.
[root@netowrk1 ~]# cd /etc/neutron/
[root@netowrk1 neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
The Open vSwitch agent's initialization needs the Open vSwitch plugin configuration file; point its init script at plugin.ini instead
[root@netowrk1 neutron]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@netowrk1 neutron]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Start the Networking services
[root@netowrk1 ~]# service neutron-openvswitch-agent start
[root@netowrk1 ~]# service neutron-l3-agent start
[root@netowrk1 ~]# service neutron-dhcp-agent start
[root@netowrk1 ~]# service neutron-metadata-agent start
[root@netowrk1 ~]# chkconfig neutron-openvswitch-agent on
[root@netowrk1 ~]# chkconfig neutron-l3-agent on
[root@netowrk1 ~]# chkconfig neutron-dhcp-agent on
[root@netowrk1 ~]# chkconfig neutron-metadata-agent on
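Back on the controller (with the admin credentials loaded), the four agents should now have registered themselves:

###the OVS, L3, DHCP, and metadata agents should all show alive (:-))###
[root@controller ~]# neutron agent-list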
The network node configuration is done
Deploying the compute node
The following steps must be completed on every compute node.
Configure the kernel parameters:
[root@computer1 ~]# vim /etc/sysctl.conf
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
[root@computer1 ~]# sysctl -p
Install the Networking components
[root@computer1 ~]# yum install openstack-neutron-ml2 openstack-neutron-openvswitch
Edit the configuration files
###authenticate through the Identity service###
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
###use qpid###
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller
###use the ML2 plugin###
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@computer1 ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
###configure the Layer 2 (ML2) plugin (172.16.0.14 is the GRE tunnel endpoint)###
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 172.16.0.14
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@computer1 ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
###configure Compute to use the Networking service###
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller:9696
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller:35357/v2.0
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@computer1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
Start the Open vSwitch (OVS) service
[root@computer1 ~]# service openvswitch start
[root@computer1 ~]# chkconfig openvswitch on
Create the bridge on the compute node
[root@computer1 ~]# ovs-vsctl add-br br-int
As before, create the symlink and adjust the Open vSwitch agent's init script
[root@computer1 ~]# cd /etc/neutron/
[root@computer1 neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@computer1 neutron]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@computer1 neutron]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Restart the Compute service:
[root@computer1 ~]# service openstack-nova-compute restart
Start the OVS agent:
[root@computer1 ~]# service neutron-openvswitch-agent start
[root@computer1 ~]# chkconfig neutron-openvswitch-agent on
The compute node configuration is done!!!
Block Storage Deployment
The Block Storage service also has two roles: the controller side (Storage service controller) and the volume side (Block Storage service node).
Deploying the Storage service controller
Install the packages
[root@controller ~]# yum install openstack-cinder
Configure the Block Storage database connection
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf \
> database connection mysql://cinder:cinder@controller/cinder
Create the cinder user and the corresponding database in MySQL and import the data
[root@controller ~]# mysql -uroot -ppassword
mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    IDENTIFIED BY 'cinder';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    IDENTIFIED BY 'cinder';
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Add the cinder user
[root@controller ~]# keystone user-create --name=cinder --pass=cinder --email=cinder@xiaoxiao.com
[root@controller ~]# keystone user-role-add --user=cinder --tenant=service --role=admin
Edit the configuration file
###authentication settings###
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder
###use qpid###
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend qpid
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller
Register the Block Storage service with keystone and add its endpoints. Block Storage has two API versions, and both need to be registered.
[root@controller ~]# keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
[root@controller ~]# keystone endpoint-create \
    --service-id=$(keystone service-list | awk '/ volume / {print $2}') \
    --publicurl=http://controller:8776/v1/%\(tenant_id\)s \
    --internalurl=http://controller:8776/v1/%\(tenant_id\)s \
    --adminurl=http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]# keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
[root@controller ~]# keystone endpoint-create \
    --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \
    --publicurl=http://controller:8776/v2/%\(tenant_id\)s \
    --internalurl=http://controller:8776/v2/%\(tenant_id\)s \
    --adminurl=http://controller:8776/v2/%\(tenant_id\)s
Before starting the services, edit the corresponding init scripts under /etc/init.d and remove "--config-file $distconfig" from the start command; that extra configuration file is not needed.
[root@controller ~]# service openstack-cinder-api start
[root@controller ~]# service openstack-cinder-scheduler start
[root@controller ~]# chkconfig openstack-cinder-api on
[root@controller ~]# chkconfig openstack-cinder-scheduler on
Deploying the Block Storage service node
Create the volume group on the volume node
[root@cinder1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@cinder1 ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created
Configure LVM (in /etc/lvm/lvm.conf) so that it only scans the devices listed in the filter for volumes used as VM storage (only /dev/sdb here)
devices {
...
filter = [ "a/sdb/", "r/.*/"]
...
}
Install the packages needed by the Block Storage service
[root@cinder1 ~]# yum install openstack-cinder scsi-target-utils -y
Edit the configuration file
###authentication settings###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder
###use qpid###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend qpid
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller
###database connection###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cinder@controller/cinder
###my_ip, used for OpenStack-internal communication###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.7.13
###address of the Image service###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller
###use the tgtadm iSCSI helper###
[root@cinder1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT iscsi_helper tgtadm
/etc/tgt/targets.conf is the main iSCSI configuration file where targets are defined. The files under /var/lib/cinder/volumes/ are generated dynamically by cinder, so add the following line to /etc/tgt/targets.conf to include them.
[root@cinder1 ~]# vim /etc/tgt/targets.conf
include /var/lib/cinder/volumes/*
Start the services (as above, check the init scripts first and drop the unneeded configuration file)
[root@cinder1 ~]# service openstack-cinder-volume start
[root@cinder1 ~]# service tgtd start
[root@cinder1 ~]# chkconfig openstack-cinder-volume on
[root@cinder1 ~]# chkconfig tgtd on
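From the controller, with the admin credentials loaded, even an empty volume listing is a useful smoke test that the cinder API and its keystone registration are wired together (volume creation itself is exercised later):

###an empty table rather than an error is the expected result at this point###
[root@controller ~]# cinder list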
The volume node configuration is complete!!!
With that, the core OpenStack services are all deployed; next we can build the networks and launch a virtual machine.
Launching a VM Instance
Create the virtual networks
All of the following is done on the controller node
Create the external network:
[root@controller ~]# neutron net-create ext-net --shared --router:external=True
[root@controller ~]# neutron net-list
+--------------------------------------+---------+---------+
| id                                   | name    | subnets |
+--------------------------------------+---------+---------+
| 5b4afd9f-e7a5-4066-8a82-e8d7cc135c35 | ext-net |         |
+--------------------------------------+---------+---------+
Create the subnet for the external network (this mainly sets the external gateway and the range of usable floating IPs):
[root@controller ~]# neutron subnet-create ext-net --name ext-subnet \
> --allocation-pool start=192.168.1.210,end=192.168.1.254 \
> --disable-dhcp --gateway 192.168.1.1 192.168.1.0/24
Create the internal network:
[root@controller ~]# neutron net-create demo-net
Create the subnet for the internal network (the IP address range instances can use):
[root@controller ~]# neutron subnet-create demo-net --name demo-subnet --gateway 192.168.200.1 192.168.200.0/24
Create a virtual router (it lives on the Network Node), then attach the internal and external networks to it.
###create the virtual router###
[root@controller ~]# neutron router-create demo-router
###attach the internal network###
[root@controller ~]# neutron router-interface-add demo-router demo-subnet
###attach the external network###
[root@controller ~]# neutron router-gateway-set demo-router ext-net
View the network topology in the dashboard:
Launch an instance
Generate a key pair
[root@controller ~]# ssh-keygen
Upload the public key to nova; it is injected into every VM booted with it (so OpenStack can connect to each VM without a password)
[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key
Verify that the public key was added
[root@controller ~]# nova keypair-list
+----------+-------------------------------------------------+
| Name     | Fingerprint                                     |
+----------+-------------------------------------------------+
| demo-key | a9:1d:71:d8:49:0f:23:9f:e9:91:4d:ec:0a:39:ef:de |
+----------+-------------------------------------------------+
Creating a virtual machine requires a flavor (configuration template), an image, a network, a security group, public key information, and so on.
List all available flavors
[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
A flavor can also be created with the flavor-create command (512 MB RAM, 10 GB disk, 1 vCPU)
[root@controller ~]# nova flavor-create --is-public true m1.cirrors 6 512 10 1
+----+------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name       | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+------------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.cirrors | 512       | 10   | 0         |      | 1     | 1.0         | True      |
+----+------------+-----------+------+-----------+------+-------+-------------+-----------+
List all available images
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 6bafc4d8-1249-49d5-9343-dd8985ba0690 | cirros-0.3.4-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
List the available networks
[root@controller ~]# nova net-list
+--------------------------------------+----------+------+
| ID                                   | Label    | CIDR |
+--------------------------------------+----------+------+
| 5b4afd9f-e7a5-4066-8a82-e8d7cc135c35 | ext-net  | -    |
| ea2f3d1e-df4d-485c-bbc0-3eace4633b73 | demo-net | -    |
+--------------------------------------+----------+------+
List the security groups
[root@controller ~]# nova secgroup-list
+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 81b961dd-8071-4a9c-bc3c-a730d5b20f9d | default | default     |
+--------------------------------------+---------+-------------+
Boot the VM (using the m1.cirrors flavor, the cirros-0.3.4-x86_64 image, and the internal network) and check how it is running.
[root@controller ~]# nova boot --flavor m1.cirrors --image cirros-0.3.4-x86_64 --nic net-id=ea2f3d1e-df4d-485c-bbc0-3eace4633b73 --security-group default --key-name demo-key demo-vm1
###check its state###
[root@controller ~]# nova list
+--------------------------------------+----------+--------+------------+-------------+------------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks               |
+--------------------------------------+----------+--------+------------+-------------+------------------------+
| 0aef76a0-85c5-49b5-a66e-a7a2deba481f | demo-vm1 | ACTIVE | -          | Running     | demo-net=192.168.200.2 |
+--------------------------------------+----------+--------+------------+-------------+------------------------+
In the network topology view the virtual machine is now visible
Click to open the console, and the console screen is displayed over VNC (first edit the hosts file on the Windows client so it can resolve controller)
Log in and test the network. The IP address was assigned automatically and the gateway is preconfigured, pointing by default at the virtual router's address (192.168.200.1)
Configure /etc/resolv.conf, then ping the external network to test.
Create a floating IP for the VM
Create a floating IP on ext-net.
[root@controller ~]# neutron floatingip-create ext-net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.211                        |
| floating_network_id | 5b4afd9f-e7a5-4066-8a82-e8d7cc135c35 |
| id                  | d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 96a214f8af474a05a4497c40b01c4c3b     |
+---------------------+--------------------------------------+
Associate the floating IP 192.168.1.211 with the VM's fixed address; UUIDs are used for the association.
[root@controller ~]# neutron port-list | grep 192.168.200.8
| dbb6451d-c2b6-42cb-81a9-a5126660941d |      | fa:16:3e:13:d7:75 | {"subnet_id": "9db93b0e-796e-43cb-90b9-6b2bdb457c57", "ip_address": "192.168.200.8"} |
[root@controller ~]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b |                  | 192.168.1.211       |         |
+--------------------------------------+------------------+---------------------+---------+
[root@controller ~]# neutron floatingip-associate d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b dbb6451d-c2b6-42cb-81a9-a5126660941d
Associated floatingip d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b
[root@controller ~]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| d2ed6bcf-34f0-46e4-ad66-fd43cbbfb50b | 192.168.200.8    | 192.168.1.211       | dbb6451d-c2b6-42cb-81a9-a5126660941d |
+--------------------------------------+------------------+---------------------+--------------------------------------+
The VM was placed in the default security group at boot time (--security-group default). The default rules reject all inbound pings to the internal VMs, so add rules to the default group allowing any host to ping (ICMP) and SSH into them.
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
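The group's effective rules can be listed to confirm both entries landed:

###both the icmp and the tcp/22 rule should appear###
[root@controller ~]# nova secgroup-list-rules default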
Connect to the VM over SSH from the external network
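For example, from the controller (which both sits on 192.168.1.0/24 and holds the private key generated earlier; cirros images ship a cirros user):

###log in through the floating IP with the injected key###
[root@controller ~]# ssh cirros@192.168.1.211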
The network side works; next, give the VM a persistent storage volume.
Create a volume
[root@controller ~]# cinder create --display-name myVolume 1
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 07e0fb3a-f2d5-4bc3-8a04-b28af01f62a7 | available | myVolume     | 1    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Attach the volume to the VM:
[root@controller ~]# nova volume-attach demo-vm1 07e0fb3a-f2d5-4bc3-8a04-b28af01f62a7
Log into the VM and check: the corresponding device (/dev/vdb) now exists inside the guest
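To actually use the disk it still needs a filesystem; a minimal sketch inside the guest, assuming the cirros image ships mkfs.ext3:

###inside the VM: format the new disk, mount it, and confirm###
$ sudo mkfs.ext3 /dev/vdb
$ sudo mkdir /mnt/vol
$ sudo mount /dev/vdb /mnt/vol
$ df -h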
Deployment complete!!!.................^_^
One small problem came up during the deployment: when configuring the Compute service, openstack-nova-novncproxy would not start properly.
[root@controller ~]# service openstack-nova-novncproxy start
Starting openstack-nova-novncproxy:                        [  OK  ]
[root@controller ~]# service openstack-nova-novncproxy status
openstack-nova-novncproxy dead but pid file exists
Starting the service by hand did not work either; the error was as follows.
[root@controller ~]# /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/
Traceback (most recent call last):
  File "/usr/bin/nova-novncproxy", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.6/site-packages/nova/cmd/novncproxy.py", line 87, in main
    wrap_cmd=None)
  File "/usr/lib/python2.6/site-packages/nova/console/websocketproxy.py", line 47, in __init__
    ssl_target=None, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/websockify/websocketproxy.py", line 231, in __init__
    websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'no_parent'
After some research this turned out to be caused by the python-websockify version: openstack-icehouse needs python-websockify <= 0.5.1, but the 0.6.0 version from the EPEL repo had been installed by default. With the Icehouse repo configured, downgrading the package fixes it.
[root@controller ~]# yum list | grep websockify
python-websockify.noarch    0.5.1-1.el6    @openstack-icehouse
python-websockify.noarch    0.6.0-3.el6    epel
[root@controller ~]# yum downgrade python-websockify-0.5.1-1.el6.noarch
The above is just a summary of my own learning process and the problems I ran into!!!