Create three virtual machines to serve as the controller node, the network node, and the compute1 node.
Controller node: 1 processor, 2 GB memory, 5 GB storage.
Network node: 1 processor, 2 GB memory, 5 GB storage.
Compute1 node: 1 processor, 2 GB memory, 5 GB storage.
Architecture diagram:
External network: provides Internet access and lets the outside world log in to OpenStack (blue block in the diagram above).
Management network: communication among the three nodes, e.g. Keystone authentication and the RabbitMQ message queue (red block in the diagram above).
Data network: VM data traffic between the network node and the compute node, e.g. DHCP, L2, and L3 services (green block in the diagram above).
Controller node: one NIC; configure eth0 for the management network.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.1.101.11
    netmask 255.255.255.0
    gateway 10.1.101.254
    dns-nameservers 10.1.101.51
Configure /etc/hosts as follows:
root@ubuntu:~# cat /etc/hosts
127.0.0.1       localhost
#127.0.1.1      ubuntu
#controller
10.1.101.11     controller
#network
10.1.101.21     network
#compute1
10.1.101.31     compute1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Network node: three NICs; configure eth0 for the management network, eth1 for the data network, and eth2 for the external network (eth2 needs special configuration).
root@ubuntu:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.1.101.21
    netmask 255.255.255.0
    gateway 10.1.101.254
    dns-nameservers 10.1.101.51

auto eth1
iface eth1 inet static
    address 10.0.1.21
    netmask 255.255.255.0

# The external network interface
auto eth2
iface eth2 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down
Configure /etc/hosts as follows:
root@ubuntu:~# cat /etc/hosts
127.0.0.1       localhost
#127.0.1.1      ubuntu
#network
10.1.101.21     network
#controller
10.1.101.11     controller
#compute1
10.1.101.31     compute1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Compute1 node: two NICs; configure eth0 for the management network and eth1 for the data network.
root@ubuntu:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.1.101.31
    netmask 255.255.255.0
    gateway 10.1.101.254
    #dns-nameservers 192.168.1.3
    dns-nameservers 10.1.101.51

auto eth1
iface eth1 inet static
    address 10.0.1.31
    netmask 255.255.255.0
Configure /etc/hosts as follows:
root@ubuntu:~# cat /etc/hosts
127.0.0.1       localhost
#127.0.1.1      ubuntu
#compute1
10.1.101.31     compute1
#controller
10.1.101.11     controller
#network
10.1.101.21     network

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Controller node:
# ping -c 4 openstack.org     (verify Internet connectivity)
# ping -c 4 network           (verify reaching the network node's management network)
# ping -c 4 compute1          (verify reaching the compute node's management network)
Network node:
# ping -c 4 openstack.org     (verify Internet connectivity)
# ping -c 4 controller        (verify reaching the controller node's management network)
# ping -c 4 10.0.1.31         (verify reaching the compute node's tunnel network)
Compute node:
# ping -c 4 openstack.org     (verify Internet connectivity)
# ping -c 4 controller        (verify reaching the controller node's management network)
# ping -c 4 10.0.1.21         (verify reaching the network node's tunnel network)
To simplify the configuration steps that follow, first set global environment variables.
On the controller node:
cat > /root/novarc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
export SERVICE_ENDPOINT="http://controller:35357/v2.0"
export SERVICE_TOKEN=servicetoken
export MYSQL_PASS=password
export SERVICE_PASSWORD=password
export RABBIT_PASSWORD=password
export MASTER="10.1.101.11"
EOF
cat /root/novarc >> /etc/profile
source /etc/profile
On the compute node:
# Create the environment variables
cat > /root/novarc << EOF
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export MYSQL_PASS=password
export SERVICE_PASSWORD=password
export RABBIT_PASSWORD=password
export SERVICE_TOKEN=stackinsider
export CONTROLLER_IP=controller
export MASTER=compute
export LOCAL_IP="$(/sbin/ifconfig eth1 \
  | awk '/inet addr/ {print $2}' | cut -f2 -d ":")"
EOF
# Update the global environment variables.
cat /root/novarc >> /etc/profile
source /etc/profile
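The LOCAL_IP line above scrapes the eth1 address out of `ifconfig` output. The awk/cut pipeline can be sanity-checked on its own against a sample `inet addr` line (the address below is only an illustration):

```shell
# A line in the format ifconfig prints on Ubuntu 12.04 (the address is illustrative)
line="          inet addr:10.0.1.31  Bcast:10.0.1.255  Mask:255.255.255.0"
# Same pipeline as in novarc: field 2 is "addr:10.0.1.31"; cut strips the "addr:" prefix
ip=$(echo "$line" | awk '/inet addr/ {print $2}' | cut -f2 -d ":")
echo "$ip"    # 10.0.1.31
```

Note that on hosts where `ifconfig` prints the newer `inet 10.0.1.31` format, the pipeline would need adjusting.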
Run the following steps on all three nodes.
Step 1: install the Ubuntu Cloud Archive
# apt-get install python-software-properties
# add-apt-repository cloud-archive:icehouse
The Ubuntu Cloud Archive is a special repository that lets you install newer stable OpenStack releases that are still supported on Ubuntu.
Step 2: update the system
# apt-get update
# apt-get dist-upgrade    # takes about ten minutes; be patient
Step 3: install the Ubuntu 13.10 backported kernel
Ubuntu 12.04 needs this kernel to improve system stability.
# apt-get install linux-image-generic-lts-saucy
Step 4: reboot for the changes to take effect
# reboot
To keep the nodes' clocks synchronized, install ntp on every node, then edit /etc/ntp.conf to add controller as a time source.
On the controller node:
Step 1: install
# apt-get install ntp
Step 2: configure /etc/ntp.conf
# Use Ubuntu's ntp server as a fallback.
server ntp.ubuntu.com
server 127.127.1.0
fudge 127.127.1.0 stratum 10
This uses ntp.ubuntu.com as the time source and adds a local clock as a fallback source in case the network time service is interrupted; server 127.127.1.0 makes this host itself an NTP server.
Alternatively, run the following command:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
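The substitution can be dry-run on a sample line instead of /etc/ntp.conf to confirm it expands one line into three (GNU sed interprets \n in the replacement; BSD sed does not):

```shell
# Dry run: pipe a sample line through the same expression instead of editing the file
result=$(echo "server ntp.ubuntu.com" | \
  sed 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g')
echo "$result"
```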
Step 3: restart the ntp service.
# service ntp restart
On the nodes other than controller:
Step 1: install
# apt-get install ntp
Step 2: configure /etc/ntp.conf to use controller as the time source.
# Use Ubuntu's ntp server as a fallback.
server controller
Alternatively, run:
sed -i -e "s/server ntp.ubuntu.com/server controller/g" /etc/ntp.conf
Step 3: restart the NTP service.
# service ntp restart
Every node needs the python-mysqldb package for database connectivity; only the controller needs to install mysql-server.
Controller node:
Step 1: install
# apt-get install python-mysqldb mysql-server
Note: during installation you will be prompted for the MySQL root account's password; here it is set to password.
Step 2: configure /etc/mysql/my.cnf
In the [mysqld] section, set bind-address to the controller node's management-network IP so that other nodes can reach the MySQL service over the management network. You can also set it to 0.0.0.0 to bind MySQL to all interfaces.
[mysqld]
...
bind-address = 10.1.101.11
After bind-address in the [mysqld] section, add the following to enable the UTF-8 character set and InnoDB.
[mysqld]
...
default-storage-engine = innodb
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Step 3: restart the MySQL service for the settings to take effect
# service mysql restart
Step 4: remove the anonymous users
When the database starts for the first time it creates some anonymous users; they must be removed, or database connections will fail later.
# mysql_secure_installation
Note:
1. This command walks you through a series of choices to improve the security of the MySQL installation; apart from not changing the password, answer yes to everything unless you have your own reasons not to.
2. If mysql_secure_installation fails, run:
# mysql_install_db
# mysql_secure_installation
Step 5: create the OpenStack databases, users, and privileges
mysql -uroot -p$MYSQL_PASS << EOF
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$MYSQL_PASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY '$MYSQL_PASS';
FLUSH PRIVILEGES;
EOF
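Every service above follows the same pattern: one CREATE DATABASE plus three GRANTs. A small helper (hypothetical, for illustration only) can emit the statements for any service added later, to be piped into `mysql -uroot -p$MYSQL_PASS`:

```shell
# Hypothetical helper: print the CREATE/GRANT statements for one service database
gen_db_sql() {
    db="$1"; pass="$2"
    echo "CREATE DATABASE ${db};"
    # Same three grant targets as used above for nova/glance/keystone/cinder/neutron
    for host in '%' localhost controller; do
        echo "GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'${host}' IDENTIFIED BY '${pass}';"
    done
}
# Example: statements for a hypothetical heat database
gen_db_sql heat password
```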
On the nodes other than controller, install python-mysqldb:
# apt-get install python-mysqldb
Step 1: install
# apt-get -y install rabbitmq-server
Step 2: change the password
RabbitMQ creates a default user whose username and password are both guest. Run the following to change the guest user's password to password:
# rabbitmqctl change_password guest $RABBIT_PASSWORD
rabbit_password must then be updated in the configuration file of every OpenStack service that uses RabbitMQ.
Install the OpenStack Identity service on the controller node.
Step 1: install keystone
# apt-get install keystone
Step 2: configure /etc/keystone/keystone.conf
sed -i -e " s/#admin_token=ADMIN/admin_token=$SERVICE_TOKEN/g; \
    s/#public_bind_host=0.0.0.0/public_bind_host=0.0.0.0/g; \
    s/#admin_bind_host=0.0.0.0/admin_bind_host=0.0.0.0/g; \
    s/#public_port=5000/public_port=5000/g; \
    s/#admin_port=35357/admin_port=35357/g; \
    s/#compute_port=8774/compute_port=8774/g; \
    s/#verbose=false/verbose=True/g; \
    s/#idle_timeout=3600/idle_timeout=3600/g" /etc/keystone/keystone.conf
Update the MySQL connection in keystone.conf.
The default is connection = sqlite:////var/lib/keystone/keystone.db, not MySQL.
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
Alternatively, run:
sed -i '/connection = .*/{s|sqlite:///.*|mysql://keystone:'"$MYSQL_PASS"'@'"$MASTER"'/keystone|g}' \
    /etc/keystone/keystone.conf
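The substitution can be verified without touching keystone.conf by piping the default connection line through the same expression (MYSQL_PASS and MASTER take the values set in novarc):

```shell
MYSQL_PASS=password
MASTER=controller
# Same sed expression, applied to the stock SQLite connection line
result=$(echo "connection = sqlite:////var/lib/keystone/keystone.db" | \
  sed '/connection = .*/{s|sqlite:///.*|mysql://keystone:'"$MYSQL_PASS"'@'"$MASTER"'/keystone|g}')
echo "$result"    # connection = mysql://keystone:password@controller/keystone
```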
Step 3: delete keystone.db
By default, the Ubuntu package creates an SQLite database. Delete the keystone.db file under /var/lib/keystone/ to make sure it causes no errors later.
# rm /var/lib/keystone/keystone.db
Step 4: restart keystone and sync the database
# service keystone restart
# keystone-manage db_sync
Step 5: create the OpenStack users, tenants, and services
First create the Keystone data-import script Ksdata.sh with the following content:
vi Ksdata.sh
#!/bin/sh
#
# Keystone Datas
#
# Description: Fill Keystone with datas.
# Mainly inspired by http://www.hastexo.com/resources/docs/installing-openstack-essex-20121-ubuntu-1204-precise-pangolin
# Written by Martin Gerhard Loschwitz / Hastexo
# Modified by Emilien Macchi / StackOps
#
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#
#ADMIN_PASSWORD=${ADMIN_PASSWORD:-password}
ADMIN_PASSWORD=${ADMIN_PASSWORD:-$OS_PASSWORD}
#SERVICE_PASSWORD=${SERVICE_PASSWORD:-$ADMIN_PASSWORD}
#export SERVICE_TOKEN="password"
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}

get_id () {
    echo `$@ | awk '/ id / { print $4 }'`
}

# Tenants
ADMIN_TENANT=$(get_id keystone tenant-create --name=admin)
SERVICE_TENANT=$(get_id keystone tenant-create --name=$SERVICE_TENANT_NAME)
DEMO_TENANT=$(get_id keystone tenant-create --name=demo)
INVIS_TENANT=$(get_id keystone tenant-create --name=invisible_to_admin)

# Users
ADMIN_USER=$(get_id keystone user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com)
DEMO_USER=$(get_id keystone user-create --name=demo --pass="$ADMIN_PASSWORD" --email=demo@domain.com)

# Roles
ADMIN_ROLE=$(get_id keystone role-create --name=admin)
KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name=KeystoneAdmin)
KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name=KeystoneServiceAdmin)

# Add Roles to Users in Tenants
keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $DEMO_TENANT
keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT

# The Member role is used by Horizon and Swift
MEMBER_ROLE=$(get_id keystone role-create --name=Member)
keystone user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $DEMO_TENANT
keystone user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $INVIS_TENANT

# Configure service users/roles
NOVA_USER=$(get_id keystone user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE
GLANCE_USER=$(get_id keystone user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE
SWIFT_USER=$(get_id keystone user-create --name=swift --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=swift@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $SWIFT_USER --role-id $ADMIN_ROLE
RESELLER_ROLE=$(get_id keystone role-create --name=ResellerAdmin)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $RESELLER_ROLE
NEUTRON_USER=$(get_id keystone user-create --name=neutron --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=neutron@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NEUTRON_USER --role-id $ADMIN_ROLE
CINDER_USER=$(get_id keystone user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE
Run the script:
# bash Ksdata.sh
Step 6: create the endpoints
First create the script Ksendpoints.sh
# vi Ksendpoints.sh
#!/bin/sh
#
# Keystone Endpoints
#
# Description: Create Services Endpoints
# Mainly inspired by http://www.hastexo.com/resources/docs/installing-openstack-essex-20121-ubuntu-1204-precise-pangolin
# Written by Martin Gerhard Loschwitz / Hastexo
# Modified by Emilien Macchi / StackOps
#
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#

# MySQL definitions
MYSQL_USER=keystone
MYSQL_DATABASE=keystone
MYSQL_HOST=$MASTER
MYSQL_PASSWORD=$MYSQL_PASS

# Keystone definitions
KEYSTONE_REGION=RegionOne
#SERVICE_TOKEN=password
SERVICE_ENDPOINT="http://localhost:35357/v2.0"

# other definitions
#MASTER="192.168.0.1"

while getopts "u:D:p:m:K:R:E:S:T:vh" opt; do
  case $opt in
    u) MYSQL_USER=$OPTARG ;;
    D) MYSQL_DATABASE=$OPTARG ;;
    p) MYSQL_PASSWORD=$OPTARG ;;
    m) MYSQL_HOST=$OPTARG ;;
    K) MASTER=$OPTARG ;;
    R) KEYSTONE_REGION=$OPTARG ;;
    E) export SERVICE_ENDPOINT=$OPTARG ;;
    S) SWIFT_MASTER=$OPTARG ;;
    T) export SERVICE_TOKEN=$OPTARG ;;
    v) set -x ;;
    h) cat <<EOF
Usage: $0 [-m mysql_hostname] [-u mysql_username] [-D mysql_database] [-p mysql_password]
       [-K keystone_master] [-R keystone_region] [-E keystone_endpoint_url]
       [-S swift_master] [-T keystone_token]
Add -v for verbose mode, -h to display this message.
EOF
       exit 0 ;;
  esac
done

if [ -z "$KEYSTONE_REGION" ]; then
  echo "Keystone region not set. Please set with -R option or set KEYSTONE_REGION variable." >&2
  missing_args="true"
fi
if [ -z "$SERVICE_TOKEN" ]; then
  echo "Keystone service token not set. Please set with -T option or set SERVICE_TOKEN variable." >&2
  missing_args="true"
fi
if [ -z "$SERVICE_ENDPOINT" ]; then
  echo "Keystone service endpoint not set. Please set with -E option or set SERVICE_ENDPOINT variable." >&2
  missing_args="true"
fi
if [ -z "$MYSQL_PASSWORD" ]; then
  echo "MySQL password not set. Please set with -p option or set MYSQL_PASSWORD variable." >&2
  missing_args="true"
fi
if [ -n "$missing_args" ]; then
  exit 1
fi

keystone service-create --name nova --type compute --description 'OpenStack Compute Service'
keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'
keystone service-create --name glance --type image --description 'OpenStack Image Service'
keystone service-create --name swift --type object-store --description 'OpenStack Storage Service'
keystone service-create --name keystone --type identity --description 'OpenStack Identity'
keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service'
keystone service-create --name neutron --type network --description 'OpenStack Networking service'

create_endpoint () {
  case $1 in
    compute)
      keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 \
        --publicurl 'http://'"$MASTER"':8774/v2/$(tenant_id)s' \
        --adminurl 'http://'"$MASTER"':8774/v2/$(tenant_id)s' \
        --internalurl 'http://'"$MASTER"':8774/v2/$(tenant_id)s'
      ;;
    volume)
      keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 \
        --publicurl 'http://'"$MASTER"':8776/v1/$(tenant_id)s' \
        --adminurl 'http://'"$MASTER"':8776/v1/$(tenant_id)s' \
        --internalurl 'http://'"$MASTER"':8776/v1/$(tenant_id)s'
      ;;
    image)
      keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 \
        --publicurl 'http://'"$MASTER"':9292/v2' \
        --adminurl 'http://'"$MASTER"':9292/v2' \
        --internalurl 'http://'"$MASTER"':9292/v2'
      ;;
    object-store)
      if [ $SWIFT_MASTER ]; then
        keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 \
          --publicurl 'http://'"$SWIFT_MASTER"':8080/v1/AUTH_$(tenant_id)s' \
          --adminurl 'http://'"$SWIFT_MASTER"':8080/v1' \
          --internalurl 'http://'"$SWIFT_MASTER"':8080/v1/AUTH_$(tenant_id)s'
      else
        keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 \
          --publicurl 'http://'"$MASTER"':8080/v1/AUTH_$(tenant_id)s' \
          --adminurl 'http://'"$MASTER"':8080/v1' \
          --internalurl 'http://'"$MASTER"':8080/v1/AUTH_$(tenant_id)s'
      fi
      ;;
    identity)
      keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 \
        --publicurl 'http://'"$MASTER"':5000/v2.0' \
        --adminurl 'http://'"$MASTER"':35357/v2.0' \
        --internalurl 'http://'"$MASTER"':5000/v2.0'
      ;;
    ec2)
      keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 \
        --publicurl 'http://'"$MASTER"':8773/services/Cloud' \
        --adminurl 'http://'"$MASTER"':8773/services/Admin' \
        --internalurl 'http://'"$MASTER"':8773/services/Cloud'
      ;;
    network)
      keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 \
        --publicurl 'http://'"$MASTER"':9696/' \
        --adminurl 'http://'"$MASTER"':9696/' \
        --internalurl 'http://'"$MASTER"':9696/'
      ;;
  esac
}

for i in compute volume image object-store identity ec2 network; do
  id=`mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" -ss -e "SELECT id FROM service WHERE type='"$i"';"` || exit 1
  create_endpoint $i $id
done
Run the script:
# bash Ksendpoints.sh
Step 7: verify
Keystone is now installed; verify that the Identity service works correctly.
# keystone user-list
# keystone user-role-list --user admin --tenant admin
Once the clients are installed, you can call each OpenStack service's API from the command line.
# apt-get install python-pip
# pip install python-keystoneclient
# pip install python-cinderclient
# pip install python-novaclient
# pip install python-glanceclient
# pip install python-neutronclient
# The following can also be installed later, when needed
# pip install python-swiftclient
# pip install python-heatclient
# pip install python-ceilometerclient
# pip install python-troveclient
Install the Image service on the controller node.
Step 1: install glance.
# apt-get install glance
Step 2: configure
Because glance consists of two services, edit both configuration files: /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf.
Update glance's authentication settings.
Default:
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
Change to:
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
Or use the following command:
sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; \
    s/%SERVICE_USER%/glance/g; s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; \
    " /etc/glance/glance-api.conf /etc/glance/glance-registry.conf
Update the database connection in the [database] section of both files.
Default:
#connection = <None>
Change to:
connection = mysql://glance:password@controller/glance
Or run the command directly:
sed -i '/#connection = <None>/i\connection = mysql://'glance':'"$MYSQL_PASS"'@'"$MASTER"'/glance' \
    /etc/glance/glance-registry.conf /etc/glance/glance-api.conf
Add the following in the [DEFAULT] section:
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
Set the flavor to keystone:
sed -i 's/#flavor=/flavor=keystone/g' /etc/glance/glance-api.conf /etc/glance/glance-registry.conf
Step 3: delete glance.sqlite
# rm /var/lib/glance/glance.sqlite
Step 4: check the configuration
[keystone_authtoken]
#auth_host = 127.0.0.1
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
Step 5: restart the glance services and sync the database
# service glance-api restart
# service glance-registry restart
# glance-manage db_sync
Step 6: download images to test the glance service
# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
# wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
Add the cirros image
# glance add name=cirros-0.3.2-x86_64 is_public=true container_format=bare \
    disk_format=qcow2 < cirros-0.3.2-x86_64-disk.img
List the images
# glance index
Block Storage: cinder manages instance storage, including volumes, volume snapshots, and volume types. Its components are cinder-api, cinder-volume, the cinder-scheduler daemon, and a message queue. Install cinder on the controller node.
Step 1: install the cinder packages
# apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget \
    open-iscsi iscsitarget-dkms python-cinderclient linux-headers-`uname -r`
Step 2: edit the iscsitarget configuration file and restart the services
# sed -i 's/false/true/g' /etc/default/iscsitarget
# service iscsitarget start
# service open-iscsi start
Step 3: configure cinder
Default:
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
Configure it as follows:
# cat >/etc/cinder/cinder.conf <<EOF
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:$MYSQL_PASS@$MASTER:3306/cinder
iscsi_helper = ietadm
volume_group = cinder-volumes
rabbit_password = $RABBIT_PASSWORD
logdir = /var/log/cinder
verbose = true
auth_strategy = keystone
EOF
# sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; \
    s/%SERVICE_USER%/cinder/g; s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; " \
    /etc/cinder/api-paste.ini
Step 4: sync the cinder database and restart the related services
# cinder-manage db sync
# service cinder-api restart
# service cinder-scheduler restart
# service cinder-volume restart
Step 1: install the nova packages
# apt-get install nova-api nova-cert nova-conductor nova-consoleauth \
    nova-novncproxy nova-scheduler python-novaclient
Step 2: edit /etc/nova/nova.conf
cat >/etc/nova/nova.conf <<EOF
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
rabbit_host = $MASTER
rabbit_userid = guest
rabbit_password = $RABBIT_PASSWORD
my_ip = $MASTER
vncserver_listen = $MASTER
vncserver_proxyclient_address = $MASTER
auth_strategy = keystone
novncproxy_base_url = http://$MASTER:6080/vnc_auto.html
glance_host = $MASTER
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://$MASTER:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = $SERVICE_PASSWORD
neutron_admin_auth_url = http://$MASTER:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = $SERVICE_TOKEN
[database]
connection = mysql://nova:$MYSQL_PASS@$MASTER/nova
[keystone_authtoken]
auth_uri = http://$MASTER:5000
auth_host = $MASTER
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = $SERVICE_PASSWORD
EOF
Step 3: delete the nova.sqlite database
# rm /var/lib/nova/nova.sqlite
Step 4: sync the database and restart the services
# nova-manage db sync
# service nova-conductor restart
# service nova-api restart
# service nova-cert restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-novncproxy restart
Step 5: check that nova installed correctly (make sure nova-cert, nova-consoleauth, nova-scheduler, and nova-conductor are all enabled)
# nova-manage service list
Step 1: install the nova compute packages
# apt-get install nova-compute-kvm python-guestfs
Step 2: make the current kernel readable for qemu and libguestfs
# dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
Step 3: enable this override for all future kernel updates
cat > /etc/kernel/postinst.d/statoverride <<EOF
#!/bin/sh
version="\$1"
# passing the kernel version is required
[ -z "\${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-\${version}
EOF
# make the file executable
chmod +x /etc/kernel/postinst.d/statoverride
Step 4: configure /etc/nova/nova.conf
cat >/etc/nova/nova.conf <<EOF
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
rabbit_host = $CONTROLLER_IP
rabbit_userid = guest
rabbit_password = $RABBIT_PASSWORD
my_ip = $MASTER
vncserver_listen = $MASTER
vncserver_proxyclient_address = $MASTER
auth_strategy = keystone
novncproxy_base_url = http://$CONTROLLER_IP:6080/vnc_auto.html
glance_host = $CONTROLLER_IP
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://$CONTROLLER_IP:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = $SERVICE_PASSWORD
neutron_admin_auth_url = http://$CONTROLLER_IP:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = $SERVICE_TOKEN
[database]
connection = mysql://nova:$MYSQL_PASS@$CONTROLLER_IP/nova
[keystone_authtoken]
auth_uri = http://$CONTROLLER_IP:5000
auth_host = $CONTROLLER_IP
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = $SERVICE_PASSWORD
EOF
Step 5: delete the nova.sqlite database
# rm /var/lib/nova/nova.sqlite
Step 6: configure /etc/nova/nova-compute.conf to use qemu instead of kvm.
# vi /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
[libvirt]
virt_type=qemu
Step 7: restart the service
# service nova-compute restart
Step 8: check that nova installed correctly (make sure nova-cert, nova-consoleauth, nova-scheduler, nova-conductor, and nova-compute are all enabled)
# nova-manage service list
Step 1: install the neutron packages
# apt-get install neutron-server neutron-plugin-ml2
Step 2: configure /etc/neutron/neutron.conf
You need to configure the database, authentication, the message broker, topology-change notifications, and the plug-in.
Database connection
The default is connection = sqlite:////var/lib/neutron/neutron.sqlite; change it with the following command:
sed -i '/connection = .*/{s|sqlite:///.*|mysql://neutron:'"$MYSQL_PASS"'@'"$CONTROLLER_IP"'/neutron|g}' \
    /etc/neutron/neutron.conf
Authentication
sed -i 's/# auth_strategy = keystone/auth_strategy = keystone/g' \
    /etc/neutron/neutron.conf
sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; s/%SERVICE_USER%/neutron/g; \
    s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; \
    s/auth_host = 127.0.0.1/auth_host = $CONTROLLER_IP/g" /etc/neutron/neutron.conf
Configure the message broker
sed -i -e " s/# rpc_backend = neutron.openstack.common.rpc.impl_kombu/rpc_backend = neutron.openstack.common.rpc.impl_kombu/g; \
    s/# rabbit_host = localhost/rabbit_host = $CONTROLLER_IP/g; \
    s/# rabbit_password = guest/rabbit_password = $SERVICE_PASSWORD/g; \
    s/# rabbit_userid = guest/rabbit_userid = guest/g" \
    /etc/neutron/neutron.conf
Configure network-topology change notifications to Compute
service_id=`keystone tenant-get service | awk '$2~/^id/{print $4}'`
sed -i -e " s/# notify_nova_on_port_status_changes = True/notify_nova_on_port_status_changes = True/g; \
    s/# notify_nova_on_port_data_changes = True/notify_nova_on_port_data_changes = True/g; \
    s/# nova_url = http:\/\/127.0.0.1:8774\/v2/nova_url = http:\/\/$MASTER:8774\/v2/g; \
    s/# nova_admin_username =/nova_admin_username = nova/g; \
    s/# nova_admin_tenant_id =/nova_admin_tenant_id = $service_id/g; \
    s/# nova_admin_password =/nova_admin_password = $SERVICE_PASSWORD/g; \
    s/# nova_admin_auth_url =/nova_admin_auth_url = http:\/\/$MASTER:35357\/v2.0/g" \
    /etc/neutron/neutron.conf
Here, keystone tenant-get service is used to obtain the id of the service tenant.
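The awk filter pulls the id column out of the pretty-printed table the keystone client returns. It can be checked against a mocked-up `tenant-get` table (the id value here is made up for illustration):

```shell
# Mock of the table `keystone tenant-get service` prints (the id is a made-up value)
table='+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|   enabled   |               True               |
|      id     | 0b6c790102a14b4ab93589e0df6b4847 |
|     name    |             service              |
+-------------+----------------------------------+'
# On the id row the fields are: $1="|", $2="id", $3="|", $4=<value>
service_id=$(echo "$table" | awk '$2~/^id/{print $4}')
echo "$service_id"    # 0b6c790102a14b4ab93589e0df6b4847
```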
Configure the ML2 plug-in
sed -i -e 's/core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin/core_plugin = ml2/g' /etc/neutron/neutron.conf
sed -i -e 's/# service_plugins =/service_plugins = router/g' /etc/neutron/neutron.conf
sed -i -e 's/# allow_overlapping_ips = False/allow_overlapping_ips = True/g' /etc/neutron/neutron.conf
Step 3: configure /etc/neutron/plugins/ml2/ml2_conf.ini
The ML2 plug-in uses the OVS mechanism (agent) to build the virtual networking framework. However, the controller node does not need the OVS agent or service, because the controller node does not handle instance network traffic.
Add the following in the [ml2] and [ml2_type_gre] sections, and add a new [securitygroup] section.
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
Step 4: configure /etc/nova/nova.conf
By default, instances use legacy networking, so nova must be reconfigured. Confirm the following settings.
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://10.1.101.11:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = password
neutron_admin_auth_url = http://10.1.101.11:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
By default, Compute uses its own internal firewall service; since Networking now provides the firewall, set firewall_driver = nova.virt.firewall.NoopFirewallDriver.
Step 5: finalize the installation
1. Restart the Compute services:
# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart
2. Restart the Networking service:
# service neutron-server restart
Step 1: before installing, enable some core networking functions, such as IP forwarding.
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
sysctl -p
Step 2: install the neutron packages
# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms \
    neutron-l3-agent neutron-dhcp-agent
Tip:
Check your Ubuntu version:
root@ubuntu:~# cat /etc/issue
Ubuntu 12.04.2 LTS \n \l
root@ubuntu:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.2 LTS
Release:        12.04
Codename:       precise
root@ubuntu:~# uname -a
Linux ubuntu 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
If Ubuntu is running Linux kernel 3.11 or newer, the openvswitch-datapath-dkms package is not needed.
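The 3.11 cutoff in the tip can be captured in a small helper (a sketch, not part of the original guide) that decides from a kernel version string whether the DKMS package is still required:

```shell
# Return success (0) when kernel $1 is older than 3.11 and
# openvswitch-datapath-dkms is therefore still required.
needs_dkms() {
    major=$(echo "$1" | cut -d. -f1)
    minor=$(echo "$1" | cut -d. -f2)
    [ "$major" -lt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -lt 11 ]; }
}
# Stock Ubuntu 12.04 kernel from the transcript above:
needs_dkms 3.2.0 && echo "3.2.0: install openvswitch-datapath-dkms"
# A newer kernel (e.g. 3.13) ships the OVS module in-tree:
needs_dkms 3.13.0 || echo "3.13.0: dkms package not needed"
# In practice the argument would be "$(uname -r | cut -d- -f1)"
```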
Step 3: configure /etc/neutron/neutron.conf
In the [DEFAULT] and [keystone_authtoken] sections:
[DEFAULT]
...
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
Comment out all lines in the [service_providers] section.
Step 4: configure the L3 agent, /etc/neutron/l3_agent.ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
debug = True
Step 5: configure the DHCP agent, /etc/neutron/dhcp_agent.ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
debug = True
Step 6: configure the metadata agent, /etc/neutron/metadata_agent.ini
The metadata agent provides configuration information for authorizing remote access to instances.
[DEFAULT]
...
auth_url = http://controller:5000/v2.0    # make sure this is set correctly
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
The next two steps are done on the controller node.
1. Edit /etc/nova/nova.conf and add the following in [DEFAULT]; METADATA_SECRET is the corresponding password, which I set to password.
[DEFAULT]
...
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = METADATA_SECRET
2. On the controller node, restart the Compute API service.
# service nova-api restart
Step 7: configure the ML2 plug-in networking in /etc/neutron/plugins/ml2/ml2_conf.ini
Add in the [ml2] section:
[ml2]
...
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
Add in the [ml2_type_gre] section:
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
新增[ovs]模塊並增長下面內容,其中INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS替換爲network節點虛擬機tunnels網絡的網卡ip地址。這裏爲10.0.1.21
[ovs] ... local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS tunnel_type = gre enable_tunneling = True
Add a [securitygroup] section with the following content:
[securitygroup]
...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
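Rather than hand-editing local_ip, the placeholder can be substituted with sed. A sketch against a scratch copy (10.0.1.21 is the network node's tunnel address from the interfaces file earlier):

```shell
# Scratch copy with the placeholder; 10.0.1.21 is the network node's
# tunnel address configured on eth1.
TUNNEL_IP=10.0.1.21
cat > /tmp/ml2_conf.ini <<'EOF'
[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True
EOF
sed -i "s/INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS/$TUNNEL_IP/" /tmp/ml2_conf.ini
grep local_ip /tmp/ml2_conf.ini
```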
Step 8: Configure the OVS service
OVS provides the underlying virtual networking framework. The br-int bridge handles internal instance traffic; the br-ex bridge handles external instance traffic. br-ex needs a port on the physical external network interface to give instances outside connectivity; that port bridges the virtual network to the physical external network.
Restart the OVS service:
# service openvswitch-switch restart
Add the integration bridge:
# ovs-vsctl add-br br-int
Add the external bridge:
# ovs-vsctl add-br br-ex
Add a port on the external network interface so traffic can reach the outside. Replace INTERFACE_NAME with the actual interface name, for example eth2 or ens256; mine is eth2.
# ovs-vsctl add-port br-ex INTERFACE_NAME
Then configure the br-ex interface in /etc/network/interfaces on the network node. The complete file:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.1.101.21
    netmask 255.255.255.0
    gateway 10.1.101.254
    dns-nameservers 10.1.101.51

auto eth1
iface eth1 inet static
    address 10.0.1.21
    netmask 255.255.255.0

# The external network interface
auto eth2
iface eth2 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down

auto br-ex
iface br-ex inet static
    address 192.168.100.21
    netmask 255.255.255.0
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
Restart networking:
/etc/init.d/networking restart
Step 9: Finalize the installation
# service neutron-plugin-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
The following steps are performed on the compute1 node.
Step 1: Enable IP forwarding
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
sysctl -p
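One caveat with the `echo >>` lines above: rerunning them appends duplicate entries. An idempotent variant, sketched against a scratch file (`set_sysctl` is my own helper name, not a standard tool):

```shell
conf=/tmp/sysctl.conf   # scratch file; the real target is /etc/sysctl.conf
: > "$conf"
# Append a key only if it is not already present.
set_sysctl() {
    key="$1"; val="$2"
    grep -q "^$key" "$conf" || echo "$key = $val" >> "$conf"
}
set_sysctl net.ipv4.ip_forward 1
set_sysctl net.ipv4.conf.all.rp_filter 0
set_sysctl net.ipv4.conf.default.rp_filter 0
# A second run is a no-op:
set_sysctl net.ipv4.ip_forward 1
cat "$conf"
```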
Step 2: Install the Neutron components

# apt-get install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent

Ubuntu kernels above 3.11 do not need openvswitch-datapath-dkms; my kernel is 3.12, so it is not installed here.
Step 3: Configure /etc/neutron/neutron.conf
Configure the identity service, the message broker, and the plug-in.
Authentication:
[DEFAULT]
...
auth_strategy = keystone
Add the following to the [keystone_authtoken] section, replacing NEUTRON_PASS with the actual password; mine is password.
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
Message broker: add the following to the [DEFAULT] section, replacing RABBIT_PASS with the RabbitMQ password.
[DEFAULT]
...
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS
Configure ML2: add the following to the [DEFAULT] section.
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True
Comment out all lines in the [service_providers] section.
Step 4: Configure the ML2 plug-in, /etc/neutron/plugins/ml2/ml2_conf.ini
The ML2 plug-in uses OVS to build the virtual networking for instances.
Add the following to the [ml2] section:
[ml2]
...
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
Add the following to the [ml2_type_gre] section:
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
增長[ovs]模塊,並增長下面內容,注意INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS要替換成compute節點虛擬機tunnels網卡的ip,這裏是10.0.1.31
[ovs]
...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True
增長[securitygroup]模塊,並添加下面內容。
[securitygroup]
...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
Step 5: Configure the OVS service
OVS provides the underlying virtual networking framework; the br-int bridge handles internal instance traffic.
Restart the OVS service:
# service openvswitch-switch restart
Add the integration bridge:
# ovs-vsctl add-br br-int
Step 6: Configure Compute (nova) to use Neutron, /etc/nova/nova.conf
By default, instances use legacy networking, so Compute must be configured to use Neutron instead.
Add the following to the [DEFAULT] section, replacing NEUTRON_PASS with the real password; mine is password.
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = NEUTRON_PASS
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
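A quick sanity check that the Neutron-related keys actually landed in the file; shown here against a scratch copy, but the loop can be pointed at /etc/nova/nova.conf:

```shell
# Scratch copy with a subset of the keys above; substitute the real path
# to check the live file.
cat > /tmp/nova.conf <<'EOF'
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
security_group_api = neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
EOF
ok=1
for key in network_api_class neutron_url security_group_api firewall_driver; do
    grep -q "^$key" /tmp/nova.conf || { echo "missing: $key"; ok=0; }
done
[ "$ok" -eq 1 ] && echo "neutron keys present"
```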
By default, Compute uses an internal firewall service; setting firewall_driver to nova.virt.firewall.NoopFirewallDriver defers firewalling to Neutron.
Step 7: Finalize the configuration
Restart the Compute service:
# service nova-compute restart
Restart the OVS agent:
# service neutron-plugin-openvswitch-agent restart
Install the dashboard on the controller node
Step 1: Install the packages
# apt-get -y install apache2 libapache2-mod-wsgi openstack-dashboard memcached python-memcache
Remove the openstack-dashboard-ubuntu-theme package, because it interferes with some features:
# apt-get remove --purge openstack-dashboard-ubuntu-theme
Step 2: Configure /etc/openstack-dashboard/local_settings.py
Edit CACHES['default']['LOCATION'] to match the settings in /etc/memcached.conf.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211'
    }
}
Set OPENSTACK_HOST to the host running the identity service:
OPENSTACK_HOST = "controller"
Step 3: Start the Apache web server and memcached
# service apache2 restart
# service memcached restart
Step 4: Restart the keystone service and sync the database
# service keystone restart
# keystone-manage db_sync
The basic configuration is now complete and OpenStack is ready to use.
To exercise the environment, see "Testing an installed OpenStack environment with commands".
These are my personal configuration notes, for reference only; see the official documentation for a more detailed walkthrough.
Adding a compute2 node
1. Network configuration
Compute2 node: two NICs, with eth0 as the management network and eth1 as the data network.
root@ubuntu:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.1.101.41
    netmask 255.255.255.0
    gateway 10.1.101.254
    dns-nameservers 10.1.101.51

auto eth1
iface eth1 inet static
    address 10.0.1.41
    netmask 255.255.255.0
Configure /etc/hosts as follows:
root@ubuntu:~# cat /etc/hosts
127.0.0.1 localhost
#127.0.1.1 ubuntu
#compute2
10.1.101.41 compute2
#compute1
10.1.101.31 compute1
#controller
10.1.101.11 controller
#network
10.1.101.21 network

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
After the compute2 interfaces are configured, add the matching entry to /etc/hosts on the controller, network, and compute1 nodes.
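With four nodes it is easy to let the hosts files drift apart. A small check that every node name appears in a given hosts file, demonstrated on a scratch copy seeded with the addresses used in this guide:

```shell
# Scratch hosts file with this guide's management addresses.
cat > /tmp/hosts <<'EOF'
127.0.0.1 localhost
10.1.101.11 controller
10.1.101.21 network
10.1.101.31 compute1
10.1.101.41 compute2
EOF
missing=0
for h in controller network compute1 compute2; do
    grep -qw "$h" /tmp/hosts || { echo "missing: $h"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all nodes present"
```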
2. Verify that the network is configured correctly
On the compute2 node:
# ping -c 4 openstack.org   (external network reachable)
# ping -c 4 controller      (controller's management network reachable)
# ping -c 4 10.0.1.21       (network node's tunnel network reachable)
# ping -c 4 10.0.1.31       (compute1's tunnel network reachable)
3. Update the system
# apt-get update
# apt-get install python-software-properties
# add-apt-repository cloud-archive:icehouse
# apt-get dist-upgrade   (takes about ten minutes; be patient)
# apt-get install linux-image-generic-lts-saucy
# reboot
4. Change the hostname
# hostname compute2
# echo "compute2" > /etc/hostname
5. Enable IP forwarding
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >> /etc/sysctl.conf
sysctl -p
6. Install NTP
# apt-get install -y ntp
# sed -i -e "s/server ntp.ubuntu.com/server controller/g" /etc/ntp.conf
# service ntp restart
7. Set environment variables
# Create the environment variables
cat > /root/novarc << EOF
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export MYSQL_PASS=password
export SERVICE_PASSWORD=password
export RABBIT_PASSWORD=password
export SERVICE_TOKEN=stackinsider
export CONTROLLER_IP=controller
export MASTER=compute2
export LOCAL_IP="$(/sbin/ifconfig eth1 \
  | awk '/inet addr/ {print $2}' | cut -f2 -d ":")"
EOF

# Update the global environment variables.
cat /root/novarc >> /etc/profile
source /etc/profile

(Note: the original had MASTER=compute, which does not resolve with the hosts file above; this node's name is compute2.)
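The LOCAL_IP line relies on an awk/cut pipeline over net-tools-era ifconfig output. The same pipeline can be exercised against a canned sample to confirm it extracts the address:

```shell
# Sample ifconfig output in the old net-tools format this pipeline expects
# (10.0.1.41 is compute2's tunnel NIC from the interfaces file above).
sample='eth1      Link encap:Ethernet  HWaddr 00:0c:29:aa:bb:cc
          inet addr:10.0.1.41  Bcast:10.0.1.255  Mask:255.255.255.0'
# Same extraction as the novarc LOCAL_IP line: take field 2 of the
# "inet addr" line, then strip the "addr:" prefix.
ip=$(echo "$sample" | awk '/inet addr/ {print $2}' | cut -f2 -d ":")
echo "$ip"
```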
8. Install nova-compute

# apt-get -y install nova-compute-kvm python-guestfs   (choose yes when prompted)

Make the current kernel readable by qemu and libguestfs:

# dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
To cover future kernel updates as well, add a hook script:

cat > /etc/kernel/postinst.d/statoverride <<EOF
#!/bin/sh
version="\$1"
# passing the kernel version is required
[ -z "\${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-\${version}
EOF

Make the script executable:

# chmod +x /etc/kernel/postinst.d/statoverride
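The hook's only subtlety is the guard on the version argument. The same logic, isolated into a testable sketch (`check_version` is my name for it; the echo stands in for the real dpkg-statoverride call):

```shell
# check_version mirrors the hook: bail out when no version is given,
# otherwise act on /boot/vmlinuz-<version>.
check_version() {
    version="$1"
    [ -z "${version}" ] && return 1
    echo "dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}"
}
check_version 3.13.0-24-generic
```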
Configure nova.conf:
cat > /etc/nova/nova.conf <<EOF
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
rabbit_host = $CONTROLLER_IP
rabbit_userid = guest
rabbit_password = $RABBIT_PASSWORD
my_ip = $MASTER
vncserver_listen = $MASTER
vncserver_proxyclient_address = $MASTER
auth_strategy = keystone
novncproxy_base_url = http://$CONTROLLER_IP:6080/vnc_auto.html
glance_host = $CONTROLLER_IP
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://$CONTROLLER_IP:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = $SERVICE_PASSWORD
neutron_admin_auth_url = http://$CONTROLLER_IP:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = $SERVICE_TOKEN

[database]
connection = mysql://nova:$MYSQL_PASS@$CONTROLLER_IP/nova

[keystone_authtoken]
auth_uri = http://$CONTROLLER_IP:5000
auth_host = $CONTROLLER_IP
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = $SERVICE_PASSWORD
EOF
Use qemu instead of kvm:
# sed -i 's/kvm/qemu/g' /etc/nova/nova-compute.conf
# service nova-compute restart
9. Install Neutron
# apt-get install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
  openvswitch-datapath-dkms
Update neutron.conf.
# Update the database connection
sed -i '/connection = .*/{s|sqlite:///.*|mysql://'"neutron"':'"$MYSQL_PASS"'@'"$CONTROLLER_IP"'/neutron|g}' \
  /etc/neutron/neutron.conf
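What that sed does, demonstrated on a scratch copy seeded with the stock default connection line (password and controller stand in for the real values):

```shell
MYSQL_PASS=password; CONTROLLER_IP=controller   # stand-ins for the real values
# Scratch copy seeded with the stock sqlite default.
cat > /tmp/neutron.conf <<'EOF'
[database]
connection = sqlite:////var/lib/neutron/neutron.sqlite
EOF
# Same substitution as above, pointed at the scratch copy.
sed -i '/connection = .*/{s|sqlite:///.*|mysql://'"neutron"':'"$MYSQL_PASS"'@'"$CONTROLLER_IP"'/neutron|g}' \
  /tmp/neutron.conf
cat /tmp/neutron.conf
```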
# Use keystone for authentication
sed -i 's/# auth_strategy = keystone/auth_strategy = keystone/g' \
  /etc/neutron/neutron.conf
sed -i -e "s/%SERVICE_TENANT_NAME%/service/g; s/%SERVICE_USER%/neutron/g; \
  s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; \
  s/auth_host = 127.0.0.1/auth_host = $CONTROLLER_IP/g" /etc/neutron/neutron.conf
# Configure the message broker
sed -i -e "s/# rpc_backend = neutron.openstack.common.rpc.impl_kombu/rpc_backend = neutron.openstack.common.rpc.impl_kombu/g; \
  s/# rabbit_host = localhost/rabbit_host = $CONTROLLER_IP/g; \
  s/# rabbit_password = guest/rabbit_password = $SERVICE_PASSWORD/g; \
  s/# rabbit_userid = guest/rabbit_userid = guest/g" \
  /etc/neutron/neutron.conf
# Configure the ML2 plug-in
sed -i -e 's/core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin/core_plugin = ml2/g' /etc/neutron/neutron.conf
sed -i -e 's/# service_plugins =/service_plugins = router/g' /etc/neutron/neutron.conf
sed -i -e 's/# allow_overlapping_ips = False/allow_overlapping_ips = True/g' /etc/neutron/neutron.conf
10. Configure the ML2 plug-in
sed -i -e "s/# type_drivers = local,flat,vlan,gre,vxlan/type_drivers = gre/g; \
  s/# tenant_network_types = local/tenant_network_types = gre/g; \
  s/# mechanism_drivers =/mechanism_drivers = openvswitch/g; \
  s/# tunnel_id_ranges =/tunnel_id_ranges = 1:1000/g; \
  s/# enable_security_group = True/enable_security_group = True/g" \
  /etc/neutron/plugins/ml2/ml2_conf.ini
cat << EOF >> /etc/neutron/plugins/ml2/ml2_conf.ini
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
EOF
cat << EOF >> /etc/neutron/plugins/ml2/ml2_conf.ini
[ovs]
local_ip = $LOCAL_IP
tunnel_type = gre
enable_tunneling = True
EOF
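Two of those substitutions, demonstrated on a scratch copy seeded with the stock commented defaults; the remaining s/// expressions follow the same pattern:

```shell
# Scratch copy seeded with the stock commented defaults.
cat > /tmp/ml2_conf.ini <<'EOF'
[ml2]
# type_drivers = local,flat,vlan,gre,vxlan
# tenant_network_types = local
EOF
# Uncomment-and-set, exactly as in the real command above.
sed -i -e "s/# type_drivers = local,flat,vlan,gre,vxlan/type_drivers = gre/g; \
  s/# tenant_network_types = local/tenant_network_types = gre/g" /tmp/ml2_conf.ini
cat /tmp/ml2_conf.ini
```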
11. Configure the OVS service
# Restart the OVS service
# service openvswitch-switch restart
# Restart the Neutron services
# cd /etc/init.d/; for i in $( ls neutron-* ); do sudo service $i restart; done