Deploying OpenStack Havana on Ubuntu 12.04 Server [OVS+GRE] (Part 3): Installing the Compute Node

 

Preface: Deploying OpenStack Havana on Ubuntu 12.04 Server [OVS+GRE]

 

Compute node:

     

1. Prepare the node

  • After installing Ubuntu 12.04 Server 64-bit, switch to root to carry out the installation:
sudo su - 
  • Add the Havana repository:
#apt-get install python-software-properties
#add-apt-repository cloud-archive:havana
  • Upgrade the system:
apt-get update
apt-get upgrade
apt-get dist-upgrade
  • Install the NTP service:
apt-get install ntp
  • Configure NTP to synchronize time from the controller node:
 
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf

# Point this compute node at your controller node for time sync
sed -i 's/server ntp.ubuntu.com/server 10.10.10.2/g' /etc/ntp.conf

service ntp restart
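To see what the sed edits above actually do before touching the real /etc/ntp.conf, they can be rehearsed on a throwaway copy (the controller IP 10.10.10.2 comes from this guide; the /tmp path is only for the demo):

```shell
# Build a minimal sample ntp.conf containing the stock Ubuntu entries
cat > /tmp/ntp.conf.sample <<'EOF'
server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
server ntp.ubuntu.com
EOF

# Comment out the pool servers and point the node at the controller,
# exactly as the commands above do against /etc/ntp.conf
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /tmp/ntp.conf.sample
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /tmp/ntp.conf.sample
sed -i 's/server ntp.ubuntu.com/server 10.10.10.2/g' /tmp/ntp.conf.sample

# The only active server left is the controller
grep '^server' /tmp/ntp.conf.sample    # prints: server 10.10.10.2
```

Once the real file is edited and ntp restarted, running `ntpq -p` on the compute node should list 10.10.10.2 as a peer.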
 

 

2. Configure the network

  • Configure the network in /etc/network/interfaces as follows:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 10.10.10.4
netmask 255.255.255.0

auto eth0:1
iface eth0:1 inet static
address 10.20.20.4
netmask 255.255.255.0

 

# Packages are still installed via apt-get, so this node also needs an IP with outbound Internet access; once installation and testing are done, this interface can be brought down

auto eth0:2     
iface eth0:2 inet static
address 192.168.122.4
netmask 255.255.255.0
gateway 192.168.122.1
dns-nameservers 192.168.122.1

  • Enable IP forwarding:
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -p
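The sed one-liner only matches the line exactly as shipped (`#net.ipv4.ip_forward=1`); its effect can be checked on a scratch copy first:

```shell
# Fragment of sysctl.conf as shipped on Ubuntu 12.04
cat > /tmp/sysctl.conf.sample <<'EOF'
#net.ipv4.ip_forward=1
EOF

# Drop the leading '#' to enable IPv4 forwarding
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /tmp/sysctl.conf.sample

cat /tmp/sysctl.conf.sample    # prints: net.ipv4.ip_forward=1
```

On the real system, `sysctl -p` loads the change, and `sysctl -n net.ipv4.ip_forward` should then print 1.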

3. KVM

  • Make sure your hardware has virtualization enabled. If it is not supported, that is fine: since we are deploying inside KVM virtual machines anyway, we will simply set the hypervisor to QEMU later:
apt-get install cpu-checker
kvm-ok
  • Install KVM and configure it:
apt-get install -y kvm libvirt-bin pm-utils
  • Enable the cgroup_device_acl array in the /etc/libvirt/qemu.conf configuration file:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet","/dev/net/tun"
]
  • Delete the default virtual bridge:
virsh net-destroy default
virsh net-undefine default
  • Update the /etc/libvirt/libvirtd.conf configuration file:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
  • Edit the libvirtd_opts variable in the /etc/init/libvirt-bin.conf configuration file:
env libvirtd_opts="-d -l"
  • Edit the /etc/default/libvirt-bin file (create it if it does not exist):
libvirtd_opts="-d -l"
  • Restart the libvirt service for the changes to take effect:
service libvirt-bin restart

 

4. Open vSwitch

  • Install the Open vSwitch packages:
apt-get install  openvswitch-switch openvswitch-datapath-dkms openvswitch-datapath-source
module-assistant auto-install openvswitch-datapath
service openvswitch-switch restart
  • Create the integration bridge:
ovs-vsctl add-br br-int

 

5. Neutron

  • Install the Neutron Open vSwitch agent:
apt-get install neutron-plugin-openvswitch-agent
  • Edit the OVS configuration file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
 
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.20.20.4
enable_tunneling = True
    
# Firewall driver implementing the Neutron security group function
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 
  • Edit /etc/neutron/neutron.conf:
 
rabbit_host = 10.10.10.2

[keystone_authtoken]
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = admin
signing_dir = /var/lib/neutron/keystone-signing

[database]
connection = mysql://neutronUser:neutronPass@10.10.10.2/neutron
 
  • Restart the service:
service neutron-plugin-openvswitch-agent restart
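One detail worth double-checking in the [OVS] section: local_ip must be this node's address on the GRE data network (the 10.20.20.x interface, here eth0:1), not the management address, or the tunnels will not come up. A quick grep against a scratch copy of the section above illustrates the check:

```shell
# Scratch copy of the [OVS] section exactly as configured above
cat > /tmp/ovs_neutron_plugin.sample <<'EOF'
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.20.20.4
enable_tunneling = True
EOF

# Verify the tunnel endpoint is on the 10.20.20.x data network,
# not the 10.10.10.x management network
grep '^local_ip' /tmp/ovs_neutron_plugin.sample    # prints: local_ip = 10.20.20.4
```

After the agent restarts, `ovs-vsctl show` on this node should list a br-tun bridge with GRE ports toward the other tunnel endpoints.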

 

6.Nova

  • Install the Nova compute packages:
apt-get install nova-compute-kvm python-guestfs
  • Note: if your host does not support KVM virtualization, replace nova-compute-kvm with nova-compute-qemu
  • and set libvirt_type=qemu in the /etc/nova/nova-compute.conf configuration file
  • Update the authentication settings in the /etc/nova/api-paste.ini configuration file:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = admin
signing_dirname = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
  • Edit /etc/nova/nova.conf:

[DEFAULT]
# This file is the configuration for nova
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
nova_url=http://10.10.10.2:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.10.10.2/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

#Availability_zone
#default_availability_zone=fbw

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Rabbit MQ
my_ip=10.10.10.4
rabbit_host=10.10.10.2
rpc_backend = nova.rpc.impl_kombu

 

# Imaging service
glance_host=10.10.10.2
glance_api_servers=10.10.10.2:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://192.168.122.2:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.10.10.4 # different on every node, and different from the controller
vncserver_listen=0.0.0.0

 

# Network settings
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.10.10.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=admin
neutron_admin_auth_url=http://10.10.10.2:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

#If you want Neutron + Nova Security groups
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

#If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

#Metadata
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = helloOpenStack

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900

 

# Ceilometer #
instance_usage_audit=True
instance_usage_audit_period=hour
notify_on_state_change=vm_and_task_state
notification_driver=nova.openstack.common.notifier.rpc_notifier
notification_driver=ceilometer.compute.nova_notifier

  • Restart the nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do service $i restart; done;cd $OLDPWD
  • Check that all Nova services started correctly:
nova-manage service list
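The restart one-liner above simply enumerates every init script whose name matches nova-* and restarts it; the enumeration pattern can be demonstrated harmlessly with dummy file names (hypothetical, nothing is actually restarted):

```shell
# Scratch directory standing in for /etc/init.d
mkdir -p /tmp/initd-demo
touch /tmp/initd-demo/nova-api /tmp/initd-demo/nova-compute /tmp/initd-demo/cron

# Same loop shape as above, echoing instead of calling 'service ... restart'
cd /tmp/initd-demo; for i in $( ls nova-* ); do echo "would restart $i"; done; cd $OLDPWD
```

In the `nova-manage service list` output, a healthy service shows :-) in the State column; XXX means the service is down or its clock has drifted from the controller, which is another reason the NTP step matters.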

 

7. Install the compute agent for the Telemetry service

  • Install the Telemetry compute agent:
apt-get install ceilometer-agent-compute
  • Check /etc/nova/nova.conf; the required options were already configured above:
...
[DEFAULT]
...
instance_usage_audit=True
instance_usage_audit_period=hour
notify_on_state_change=vm_and_task_state
notification_driver=nova.openstack.common.notifier.rpc_notifier
notification_driver=ceilometer.compute.nova_notifier
  • Restart the service:
service ceilometer-agent-compute restart

 

       At this point the basic OpenStack services are all configured. Next you can create a basic virtual network, boot virtual machines, attach them to the network, and assign floating IPs reachable from the external network.

       As for creating internal networks, external networks, and routers with Neutron, I recommend doing it directly from the dashboard; you can also follow Awy's blog posts and create them with the neutron CLI, or consult the official documentation, so I will not repeat that here.

       To make the virtual machines reachable from the external network, remember to edit the security group rules and add rules allowing icmp and ssh (tcp 22).

       I hope this series of posts helps you deploy a basic OpenStack environment. It also serves as a record for myself, so that I do not forget everything after a while and end up unable to tell which part was misconfigured when something breaks.
