I have only recently started working with OpenStack, and while following the official documentation for a 3-node deployment I ran into a number of problems, mostly on the compute node. Fortunately, more than ten years of ops experience helped me work through them one by one. The fixes are highlighted below.
System environment: CentOS 6.5 64-bit
Node IPs: configured exactly as in the official documentation
Official documentation: http://docs.openstack.org/icehouse/install-guide/install/zypper/content/
Log created: 2014-7-6, initial write-up
Log updated: 2014-8-7, added the /etc/sysconfig/libvirtd configuration change
Problems:
Controller Node
Update the ALLOWED_HOSTS in local_settings.py to include the addresses you wish to access the dashboard from.
Edit /etc/openstack-dashboard/local_settings:
ALLOWED_HOSTS = ['localhost', 'my-desktop']
The configuration actually used:
ALLOWED_HOSTS = ['10.0.0.11', '0.0.0.0']
This setting appears to be a Python list of the hosts that are allowed to access the dashboard.
With the values from the official documentation, the dashboard page fails to load and reports an error, as shown in the screenshot below:
The problem was tracked down by checking the Apache log /var/log/httpd/error_log.
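For reference, a quick way to confirm the fix on the controller (assuming the default httpd service and the log path above) is to restart Apache after editing local_settings and watch the error log:
service httpd restart
tail -n 50 /var/log/httpd/error_log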
Compute Node
Problem 1:
OpenStack Networking (neutron)
To install the Networking components
yum install openstack-neutron-ml2 openstack-neutron-openvswitch
In practice this is still missing:
openstack-nova-compute
The full command is:
yum install openstack-neutron-ml2 openstack-neutron-openvswitch openstack-nova-compute -y
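A quick sanity check that all three packages actually landed (package names exactly as above):
rpm -q openstack-neutron-ml2 openstack-neutron-openvswitch openstack-nova-compute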
Problem 2:
To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Run the following commands:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  network_api_class nova.network.neutronv2.api.API
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_url http://controller:9696
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_username neutron
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_password NEUTRON_PASS
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_auth_url http://controller:35357/v2.0
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  firewall_driver nova.virt.firewall.NoopFirewallDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  security_group_api neutron
In practice, the following settings are also needed:
openstack-config --set /etc/nova/nova.conf DEFAULT \
  qpid_hostname controller
openstack-config --set /etc/nova/nova.conf DEFAULT \
  rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT \
  glance_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT \
  auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT \
  novncproxy_base_url http://10.0.0.11:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf DEFAULT \
  vncserver_proxyclient_address 10.0.0.31
openstack-config --set /etc/nova/nova.conf DEFAULT \
  vncserver_listen 0.0.0.0
chkconfig openstack-nova-compute on
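To double-check that the values were actually written, openstack-config can also read a single key back (a small verification sketch using the same keys as above):
openstack-config --get /etc/nova/nova.conf DEFAULT rpc_backend
openstack-config --get /etc/nova/nova.conf DEFAULT qpid_hostname
openstack-config --get /etc/nova/nova.conf DEFAULT glance_host
openstack-config --get /etc/nova/nova.conf DEFAULT novncproxy_base_url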
Cause analysis:
Checking the compute log /var/log/nova/compute.log revealed the following problems.
1. The compute node could not communicate with the controller node, as shown in the screenshot below.
Checking the /etc/nova/nova.conf configuration file confirmed the defaults rpc_backend=rabbit and qpid_hostname=localhost, so they were changed to:
rpc_backend=qpid
qpid_hostname=controller
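Before restarting the service it can also help to verify that the compute node can reach the controller's AMQP port at all (qpid listens on 5672 by default; a simple reachability check, not from the official guide):
nc -zv controller 5672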
2. The compute node could not GET the imported image from the controller node, as shown in the screenshots below.
Controller node log /var/log/glance/api.log screenshot
Compute node log /var/log/nova/compute.log screenshot
Checking the /etc/nova/nova.conf configuration file confirmed the default glance_host=$my_ip with my_ip=10.0.0.1, so it was changed to:
glance_host=controller
After this change the image still could not be fetched. Further analysis of /var/log/glance/api.log confirmed that the GET request carried no token. Checking the /etc/nova/nova.conf configuration file confirmed the default auth_strategy=noauth, so it was changed to:
auth_strategy=keystone
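A rough way to confirm that glance on the controller is reachable and keystone auth works from the compute node is to list images with admin credentials (admin-openrc.sh is the credentials file created earlier in the install guide; adjust the path if yours differs):
source admin-openrc.sh
glance image-list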
3. The dashboard on the controller node could not open an instance's console, and none of the logs showed any errors. A screenshot of the front-end error is below.
The problem was solved after searching the official help forums.
Edit the /etc/nova/nova.conf configuration file on the compute node:
novncproxy_base_url=http://10.0.0.11:6080/vnc_auto.html
vncserver_proxyclient_address=10.0.0.31
vncserver_listen=0.0.0.0
After changing the configuration you need to restart the openstack-nova-compute service, and use netstat to check the state of port 5900, which is the port the console uses to reach the compute node.
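For example (a minimal check; 5900 is the first VNC console port and only appears once an instance is running on this node):
/etc/init.d/openstack-nova-compute restart
netstat -lntp | grep 5900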
Problem 3:
Running the openstack-nova-compute start command shows no error on the console, and the service appears to start normally.
/etc/init.d/openstack-nova-compute start
Checking the openstack-nova-compute status a little later reports that the process is dead but the pid file exists; the pid file has to be removed before the service can be started again.
/etc/init.d/openstack-nova-compute status
rm -f /var/run/nova/nova-compute.pid
Solution:
The problem was tracked down by checking the compute log /var/log/nova/compute.log.
The official documentation does not cover the libvirtd settings; in practice libvirtd needs the following configuration.
· Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun"
]
· Enable live migration by updating the /etc/libvirt/libvirtd.conf file:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
The libvirtd service must be started before openstack-nova-compute, and libvirtd should be set to start at boot:
/etc/init.d/libvirtd start
chkconfig libvirtd on
2014.08.07
Found that live migration still needed one more setting; uncomment the following line in /etc/sysconfig/libvirtd:
LIBVIRTD_ARGS="--listen"
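libvirtd then needs a restart for the --listen argument to take effect; with listen_tcp = 1 it should bind libvirt's default TCP port 16509 (a quick check of my own, not from the official guide):
/etc/init.d/libvirtd restart
netstat -lntp | grep libvirtd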