Now when we take another look, there are no problems at all:
[root@linux-node2 ~]# /etc/init.d/openstack-nova-compute start
Starting openstack-nova-compute: [ OK ]
[root@linux-node2 ~]# ps aux | grep python
root 1179 4.9 2.8 1108796 54304 pts/0 Sl 18:05 0:01 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
root 1216 0.0 0.0 103248 836 pts/0 S+ 18:06 0:00 grep python
[root@linux-node2 ~]# ps -ef|grep nova
root 1179 1 0 18:05 pts/0 00:00:03 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
root 1233 1634 0 18:16 pts/0 00:00:00 grep nova
Now let's check whether the linuxbridge agent is running properly.
A compute node handles both compute and networking. In production it is best to have two control nodes for redundancy.
First run the agent in the foreground with the full set of config files to make sure it starts cleanly, then start it through the init script:
[root@linux-node2 ~]# neutron-linuxbridge-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
[root@linux-node2 ~]# /etc/init.d/openstack-neutron-linuxbridge-agent start
Starting openstack-neutron-linuxbridge-agent: [ OK ]
[root@linux-node2 ~]# ps aux |grep python
root 1179 0.4 3.3 1109592 64120 pts/0 Sl 18:05 0:04 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
root 1249 1.2 1.5 254912 29616 pts/0 S 18:21 0:00 /usr/bin/python /usr/bin/neutron-linuxbridge-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini --verbose
root 1258 0.0 0.0 103248 836 pts/0 S+ 18:21 0:00 grep python
Check from the control node:
[root@linux-node1 ~]# nova host-list
+---------------------------+-------------+----------+
| host_name | service | zone |
+---------------------------+-------------+----------+
| linux-node1.openstack.com | consoleauth | internal |
| linux-node1.openstack.com | scheduler | internal |
| linux-node1.openstack.com | cert | internal |
| linux-node1.openstack.com | conductor | internal |
| linux-node2.openstack.com | compute | nova |
+---------------------------+-------------+----------+
You can run this from either node, as long as the environment variables with the admin credentials are set:
[root@linux-node2 ~]# nova host-list
ERROR (CommandError): You must provide a username or user id via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID]
[root@linux-node2 ~]# export OS_TENANT_NAME=admin
[root@linux-node2 ~]# export OS_USERNAME=admin
[root@linux-node2 ~]# export OS_PASSWORD=admin
[root@linux-node2 ~]# export OS_AUTH_URL=http://192.168.33.11:35357/v2.0
[root@linux-node2 ~]# nova host-list
+---------------------------+-------------+----------+
| host_name | service | zone |
+---------------------------+-------------+----------+
| linux-node1.openstack.com | consoleauth | internal |
| linux-node1.openstack.com | scheduler | internal |
| linux-node1.openstack.com | cert | internal |
| linux-node1.openstack.com | conductor | internal |
| linux-node2.openstack.com | compute | nova |
+---------------------------+-------------+----------+
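To avoid re-exporting these in every new shell, you can drop them into a file and source it. A minimal sketch (keystonerc_admin is a hypothetical filename; the values are the same ones exported above):
# Write the credentials once...
cat > ~/keystonerc_admin <<'EOF'
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.33.11:35357/v2.0
EOF
# ...then load them in any later shell before running the CLI clients:
source ~/keystonerc_admin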
The output above shows that the compute service is up. Also check the network agents from the control node:
[root@linux-node1 ~]# neutron agent-list
Log in to the dashboard as the demo user.
To create a virtual machine, we first have to make sure an image is available.
Next, the Filter Scheduler concept.
(The screenshot showed the configured scheduler filters: the ones boxed in red are the defaults; the others I added by hand.)
Typical causes of boot failures here: the scheduler finds no valid host, or the chosen host does not have enough free memory.
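As background for the filter screenshot, the filters are driven by nova.conf on the control node. A minimal sketch (option names from this era of Nova; treat the exact default filter list as an assumption for your release):
# /etc/nova/nova.conf -- scheduler section (sketch)
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
# Filters are applied in order to every boot request; a host must pass all of them.
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
# RamFilter honours this overcommit ratio (assumed default 1.5); raise it if boots fail on memory.
ram_allocation_ratio=1.5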
To see the firewall rules that nova-api maintains on the control node:
[root@linux-node1 ~]# iptables -vnL
Chain INPUT (policy ACCEPT 71230 packets, 24M bytes)
pkts bytes target prot opt in out source destination
70570 24M nova-api-INPUT all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67
0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 nova-filter-top all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 nova-api-FORWARD all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * virbr0 0.0.0.0/0 192.168.122.0/24 state RELATED,ESTABLISHED
0 0 ACCEPT all -- virbr0 * 192.168.122.0/24 0.0.0.0/0
0 0 ACCEPT all -- virbr0 virbr0 0.0.0.0/0 0.0.0.0/0
0 0 REJECT all -- * virbr0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
0 0 REJECT all -- virbr0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 68959 packets, 23M bytes)
pkts bytes target prot opt in out source destination
68309 23M nova-filter-top all -- * * 0.0.0.0/0 0.0.0.0/0
68309 23M nova-api-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0
Chain nova-api-FORWARD (1 references)
pkts bytes target prot opt in out source destination
Chain nova-api-INPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 0.0.0.0/0 192.168.33.11 tcp dpt:8775
Chain nova-api-OUTPUT (1 references)
pkts bytes target prot opt in out source destination
Chain nova-api-local (1 references)
pkts bytes target prot opt in out source destination
Chain nova-filter-top (2 references)
pkts bytes target prot opt in out source destination
68309 23M nova-api-local all -- * * 0.0.0.0/0 0.0.0.0/0
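The only explicit ACCEPT in the nova-api-INPUT chain is TCP 8775 on 192.168.33.11: that is the nova metadata API, which instances normally reach via 169.254.169.254. A quick liveness check, as a sketch (the IP is this setup's control node):
# A healthy metadata service answers with its list of supported version strings.
curl http://192.168.33.11:8775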
Nova's scheduling service: when you create a virtual machine, which physical host should it land on? That is nova-scheduler's decision. Of course, our lab only has one or two machines, so there is little to schedule.
When something goes wrong, go straight to the logs.
[root@linux-node1 ~]# ll /var/log/nova/
total 12708
-rw-r--r-- 1 root root 7187241 Aug 22 13:00 api.log
-rw-r--r-- 1 root root 1220479 Aug 22 13:13 cert.log
-rw-r--r-- 1 root root 1226101 Aug 22 13:14 conductor.log
-rw-r--r-- 1 root root 1224671 Aug 22 13:13 consoleauth.log
-rw-r--r-- 1 root root 2129478 Aug 22 13:13 scheduler.log
When troubleshooting, create the instance with the log already open and watch the errors as they appear.
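For example, a minimal sketch (point it at whichever service you are debugging):
# Follow the scheduler log and surface only error/traceback lines while an instance boots in another terminal.
tail -f /var/log/nova/scheduler.log | grep -iE 'error|trace'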
Now modify the compute node. Since virtual machines are only created on compute nodes, changing this on the control node would have no effect.
[root@linux-node2 ~]# vim /etc/nova/nova.conf
virt_type=kvm
libvirt supports many virtualization types. Some laptops have no hardware virtualization support, in which case change this to qemu.
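A quick way to check which one you need, as a sketch (the flags come straight from /proc/cpuinfo on Linux):
# Counts the CPU flags that indicate VT-x/AMD-V; 0 means no KVM support, so use virt_type=qemu.
egrep -c '(vmx|svm)' /proc/cpuinfo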
[root@linux-node2 ~]# /etc/init.d/openstack-nova-compute restart
Stopping openstack-nova-compute: [ OK ]
Starting openstack-nova-compute: [ OK ]
Once the instance is created, check it. Sometimes OpenStack misbehaves in odd ways: at first the "Usage" page would not display anything for me, and restarting all of the OpenStack services fixed it.
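A blunt but effective sketch of that restart, relying on the openstack-* init script naming used throughout this install (CentOS 6 style):
# Restart every installed OpenStack service in one pass.
for s in /etc/init.d/openstack-*; do $s restart; done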
First let's enable DHCP, because right now the virtual machines cannot get an IP address, and nothing adds the required rules to iptables automatically. A few words about DHCP: in my production environment I do not use neutron's DHCP agent; the physical switches handle the routing and DHCP is provided there, so running both at once would conflict. Here in the lab, let's configure the agent on the control node.
[root@linux-node1 ~]# vim /etc/neutron/dhcp_agent.ini
debug = False
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = false
dhcp_confs = $state_path/dhcp
[root@linux-node1 ~]# grep "^[a-z]" /etc/neutron/dhcp_agent.ini
debug = true
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = false
dhcp_confs = $state_path/dhcp
[root@linux-node1 ~]# cd init.d
[root@linux-node1 init.d]# ls
openstack-cinder-api openstack-glance-api openstack-keystone openstack-neutron-server openstack-nova-compute openstack-nova-novncproxy
openstack-cinder-scheduler openstack-glance-registry openstack-neutron-dhcp-agent openstack-nova-api openstack-nova-conductor openstack-nova-scheduler
openstack-cinder-volume openstack-glance-scrubber openstack-neutron-linuxbridge-agent openstack-nova-cert openstack-nova-consoleauth openstack-nova-spicehtml5proxy
[root@linux-node1 init.d]# cp openstack-neutron-dhcp-agent /etc/init.d/
[root@linux-node1 init.d]# chmod +x /etc/init.d/openstack-neutron-dhcp-agent
[root@linux-node1 init.d]# chkconfig --add openstack-neutron-dhcp-agent
[root@linux-node1 init.d]# chkconfig openstack-neutron-dhcp-agent on
[root@linux-node1 init.d]# /etc/init.d/openstack-neutron-dhcp-agent start
Starting openstack-neutron-dhcp-agent: [ OK ]
Libvirt's default network runs its own dnsmasq on virbr0, which would clash with neutron's DHCP and is not needed here, so remove it:
[root@linux-node1 ~]# virsh net-list
Name State Autostart Persistent
--------------------------------------------------
default active yes yes
[root@linux-node1 ~]# virsh net-destroy default
Network default destroyed
[root@linux-node1 ~]# virsh net-undefine default
Network default has been undefined
[root@linux-node1 ~]# service libvirtd restart
Stopping libvirtd daemon: [ OK ]
Starting libvirtd daemon: [ OK ]
[root@linux-node1 ~]# virsh net-list
Name State Autostart Persistent
--------------------------------------------------
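To confirm that libvirt's dnsmasq is really gone, a quick check (the bracket trick keeps grep from matching its own process); note that virbr0 has also disappeared from the ifconfig output below:
# Should print nothing now; once neutron's DHCP agent serves a network, its own dnsmasq will show up here instead.
ps aux | grep '[d]nsmasq'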
[root@linux-node1 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:3B:15:9F
inet addr:192.168.33.11 Bcast:192.168.33.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe3b:159f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4664 errors:0 dropped:0 overruns:0 frame:0
TX packets:4630 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:843472 (823.7 KiB) TX bytes:2029897 (1.9 MiB)
eth1 Link encap:Ethernet HWaddr 00:0C:29:3B:15:A9
inet6 addr: fe80::20c:29ff:fe3b:15a9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3234 (3.1 KiB) TX bytes:2700 (2.6 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:48874 errors:0 dropped:0 overruns:0 frame:0
TX packets:48874 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:15529706 (14.8 MiB) TX bytes:15529706 (14.8 MiB)