The previous post covered the installation and configuration of the controller node; this one covers the configuration of the remaining OpenStack components.
Nova Compute Node Configuration

1. On node2, modify the nova configuration file:
[root@openstack-node2 ~]# egrep -v "^$|^#" /etc/nova/nova.conf
[DEFAULT]
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:openstack@192.168.10.11
[api]
auth_strategy=keystone
[glance]
api_servers=http://192.168.10.11:9292
[keystone_authtoken]
auth_uri = http://192.168.10.11:5000
auth_url = http://192.168.10.11:35357
memcached_servers = 192.168.10.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.10.11:35357/v3
username = placement
password = placement
[vnc]
enabled=true
vncserver_listen=192.168.10.12
vncserver_proxyclient_address=192.168.10.12
novncproxy_base_url=http://192.168.10.11:6080/vnc_auto.html
2. Check whether the host supports hardware virtualization. If the command returns 0, hardware virtualization is not available and qemu must be used:
egrep -c '(vmx|svm)' /proc/cpuinfo
If it returns 0, configure /etc/nova/nova.conf to use qemu:
[libvirt]
# ...
virt_type = qemu
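The check above can be scripted so each compute node picks the right driver itself. A minimal sketch, assuming a Linux host with /proc/cpuinfo; it only prints the value that would go into [libvirt]:

```shell
#!/bin/sh
# Count CPU flags that indicate hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V).
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null)
count=${count:-0}

if [ "$count" -eq 0 ]; then
    virt_type=qemu    # no hardware support: pure software emulation
else
    virt_type=kvm     # hardware support present
fi
echo "virt_type = $virt_type"
```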
3. Start the services:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
4. Configure node3 the same way as node2; only the following two parameters in nova.conf differ:
[root@openstack-node3 ~]# grep 192.168.10.13 /etc/nova/nova.conf
vncserver_listen=192.168.10.13
vncserver_proxyclient_address=192.168.10.13
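When node3's nova.conf is copied from node2, the two VNC addresses can be rewritten with sed instead of edited by hand. A sketch run against a throwaway copy so it is safe anywhere; on node3 the real target would be /etc/nova/nova.conf:

```shell
#!/bin/sh
# Scratch copy standing in for the nova.conf copied from node2.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[vnc]
enabled=true
vncserver_listen=192.168.10.12
vncserver_proxyclient_address=192.168.10.12
EOF

# Rewrite node2's address (.12) to node3's (.13) in the two VNC options.
sed -i 's/^\(vncserver_[a-z_]*=\)192\.168\.10\.12$/\1192.168.10.13/' "$conf"

changed=$(grep -c '192\.168\.10\.13' "$conf")
rm -f "$conf"
echo "$changed VNC options updated"
```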
5. Verify on the controller node:
[root@openstack-node1 ~]# source admin-openstack.sh
[root@openstack-node1 ~]# openstack compute service list
+----+------------------+-----------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host            | Zone     | Status  | State | Updated At                 |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | openstack-node1 | internal | enabled | up    | 2018-01-10T10:08:08.000000 |
|  2 | nova-scheduler   | openstack-node1 | internal | enabled | up    | 2018-01-10T10:08:09.000000 |
|  3 | nova-conductor   | openstack-node1 | internal | enabled | up    | 2018-01-10T10:08:09.000000 |
|  7 | nova-compute     | openstack-node2 | nova     | enabled | up    | 2018-01-10T10:08:11.000000 |
|  8 | nova-compute     | openstack-node3 | nova     | enabled | up    | 2018-01-10T10:08:14.000000 |
+----+------------------+-----------------+----------+---------+-------+----------------------------+
6. Discover the compute hosts and map them into the cell database:
[root@openstack-node1 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ddc4df46-fd96-4778-b312-95e8ad37e3d3
Found 2 unmapped computes in cell: ddc4df46-fd96-4778-b312-95e8ad37e3d3
Checking host mapping for compute host 'openstack-node2': eae670ef-c799-4517-8232-525a550e2658
Creating host mapping for compute host 'openstack-node2': eae670ef-c799-4517-8232-525a550e2658
Checking host mapping for compute host 'openstack-node3': f6444b6a-5850-49d1-9aff-1dd0fcab594a
Creating host mapping for compute host 'openstack-node3': f6444b6a-5850-49d1-9aff-1dd0fcab594a
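Rather than rerunning discover_hosts by hand every time a compute node is added, nova-scheduler can discover hosts periodically. A hedged sketch of the relevant nova.conf option on the controller (interval in seconds; a negative value, the default, disables periodic discovery):

```ini
[scheduler]
# Look for new, unmapped compute hosts every 5 minutes.
discover_hosts_in_cells_interval = 300
```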
Neutron Controller Node Configuration

1. Create a neutron user with the password neutron:
[root@openstack-node1 ~]# source admin-openstack.sh
[root@openstack-node1 ~]# openstack user create --domain default --password neutron neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | d97218e6fab1493583f2a39dba60c3d7 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
2. Add the neutron user to the service project and grant it the admin role:
# openstack role add --project service --user neutron admin
3. Create the neutron service entity:
# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | a0a7183cb0b3448d887e0a3e1308a1c3 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
4. Create the network service endpoints for the internal, public, and admin interfaces:
# openstack endpoint create --region RegionOne network public http://192.168.10.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ca6dd31fee654983b216264d7851e1f6 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a0a7183cb0b3448d887e0a3e1308a1c3 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.10.11:9696        |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne network internal http://192.168.10.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 21fc021061a249feb934f5f94977d848 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a0a7183cb0b3448d887e0a3e1308a1c3 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.10.11:9696        |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne network admin http://192.168.10.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 87b0af4ecd1c4d8fa5b08e3d1855ab2f |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a0a7183cb0b3448d887e0a3e1308a1c3 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.10.11:9696        |
+--------------+----------------------------------+
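The three endpoint-create calls differ only in the interface name, so they are often scripted. A minimal dry-run sketch that just prints the commands; on a real controller the output could be piped to sh:

```shell
#!/bin/sh
# Generate one "openstack endpoint create" command per interface.
url="http://192.168.10.11:9696"
cmds=$(for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne network $iface $url"
done)
echo "$cmds"
```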
5. Prepare a provider network setup by modifying /etc/neutron/neutron.conf as follows:
[root@openstack-node1 ~]# egrep -v "^$|^#" /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins =
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:openstack@192.168.10.11
[database]
connection = mysql+pymysql://neutron:neutron@192.168.10.11/neutron
[keystone_authtoken]
auth_uri = http://192.168.10.11:5000
auth_url = http://192.168.10.11:35357
memcached_servers = 192.168.10.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://192.168.10.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
6. Configure /etc/neutron/plugins/ml2/ml2_conf.ini:
[root@openstack-node1 ~]# egrep -v "^$|^#" /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = local,flat,vlan,gre,vxlan,geneve
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
7. Configure the Linux bridge agent in /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[root@openstack-node1 ~]# egrep -v "^$|^#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
[vxlan]
enable_vxlan = false
8. Configure the DHCP agent in /etc/neutron/dhcp_agent.ini:
[root@openstack-node1 ~]# egrep -v "^$|^#" /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
9. Configure the metadata agent in /etc/neutron/metadata_agent.ini:
[root@openstack-node1 ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
# ...
nova_metadata_host = 192.168.10.11
metadata_proxy_shared_secret = openstack    # the metadata shared secret
10. Configure the compute service (nova) to use the network service:
[root@openstack-node1 ~]# vim /etc/nova/nova.conf
[neutron]
url = http://192.168.10.11:9696
auth_url = http://192.168.10.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = openstack    # the metadata shared secret
11. Create the symbolic link:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
12. Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
13. Restart the nova-api service:
# systemctl restart openstack-nova-api.service
14. Start the Neutron services:
# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
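Enabling and starting the four units can be collapsed into a loop. A dry-run sketch that only prints the systemctl commands; remove the echo to manage the units for real:

```shell
#!/bin/sh
# Neutron units required on the controller for this provider-network setup.
services="neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent"

# Dry run: collect the commands instead of executing them.
cmds=$(for svc in $services; do
    echo "systemctl enable $svc.service"
    echo "systemctl start $svc.service"
done)
echo "$cmds"
```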
Neutron Compute Node Configuration

1. Copy the controller node's neutron.conf to the compute nodes, then remove the [database] settings:
scp -p /etc/neutron/neutron.conf 192.168.10.12:/etc/neutron/
scp -p /etc/neutron/neutron.conf 192.168.10.13:/etc/neutron/
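Compute nodes must not access the neutron database directly, so the copied file's connection option should be disabled. A sketch using sed on a scratch copy; on each compute node the real target would be /etc/neutron/neutron.conf:

```shell
#!/bin/sh
# Scratch file standing in for the neutron.conf copied from the controller.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[database]
connection = mysql+pymysql://neutron:neutron@192.168.10.11/neutron
EOF

# Comment out the database connection line.
sed -i 's/^connection *=/#&/' "$conf"

grep -q '^#connection' "$conf" && disabled=yes || disabled=no
rm -f "$conf"
echo "connection disabled: $disabled"
```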
2. Copy the controller node's /etc/neutron/plugins/ml2/linuxbridge_agent.ini to the compute nodes:
scp -p /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.10.12:/etc/neutron/plugins/ml2/
scp -p /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.10.13:/etc/neutron/plugins/ml2/
3. Configure nova.conf on the compute nodes:
# vim /etc/nova/nova.conf
[neutron]
url = http://192.168.10.11:9696
auth_url = http://192.168.10.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
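When rolling this change out to several compute nodes, appending the section can be scripted. A sketch against a scratch file, assuming the file does not already contain a [neutron] section (the grep guard makes it idempotent):

```shell
#!/bin/sh
# Scratch file standing in for /etc/nova/nova.conf on a compute node.
conf=$(mktemp)

# Append the [neutron] section only if it is not there yet.
if ! grep -q '^\[neutron\]' "$conf"; then
    cat >> "$conf" <<'EOF'
[neutron]
url = http://192.168.10.11:9696
auth_url = http://192.168.10.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
EOF
fi

added=$(grep -c '^\[neutron\]' "$conf")
rm -f "$conf"
echo "neutron sections: $added"
```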
4. With configuration complete, start the services:
# systemctl restart openstack-nova-compute.service
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
5. Verify on the controller node:
[root@openstack-node1 ~]# openstack network agent list
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host            | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| 11416545-1330-47fe-bf87-250094e31b5a | Metadata agent     | openstack-node1 | None              | :-)   | UP    | neutron-metadata-agent    |
| 28227c23-5871-405e-b773-7a63117aae5d | Linux bridge agent | openstack-node2 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 5058627e-1674-4332-87d9-bc4d957162bc | DHCP agent         | openstack-node1 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 709cbca7-501d-48e8-8115-9a6b076bde60 | Linux bridge agent | openstack-node1 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c713812b-77ac-47cb-8807-6f4b0c6fe99f | Linux bridge agent | openstack-node3 | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
Launching an Instance

1. Create a provider network:
[root@openstack-node1 ~]# source admin-openstack.sh
[root@openstack-node1 ~]# openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2018-01-11T01:55:44Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | 09b8f3e8-14a2-40af-a62d-94d6c19462d8 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | provider                             |
| port_security_enabled     | True                                 |
| project_id                | 0daaf987a867495fa0937a16b359c729     |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | External                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2018-01-11T01:55:44Z                 |
+---------------------------+--------------------------------------+
2. Create a subnet:
openstack subnet create --network provider \
  --allocation-pool start=192.168.10.100,end=192.168.10.150 \
  --dns-nameserver 192.168.10.2 --gateway 192.168.10.2 \
  --subnet-range 192.168.10.0/24 provider
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| allocation_pools        | 192.168.10.100-192.168.10.150        |
| cidr                    | 192.168.10.0/24                      |
| created_at              | 2018-01-11T02:00:36Z                 |
| description             |                                      |
| dns_nameservers         | 192.168.10.2                         |
| enable_dhcp             | True                                 |
| gateway_ip              | 192.168.10.2                         |
| host_routes             |                                      |
| id                      | a826d8c2-2c3b-4b36-8c84-1b2a69b3bd06 |
| ip_version              | 4                                    |
| ipv6_address_mode       | None                                 |
| ipv6_ra_mode            | None                                 |
| name                    | provider                             |
| network_id              | 09b8f3e8-14a2-40af-a62d-94d6c19462d8 |
| project_id              | 0daaf987a867495fa0937a16b359c729     |
| revision_number         | 0                                    |
| segment_id              | None                                 |
| service_types           |                                      |
| subnetpool_id           | None                                 |
| tags                    |                                      |
| updated_at              | 2018-01-11T02:00:36Z                 |
| use_default_subnet_pool | None                                 |
+-------------------------+--------------------------------------+
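The allocation pool above bounds how many addresses the subnet can hand out to instances. A quick sanity check of the pool size, assuming both endpoints sit inside the same /24:

```shell
#!/bin/sh
# Pool endpoints from the subnet create command above.
start=192.168.10.100
end=192.168.10.150

# Both addresses are inside 192.168.10.0/24, so only the
# last octet needs to be compared.
start_host=${start##*.}
end_host=${end##*.}
pool_size=$((end_host - start_host + 1))
echo "allocation pool holds $pool_size addresses"
```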
3. Create an m1.nano flavor:
[root@openstack-node1 ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
4. Using the demo credentials, generate a key pair:
[root@openstack-node1 ~]# source demo-openstack.sh
[root@openstack-node1 ~]# ssh-keygen -q -N ""
[root@openstack-node1 ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | ec:90:b7:c3:9b:bf:27:ff:89:0e:38:a8:5d:ce:57:fe |
| name        | mykey                                           |
| user_id     | 8c10323be99e4597a099db1ba3b79627                |
+-------------+-------------------------------------------------+
[root@openstack-node1 ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | ec:90:b7:c3:9b:bf:27:ff:89:0e:38:a8:5d:ce:57:fe |
+-------+-------------------------------------------------+
5. Still using the demo credentials, add security group rules permitting ICMP and TCP port 22:
[root@openstack-node1 ~]# openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2018-01-11T02:09:03Z                 |
| description       |                                      |
| direction         | ingress                              |
| ether_type        | IPv4                                 |
| id                | 34a781c5-627e-4349-8fde-f569348989eb |
| name              | None                                 |
| port_range_max    | None                                 |
| port_range_min    | None                                 |
| project_id        | d63f87c94e634aefbdf3fa48d4f43b18     |
| protocol          | icmp                                 |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 0                                    |
| security_group_id | 401f7dea-eb96-4e1f-b199-adc63b742f19 |
| updated_at        | 2018-01-11T02:09:03Z                 |
+-------------------+--------------------------------------+
[root@openstack-node1 ~]# openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2018-01-11T02:09:29Z                 |
| description       |                                      |
| direction         | ingress                              |
| ether_type        | IPv4                                 |
| id                | fd6d34e0-0615-4bc2-b371-84d33701250d |
| name              | None                                 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| project_id        | d63f87c94e634aefbdf3fa48d4f43b18     |
| protocol          | tcp                                  |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 0                                    |
| security_group_id | 401f7dea-eb96-4e1f-b199-adc63b742f19 |
| updated_at        | 2018-01-11T02:09:29Z                 |
+-------------------+--------------------------------------+
6. Check the environment before launching an instance:
[root@openstack-node1 ~]# source demo-openstack.sh
[root@openstack-node1 ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0  | m1.nano | 64  | 1    | 0         | 1     | True      |
+----+---------+-----+------+-----------+-------+-----------+
[root@openstack-node1 ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| dc655534-2821-47c1-b9c4-8687b52dfdbc | cirros | active |
+--------------------------------------+--------+--------+
[root@openstack-node1 ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 09b8f3e8-14a2-40af-a62d-94d6c19462d8 | provider | a826d8c2-2c3b-4b36-8c84-1b2a69b3bd06 |
+--------------------------------------+----------+--------------------------------------+
[root@openstack-node1 ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID                                   | Name    | Description            | Project                          |
+--------------------------------------+---------+------------------------+----------------------------------+
| 401f7dea-eb96-4e1f-b199-adc63b742f19 | default | Default security group | d63f87c94e634aefbdf3fa48d4f43b18 |
+--------------------------------------+---------+------------------------+----------------------------------+
7. Create and boot an instance, passing the provider network's ID as net-id:
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=09b8f3e8-14a2-40af-a62d-94d6c19462d8 --security-group default \
  --key-name mykey provider-instance
+-----------------------------+-----------------------------------------------+
| Field                       | Value                                         |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                        |
| OS-EXT-AZ:availability_zone |                                               |
| OS-EXT-STS:power_state      | NOSTATE                                       |
| OS-EXT-STS:task_state       | scheduling                                    |
| OS-EXT-STS:vm_state         | building                                      |
| OS-SRV-USG:launched_at      | None                                          |
| OS-SRV-USG:terminated_at    | None                                          |
| accessIPv4                  |                                               |
| accessIPv6                  |                                               |
| addresses                   |                                               |
| adminPass                   | 4H3dqBvxcRim                                  |
| config_drive                |                                               |
| created                     | 2018-01-11T02:17:37Z                          |
| flavor                      | m1.nano (0)                                   |
| hostId                      |                                               |
| id                          | 91b256f0-54f2-4df1-8ae4-3649670c7813          |
| image                       | cirros (dc655534-2821-47c1-b9c4-8687b52dfdbc) |
| key_name                    | mykey                                         |
| name                        | provider-instance                             |
| progress                    | 0                                             |
| project_id                  | d63f87c94e634aefbdf3fa48d4f43b18              |
| properties                  |                                               |
| security_groups             | name='401f7dea-eb96-4e1f-b199-adc63b742f19'   |
| status                      | BUILD                                         |
| updated                     | 2018-01-11T02:17:37Z                          |
| user_id                     | 8c10323be99e4597a099db1ba3b79627              |
| volumes_attached            |                                               |
+-----------------------------+-----------------------------------------------+
Note: once the instance is created successfully, the host's network layout changes. eth0 no longer carries the IP address; a bridge and a tap device for the instance are created, and the controller node's network also switches to bridged mode.
8. Check whether the instance was created successfully; a status of ACTIVE means it was:
[root@openstack-node1 ~]# openstack server list
+--------------------------------------+-------------------+--------+-------------------------+--------+---------+
| ID                                   | Name              | Status | Networks                | Image  | Flavor  |
+--------------------------------------+-------------------+--------+-------------------------+--------+---------+
| 91b256f0-54f2-4df1-8ae4-3649670c7813 | provider-instance | ACTIVE | provider=192.168.10.104 | cirros | m1.nano |
+--------------------------------------+-------------------+--------+-------------------------+--------+---------+
9. Get the noVNC web console URL for the provider-instance VM:
[root@openstack-node1 ~]# openstack console url show provider-instance
+-------+------------------------------------------------------------------------------------+
| Field | Value                                                                              |
+-------+------------------------------------------------------------------------------------+
| type  | novnc                                                                              |
| url   | http://192.168.10.11:6080/vnc_auto.html?token=0b5414ee-0928-4a37-9922-eb4c8512da88 |
+-------+------------------------------------------------------------------------------------+
10. Open this URL in a browser and check the instance's network status.
11. Log in to the instance from the controller node:
[root@openstack-node1 ~]# ping 192.168.10.104
PING 192.168.10.104 (192.168.10.104) 56(84) bytes of data.
64 bytes from 192.168.10.104: icmp_seq=1 ttl=64 time=0.967 ms
[root@openstack-node1 ~]# ssh cirros@192.168.10.104
...
Installing Horizon (Dashboard)

Horizon can be installed on any node: it only needs the correct Memcached connection settings, plus the httpd service started on whichever node hosts it. Here the dashboard is installed on the controller node.
1. Install openstack-dashboard:
# yum install openstack-dashboard -y
2. Edit /etc/openstack-dashboard/local_settings and set the following:
OPENSTACK_HOST = "192.168.10.11"
ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.10.11:11211',
    },
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
3. Restart the services:
# systemctl restart httpd.service memcached.service
4. Verify by logging in at http://192.168.10.11/dashboard with the admin/admin or demo/demo credentials.
Logged in as the demo user, instances can be created from the web UI.