Installing OpenStack (Part 2): Installing the Nova Service and Setting Up a Compute Node

Installing nova

Preparing the network


1. Enable promiscuous mode on the network interface


ip link set eth0 promisc on


Verify that promiscuous mode is enabled


ip link show eth0


2. Create the bridge interface br100


yum -y install bridge-utils


yum -y install libvirt


service libvirtd restart


chkconfig libvirtd on


virsh iface-bridge eth0 br100


Creating the bridge device, method two


Disable NetworkManager


Create the bridge device


Create a bridge device named br100 and bridge it onto the eth0 NIC. Two steps complete the job: first, create a bridge-type device and give it an address-acquisition method, IP address, and other properties, much as you would configure a normal network interface, except that its type is Bridge; second, point the eth0 interface at the bridge device just defined. The eth0 interface itself no longer needs an IP address or related settings.


vim /etc/sysconfig/network-scripts/ifcfg-br100


DEVICE=br100

BOOTPROTO=none

DNS1=192.168.253.1

GATEWAY=192.168.253.1

IPADDR=192.168.253.139

NETMASK=255.255.255.0

NM_CONTROLLED=no

ONBOOT=yes

TYPE=Bridge

USERCTL=no

DELAY=0


vim /etc/sysconfig/network-scripts/ifcfg-eth0


DEVICE="eth0"

BOOTPROTO="none"

NM_CONTROLLED="no"

ONBOOT="yes"

TYPE=Ethernet

HWADDR=00:0C:29:83:A4:B5

IPV6INIT=no

USERCTL=no

BRIDGE=br100


Restart the network service




brctl show

bridge name     bridge id          STP enabled     interfaces

br100          8000.000c2983a4b5     yes          eth0

virbr0          8000.5254003986f3     yes          virbr0-nic
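After restarting the network, it is worth checking in a script that eth0 really ended up enslaved to br100. A minimal sketch, parsing `brctl show`-style output (the sample text below mirrors the table above; on a live host you would pipe the real `brctl show` output in instead):

```shell
# Hedged sketch: check that an interface appears as a member of a bridge
# in `brctl show` output. Sample input mirrors the table above.
bridge_has_iface() {
  # $1 = brctl show output, $2 = bridge name, $3 = interface name
  echo "$1" | awk -v br="$2" -v ifc="$3" '$1 == br && $NF == ifc { found=1 } END { exit !found }'
}

sample='bridge name     bridge id          STP enabled     interfaces
br100          8000.000c2983a4b5     yes          eth0
virbr0          8000.5254003986f3     yes          virbr0-nic'

if bridge_has_iface "$sample" br100 eth0; then
  echo "eth0 is enslaved to br100"
fi
```

On the real host, replace `echo "$sample"` inside the function with a direct call to `brctl show`.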


Start the messagebus service


service messagebus start


chkconfig messagebus on


Install nova


yum -y install openstack-utils memcached qpid-cpp-server


yum -y install openstack-nova


Initialize the nova database and create its user and password


openstack-db --init --service nova --password nova


Create the database users for nova


grant all privileges on nova.* to nova@localhost identified by 'nova';


grant all privileges on nova.* to nova@'%' identified by 'nova';


flush privileges;
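The three SQL statements above can also be generated and piped into the mysql client non-interactively. A hedged sketch (the helper function is hypothetical; the db/user/password values mirror the ones used above):

```shell
# Hedged sketch: emit the GRANT statements for a given database/user/password
# so they can be piped into `mysql -u root -p` instead of typed interactively.
nova_db_grants() {
  local db=$1 user=$2 pass=$3
  cat <<EOF
grant all privileges on ${db}.* to ${user}@localhost identified by '${pass}';
grant all privileges on ${db}.* to ${user}@'%' identified by '${pass}';
flush privileges;
EOF
}

nova_db_grants nova nova nova
# On the database host: nova_db_grants nova nova nova | mysql -u root -p
```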


Configure nova's database connection


vim /etc/nova/nova.conf


# AUTHENTICATION

auth_strategy=keystone


# LOGS/STATE

verbose=True

logdir=/var/log/nova

state_path=/var/lib/nova

lock_path=/var/lock/nova

rootwrap_config=/etc/nova/rootwrap.conf


# SCHEDULER

compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler


# VOLUMES

volume_driver=nova.volume.driver.ISCSIDriver

volume_group=nova-volume

volume_name_template=volume-%08x

iscsi_helper=tgtadm


# DATABASE

sql_connection=mysql://nova:nova@192.168.0.100/nova


# COMPUTE

libvirt_type=qemu

compute_driver=libvirt.LibvirtDriver

instance_name_template=instance-%08x

api_paste_config=/etc/nova/api-paste.ini


# set the instances path

# instances_path=/nova/instances


# New add

libvirt_nonblocking = True

libvirt_inject_partition = -1


# COMPUTE/APIS: if you have separate configs for separate services

# this flag is required for both nova-api and nova-compute

allow_resize_to_same_host=True


# APIS

osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions

ec2_dmz_host=192.168.0.100

s3_host=192.168.0.100


# Qpid

rpc_backend = nova.openstack.common.rpc.impl_qpid

qpid_hostname = 192.168.0.100


# GLANCE

image_service=nova.image.glance.GlanceImageService

glance_api_servers=192.168.0.100:9292


# NETWORK

network_manager=nova.network.manager.FlatDHCPManager

force_dhcp_release=True

dhcpbridge_flagfile=/etc/nova/nova.conf

# New Add

dhcpbridge = /usr/bin/nova-dhcpbridge


firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver


# Change my_ip to match each Compute host

my_ip=192.168.0.100

public_interface=eth0

vlan_interface=eth0

flat_network_bridge=br100

flat_interface=eth0

fixed_range=192.168.0.0/24


# NOVNC CONSOLE

novncproxy_base_url=http://192.168.0.100:6080/vnc_auto.html


# Change vncserver_proxyclient_address and vncserver_listen to match each compute host

vncserver_proxyclient_address=192.168.0.100

vncserver_listen=192.168.0.100


[keystone_authtoken]

auth_host = 192.168.0.100

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = nova

signing_dirname = /tmp/keystone-signing-nova
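Individual options like the ones above can also be set one at a time with `openstack-config --set` from openstack-utils (one use appears later in this article). Conceptually it inserts a `key = value` line under an ini section; a tiny, purely illustrative stand-in (the `ini_set` helper is hypothetical, not the real tool):

```shell
# Hedged sketch: a toy stand-in for `openstack-config --set` that appends
# "key = value" after a section header, creating the section if missing.
# For illustration only; on a real host use openstack-config itself.
ini_set() {
  # $1 = file, $2 = section, $3 = key, $4 = value
  local file=$1 section=$2 key=$3 value=$4
  grep -q "^\[$section\]" "$file" || printf '[%s]\n' "$section" >> "$file"
  sed -i "/^\[$section\]/a $key = $value" "$file"
}

conf=$(mktemp)
ini_set "$conf" DEFAULT auth_strategy keystone
ini_set "$conf" keystone_authtoken admin_user nova
grep auth_strategy "$conf"
```

Note this toy version always appends and never replaces an existing key, which is one reason the real tool is preferable.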




Install libguestfs-tools


yum -y install libguestfs-tools


Set the libvirt type to qemu


openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu


Create the symlink qemu-kvm needs


ln -sv /usr/libexec/qemu-kvm /usr/bin/qemu


Restart the libvirtd service


service libvirtd restart


Sync the nova database schema


nova-manage db sync


Install and configure the qpid message queue service


vim /etc/qpidd.conf


auth=no


service qpidd restart


chkconfig qpidd on


Starting the nova services


First create the lock directory nova needs


mkdir /var/lock/nova


chown -R nova.nova /var/lock/nova/


Start the nova-related services and enable them at boot


compute, api, network, scheduler, console, cert


for svc in api compute network scheduler cert console;do service openstack-nova-$svc restart;chkconfig openstack-nova-$svc on; done


Check the service status


nova-manage service list
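`nova-manage service list` marks a healthy service with `:-)` in the State column and a dead one with `XXX`, so the output is easy to check in a script. A hedged sketch counting dead services (the sample output below is illustrative and its column layout an assumption; pipe the real command in on a live host):

```shell
# Hedged sketch: count services that `nova-manage service list` reports dead.
# Healthy services show ":-)" in the State column, dead ones show "XXX".
count_dead() {
  grep -c 'XXX' || true   # grep -c exits non-zero when the count is 0
}

sample='Binary           Host      Zone   Status     State Updated_At
nova-compute     node1     nova   enabled    :-)   2013-01-01 00:00:00
nova-network     node1     nova   enabled    XXX   2013-01-01 00:00:00'

echo "$sample" | count_dead
```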


Check the logs


grep -i error /var/log/nova/*




Create the nova network


nova-manage network create  --label=private --multi_host=T --fixed_range_v4=192.168.0.0/24 --bridge_interface=eth0 --bridge=br100 --num_networks=1 --network_size=256
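The `--network_size=256` value matches the `/24` in `--fixed_range_v4`: a `/N` IPv4 prefix contains 2^(32 - N) addresses. A quick shell check of that arithmetic:

```shell
# Address count of a /N IPv4 prefix: 2^(32 - N).
# A /24 yields 256 addresses, matching --network_size=256 above.
cidr_size() {
  echo $(( 1 << (32 - $1) ))
}

cidr_size 24   # 256
```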


nova-manage network list




Register the Nova Compute API with keystone (this must be run as the keystone admin user, so source the admin environment variables first)


keystone service-create --name=nova --type=compute --description="Nova Compute Service"

+-------------+----------------------------------+

|   Property  |              Value               |

+-------------+----------------------------------+

| description |       Nova Compute Service       |

|      id     | 875164c08b7c43b0b0d3116007655942 |

|     name    |               nova               |

|     type    |             compute              |

+-------------+----------------------------------+


Create the endpoint


keystone endpoint-create --service-id 875164c08b7c43b0b0d3116007655942 --publicurl "http://192.168.0.100:8774/v1.1/\$(tenant_id)s" --adminurl "http://192.168.0.100:8774/v1.1/\$(tenant_id)s" --internalurl "http://192.168.0.100:8774/v1.1/\$(tenant_id)s"

+-------------+----------------------------------------------+

|   Property  |                    Value                     |

+-------------+----------------------------------------------+

|   adminurl  | http://192.168.0.100:8774/v1.1/$(tenant_id)s |

|      id     |       cbac1b85c41349c8ac49a819e43385a7       |

| internalurl | http://192.168.0.100:8774/v1.1/$(tenant_id)s |

|  publicurl  | http://192.168.0.100:8774/v1.1/$(tenant_id)s |

|    region   |                  regionOne                   |

|  service_id |       875164c08b7c43b0b0d3116007655942       |

+-------------+----------------------------------------------+
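The service id printed by service-create has to be pasted into endpoint-create by hand. As a sketch, assuming the table format shown above (and reusing its sample id as stand-in input), the id can be extracted with awk so the two steps chain in a script:

```shell
# Hedged sketch: pull the id field out of keystone's +---+---+ table output
# so endpoint-create can be scripted instead of copy-pasting the id.
table_id() {
  awk -F'|' '$2 ~ /^[[:space:]]*id[[:space:]]*$/ { gsub(/ /, "", $3); print $3 }'
}

sample='+-------------+----------------------------------+
|   Property  |              Value               |
| description |       Nova Compute Service       |
|      id     | 875164c08b7c43b0b0d3116007655942 |
+-------------+----------------------------------+'

SERVICE_ID=$(echo "$sample" | table_id)
echo "$SERVICE_ID"
# On a live host:
# SERVICE_ID=$(keystone service-create --name=nova --type=compute \
#   --description="Nova Compute Service" | table_id)
```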




Running a VM instance

The default security group


nova secgroup-list shows the security groups


nova secgroup-list

+---------+-------------+

| Name    | Description |

+---------+-------------+

| default | default     |

+---------+-------------+


nova secgroup-add-rule defines access rules for a security group; the command below allows all IP addresses to reach the associated VM instances over TCP port 22


nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

+-------------+-----------+---------+-----------+--------------+

| IP Protocol | From Port | To Port | IP Range  | Source Group |

+-------------+-----------+---------+-----------+--------------+

| tcp         | 22        | 22      | 0.0.0.0/0 |              |

+-------------+-----------+---------+-----------+--------------+


Allow all hosts to ping the instances by opening the ICMP protocol


nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

+-------------+-----------+---------+-----------+--------------+

| IP Protocol | From Port | To Port | IP Range  | Source Group |

+-------------+-----------+---------+-----------+--------------+

| icmp        | -1        | -1      | 0.0.0.0/0 |              |

+-------------+-----------+---------+-----------+--------------+
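The two rules above can be added in one loop. A small sketch, written as a dry run (the `echo` only prints each command; drop it to actually call nova):

```shell
# Sketch: add the SSH (tcp/22) and ping (icmp) rules above in one loop.
# `echo` makes this a dry run; remove it to apply the rules for real.
for rule in "tcp 22 22" "icmp -1 -1"; do
  echo nova secgroup-add-rule default $rule 0.0.0.0/0
done
```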


SSH public key injection


Generate a key pair with ssh-keygen, then register its public key with the compute service using nova keypair-add


ssh-keygen -t rsa -P ''


nova keypair-add  --pub-key /root/.ssh/id_rsa.pub testkey


Show the registered key


nova keypair-list


+---------+-------------------------------------------------+

| Name    | Fingerprint                                     |

+---------+-------------------------------------------------+

| testkey | 7a:34:18:49:1d:60:30:29:18:66:69:d2:c4:c6:c0:2b |

+---------+-------------------------------------------------+


Check the local key file's fingerprint


ssh-keygen -l -f /root/.ssh/id_rsa.pub


Make sure every node is running properly


List the available flavors


nova flavor-list


Create a flavor


nova flavor-create --swap 256 flavor.cirros 6 128 2 2
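For reference, the positional arguments in this Folsom-era syntax are name, id, ram (MB), disk (GB), and vcpus, with --swap also in MB. A tiny, purely illustrative helper (hypothetical, not part of nova) that rebuilds the same command string makes the mapping explicit:

```shell
# Hypothetical helper, for illustration only: map named parameters onto the
# positional `nova flavor-create` arguments (name, id, ram MB, disk GB, vcpus).
build_flavor_cmd() {
  local name=$1 id=$2 ram=$3 disk=$4 vcpus=$5 swap=$6
  echo "nova flavor-create --swap $swap $name $id $ram $disk $vcpus"
}

build_flavor_cmd flavor.cirros 6 128 2 2 256
# nova flavor-create --swap 256 flavor.cirros 6 128 2 2
```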


List the image files


nova image-list

+--------------------------------------+---------------------+--------+--------+

| ID                                   | Name                | Status | Server |

+--------------------------------------+---------------------+--------+--------+

| b8964ced-5702-4be1-9644-32b14d9ebc25 | cirros-0.3.0-i386   | ACTIVE |        |

| edc31b1b-d3bf-4c76-885c-1f56a9eee3bc | cirros-0.3.0-x86_64 | ACTIVE |        |

+--------------------------------------+---------------------+--------+--------+


Boot an instance from an image


nova boot --flavor 1 --image edc31b1b-d3bf-4c76-885c-1f56a9eee3bc --key_name testkey --security_group default cirros1


Check the state of the booted instance with nova list


+--------------------------------------+---------+--------+---------------------+

| ID                                   | Name    | Status | Networks            |

+--------------------------------------+---------+--------+---------------------+

| 1a09d053-ea12-4b16-ace0-e1ec8d842360 | cirros1 | ACTIVE | private=192.168.0.2 |

+--------------------------------------+---------+--------+---------------------+



Login test


nova console-log cirros1


wget: server returned error: HTTP/1.1 404 Not Found

cloud-userdata: failed to read user data url: http://169.254.169.254/2009-04-04/user-data

WARN: /etc/rc3.d/S99-cloud-userdata failed

 ____               ____  ____

/ __/ __ ____ ____ / __ \/ __/

/ /__ / // __// __// /_/ /\ \

\___//_//_/  /_/   \____/___/

http://launchpad.net/cirros



login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.

cirros login:


The console session may flash and exit; log in over SSH instead


ssh -l cirros 192.168.0.2



Notes

brctl not found

Fix: yum install bridge-utils
