Note that although this step configures the network, it mainly prepares data structures; the actual devices are not created here.
When creating a virtual machine we specify which private network it should be placed in, so all of this information must be ready before the real devices are built.
The topic here is network creation, and this step should in fact be done before the virtual machine is created.
The simplest scenario is to create the network with a script like the following:
#!/bin/bash
TENANT_NAME="openstack"
TENANT_NETWORK_NAME="openstack-net"
TENANT_SUBNET_NAME="${TENANT_NETWORK_NAME}-subnet"
TENANT_ROUTER_NAME="openstack-router"
FIXED_RANGE="192.168.0.0/24"
NETWORK_GATEWAY="192.168.0.1"
PUBLIC_GATEWAY="172.24.1.1"
PUBLIC_RANGE="172.24.1.0/24"
PUBLIC_START="172.24.1.100"
PUBLIC_END="172.24.1.200"

TENANT_ID=$(keystone tenant-list | grep " $TENANT_NAME " | awk '{print $2}')

# (1) Create the tenant network
TENANT_NET_ID=$(neutron net-create --tenant_id $TENANT_ID $TENANT_NETWORK_NAME --provider:network_type gre --provider:segmentation_id 1 | grep " id " | awk '{print $4}')
# (2) Create the tenant subnet
TENANT_SUBNET_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 --name $TENANT_SUBNET_NAME $TENANT_NET_ID $FIXED_RANGE --gateway $NETWORK_GATEWAY --dns_nameservers list=true 8.8.8.8 | grep " id " | awk '{print $4}')
# (3) Create the router
ROUTER_ID=$(neutron router-create --tenant_id $TENANT_ID $TENANT_ROUTER_NAME | grep " id " | awk '{print $4}')
# (4) Attach the tenant subnet to the router
neutron router-interface-add $ROUTER_ID $TENANT_SUBNET_ID
# (5) Create the external network
neutron net-create public --router:external=True
# (6) Create the external subnet with a floating-IP allocation pool
neutron subnet-create --ip_version 4 --gateway $PUBLIC_GATEWAY public $PUBLIC_RANGE --allocation-pool start=$PUBLIC_START,end=$PUBLIC_END --disable-dhcp --name public-subnet
# (7) Set the router's gateway to the external network
neutron router-gateway-set ${TENANT_ROUTER_NAME} public
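The `grep " id " | awk '{print $4}'` idiom in the script pulls the id column out of the ASCII tables the old keystone/neutron CLIs print. For completeness, the same extraction can be sketched in Python (the sample table and UUID below are illustrative, not real neutron output):

```python
# Sketch: replicate the script's grep/awk id extraction in Python.
def extract_field(table, field):
    """Pull a value out of the "| Field | Value |" ASCII-table
    output of the old keystone/neutron CLIs."""
    for line in table.splitlines():
        cells = [c.strip() for c in line.strip().strip('|').split('|')]
        if len(cells) == 2 and cells[0] == field:
            return cells[1]
    return None

# Illustrative table in the shape `neutron net-create` used to print.
sample = """
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 3a1bde0c-1111-2222-3333-444444444444 |
| name  | openstack-net                        |
+-------+--------------------------------------+
"""

print(extract_field(sample, "id"))  # 3a1bde0c-1111-2222-3333-444444444444
```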
With this flow, the path from the virtual network to the physical network is logically connected.
The underlying devices, however, are actually created by concrete commands, which I have summarized.
There are of course more complex scenarios; refer to this article.
Before creating an Instance you of course need an Image, and Images turn out to be a big topic in themselves.
In OpenStack, the two image formats mainly used with KVM are RAW and qcow2.
The raw format is simple and easy to convert to other formats; it needs file-system support to be stored as a sparse file, and its performance is relatively high.
qcow2 grows dynamically and, compared with raw, has the following advantages:
For the formats and their characteristics in detail, see the following article:
QEMU KVM libvirt Manual (4) – images
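The two formats are easy to tell apart programmatically: a qcow2 file starts with the 4-byte magic `QFI\xfb` followed by a big-endian version number, while a raw image has no header at all. A minimal sketch (the helper name is mine; `qemu-img info` is the authoritative tool):

```python
import struct

QCOW2_MAGIC = b'QFI\xfb'  # bytes 0x51 0x46 0x49 0xFB at offset 0

def detect_format(header):
    """Classify an image from its first 8 bytes.

    qcow2 starts with the magic plus a big-endian version field;
    raw has no header, so anything else is treated as raw here
    (a simplification for illustration).
    """
    if header[:4] == QCOW2_MAGIC:
        version = struct.unpack('>I', header[4:8])[0]
        return 'qcow2 (v%d)' % version
    return 'raw'

print(detect_format(QCOW2_MAGIC + struct.pack('>I', 3)))  # qcow2 (v3)
print(detect_format(b'\x00' * 8))                         # raw
```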
There are several ways to create an image.
One way is virt-install: point the hard disk at an image file, boot a virtual machine from CDROM, go through the normal installation process, and once the operating system is installed, process and compress the image with qemu-img to produce the final image.
See this article.
A more advanced method now exists: libguestfs, whose virt-builder can easily build the image you want from an existing base image.
See this article.
Of course an image usable in OpenStack is far more than just an installed operating system.
The OpenStack Virtual Machine Image Guide describes the many requirements on a Linux image in detail.
A few additional points:
Once a Linux image is installed, always test it:
Windows images are much more complicated; Windows is really not cloud-friendly.
For cloud-init, see the following articles:
http://cloudinit.readthedocs.org/en/latest/index.html
http://www.scalehorizontally.com/2013/02/24/introduction-to-cloud-init/
In Ubuntu, cloud-init mainly consists of the following parts.
Its configuration files live under /etc/cloud; the default cloud.cfg is shown below:
root@dfasdfsdafasdf:/etc/cloud# cat cloud.cfg
# The top level settings are used as module
# and system configuration.
# A set of users which may be applied and/or used by various modules
# when a 'default' entry is found it will reference the 'default_user'
# from the distro configuration specified below
users:
- default
# If this is set, 'root' will not be able to ssh in and they
# will get a message to login instead as the above $user (ubuntu)
disable_root: true
# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: false
# Example datasource config
# datasource:
# Ec2:
# metadata_urls: [ 'blah.com' ]
# timeout: 5 # (defaults to 50 seconds)
# max_wait: 10 # (defaults to 120 seconds)
# The modules that run in the 'init' stage
cloud_init_modules:
- migrator
- seed_random
- bootcmd
- write-files
- growpart
- resizefs
- set_hostname
- update_hostname
- update_etc_hosts
- ca-certs
- rsyslog
- users-groups
- ssh
# The modules that run in the 'config' stage
cloud_config_modules:
# Emit the cloud config ready event
# this can be used by upstart jobs for 'start on cloud-config'.
- emit_upstart
- disk_setup
- mounts
- ssh-import-id
- locale
- set-passwords
- grub-dpkg
- apt-pipelining
- apt-configure
- package-update-upgrade-install
- landscape
- timezone
- puppet
- chef
- salt-minion
- mcollective
- disable-ec2-metadata
- runcmd
- byobu
# The modules that run in the 'final' stage
cloud_final_modules:
- rightscale_userdata
- scripts-vendor
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- ssh-authkey-fingerprints
- keys-to-console
- phone-home
- final-message
- power-state-change
# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
# This will affect which distro class gets used
distro: ubuntu
# Default user name + that default users groups (if added/used)
default_user:
name: ubuntu
lock_passwd: True
gecos: Ubuntu
groups: [adm, audio, cdrom, dialout, dip, floppy, netdev, plugdev, sudo, video]
sudo: ["ALL=(ALL) NOPASSWD:ALL"]
shell: /bin/bash
# Other config here will be given to the distro class and/or path classes
paths:
cloud_dir: /var/lib/cloud/
templates_dir: /etc/cloud/templates/
upstart_dir: /etc/init/
package_mirrors:
- arches: [i386, amd64]
failsafe:
primary: http://archive.ubuntu.com/ubuntu
security: http://security.ubuntu.com/ubuntu
search:
primary:
- http://%(ec2_region)s.ec2.archive.ubuntu.com/ubuntu/
- http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/
security: []
- arches: [armhf, armel, default]
failsafe:
primary: http://ports.ubuntu.com/ubuntu-ports
security: http://ports.ubuntu.com/ubuntu-ports
ssh_svcname: ssh
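The same module machinery is driven per-instance by user data. A minimal `#cloud-config` fragment (all values illustrative) that exercises some of the modules listed above might look like:

```yaml
#cloud-config
hostname: demo-vm                # consumed by set_hostname / update_hostname
ssh_authorized_keys:
  - ssh-rsa AAAA... user@host    # installed by the users-groups / ssh modules
packages:
  - qemu-utils                   # package-update-upgrade-install
runcmd:
  - echo booted > /tmp/first-boot   # runcmd: runs once, on first boot
```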
The working directory is /var/lib/cloud:
root@dfasdfsdafasdf:/var/lib/cloud/instance# ls
boot-finished datasource obj.pkl sem user-data.txt.i vendor-data.txt.i
cloud-config.txt handlers scripts user-data.txt vendor-data.txt
Then there is the cloud-init command itself:
/usr/bin/cloud-init
If we open it we find that it is a Python script. Running /usr/bin/cloud-init init runs the modules listed under cloud_init_modules:; let us take resizefs as an example.
/usr/bin/cloud-init calls main_init, which in turn calls run_module_section.
That call lands in the Python package, so the other part of cloud-init is the Python code under
/usr/lib/python2.7/dist-packages/cloudinit
There we find the file /usr/lib/python2.7/dist-packages/cloudinit/config/cc_resizefs.py
which contains:
def _resize_btrfs(mount_point, devpth): # pylint: disable=W0613
return ('btrfs', 'filesystem', 'resize', 'max', mount_point)
def _resize_ext(mount_point, devpth): # pylint: disable=W0613
return ('resize2fs', devpth)
def _resize_xfs(mount_point, devpth): # pylint: disable=W0613
return ('xfs_growfs', devpth)
def _resize_ufs(mount_point, devpth): # pylint: disable=W0613
return ('growfs', devpth)
So this, at last, is where the resize commands come from.
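The dispatch around those helpers can be sketched as follows: cc_resizefs maps the detected filesystem type to one of the command builders above and then executes the result. The table and the `resize_command` wrapper below are my reconstruction for illustration, not the exact upstream code:

```python
# Reconstruction of the cc_resizefs dispatch (helper bodies are taken
# from the snippet above; the prefix table and wrapper are assumptions).
def _resize_btrfs(mount_point, devpth):
    return ('btrfs', 'filesystem', 'resize', 'max', mount_point)

def _resize_ext(mount_point, devpth):
    return ('resize2fs', devpth)

def _resize_xfs(mount_point, devpth):
    return ('xfs_growfs', devpth)

def _resize_ufs(mount_point, devpth):
    return ('growfs', devpth)

# Keyed on filesystem-name prefixes, so 'ext' covers ext2/ext3/ext4.
RESIZE_FS_PREFIXES = (
    ('btrfs', _resize_btrfs),
    ('ext', _resize_ext),
    ('xfs', _resize_xfs),
    ('ufs', _resize_ufs),
)

def resize_command(fstype, mount_point, devpth):
    """Pick the command cloud-init would exec to grow the filesystem."""
    for prefix, helper in RESIZE_FS_PREFIXES:
        if fstype.startswith(prefix):
            return helper(mount_point, devpth)
    raise ValueError('unsupported filesystem: %s' % fstype)

print(resize_command('ext4', '/', '/dev/vda1'))  # ('resize2fs', '/dev/vda1')
```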
Having covered creating images, we also need to understand modifying them: our file injection is exactly a modification of the image.
There are three ways: mounting a loop device, using QEMU's network block device (qemu-nbd), or, most advanced, libguestfs.
I have summarized this in an article.
For qemu-nbd there is this article:
QEMU KVM Libvirt Manual (6) – Network Block Device
For libguestfs I have also written some notes:
libguestfs Manual (2): guestfish command
For file injection there is this article:
Snapshots come in several kinds; see these articles:
QEMU KVM Libvirt Manual (5) – snapshots
[Repost] External (and Live) snapshots with libvirt
[Repost] Snapshotting with libvirt for qcow2 images