CloudStack Learning Notes - 1

Environment preparation


 

 

VM configuration used for the lab

VMware Workstation
Two virtual machines
OS version: CentOS 6.6 x86_64
Memory: 4 GB
Network: both machines use NAT
Disk: add an extra 50 GB disk after the OS is installed
Extra: enable VT-x for both VMs

 

Background

CloudStack is modeled on Amazon's cloud.
GlusterFS is modeled on Google's distributed file system.
Hadoop likewise came about as a clone of Google's big-data stack.
CloudStack is written in Java.
OpenStack is written in Python.

CloudStack's architecture is similar to SaltStack's.

 

Download the packages

The following RPM packages can be downloaded from the official mirror:

http://cloudstack.apt-get.eu/centos/6/4.8/

Both the master and the agent need RPM packages. Note the path: the 6 means CentOS 6 and 4.8 is the CloudStack version.
The usage package is for billing/metering and is not used here.
The cli package is for calling Amazon AWS-style APIs and is not used here either.
This lab only uses three packages: management, common and agent.

Download the following packages:
cloudstack-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-baremetal-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-cli-4.8.0-1.el6.x86_64.rpm
cloudstack-common-4.8.0-1.el6.x86_64.rpm
cloudstack-management-4.8.0-1.el6.x86_64.rpm
cloudstack-usage-4.8.0-1.el6.x86_64.rpm

The commands are as follows:

[root@master1 ~]# mkdir /tools
[root@master1 ~]# cd /tools/
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-agent-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-baremetal-agent-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-cli-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-common-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-management-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-usage-4.8.0-1.el6.x86_64.rpm

  

Download the KVM system VM template. Only the master needs it; it is the image used by the system VMs.
systemvm64template-2016-05-18-4.7.1-kvm.qcow2.bz2

http://cloudstack.apt-get.eu/systemvm/4.6/
http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-kvm.qcow2.bz2
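
For example, it can be fetched into the same /tools directory used above (the 4.6.0 file is the one actually installed later in this walkthrough):

[root@master1 ~]# cd /tools/
[root@master1 tools]# wget http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-kvm.qcow2.bz2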



Getting started
Disable iptables and SELinux

sed  -i   's#SELINUX=enforcing#SELINUX=disabled#g'   /etc/selinux/config
setenforce  0
 chkconfig iptables off 
/etc/init.d/iptables  stop

Configure static IP addresses on both machines

[root@master1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.151
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
[root@master1 ~]# 
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.152
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
[root@agent1 ~]# 

Set the hostnames to master1 and agent1 respectively
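
On CentOS 6 this can be done with the hostname command plus /etc/sysconfig/network so it survives a reboot (a sketch for master1; repeat with agent1 on the other machine):

hostname master1
sed -i 's/^HOSTNAME=.*/HOSTNAME=master1/' /etc/sysconfig/network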

Configure the hosts file
cat >>/etc/hosts<<EOF
192.168.145.151 master1
192.168.145.152 agent1
EOF

Configure NTP

yum  install ntp -y
chkconfig ntpd on 
/etc/init.d/ntpd start
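
An optional check that ntpd is up and talking to its upstream servers is to list its peers:

ntpq -p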

Check with hostname --fqdn

[root@master1 ~]# hostname --fqdn
master1
[root@master1 ~]# 
[root@agent1 ~]# hostname --fqdn
agent1
[root@agent1 ~]# 

 

Install the EPEL repository on both machines; the default 163 mirror can provide the epel-release package.

yum   install  epel-release -y

Install NFS on the master.
This also pulls in rpcbind as a dependency.
NFS acts as secondary storage: it provides ISO files for the agent's VMs and is where snapshots are stored.

yum  install nfs-utils -y

 

Configure NFS on the master so the agent host can use it as secondary storage

[root@master1 ~]# cat /etc/exports 
/export/secondary   *(rw,async,no_root_squash,no_subtree_check)
[root@master1 ~]# 
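
If the NFS service is already running when /etc/exports is edited, the export list can be re-read without a restart (an optional step; in this walkthrough the service is started later anyway):

[root@master1 ~]# exportfs -ra
[root@master1 ~]# exportfs -v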

Create the mount directory on the master

[root@master1 ~]# mkdir /export/secondary  -p
[root@master1 ~]#

Do the same on the agent, but note that the agent creates a primary directory instead

[root@agent1 ~]# mkdir /export/primary  -p
[root@agent1 ~]# 

Format the disks
On the master (no partitioning here, the whole disk is used):

[root@master1 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.41.12 (17-May-2010)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@master1 ~]# 

On the agent:

[root@agent1 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.41.12 (17-May-2010)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@agent1 ~]# 

Mount the disks

On the master:
[root@master1 ~]# echo "/dev/sdb   /export/secondary  ext4  defaults  0  0">>/etc/fstab
[root@master1 ~]# mount -a
[root@master1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        35G  2.3G   31G   7% /
tmpfs           931M     0  931M   0% /dev/shm
/dev/sda1       380M   33M  328M   9% /boot
/dev/sdb         50G   52M   47G   1% /export/secondary
[root@master1 ~]# 

On the agent:
[root@agent1 ~]# echo "/dev/sdb   /export/primary  ext4  defaults  0  0">>/etc/fstab
[root@agent1 ~]# mount -a
[root@agent1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        35G  2.1G   32G   7% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/sda1       380M   33M  328M   9% /boot
/dev/sdb         50G   52M   47G   1% /export/primary
[root@agent1 ~]#

  

Configure NFS and iptables


 

First open the official documentation for configuring NFS, iptables, and so on:

http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.9/qig.html

Some companies can disable iptables while others must keep it on; here we configure NFS with iptables enabled.

On CentOS 6.x, add the NFS parameters below. They already exist in the file but are commented out; we simply append them to the end of the file.
This is done on the master.

LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020

Again, simply append them to the end of the file:

[root@master1 ~]# vim /etc/sysconfig/nfs
[root@master1 ~]# tail -10 /etc/sysconfig/nfs 
#
# To enable RDMA support on the server by setting this to
# the port the server should listen on
#RDMA_PORT=20049 
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020
[root@master1 ~]# 
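
The same lines can also be appended non-interactively with a heredoc instead of vim (same values as above):

cat >>/etc/sysconfig/nfs<<EOF
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020
EOF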


Before-and-after comparison of the iptables rules
On the master:

[root@master1 tools]# cat /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
[root@master1 tools]# vim /etc/sysconfig/iptables

Add the following rules.
An extra rule for port 80 is included to avoid more configuration later; it has nothing to do with NFS and is for the HTTP image repository set up further down.

-A INPUT  -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT  -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT  -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT  -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT  -m state --state NEW -p udp --dport 662 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 80 -j ACCEPT

After adding the rules, the file looks like this:

[root@master1 tools]# cat /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT  -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT  -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT  -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT  -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT  -m state --state NEW -p udp --dport 662 -j ACCEPT
-A INPUT  -m state --state NEW -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
[root@master1 tools]# 

 

Start the NFS service and iptables on the master

[root@master1 tools]# service iptables restart
iptables: Applying firewall rules:                         [  OK  ]
[root@master1 tools]# service rpcbind start
[root@master1 tools]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@master1 tools]# chkconfig rpcbind on
[root@master1 tools]# chkconfig nfs on
[root@master1 tools]# 

Check from the agent.
If the showmount command is missing, install the NFS package:
yum install nfs-utils -y

Check whether the master is exporting NFS:

[root@agent1 ~]# showmount -e 192.168.145.151
Export list for 192.168.145.151:
/export/secondary *
[root@agent1 ~]# 

Test whether it can be mounted:

[root@agent1 ~]# mount -t nfs 192.168.145.151:/export/secondary /mnt
[root@agent1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G  2.3G   31G   7% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             380M   33M  328M   9% /boot
/dev/sdb               50G   52M   47G   1% /export/primary
192.168.145.151:/export/secondary
                       50G   52M   47G   1% /mnt
[root@agent1 ~]# 

The test succeeded, so unmount it; the mount above was only a test.

[root@agent1 ~]# umount /mnt -lf
[root@agent1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        35G  2.3G   31G   7% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/sda1       380M   33M  328M   9% /boot
/dev/sdb         50G   52M   47G   1% /export/primary
[root@agent1 ~]# 

  

Install and configure CloudStack


 

Management server installation, performed on the master

[root@master1 tools]# ls
cloudstack-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-baremetal-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-cli-4.8.0-1.el6.x86_64.rpm
cloudstack-common-4.8.0-1.el6.x86_64.rpm
cloudstack-management-4.8.0-1.el6.x86_64.rpm
cloudstack-usage-4.8.0-1.el6.x86_64.rpm
systemvm64template-4.6.0-kvm.qcow2.bz2


[root@master1 tools]# yum install -y cloudstack-management-4.8.0-1.el6.x86_64.rpm cloudstack-common-4.8.0-1.el6.x86_64.rpm
[root@master1 tools]# rpm -qa | grep cloudstack
cloudstack-common-4.8.0-1.el6.x86_64
cloudstack-management-4.8.0-1.el6.x86_64
[root@master1 tools]# 

Install mysql-server on the master

[root@master1 tools]# yum  install mysql-server -y

 

Modify the MySQL configuration file:
under the [mysqld] section,
add the following parameters.

innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'

The final result:

[root@master1 tools]# vim /etc/my.cnf 
[root@master1 tools]# cat /etc/my.cnf 
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@master1 tools]# 

Start the MySQL service and enable it at boot

[root@master1 tools]# service mysqld start
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h master1 password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

                                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[root@master1 tools]# chkconfig mysqld on
[root@master1 tools]# 
[root@master1 tools]# ls /var/lib/mysql/
ibdata1  ib_logfile0  ib_logfile1  mysql  mysql.sock  test
[root@master1 tools]# 

Set the MySQL password; the first command sets it for localhost, the second grants remote login:

[root@master1 tools]# /usr/bin/mysqladmin -u root password '123456'
[root@master1 tools]# mysql -uroot -p123456 -e "grant all on *.* to root@'%'  identified by '123456';"
[root@master1 tools]# 
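
To confirm the remote grant took effect, the user table can be checked (an optional verification):

[root@master1 tools]# mysql -uroot -p123456 -e "select user,host from mysql.user where user='root';"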


Initialize the CloudStack database on the master.
This command essentially imports data into MySQL (run on the master), executing scripts that create the databases and tables:

[root@master1 tools]# cloudstack-setup-databases cloud:123456@localhost --deploy-as=root:123456
Mysql user name:cloud                                                           [ OK ]
Mysql user password:******                                                      [ OK ]
Mysql server ip:localhost                                                       [ OK ]
Mysql server port:3306                                                          [ OK ]
Mysql root user name:root                                                       [ OK ]
Mysql root user password:******                                                 [ OK ]
Checking Cloud database files ...                                               [ OK ]
Checking local machine hostname ...                                             [ OK ]
Checking SELinux setup ...                                                      [ OK ]
Detected local IP address as 192.168.145.151, will use as cluster management server node IP[ OK ]
Preparing /etc/cloudstack/management/db.properties                              [ OK ]
Applying /usr/share/cloudstack-management/setup/create-database.sql             [ OK ]
Applying /usr/share/cloudstack-management/setup/create-schema.sql               [ OK ]
Applying /usr/share/cloudstack-management/setup/create-database-premium.sql     [ OK ]
Applying /usr/share/cloudstack-management/setup/create-schema-premium.sql       [ OK ]
Applying /usr/share/cloudstack-management/setup/server-setup.sql                [ OK ]
Applying /usr/share/cloudstack-management/setup/templates.sql                   [ OK ]
Processing encryption ...                                                       [ OK ]
Finalizing setup ...                                                            [ OK ]

CloudStack has successfully initialized database, you can check your database configuration in /etc/cloudstack/management/db.properties

[root@master1 tools]# 

Initialization is complete.
The file below was modified automatically by the initialization. You can take a look; nothing in it needs to be changed.

[root@master1 tools]# vim /etc/cloudstack/management/db.properties
[root@master1 tools]# 

Start the management server. Typing cl and pressing Tab shows quite a few commands:

[root@master1 tools]# cl
clean-binary-files                    cloudstack-set-guest-sshkey
clear                                 cloudstack-setup-databases
clock                                 cloudstack-setup-encryption
clockdiff                             cloudstack-setup-management
cloudstack-external-ipallocator.py    cloudstack-sysvmadm
cloudstack-migrate-databases          cloudstack-update-xenserver-licenses
cloudstack-sccs                       cls
cloudstack-set-guest-password         
[root@master1 tools]# cloudstack-setup-management 
Starting to configure CloudStack Management Server:
Configure Firewall ...        [OK]
Configure CloudStack Management Server ...[OK]
CloudStack Management Server setup is Done!
[root@master1 tools]#

The setup also reconfigures the master's firewall: when it runs, iptables comes back up with some of CloudStack's own ports added, as below, where 9090, 8250 and 8080 now appear.

[root@master1 tools]# head -10  /etc/sysconfig/iptables
# Generated by iptables-save v1.4.7 on Sat Feb 11 20:07:43 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p tcp -m tcp --dport 9090 -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 8250 -j ACCEPT 
-A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT 
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT 
-A INPUT -p icmp -j ACCEPT 
[root@master1 tools]# 

Below is its log; since it runs on Tomcat underneath, the log format is Tomcat's.
The master ideally has 16 GB of RAM so the JVM has enough memory and the service starts quickly.

[root@master1 tools]# tail -f /var/log/cloudstack/management/catalina.out 
INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-2f0e1bd5) (logid:4db872cc) Begin cleanup expired async-jobs
INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-2f0e1bd5) (logid:4db872cc) End cleanup expired async-jobs
INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-fc8583d4) (logid:8724f870) Begin cleanup expired async-jobs
INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-fc8583d4) (logid:8724f870) End cleanup expired async-jobs

Open the following address in a browser:
http://192.168.145.151:8080/client/
If the management page loads, the master-side installation is complete.

 

Import the system VM template

CloudStack provides its functionality through a set of system VMs: access to VM consoles, various network services, and management of resources in secondary storage.
The next step imports the system VM template and deploys it into the secondary storage created earlier; the management server ships with a script that handles this template correctly.
First locate the template file:

[root@master1 tools]# ls /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 
/tools/systemvm64template-4.6.0-kvm.qcow2.bz2

Run the following command on the master:

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /export/secondary \
-f /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 \
-h kvm -F

This step imports the system VM template into secondary storage. The run looks like this:

[root@master1 tools]# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
> -m /export/secondary \
> -f /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 \
> -h kvm -F
Uncompressing to /usr/share/cloudstack-common/scripts/storage/secondary/0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2.tmp (type bz2)...could take a long time
Moving to /export/secondary/template/tmpl/1/3///0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2...could take a while
Successfully installed system VM template /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 to /export/secondary/template/tmpl/1/3/
[root@master1 tools]# 

After a successful import, the template is stored here: one template image plus one properties file.

[root@master1 tools]# cd /export/secondary/
[root@master1 secondary]# ls
lost+found  template
[root@master1 secondary]# cd template/tmpl/1/3/
[root@master1 3]# ls
0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2  template.properties
[root@master1 3]# ls
0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2  template.properties
[root@master1 3]# pwd
/export/secondary/template/tmpl/1/3
[root@master1 3]# 

This is the template's properties file; it does not need to be modified.

[root@master1 3]# cat template.properties 
filename=0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2
description=SystemVM Template
checksum=
hvm=false
size=322954240
qcow2=true
id=3
public=true
qcow2.filename=0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2
uniquename=routing-3
qcow2.virtualsize=322954240
virtualsize=322954240
qcow2.size=322954240
[root@master1 3]# 

  

 

Install the CloudStack packages on the agent
On the agent:

[root@agent1 tools]# yum install cloudstack-common-4.8.0-1.el6.x86_64.rpm cloudstack-agent-4.8.0-1.el6.x86_64.rpm -y

This pulls in qemu-kvm, libvirt and glusterfs as dependencies, which are installed automatically.
GlusterFS is already available by default as a backend storage option for KVM.

[root@agent1 ~]# rpm -qa | egrep "cloudstack|gluster|kvm|libvirt"
glusterfs-client-xlators-3.7.5-19.el6.x86_64
cloudstack-common-4.8.0-1.el6.x86_64
cloudstack-agent-4.8.0-1.el6.x86_64
glusterfs-libs-3.7.5-19.el6.x86_64
glusterfs-3.7.5-19.el6.x86_64
glusterfs-api-3.7.5-19.el6.x86_64
libvirt-python-0.10.2-60.el6.x86_64
libvirt-0.10.2-60.el6.x86_64
libvirt-client-0.10.2-60.el6.x86_64
qemu-kvm-0.12.1.2-2.491.el6_8.3.x86_64
[root@agent1 ~]# 

Virtualization configuration on the agent

Configure KVM
Two parts need to be configured for KVM: libvirt and qemu.

Configure qemu
The qemu side is simple; only one item needs changing. Edit /etc/libvirt/qemu.conf
and uncomment vnc_listen = "0.0.0.0" (the docs also mention setting security_driver="none").
Since security_driver is commented out by default (#security_driver = "selinux"),
we leave it alone and only uncomment vnc_listen = "0.0.0.0".
(Some say these parameters get rewritten automatically when the host is added; I have not verified that, so I edit them by hand.)
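
One way to uncomment that line without opening an editor (a sketch that assumes the stock commented line #vnc_listen = "0.0.0.0" is present):

sed -i 's/^#vnc_listen = "0.0.0.0"/vnc_listen = "0.0.0.0"/' /etc/libvirt/qemu.conf
grep vnc_listen /etc/libvirt/qemu.conf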

Configure libvirt

CloudStack manages virtual machines through libvirt.
To support live migration, libvirt has to listen for unencrypted TCP connections, and mDNS advertisement must be turned off. Both are configured in /etc/libvirt/libvirtd.conf.
(Some say these parameters get rewritten automatically when the host is added; unverified, so edit them by hand.)
Set the following parameters.
The documentation asks us to uncomment them; simply appending them to the end of the file works just as well (done on the agent):

 

listen_tls = 0
listen_tcp = 1
tcp_port = "16059"
auth_tcp = "none"
mdns_adv = 0

The command to make the change:

cat>>/etc/libvirt/libvirtd.conf<<EOF
listen_tls = 0
listen_tcp = 1
tcp_port = "16059"
auth_tcp = "none"
mdns_adv = 0
EOF

The run looks like this:

[root@agent1 tools]# cat>>/etc/libvirt/libvirtd.conf<<EOF
> listen_tls = 0
> listen_tcp = 1
> tcp_port = "16059"
> auth_tcp = "none"
> mdns_adv = 0
> EOF
[root@agent1 tools]# tail -5 /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
tcp_port = "16059"
auth_tcp = "none"
mdns_adv = 0
[root@agent1 tools]# 

Also change the commented line below in this file:
/etc/sysconfig/libvirtd
#LIBVIRTD_ARGS="--listen"
The documentation says to just uncomment it; here we change it to -l instead. Note that this is the letter l, as in listen, not the digit 1:
LIBVIRTD_ARGS="-l"
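
One way to make that change in place (a sketch; -l is the short form of libvirtd's --listen option):

sed -i 's/^#LIBVIRTD_ARGS="--listen"/LIBVIRTD_ARGS="-l"/' /etc/sysconfig/libvirtd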

(Again, some say this is rewritten automatically when the host is added; unverified, so it is edited by hand.)
Verify the result:

[root@agent1 tools]# grep LIBVIRTD_ARGS /etc/sysconfig/libvirtd
# in LIBVIRTD_ARGS instead.
LIBVIRTD_ARGS="-l"
[root@agent1 tools]# 

Restart the libvirt service and check that the KVM modules are loaded:

[root@agent1 tools]# /etc/init.d/libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]
[root@agent1 tools]# lsmod | grep kvm
kvm_intel              55496  0 
kvm                   337772  1 kvm_intel
[root@agent1 tools]# 

The KVM part of the configuration is done.

 

 

 

Web UI operations



The default credentials are admin/password.
The UI language can be switched to Simplified Chinese.
Open the following address in a browser:
http://192.168.145.151:8080/client/

CloudStack provides a web-based UI that both administrators and end users can use. What the UI presents
depends on the credentials used at login. The UI works in most popular browsers, including IE7,
IE8, IE9, Firefox, Chrome, etc.
The login URL is:
http://management-server-ip:8080/client/

 

admin/password
You can choose the language.

After logging in, the dashboard looks like this.

 

 

 

Configuring through the UI
On the web page, click "skip this step".

Click "Add Zone" on the right.

Choose the basic network type.

 

For DNS, a public DNS server can be used (we did not set up our own DNS).
My local LAN has a DNS server, so I entered that local DNS server here.
For the network offering, just pick the default first option.

Click Next.

 

Enter eth0 here.
(In advanced setups this is further split into storage, management and guest networks.)

Edit the management and guest traffic types below and set the label of both to cloudbr0.

Click Next.

 

Fill in the gateway and the other values shown above.
The reserved IP range here is used by the system VMs and the hosts.

This range is for ordinary KVM guests, i.e. the guest VMs that get created.

The cluster name is up to you.

The host here is the agent machine:
root/root01

 

Primary storage supports many protocols; look up RBD and Gluster if you are curious.
RBD is Ceph's block storage.
With NFS, if the storage machine dies, all the KVM guests on it die with it.
With GlusterFS, if a node dies the VMs can be brought up on other nodes (replicated volumes).
Here we set it to our own shared mount point /export/primary.

The final values are shown below; click Next.

When you have time, look up what the remaining options mean.
S3 is Amazon AWS's cloud object storage.

The final choices for secondary storage are as follows.

Click Launch.
 

 

The initialization takes a while.

You can click No first and enable the zone manually.

Enable the zone.

After the zone is enabled, the system creates two virtual machines:
the console proxy VM is the machine that proxies VNC access to your VMs' consoles;
the secondary storage VM is the machine that serves your VM images; templates are fetched through it.

 

 

 

After enabling the zone,
log in to the agent and you can see the two system VMs:

[root@agent1 cloudstack]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     s-1-VM                         running
 2     v-2-VM                         running

[root@agent1 cloudstack]# 

The host has gained a number of vnet interfaces:

[root@agent1 cloudstack]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:feab:d5a9/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff
8: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.145.152/24 brd 192.168.145.255 scope global cloudbr0
    inet6 fe80::20c:29ff:feab:d5a9/64 scope link 
       valid_lft forever preferred_lft forever
10: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fe:00:a9:fe:00:64 brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.1/16 scope global cloud0
    inet6 fe80::f810:caff:fe2d:6be3/64 scope link 
       valid_lft forever preferred_lft forever
11: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:00:64 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:64/64 scope link 
       valid_lft forever preferred_lft forever
12: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:db:22:00:00:07 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcdb:22ff:fe00:7/64 scope link 
       valid_lft forever preferred_lft forever
13: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:20:e4:00:00:13 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc20:e4ff:fe00:13/64 scope link 
       valid_lft forever preferred_lft forever
14: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:00:6e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:6e/64 scope link 
       valid_lft forever preferred_lft forever
15: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:92:68:00:00:08 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc92:68ff:fe00:8/64 scope link 
       valid_lft forever preferred_lft forever
16: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:84:42:00:00:0f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc84:42ff:fe00:f/64 scope link 
       valid_lft forever preferred_lft forever
[root@agent1 cloudstack]# 

Also note that eth0's IP address has been moved onto cloudbr0:

[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg- 
ifcfg-cloudbr0  ifcfg-eth0      ifcfg-lo        
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-cloudbr0 

DEVICE=cloudbr0

TYPE=Bridge

ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.152
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
NM_CONTROLLED=no
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
TYPE=Ethernet

ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.152
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
NM_CONTROLLED=no
BRIDGE=cloudbr0
[root@agent1 ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:AB:D5:A9  
          inet6 addr: fe80::20c:29ff:feab:d5a9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9577 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5449 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1804345 (1.7 MiB)  TX bytes:646033 (630.8 KiB)

[root@agent1 ~]# ifconfig cloudbr0
cloudbr0  Link encap:Ethernet  HWaddr 00:0C:29:AB:D5:A9  
          inet addr:192.168.145.152  Bcast:192.168.145.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feab:d5a9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2652 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1343 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:277249 (270.7 KiB)  TX bytes:190930 (186.4 KiB)

[root@agent1 ~]# 

 

System VMs differ from the ordinary VMs created on the host; they are VMs shipped with the CloudStack platform to carry out some of its own tasks.

1. Secondary Storage VM (SSVM): manages operations on secondary storage, such as uploading and downloading templates and root image files and storing snapshots and volumes. The first time a VM is created, the template is copied from secondary storage to primary storage and a snapshot is created automatically. Each zone can have multiple SSVMs; if an SSVM is deleted or stopped, it is automatically rebuilt and restarted.
2. Console Proxy VM: renders the console in the web UI.
3. Virtual router: created automatically after the first instance starts.

The consoles of both system VMs can be opened.

The default username and password are root/password.

The address ending in 167 is on the pod network; the one ending in 179 is on the guest network.

Below is the network layout of the other system VM.


 

 

Under Templates there is a CentOS 5.5 entry, which is not usable yet.

It needs to be enabled.

We allow access from everywhere.

It prompts that the service must be restarted for the change to take effect; after we restart, it downloads the 5.5 image from the Internet:
[root@master1 3]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [FAILED]
Starting cloudstack-management:                            [  OK  ]
[root@master1 3]# 
[root@master1 3]# 
[root@master1 3]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [  OK  ]
Starting cloudstack-management:                            [  OK  ]
[root@master1 3]# 

With the changes above, the built-in template is downloaded automatically.

 

Templates can be uploaded from local files or added via a URL.
Note that local upload currently has a bug.

Log back in to the web UI.

 

In some situations, VMs cannot be created once the zone's resources trigger alerts. We can adjust parameters to allow overprovisioning.

Change the following setting to 3:

mem.overprovisioning.factor
Memory overprovisioning factor. Available memory = total memory * factor; type: integer; default: 1 (no overprovisioning).
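
As a worked example of the formula above: a host with 4 GB of physical RAM and a factor of 3 is treated as having 4 GB * 3 = 12 GB of allocatable memory.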

The following two pages explain the global settings and are worth a read:
http://www.chinacloudly.com/cloudstack%E5%85%A8%E5%B1%80%E9%85%8D%E7%BD%AE%E5%8F%82%E6%95%B0/
http://blog.csdn.net/u011650565/article/details/41945433

 

 

Restart the management service; the login stays the same.

Then go to Infrastructure -> Clusters -> cluster -> Settings.

Find the setting below

and change it to 3.0 as well.

Alert settings:

For the allowed utilization, the 0.85 below can be raised to 0.99, because once usage reaches 0.85 no new VMs are allowed.

The value has changed here; my VM does not actually have 8 GB of RAM, so this is overprovisioning.

 

 

 

 

Building templates and creating custom VMs



CloudStack templates come in two forms:
1. qcow2 or raw files built with KVM
2. ISO files uploaded directly as templates

Since nginx is popular in China, we use nginx to serve the image repository.

 

[root@master1 ~]# yum install nginx -y

We set up the firewall earlier; it could also simply be turned off at this point.
In a real environment it is better to run nginx on a separate server to keep load off the master.

 

[root@master1 ~]# /etc/init.d/iptables stop

  

Start nginx

[root@master1 ~]# /etc/init.d/nginx   start
Starting nginx:                                            [  OK  ]
[root@master1 ~]# 

In a browser, open the master's address, i.e. the server where nginx is installed:
http://192.168.145.151/

Edit the configuration file so nginx serves a directory listing.

Below this access_log line, add the three autoindex lines:
    access_log  /var/log/nginx/access.log  main;
    autoindex on;               # show the directory listing
    autoindex_exact_size  on;   # show exact file sizes
    autoindex_localtime on;     # show file times in local time

Double-check the result; the Chinese comments can be removed to avoid possible errors.

[root@master1 ~]# sed -n  '23,26p' /etc/nginx/nginx.conf
    access_log  /var/log/nginx/access.log  main;
    autoindex on;
    autoindex_exact_size  on;
    autoindex_localtime on;
[root@master1 ~]# 

 

Check the syntax and restart nginx:

[root@master1 ~]# /etc/init.d/nginx configtest
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@master1 ~]# /etc/init.d/nginx restart
Stopping nginx:                                            [  OK  ]
Starting nginx:                                            [  OK  ]
[root@master1 ~]# 

  

Go to /usr/share/nginx/html, delete all the files there,
and upload an ISO file into /usr/share/nginx/html.
Here we upload CentOS-6.5-x86_64-minimal.iso.

[root@master1 tools]# cd /usr/share/nginx/html/
[root@master1 html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@master1 html]# rm -rf *
[root@master1 html]# mv /tools/CentOS-6.5-x86_64-minimal.iso .
[root@master1 html]# ls
CentOS-6.5-x86_64-minimal.iso
[root@master1 html]# 


Refresh the nginx index page again:
http://192.168.145.151/

In the web UI, add the ISO.

CloudStack downloads it from the server automatically;
since it is downloading from itself, the transfer is of course fast.

 
 
Create a custom VM instance

 

You can pick 20 GB here.

Leave affinity at the default.

Leave the network at the default.

If you leave the name blank, a UUID-style name is generated automatically.

Click Launch.

Creation is now in progress. The first creation is slow because the template has to be pulled from secondary storage (the NFS server) to primary storage;
creating a second VM from the same template is much faster.

 

Open the instance's console.

Pick the last option.

 

root01

 

 

virtio is already listed on the page; Red Hat now fully backs KVM,
and virtio is built into the CentOS 6 kernel.

The installation process:

After clicking reboot, detach the ISO here to avoid reinstalling.
Once the machine finishes installing and reboots, detach the ISO as soon as possible, otherwise it boots from the ISO again.

 

Set ONBOOT=yes on the guest's NIC.
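
Inside the new guest, one way to do this (a sketch; a CentOS minimal install typically leaves ONBOOT=no):

sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart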

 

After the new instance is installed, the virtual router view changes as well.

 

In production, a cluster is usually 8-16 or up to 24 hosts,
roughly two racks of servers, which is a reasonable split.
Beyond 24 hosts you can add another cluster, e.g. cluster2.

Run ip a on the agent machine.
cloudbr0 is the bridge we created,
and vnet0 is a virtual interface.
Each VM plugs into a vnet interface, and the vnet attaches to the bridge.

[root@agent1 cloudstack]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:feab:d5a9/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff
8: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.145.152/24 brd 192.168.145.255 scope global cloudbr0
    inet6 fe80::20c:29ff:feab:d5a9/64 scope link 
       valid_lft forever preferred_lft forever
10: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fe:00:a9:fe:00:64 brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.1/16 scope global cloud0
    inet6 fe80::f810:caff:fe2d:6be3/64 scope link 
       valid_lft forever preferred_lft forever
11: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:00:64 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:64/64 scope link 
       valid_lft forever preferred_lft forever
12: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:db:22:00:00:07 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcdb:22ff:fe00:7/64 scope link 
       valid_lft forever preferred_lft forever
13: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:20:e4:00:00:13 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc20:e4ff:fe00:13/64 scope link 
       valid_lft forever preferred_lft forever
14: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:00:6e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:6e/64 scope link 
       valid_lft forever preferred_lft forever
15: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:92:68:00:00:08 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc92:68ff:fe00:8/64 scope link 
       valid_lft forever preferred_lft forever
16: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:84:42:00:00:0f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc84:42ff:fe00:f/64 scope link 
       valid_lft forever preferred_lft forever
17: vnet6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:ba:5c:00:00:12 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcba:5cff:fe00:12/64 scope link 
       valid_lft forever preferred_lft forever
18: vnet7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:01:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:14d/64 scope link 
       valid_lft forever preferred_lft forever
19: vnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:b1:ec:00:00:10 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcb1:ecff:fe00:10/64 scope link 
       valid_lft forever preferred_lft forever
[root@agent1 cloudstack]# 

Traffic flow: VM -> vnet -> cloudbr0 -> eth0.
cloudbr0 bridges many interfaces:

[root@agent1 cloudstack]# brctl show
bridge name	bridge id		STP enabled	interfaces
cloud0		8000.fe00a9fe0064	no		vnet0
							vnet3
							vnet7
cloudbr0		8000.000c29abd5a9	no		eth0
							vnet1
							vnet2
							vnet4
							vnet5
							vnet6
							vnet8
virbr0		8000.525400ea877d	yes		virbr0-nic
[root@agent1 cloudstack]# 

The master cannot reach that VM over the network:

[root@master1 html]# ping 192.168.145.176
PING 192.168.145.176 (192.168.145.176) 56(84) bytes of data.
^C
--- 192.168.145.176 ping statistics ---
23 packets transmitted, 0 received, 100% packet loss, time 22542ms

[root@master1 html]# 

  

Add the following security group rules.

Allow ICMP.

Allow all TCP.

Open all the egress rules as well.

After modifying the security group, the VM is reachable:

[root@master1 html]# ping 192.168.145.176
PING 192.168.145.176 (192.168.145.176) 56(84) bytes of data.
64 bytes from 192.168.145.176: icmp_seq=1 ttl=64 time=3.60 ms
64 bytes from 192.168.145.176: icmp_seq=2 ttl=64 time=1.88 ms
64 bytes from 192.168.145.176: icmp_seq=3 ttl=64 time=1.46 ms
^C
--- 192.168.145.176 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2087ms
rtt min/avg/max/mdev = 1.463/2.316/3.605/0.927 ms
[root@master1 html]# 

When building your own private cloud, the security group is opened up fully, much like the settings above,
plus a rule that opens all UDP as well.

Log in to the CentOS 6.5 instance and configure DNS:

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
HWADDR=06:B1:EC:00:00:10
TYPE=Ethernet
UUID=5f46c5e2-5ac6-4bb9-b21d-fed7f49e7475
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=dhcp
DNS1=10.0.1.11
[root@localhost ~]# /etc/init.d/network restart
[root@localhost ~]# ping www.baidu.com
PING www.a.shifen.com (115.239.211.112) 56(84) bytes of data.
64 bytes from 115.239.211.112: icmp_seq=1 ttl=128 time=4.10 ms
^C
--- www.a.shifen.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 815ms
rtt min/avg/max/mdev = 4.107/4.107/4.107/0.000 ms
[root@localhost ~]# 

Install openssh on the 6.5 instance:

[root@localhost ~]# yum install -y openssh

  

As a recap, look at what ended up in secondary storage:

[root@master1 html]# cd /export/secondary/
[root@master1 secondary]# ls
lost+found  snapshots  template  volumes
[root@master1 secondary]# cd snapshots/
[root@master1 snapshots]# ls
[root@master1 snapshots]# cd ..
[root@master1 secondary]# cd template/
[root@master1 template]# ls
tmpl
[root@master1 template]# cd tmpl/2/201/
[root@master1 201]# ls
201-2-c27db1c6-f780-35c3-9c63-36a5330df298.iso  template.properties
[root@master1 201]# cat template.properties 
#
#Sat Feb 11 15:18:59 UTC 2017
filename=201-2-c27db1c6-f780-35c3-9c63-36a5330df298.iso
id=201
public=true
iso.filename=201-2-c27db1c6-f780-35c3-9c63-36a5330df298.iso
uniquename=201-2-c27db1c6-f780-35c3-9c63-36a5330df298
virtualsize=417333248
checksum=0d9dc37b5dd4befa1c440d2174e88a87
iso.size=417333248
iso.virtualsize=417333248
hvm=true
description=centos6.5
iso=true
size=417333248
[root@master1 201]#

  

Check on the agent:

[root@agent1 cloudstack]# cd /export/primary/
[root@agent1 primary]# ls
0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b  cf3dac7a-a071-4def-83aa-555b5611fb02
1685f81b-9ac9-4b21-981a-f1b01006c9ef  f3521c3d-fca3-4527-984d-5ff208e05b5c
99643b7d-aaf4-4c75-b7d6-832c060e9b77  lost+found

In here are the disks of the two system-created VMs, plus the VM we created ourselves and a virtual router:

[root@agent1 primary]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     s-1-VM                         running
 2     v-2-VM                         running
 3     r-4-VM                         running
 4     i-2-3-VM                       running

[root@agent1 primary]# 