OpenStack in Practice, the Prologue -- Monitoring KVM Virtual Machines

Two weeks into OpenStack, the initial mystique has worn off. As the layers peeled away I came to see its strength, and I resolved to take it up as my weapon on the battlefield of cloud computing. Tonight the horn sounds, and the prologue begins.

After creating virtual instances with OpenStack, you need to monitor them. This article shows how, in a KVM environment, to use the libvirt API that OpenStack integrates to monitor a virtual machine's CPU usage, memory usage, disk utilization, and network upload/download speeds.

Enough chatter; straight to the point.

1. Monitoring CPU

Python code

import libvirt
import os
import sys
import time

conn = libvirt.open("qemu:///system")
if conn is None:
    print "Failed to connect to the hypervisor"
    sys.exit(1)
try:
    dom0 = conn.lookupByID(85)  # look up the Domain object by the instance ID that OpenStack created
except:
    print "Failed to find the domain by ID"
    sys.exit(1)

Pstart_time = time.time()       # wall-clock time at the start of the sample
Dstart_time = dom0.info()[4]    # CPU time (in nanoseconds), straight from DomainInfo
time.sleep(2)
Dstop_time = dom0.info()[4]
Pstop_time = time.time()
core_num = int(dom0.info()[3])  # number of virtual cores, from DomainInfo

# CPU usage formula: CPU-time delta / wall-clock delta / 1000000000 (ns -> s) / core count * 100
cpu_usage = (Dstop_time - Dstart_time) / (Pstop_time - Pstart_time) / 1000000000.0 / core_num * 100
cpu_usage = cpu_usage if cpu_usage > 0 else 0.0
cpu_usage = cpu_usage if cpu_usage < 100 else 100.0
print cpu_usage
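As a sanity check on the usage formula, here is a small self-contained version of the same calculation, with the same clamping to the 0-100 range; the sample numbers in the call are made up:

```python
def cpu_percent(d_start, d_stop, p_start, p_stop, cores):
    # CPU-time delta (ns) / wall-clock delta (s) / 1e9 / core count * 100
    usage = (d_stop - d_start) / (p_stop - p_start) / 1000000000.0 / cores * 100
    # clamp to [0, 100], as in the script above
    usage = usage if usage > 0 else 0.0
    usage = usage if usage < 100 else 100.0
    return usage

# 1e9 ns of guest CPU time over a 2-second window on a single core -> 50%
print(cpu_percent(0, 1000000000, 0.0, 2.0, 1))
```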

     

2. Monitoring memory

Python code

import re

def get_memory(pid):  # total private memory currently used by the process, in kB
    mem = 0
    # On Linux, /proc/<pid>/smaps holds the process's memory mappings,
    # in more detail than the maps file in the same directory
    for line in file('/proc/%d/smaps' % int(pid), 'r'):
        if re.findall('Private_', line):
            # accumulate the Private_Clean / Private_Dirty amounts
            mem += int(re.findall(r'(\d+)', line)[0])
    return mem

# get the qemu process ID from the instance name
pid = os.popen("ps aux | grep " + dom0.name() + " | grep -v 'grep' | awk '{print $2}'").readlines()[0]
memstatus = get_memory(pid)
memusage = '%.2f' % (int(memstatus) * 100.0 / int(dom0.info()[2]))  # info()[2] is the domain's memory in kB
print memusage
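To see what get_memory is actually summing, the same parsing can be run against a synthetic smaps fragment; the field values below are hypothetical:

```python
import re

# hypothetical excerpt of /proc/<pid>/smaps
SAMPLE_SMAPS = """\
Size:               1024 kB
Private_Clean:       128 kB
Private_Dirty:       256 kB
Shared_Clean:         64 kB
"""

def private_kb(smaps_text):
    # sum the Private_Clean / Private_Dirty amounts, in kB
    mem = 0
    for line in smaps_text.splitlines():
        if re.findall('Private_', line):
            mem += int(re.findall(r'(\d+)', line)[0])
    return mem

print(private_kb(SAMPLE_SMAPS))  # 128 + 256 = 384
```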

Verification: SSH into the corresponding instance; on a Linux guest you can check memory usage with free -m.

 

3. Monitoring disk

When a virtual machine instance is created, a corresponding XML file is generated that describes the instance.

from xml.etree import ElementTree

def get_devices(dom, path, devs):  # collect the values of a given attribute at a given node path in the domain XML
    tree = ElementTree.fromstring(dom.XMLDesc(0))  # parse the domain's XML description into an element tree
    devices = []
    for target in tree.findall(path):
        dev = target.get(devs)
        if dev not in devices:
            devices.append(dev)
    return devices
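get_devices is just walking the domain XML, so the idea can be demonstrated without libvirt on a minimal, hypothetical domain description (the disk path below is made up):

```python
from xml.etree import ElementTree

# a minimal, hypothetical libvirt domain XML fragment
SAMPLE_XML = """
<domain>
  <devices>
    <disk type='file'>
      <source file='/var/lib/nova/instances/instance-00000055/disk'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <target dev='vnet0'/>
    </interface>
  </devices>
</domain>
"""

tree = ElementTree.fromstring(SAMPLE_XML)
disks = [t.get('dev') for t in tree.findall("devices/disk/target")]
nics = [t.get('dev') for t in tree.findall("devices/interface/target")]
print(disks)
print(nics)
```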

 

def get_blockStats(dom):  # disk statistics, including total bytes read and total bytes written
    block_status = {}
    disks = get_devices(dom, "devices/disk/target", "dev")
    for block in disks:
        block_status[block] = dom.blockStats(block)
    return block_status

 

block_status0 = get_blockStats(dom0)
time.sleep(2)
block_status1 = get_blockStats(dom0)
block_info = []
for block in get_devices(dom0, "devices/disk/source", "file"):
    block_info.append(dom0.blockInfo(block, 0))  # fetch the disk's virDomainBlockInfo; 0 is the default flags argument
for domBlockInfo in block_info:
    print "logical size in bytes :%s" % domBlockInfo[0]
    print "highest allocated extent in bytes :%s" % domBlockInfo[1]
    print "physical size in bytes :%s" % domBlockInfo[2]
    print "disk usage :%s" % str(domBlockInfo[1] / 1.0 / domBlockInfo[0] * 100)[:5]
# byte deltas over the 2-second interval, divided by 2048 to get KB/s
for block in get_devices(dom0, "devices/disk/target", "dev"):
    print "rd_speed :%s" % str((block_status1[block][1] - block_status0[block][1]) / 2048)
    print "wr_speed :%s" % str((block_status1[block][3] - block_status0[block][3]) / 2048)
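The division by 2048 in the speed lines combines the 2-second sampling interval with the bytes-to-KB conversion (2 * 1024 = 2048). A named version of the same arithmetic:

```python
def kb_per_sec(byte_count_start, byte_count_stop, interval=2):
    # byte delta over the sampling interval, converted to KB/s;
    # interval * 1024 == 2048 when the interval is 2 seconds
    return (byte_count_stop - byte_count_start) // (interval * 1024)

# 8192 bytes transferred in 2 seconds -> 4 KB/s
print(kb_per_sec(0, 8192))
```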

Verification: SSH into the corresponding instance; on a Linux guest you can check disk usage with df -h.

 

4. Monitoring network

def get_nicInfo(nics):  # network statistics, including total bytes received and total bytes transmitted
    net_status = {}
    # the counters come from /proc/net/dev
    for nic in nics:
        net_status[nic] = [
            os.popen("cat /proc/net/dev | grep -w '" + nic + "' | awk '{print $10}'").readlines()[0][:-1],
            os.popen("cat /proc/net/dev | grep -w '" + nic + "' | awk '{print $2}'").readlines()[0][:-1],
        ]
    return net_status

# get the virtual NIC names from the domain XML
nics = get_devices(dom0, "devices/interface/target", "dev")
net_status0 = get_nicInfo(nics)
time.sleep(2)
net_status1 = get_nicInfo(nics)
# byte deltas over the 2-second interval, divided by 2048 to get KB/s
for nic in nics:
    print "netcard_name :%s" % nic
    print "transmit_speed :%s" % str((int(net_status1[nic][0]) - int(net_status0[nic][0])) / 2048)
    print "receive_speed :%s" % str((int(net_status1[nic][1]) - int(net_status0[nic][1])) / 2048)
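Shelling out to cat/grep/awk works, but the same two counters can be pulled out of a /proc/net/dev line directly in Python. The sample line below is hypothetical; the column layout matches /proc/net/dev, where the interface name is followed by 8 receive fields and then 8 transmit fields:

```python
# hypothetical line from /proc/net/dev for a guest's tap device
SAMPLE_LINE = " vnet0: 123456    980    0    0    0     0          0         0   654321    870    0    0    0     0       0          0"

def rx_tx_bytes(line):
    # field 1 after the colon is receive bytes; field 9 is transmit bytes
    fields = line.split(':', 1)[1].split()
    return int(fields[0]), int(fields[8])

print(rx_tx_bytes(SAMPLE_LINE))
```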

 

This is my first blog post, so please go easy on me!

References:

The libvirt API definitions, from the official site:

virDomainBlockInfo

struct virDomainBlockInfo {
    unsigned long long capacity    /* logical size in bytes of the block device backing image */
    unsigned long long allocation  /* highest allocated extent in bytes of the block device backing image */
    unsigned long long physical    /* physical size in bytes of the container of the backing image */
}

 

virDomainBlockStatsStruct

struct virDomainBlockStatsStruct {
    long long rd_req    /* number of read requests */
    long long rd_bytes  /* number of read bytes */
    long long wr_req    /* number of write requests */
    long long wr_bytes  /* number of written bytes */
    long long errs      /* in Xen this returns the mysterious 'oo_req' */
}

 

http://libvirt.org/html/libvirt-libvirt.html
