As we saw in Part 1, novaclient ultimately issues the following HTTP POST request to nova:

```
POST /e40722e5e0c74a0b878c595c0afab5fd/servers/6a17e64d-23c7-46a3-9812-8409ad215e40/os-volume_attachments
```
with the following parameters:

```python
Action: 'create', body: {"volumeAttachment": {"device": "/dev/vdc", "volumeId": "5fe8132e-f937-4c1b-8361-9984f94a7c28"}}
```
The attach REST API is documented in detail at http://api.openstack.org/api-ref-compute-v2-ext.html
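To make the request concrete, here is a small illustrative helper that builds the volumeAttachment body shown above. The function name and structure are this article's own sketch, not code from nova or novaclient:

```python
import json

# Illustrative helper: build the os-volume_attachments request body.
# Name and shape are made up for illustration, not nova source code.
def build_attach_body(volume_id, device=None):
    attachment = {"volumeId": volume_id}
    if device:
        # device is optional; nova can auto-assign one when omitted
        attachment["device"] = device
    return {"volumeAttachment": attachment}

body = build_attach_body("5fe8132e-f937-4c1b-8361-9984f94a7c28", device="/dev/vdc")
print(json.dumps(body, sort_keys=True))
```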
nova-api startup
Let us now go back and look at how nova starts the web service that listens for the HTTP request above. In a running OpenStack nova environment you will find a nova-api process. Opening the /usr/bin/nova-api file, we can locate the entry point that starts the nova API service, in nova/cmd/api.py in the source tree. The main processing flow is as follows:
1. Based on the enabled_apis option defined in nova.conf, start the corresponding API services. For example, the line below enables the ec2 API among others:

```
enabled_apis = ec2,osapi_compute,metadata
```
2. Each API service is simply an instance of a WSGIService object:

```python
server = service.WSGIService(api, use_ssl=should_use_ssl)
```
During WSGIService initialization, besides the basic wsgi.Server parameter handling, the corresponding Manager class is imported. For example, nova.conf defines the Network manager class:

```
network_manager = nova.network.manager.FlatDHCPManager
```
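The "import the Manager class named by a config string" step can be sketched with the standard library. Nova itself uses an importutils helper for this; the function below is a simplified stand-in, and the stdlib class used in the example merely stands in for a manager class:

```python
import importlib

# Simplified stand-in for nova's importutils: turn a dotted path from
# nova.conf (e.g. the network_manager value above) into a class object.
def import_class(dotted_path):
    module_path, class_name = dotted_path.rsplit('.', 1)
    return getattr(importlib.import_module(module_path), class_name)

# A stdlib class serves as a stand-in for a nova manager class here:
cls = import_class('collections.OrderedDict')
```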
3. Launch the service, then wait for it to finish. The call sequence is: launcher.launch_service ==> self.services.add ==> self.tg.add_thread(self.run_service, service, self.done) ==> self.run_service, which immediately starts a thread to call the start function defined by the service.
```python
@staticmethod
def run_service(service, done):
    """Service start wrapper.

    :param service: service to run
    :param done: event to wait on until a shutdown is triggered
    :returns: None

    """
    service.start()
    systemd.notify_once()
    done.wait()
```
Switching to class WSGIService in nova/service.py, its start function makes four main calls, in order: self.manager.init_host, self.manager.pre_start_hook, self.server.start, and self.manager.post_start_hook. Here self.server is the wsgi.Server created in __init__, defined in nova/wsgi.py. The server's start function ultimately spawns a WSGI app to accept and handle HTTP requests.
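The WSGI pattern behind this can be shown with a stdlib-only sketch. This is not nova's eventlet-based server, just the bare contract between app and server:

```python
from wsgiref.simple_server import make_server

# A WSGI app is just a callable taking (environ, start_response);
# the server accepts HTTP requests and invokes it, much as nova-api's
# spawned WSGI app handles the REST requests discussed above.
def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [b'{"status": "ok"}']

server = make_server('127.0.0.1', 0, app)  # port 0: pick any free port
# server.serve_forever() would block here, dispatching requests to app.
```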
nova-compute startup
The analysis is analogous, with the same service startup sequence. Open nova/cmd/compute.py to see how the compute service is created:
```python
server = service.Service.create(binary='nova-compute',
                                topic=CONF.compute_topic,
                                db_allowed=CONF.conductor.use_local)
service.serve(server)
service.wait()
```
In the Service.create class method, the backend compute manager (class ComputeManager in nova/compute/manager.py) is instantiated. The ComputeManager __init__ constructor defines the compute RPC API interfaces and finally loads the compute driver. CONF.compute_driver must be configured in nova.conf to tell nova which backend virtualization driver to use (e.g. compute_driver = libvirt.LibvirtDriver).
```python
def __init__(self, compute_driver=None, *args, **kwargs):
    """Load configuration options and connect to the hypervisor."""
    self.virtapi = ComputeVirtAPI(self)
    self.network_api = network.API()
    self.volume_api = volume.API()
    self._last_host_check = 0
    self._last_bw_usage_poll = 0
    self._bw_usage_supported = True
    self._last_bw_usage_cell_update = 0
    self.compute_api = compute.API()
    self.compute_rpcapi = compute_rpcapi.ComputeAPI()
    self.conductor_api = conductor.API()
    self.compute_task_api = conductor.ComputeTaskAPI()
    ...
    self.driver = driver.load_compute_driver(self.virtapi, compute_driver)
```
Then start runs, entering the four calls described above. Let us first look at self.manager.init_host:
```python
def init_host(self):
    """Initialization for a standalone compute service."""
    self.driver.init_host(host=self.host)
    context = nova.context.get_admin_context()
```
Here driver corresponds to libvirt.LibvirtDriver, and init_host essentially performs libvirt's host initialization.
Dissecting the API
In nova/compute/api.py we can find attach_volume. The nova client's request is forwarded to the RPC API, because the nova-api service handles REST requests while communication between nova components goes through RPC calls.
```python
def _attach_volume(self, context, instance, volume_id, device,
                   disk_bus, device_type):
    """Attach an existing volume to an existing instance.

    This method is separated to make it possible for cells version
    to override it.
    """
    ...
    self.compute_rpcapi.attach_volume(context, instance=instance,
            volume_id=volume_id, mountpoint=device, bdm=volume_bdm)

def attach_volume(self, context, instance, volume_id, device=None,
                  disk_bus=None, device_type=None):
    """Attach an existing volume to an existing instance."""
    # NOTE(vish): Fail fast if the device is not going to pass. This
    #             will need to be removed along with the test if we
    #             change the logic in the manager for what constitutes
    #             a valid device.
    if device and not block_device.match_device(device):
        raise exception.InvalidDevicePath(path=device)
    return self._attach_volume(context, instance, volume_id, device,
                               disk_bus, device_type)
```
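The fail-fast check relies on block_device.match_device to validate the device path. An approximate re-creation is shown below; the exact regex in nova's block_device module may differ from this sketch:

```python
import re

# Approximation of nova's block_device.match_device: accept device
# paths such as /dev/vdc or /dev/sdb1, reject anything else.
def match_device(device):
    return re.match(r'(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$',
                    device)
```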
In nova/compute/rpcapi.py, every rpc client request remotely calls attach_volume on the RPC server side:
```python
def attach_volume(self, ctxt, instance, volume_id, mountpoint, bdm=None):
    ...
    cctxt = self.client.prepare(server=_compute_host(None, instance),
                                version=version)
    cctxt.cast(ctxt, 'attach_volume', **kw)
```
From the nova-compute startup analysis above, this cctxt.cast call is in effect a remote call to the attach_volume function of ComputeManager, the management interface for compute instances. The corresponding handler on the RPC server side lives in nova/compute/manager.py. Only at this point is the actual attach-virtual-device work handed to the backend's DriverVolumeBlockDevice.attach.
```python
def attach_volume(self, context, volume_id, mountpoint,
                  instance, bdm=None):
    """Attach a volume to an instance."""
    if not bdm:
        bdm = block_device_obj.BlockDeviceMapping.get_by_volume_id(
                context, volume_id)
    driver_bdm = driver_block_device.DriverVolumeBlockDevice(bdm)
    try:
        return self._attach_volume(context, instance, driver_bdm)
    except Exception:
        with excutils.save_and_reraise_exception():
            bdm.destroy(context)
...
def _attach_volume(self, context, instance, bdm):
    context = context.elevated()
    LOG.audit(_('Attaching volume %(volume_id)s to %(mountpoint)s'),
              {'volume_id': bdm.volume_id,
               'mountpoint': bdm['mount_device']},
              context=context, instance=instance)
    try:
        bdm.attach(context, instance, self.volume_api, self.driver,
                   do_check_attach=False, do_driver_attach=True)
```
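Stripped of the messaging transport, the client/server relationship above boils down to dispatch by method name. A toy model, purely for intuition (oslo messaging does the real work; these class names are invented):

```python
# Toy model of an RPC cast: the client names a method plus kwargs,
# and the server side invokes that method on its manager object.
class FakeRPCServer:
    def __init__(self, manager):
        self.manager = manager

    def cast(self, ctxt, method, **kwargs):
        # fire-and-forget: call the handler, ignore the return value
        getattr(self.manager, method)(ctxt, **kwargs)

class FakeComputeManager:
    def __init__(self):
        self.calls = []

    def attach_volume(self, ctxt, volume_id=None, mountpoint=None, bdm=None):
        self.calls.append((volume_id, mountpoint))

mgr = FakeComputeManager()
FakeRPCServer(mgr).cast({}, 'attach_volume',
                        volume_id='vol-1', mountpoint='/dev/vdc')
```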
Let us look at the DriverVolumeBlockDevice class in nova/virt/block_device.py:
```python
@update_db
def attach(self, context, instance, volume_api, virt_driver,
           do_check_attach=True, do_driver_attach=False):
    volume = volume_api.get(context, self.volume_id)  # fetch the volume object by volume id
    ...
    # Taking LibvirtDriver as an example: obtain the volume connector
    # for the virtualization backend and initialize the connection.
    connector = virt_driver.get_volume_connector(instance)
    connection_info = volume_api.initialize_connection(context,
                                                       volume_id,
                                                       connector)
    ...
    # If do_driver_attach is False, we will attach a volume to an instance
    # at boot time. So actual attach is done by instance creation code.
    if do_driver_attach:
        encryption = encryptors.get_encryption_metadata(
            context, volume_api, volume_id, connection_info)

        try:
            virt_driver.attach_volume(
                context, connection_info, instance,
                self['mount_device'], disk_bus=self['disk_bus'],
                device_type=self['device_type'], encryption=encryption)
        except Exception:  # pylint: disable=W0702
            with excutils.save_and_reraise_exception():
                LOG.exception(_("Driver failed to attach volume "
                                "%(volume_id)s at %(mountpoint)s"),
                              {'volume_id': volume_id,
                               'mountpoint': self['mount_device']},
                              context=context, instance=instance)
                volume_api.terminate_connection(context, volume_id,
                                                connector)
    self['connection_info'] = connection_info
    volume_api.attach(context, volume_id,  # callback: nova's work is done here; the cinder side updates its database etc.
                      instance['uuid'], self['mount_device'])
```
From here, the attach_volume function in nova/virt/libvirt/driver.py uses libvirt programming to add the volume as backing storage in the KVM instance's configuration. Roughly: determine what type of backend storage the KVM hypervisor instance uses (e.g. NFS, iSCSI, FC), then generate the corresponding KVM configuration, mainly by adding the connection info of the volume to be attached into the libvirt.xml file.
LibvirtDriver.attach_volume ==> LibvirtBaseVolumeDriver.connect_volume ==> conf.to_xml() ==> virt_dom.attachDeviceFlags
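The end product of conf.to_xml() is a disk element in the domain XML. A simplified sketch of its shape follows; element names track libvirt's domain XML, but nova actually builds this through its LibvirtConfig* classes and the real element varies with the storage type:

```python
# Simplified sketch of the <disk> element added to the domain XML
# when a block-backed volume is attached.
def disk_xml(source_dev, target_dev, bus='virtio'):
    return ('<disk type="block" device="disk">'
            '<driver name="qemu" type="raw"/>'
            f'<source dev="{source_dev}"/>'
            f'<target dev="{target_dev}" bus="{bus}"/>'
            '</disk>')

xml = disk_xml('/dev/sdb', 'vdc')
```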
Cinder follow-up
All volume_api calls map to nova/volume/cinder.py (imported and instantiated via volume.API()). These APIs are essentially REST requests sent by cinderclient to the cinder server:
```python
@translate_volume_exception
def attach(self, context, volume_id, instance_uuid, mountpoint):
    cinderclient(context).volumes.attach(volume_id, instance_uuid,
                                         mountpoint)
...
@translate_volume_exception
def initialize_connection(self, context, volume_id, connector):
    return cinderclient(context).volumes.initialize_connection(volume_id,
                                                               connector)
```
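On the wire, volumes.attach corresponds to cinder's os-attach volume action. A hedged sketch of the request it produces (URL layout and field names follow cinder's volume-actions API; the endpoint and tenant values here are placeholders):

```python
# Sketch of the REST request behind cinderclient's volumes.attach:
# POST /v2/{tenant_id}/volumes/{volume_id}/action with an os-attach body.
def attach_action(endpoint, tenant_id, volume_id, instance_uuid, mountpoint):
    url = f'{endpoint}/v2/{tenant_id}/volumes/{volume_id}/action'
    body = {'os-attach': {'instance_uuid': instance_uuid,
                          'mountpoint': mountpoint}}
    return url, body

url, body = attach_action('http://cinder:8776', 'tenant', 'vol-1',
                          'inst-1', '/dev/vdc')
```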
The upcoming third article will continue with how the cinder side updates its database.
Summary and limitations
1. The nova analysis above follows the chain of calls behind the attach volume interface and stops at the virtualization driver layer. The concrete steps for adding a disk to an instance differ between virtualization drivers, so readers will need to dig into the corresponding virt driver code themselves.
2. How is the REST service's server actually built up? Readers should study WSGI and paste on their own.
3. RPC and message communication are also worth studying; they are not covered one by one here.
4. The nova codebase is large, with many services; this article only briefly covered nova-api and nova-compute.
End of Part 2 of nova inside. Please credit the source when reposting.