Code flow analysis of nova start

An analysis of the code flow when starting an instance with nova start, based on the Ocata release.

1. The nova-api service receives the nova start request issued by the user
The corresponding HTTP RESTful API is POST /servers/{server_id}/action,
with the request action os-start.

nova/api/openstack/compute/servers.py
class ServersController(wsgi.Controller):
    def _start_server(self, req, id, body):
        """Start an instance."""
        context = req.environ['nova.context']
        instance = self._get_instance(context, id)
        context.can(server_policies.SERVERS % 'start', instance)
        try:
            self.compute_api.start(context, instance)  ----- the compute service's API module handles the request
        except (exception.InstanceNotReady, exception.InstanceIsLocked) as e:
            raise webob.exc.HTTPConflict(explanation=e.format_message())
        except exception.InstanceUnknownCell as e:
            raise exc.HTTPNotFound(explanation=e.format_message())
        except exception.InstanceInvalidState as state_error:
            common.raise_http_conflict_for_instance_invalid_state(state_error,
                'start', id)
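The os-start action above can be exercised directly against the compute API. A minimal sketch of building that request, where NOVA_URL, TOKEN and SERVER_ID are hypothetical placeholders you would substitute with your own endpoint, Keystone token and server UUID:

```python
import json

# Hypothetical placeholders -- substitute your own values.
NOVA_URL = "http://controller:8774/v2.1"
TOKEN = "gAAAAAB..."
SERVER_ID = "03cb8a7c-786f-402a-b059-1f2d90e69bd4"

# The os-start action carries a null value under the action key.
url = "%s/servers/%s/action" % (NOVA_URL, SERVER_ID)
headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}
payload = json.dumps({"os-start": None})

# Sending it (e.g. requests.post(url, headers=headers, data=payload))
# returns 202 Accepted on success, 409 Conflict on an invalid state.
print(url)
print(payload)
```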

2. The API layer of the nova compute module handles the request

nova/compute/api.py
class API(base.Base):
    @check_instance_state(vm_state=[vm_states.STOPPED])
    def start(self, context, instance):
        """Start an instance."""
        LOG.debug("Going to try to start instance", instance=instance)

        instance.task_state = task_states.POWERING_ON
        instance.save(expected_task_state=[None])

        self._record_action_start(context, instance, instance_actions.START)  ----- record the action performed on the instance
        # TODO(yamahata): injected_files isn't supported right now.
        #                 It is used only for osapi. not for ec2 api.
        #                 availability_zone isn't used by run_instance.
        self.compute_rpcapi.start_instance(context, instance)  ----- send an RPC request to the nova-compute service
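The @check_instance_state decorator on start() is what rejects instances that are not stopped, which is how the HTTP 409 Conflict in the controller arises. A minimal sketch of the idea (not Nova's actual implementation), assuming only that the instance exposes vm_state and uuid attributes:

```python
import functools


class InstanceInvalidState(Exception):
    """Raised when the instance is not in an allowed vm_state."""
    pass


def check_instance_state(vm_state=None):
    """Reject the call unless instance.vm_state is in the allowed list."""
    def decorator(func):
        @functools.wraps(func)
        def inner(self, context, instance, *args, **kwargs):
            if vm_state is not None and instance.vm_state not in vm_state:
                raise InstanceInvalidState(
                    "instance %s is in state %s, expected one of %s"
                    % (instance.uuid, instance.vm_state, vm_state))
            return func(self, context, instance, *args, **kwargs)
        return inner
    return decorator


class FakeInstance(object):
    def __init__(self, uuid, vm_state):
        self.uuid = uuid
        self.vm_state = vm_state


class FakeAPI(object):
    # Mirrors API.start: only 'stopped' instances may be started.
    @check_instance_state(vm_state=['stopped'])
    def start(self, context, instance):
        return 'started'
```

With this sketch, starting a 'stopped' instance succeeds, while an 'active' one raises InstanceInvalidState.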

3. nova-compute receives the RPC request; the method that ultimately handles it is the corresponding method in manager.py

nova/compute/rpcapi.py
@profiler.trace_cls("rpc")
class ComputeAPI(object):
    def start_instance(self, ctxt, instance):
        version = '4.0'
        cctxt = self.router.by_instance(ctxt, instance).prepare(
                server=_compute_host(None, instance), version=version)
        cctxt.cast(ctxt, 'start_instance', instance=instance)
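cctxt.cast() is a fire-and-forget RPC: the API side returns immediately, and the named method later runs on the target compute host's manager. A tiny sketch of the dispatch idea, with a hypothetical in-process client standing in for the message queue:

```python
class FakeManager(object):
    """Stands in for nova.compute.manager.ComputeManager on one host."""
    def __init__(self):
        self.calls = []

    def start_instance(self, context, instance):
        self.calls.append(('start_instance', instance))


class FakeRPCClient(object):
    """cast() sends a one-way message: no return value, no waiting."""
    def __init__(self, manager):
        self.manager = manager

    def cast(self, ctxt, method, **kwargs):
        # A real cast serializes the message onto the queue; the compute
        # service's RPC server later invokes the manager method by name.
        getattr(self.manager, method)(ctxt, **kwargs)
        return None  # cast never returns a result


manager = FakeManager()
client = FakeRPCClient(manager)
result = client.cast({}, 'start_instance', instance='vm-1')
```

The asynchronous cast is why the API can only report that the start was accepted, not that it completed; completion is observed later through the instance's state.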

nova/compute/manager.py
    def start_instance(self, context, instance):
        """Starting an instance on this host."""
        self._notify_about_instance_usage(context, instance, "power_on.start")  ----- emit the power-on "start" notification
        compute_utils.notify_about_instance_action(context, instance,
            self.host, action=fields.NotificationAction.POWER_ON,
            phase=fields.NotificationPhase.START)
        self._power_on(context, instance)  ----- the core step: power on the instance, detailed in 3.1
        instance.power_state = self._get_power_state(context, instance)
        instance.vm_state = vm_states.ACTIVE
        instance.task_state = None

        # Delete an image(VM snapshot) for a shelved instance
        snapshot_id = instance.system_metadata.get('shelved_image_id')
        if snapshot_id:
            self._delete_snapshot_of_shelved_instance(context, instance,
                                                      snapshot_id)

        # Delete system_metadata for a shelved instance
        compute_utils.remove_shelved_keys_from_system_metadata(instance)

        instance.save(expected_task_state=task_states.POWERING_ON)
        self._notify_about_instance_usage(context, instance, "power_on.end")  ----- emit the power-on "end" notification
        compute_utils.notify_about_instance_action(context, instance,
            self.host, action=fields.NotificationAction.POWER_ON,
            phase=fields.NotificationPhase.END)
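The net effect on the instance record across the two modules: the API set task_state to POWERING_ON, and start_instance clears it and marks the VM ACTIVE once powered on. Sketched with hypothetical constants mirroring nova's vm_states/task_states:

```python
# Hypothetical constants mirroring nova.compute.vm_states / task_states.
VM_STOPPED, VM_ACTIVE = 'stopped', 'active'
TASK_POWERING_ON = 'powering-on'


class Instance(object):
    def __init__(self):
        self.vm_state = VM_STOPPED
        self.task_state = None


inst = Instance()

# nova/compute/api.py start(): mark the transition as in progress.
inst.task_state = TASK_POWERING_ON

# nova/compute/manager.py start_instance(): after _power_on succeeds,
# the instance becomes ACTIVE and the task state is cleared.
inst.vm_state = VM_ACTIVE
inst.task_state = None
```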

3.1 Details of the _power_on(context, instance) function

nova/compute/manager.py
    def _power_on(self, context, instance):
        network_info = self.network_api.get_instance_nw_info(context, instance)  ----- s1: get the instance's network info
        block_device_info = self._get_instance_block_device_info(context, instance)  ----- s2: get the instance's attached-volume info
        self.driver.power_on(context, instance, network_info, block_device_info)  ----- s3

s3: since OpenStack uses libvirt by default, this calls into the libvirt driver.
To make sure the image, network, and block devices to be attached are valid and correctly set up when bringing the instance up, the driver performs a hard reboot.

nova/virt/libvirt/driver.py	
class LibvirtDriver(driver.ComputeDriver):
    def power_on(self, context, instance, network_info,
                 block_device_info=None):
        """Power on the specified instance."""	
        self._hard_reboot(context, instance, network_info, block_device_info)	

3.1.1 Details of the _hard_reboot function
This function does the following:
1) forcibly shut down the instance
2) undefine (delete) the instance's XML definition
3) fetch the instance's image information
4) regenerate the XML definition
5) define the XML and start the instance

nova/virt/libvirt/driver.py	
class LibvirtDriver(driver.ComputeDriver):

    def _hard_reboot(self, context, instance, network_info,
                     block_device_info=None):
        """Reboot a virtual machine, given an instance reference.
        Performs a Libvirt reset (if supported) on the domain.
        If Libvirt reset is unavailable this method actually destroys and
        re-creates the domain to ensure the reboot happens, as the guest
        OS cannot ignore this action.
        """

        self._destroy(instance)-----s1
        # Domain XML will be redefined so we can safely undefine it
        # from libvirt. This ensure that such process as create serial
        # console for guest will run smoothly.
        self._undefine_domain(instance)--s2

        # Convert the system metadata to image metadata
        # NOTE(mdbooth): This is a workaround for stateless Nova compute
        #                https://bugs.launchpad.net/nova/+bug/1349978
        instance_dir = libvirt_utils.get_instance_path(instance)----s3
        fileutils.ensure_tree(instance_dir)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance,  ----- s4
                                            instance.image_meta,
                                            block_device_info)
        # NOTE(vish): This could generate the wrong device_format if we are
        #             using the raw backend and the images don't exist yet.
        #             The create_images_and_backing below doesn't properly
        #             regenerate raw backend images, however, so when it
        #             does we need to (re)generate the xml after the images
        #             are in place.
        xml = self._get_guest_xml(context, instance, network_info, disk_info,----s5
                                  instance.image_meta,
                                  block_device_info=block_device_info)

        # NOTE(mdbooth): context.auth_token will not be set when we call
        #                _hard_reboot from resume_state_on_host_boot()
        if context.auth_token is not None:  ----- s6
            # NOTE (rmk): Re-populate any missing backing files.
            backing_disk_info = self._get_instance_disk_info(instance.name,
                                                             xml,
                                                             block_device_info)
            self._create_images_and_backing(context, instance, instance_dir,
                                            backing_disk_info)

        # Initialize all the necessary networking, block devices and
        # start the instance.
        self._create_domain_and_network(context, xml, instance, network_info,-------s7
                                        disk_info,
                                        block_device_info=block_device_info,
                                        reboot=True,
                                        vifs_already_plugged=True)
        self._prepare_pci_devices_for_use(
            pci_manager.get_instance_pci_devs(instance, 'all'))

        def _wait_for_reboot():
            """Called at an interval until the VM is running again."""
            state = self.get_info(instance).state

            if state == power_state.RUNNING:
                LOG.info(_LI("Instance rebooted successfully."),
                         instance=instance)
                raise loopingcall.LoopingCallDone()

        timer = loopingcall.FixedIntervalLoopingCall(_wait_for_reboot)
        timer.start(interval=0.5).wait()
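The _wait_for_reboot loop above polls the domain state every 0.5 s until it reports RUNNING, driven by oslo.service's FixedIntervalLoopingCall. The same fixed-interval polling pattern can be sketched without that dependency:

```python
import time

RUNNING = 1  # stands in for nova.compute.power_state.RUNNING


def wait_for_state(get_state, wanted, interval=0.5, max_tries=120):
    """Poll get_state() every `interval` seconds until it returns `wanted`.

    Returns True once the wanted state is observed, False on timeout.
    """
    for _ in range(max_tries):
        if get_state() == wanted:
            return True
        time.sleep(interval)
    return False


# Fake driver whose domain becomes RUNNING on the third poll.
states = iter([0, 0, RUNNING])
ok = wait_for_state(lambda: next(states), RUNNING, interval=0)
```

FixedIntervalLoopingCall adds error handling and greenthread-friendly sleeping on top of this idea, but the control flow is the same.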

During instance start, the instance-related contents of the following directories change:
/var/lib/libvirt/qemu ----- holds the running domain's directory
/etc/libvirt/qemu ----- holds the instance's XML definition files
/os_instance/_base ----- holds the instance's backing images
/os_instance/<instance uuid> ----- holds the instance's disk files
At s1, none of these directories change.
At s2, the instance's XML file under /etc/libvirt/qemu is deleted; the other directories are unchanged.
At s3, the instance directory /os_instance/03cb8a7c-786f-402a-b059-1f2d90e69bd4 is obtained.
At s4, the instance's disk info is obtained.
The main parameter values are:
CONF.libvirt.virt_type = 'kvm'
block_device_info = {'swap': None, 'root_device_name': u'/dev/vda', 'ephemerals': [], 'block_device_mapping': []}
disk_info = {
    'disk_bus': 'virtio',
    'cdrom_bus': 'ide',
    'mapping': {
        'disk.config': {'bus': 'ide', 'type': 'cdrom', 'dev': 'hda'},
        'disk': {'bus': 'virtio', 'boot_index': '1', 'type': 'disk', 'dev': u'vda'},
        'root': {'bus': 'virtio', 'boot_index': '1', 'type': 'disk', 'dev': u'vda'},
    },
}
At s5, the XML is generated, but only held in memory; it has not yet been written to the instance's XML file under /etc/libvirt/qemu.
At s6, context.auth_token holds the token; the backing disk info fetched here is:
backing_disk_info = [
    {'disk_size': 149159936,
     'backing_file': '4c4935095cb43925d61d67395c452ea248e6b1c4',
     'virt_disk_size': 85899345920,
     'path': '/os_instance/03cb8a7c-786f-402a-b059-1f2d90e69bd4/disk',
     'type': 'qcow2',
     'over_committed_disk_size': 85750185984},
    {'disk_size': 489472,
     'backing_file': '',
     'virt_disk_size': 489472,
     'path': '/os_instance/03cb8a7c-786f-402a-b059-1f2d90e69bd4/disk.config',
     'type': 'raw',
     'over_committed_disk_size': 0},
]
At s7, the instance's XML file is generated under /etc/libvirt/qemu; the other directories are unchanged.
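As a side note, the over_committed_disk_size values in the data above are just the virtual size minus the bytes actually allocated on disk, i.e. how much a thin-provisioned qcow2 image could still grow (zero for the fully-allocated raw image):

```python
# Sizes taken from the backing_disk_info dump above.
backing_disk_info = [
    {'disk_size': 149159936, 'virt_disk_size': 85899345920, 'type': 'qcow2'},
    {'disk_size': 489472, 'virt_disk_size': 489472, 'type': 'raw'},
]

# over_committed_disk_size = virtual size - bytes currently allocated
over = [d['virt_disk_size'] - d['disk_size'] for d in backing_disk_info]
```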

When the instance is stopped with nova stop, these directories change as follows:
under /var/lib/libvirt/qemu, the domain-21-instance-xx directory is deleted; the other directories are unchanged.
