Tags: openstack, nova, source code reading
In OpenStack, creating a virtual machine is without doubt a central operation. Understanding the creation flow and reading the nova source code that implements it is a great help for OpenStack development. This article walks through the VM-creation code based on the OpenStack Queens release. Because the nova code base is complex and reading it requires a fair amount of background knowledge, the focus here is on the flow and logic; the code reading itself may not be exhaustive.
For simplicity, the authentication step is omitted here. In practice, an incoming request also goes through authentication and authorization to make sure the user is allowed to create a virtual machine.
The OpenStack wiki provides a workflow diagram for creating a VM. The diagram is large, but it describes what each component does in great detail.
We can break the creation flow into the following parts.
A user sends a request to create a virtual machine. After receiving it, Nova-Api activates the extension plugins, checks the VM name, accepts the injected files, extracts the new VM's network settings, and checks the configuration (flavor) and image.
Nova-Api then hands the processed parameter set (as JSON) to Nova's Compute-Api and returns a response to the user containing the VM's reservation ID (the response code at this point is 202, which only means the request was accepted; the VM has not actually been created yet).
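To make the first step concrete, below is a minimal sketch of what such a create request looks like on the wire; the endpoint URL, token, and UUIDs are placeholders, not values from a real deployment.

# Minimal sketch of a create-server call; URL, token and UUIDs are placeholders.
import requests

body = {
    "server": {
        "name": "demo-vm",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",   # glance image UUID
        "flavorRef": "1",                                      # flavor id
        "networks": [{"uuid": "ff608d40-75e9-48cb-b745-77bb55b5eaf2"}],
    }
}

resp = requests.post(
    "http://controller:8774/v2.1/servers",
    json=body,
    headers={"X-Auth-Token": "gAAAAAB..."},
)
print(resp.status_code)             # 202: accepted, not yet built
print(resp.json()["server"]["id"])  # the id reserved for the new instance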
After receiving the request, Compute-Api checks the creation policy, further validates the VM, network, image, and quota, and then builds the VM's final configuration.
It then creates the corresponding VM record in the database and sends a request over the message queue asking the scheduler to pick a host on which to create the VM.
When the scheduler receives the message, it runs all hosts through the filters specified in the message, finally selects one host, updates the database, and sends a create-VM message to the selected host over the message queue.
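As a side note on what "filtering" means here: each scheduler filter is just a class with a host_passes() hook that accepts or rejects a candidate host. A minimal custom filter might look roughly like the sketch below (the module name and the 512 MB threshold are made up for illustration).

# nova/scheduler/filters/ram_threshold_filter.py  (hypothetical module name)
# A minimal sketch of a custom host filter; the threshold is arbitrary.
from nova.scheduler import filters


class RamThresholdFilter(filters.BaseHostFilter):
    """Reject hosts whose free RAM is below a fixed threshold."""

    def host_passes(self, host_state, spec_obj):
        # host_state carries the host's resource view; spec_obj is the
        # RequestSpec built from the boot request.
        return host_state.free_ram_mb >= 512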
When the selected host receives the message from the queue, it updates the VM and task state in the database and then creates or obtains a network for the VM through the nova network-api.
Next it creates a new volume for the VM through the nova volume-api, works out the VM's block device mapping, and attaches the volume to the VM.
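For orientation, the block device mapping that drives this step comes from the boot request (or is synthesized from the image). A single boot-from-volume entry in the v2 format looks roughly like this; the volume UUID is a placeholder:

# Sketch of one block_device_mapping_v2 entry in a boot request.
bdm_v2 = [{
    "boot_index": 0,                  # boot from this device
    "uuid": "a1b2c3d4-0000-0000-0000-000000000000",   # existing Cinder volume
    "source_type": "volume",          # the source is an existing volume
    "destination_type": "volume",     # attach it as a volume on the instance
    "delete_on_termination": False,
}]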
At this point the scheduling and resource preparation for the VM are complete.
The selected host fetches the image, builds the block device mapping, generates the libvirt.xml, and finally calls the hypervisor's spawn() method. At this point the VM is running on the host. The last step is to update the VM and task state.
Below, the code is read in the order in which the modules are invoked during VM creation.
Nova-Api implements the handling of different request types as various controller classes; the class that handles VM creation is ServersController.
# nova/api/openstack/compute/servers.py
class ServersController(wsgi.Controller):
    # many validation decorators are omitted here for brevity
    @wsgi.response(202)
    def create(self, req, body):
        """Creates a new server for a given user."""
        context = req.environ['nova.context']
        server_dict = body['server']
        password = self._get_server_admin_password(server_dict)
        name = common.normalize_name(server_dict['name'])
        description = name
        # create() does the dirty work of extracting and validating
        # parameters from the request
        # omitted...
        try:
            # still more parameter-building code...
            # call compute_api to create the virtual machine
            (instances, resv_id) = self.compute_api.create(
                context,
                inst_type,
                image_uuid,
                display_name=name,
                display_description=description,
                availability_zone=availability_zone,
                forced_host=host, forced_node=node,
                metadata=server_dict.get('metadata', {}),
                admin_password=password,
                requested_networks=requested_networks,
                check_server_group_quota=True,
                supports_multiattach=supports_multiattach,
                **create_kwargs)
        except Exception:
            # error handling...
            pass
# nova/compute/api.py
class API(base.Base):
    def create(self, context, instance_type,
               image_href, kernel_id=None, ramdisk_id=None,
               min_count=None, max_count=None,
               display_name=None, display_description=None,
               key_name=None, key_data=None, security_groups=None,
               availability_zone=None, forced_host=None, forced_node=None,
               user_data=None, metadata=None, injected_files=None,
               admin_password=None, block_device_mapping=None,
               access_ip_v4=None, access_ip_v6=None,
               requested_networks=None, config_drive=None,
               auto_disk_config=None, scheduler_hints=None,
               legacy_bdm=True, shutdown_terminate=False,
               check_server_group_quota=False, tags=None,
               supports_multiattach=False):
        """Prepare the instance creation, then send the instance information
        to the scheduler, which picks a host to build on and creates the
        records in the DB.
        """
        # preparation
        # for brevity, the full parameter list is abbreviated to args and kwargs
        self._create_instance(*args, **kwargs)

    def _create_instance(self, context, instance_type,
                         image_href, kernel_id, ramdisk_id,
                         min_count, max_count,
                         display_name, display_description,
                         key_name, key_data, security_groups,
                         availability_zone, user_data, metadata, injected_files,
                         admin_password, access_ip_v4, access_ip_v6,
                         requested_networks, config_drive,
                         block_device_mapping, auto_disk_config,
                         filter_properties, reservation_id=None,
                         legacy_bdm=True, shutdown_terminate=False,
                         check_server_group_quota=False, tags=None,
                         supports_multiattach=False):
        """Verify all the parameters."""
        # verifying ...

        # fetch the image information
        if image_href:
            # if image_href is provided, get the image via the glance api
            image_id, boot_meta = self._get_image(context, image_href)
        else:
            # if image_href is not provided, get the image metadata from the bdm
            image_id = None
            boot_meta = self._get_bdm_image_metadata(
                context, block_device_mapping, legacy_bdm)

        # more parameter checks
        # block device mappings come in two versions, so for compatibility
        # they are checked and converted when necessary
        block_device_mapping = self._check_and_transform_bdm(
            context, base_options, instance_type, boot_meta,
            min_count, max_count, block_device_mapping, legacy_bdm)

        # go on checking

        # support for the cells feature, see the cells wiki
        # https://docs.openstack.org/nova/ocata/cells.html
        if CONF.cells.enable:
            # build the instance model objects
            # check the quota
            # call the rpc api to put the message on the queue
            self.compute_task_api.build_instances(*args, **kwargs)
        else:
            self.compute_task_api.schedule_and_build_instances(*args, **kwargs)

        return instances, reservation_id
Nova components communicate with one another through rpc APIs over the message queue, and the classes that actually carry out the work are defined in the manager.py files. To keep the flow easy to follow, the scheduling code is skipped here.
# nova/conductor/api.py
class ComputeTaskAPI(object):
    def schedule_and_build_instances(self, *args, **kwargs):
        # very simple method
        # call rpc api only
        self.conductor_compute_rpcapi.schedule_and_build_instances(
            *args, **kwargs)

# nova/conductor/rpcapi.py
class ComputeTaskAPI(object):
    def schedule_and_build_instances(self, *args, **kwargs):
        # build the arguments and check the api version
        # finally put the message on the queue
        cctxt.cast(context, 'schedule_and_build_instances', **kwargs)
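The scheduling step skipped above lives in nova/conductor/manager.py: the ComputeTaskManager picks the message off the queue, asks the scheduler for a destination, writes the instance record to the database, and finally casts build_and_run_instance to the chosen compute node. The following is only a heavily abridged sketch of that flow; the real method takes many more parameters and also handles cells, quotas, and alternate hosts.

# nova/conductor/manager.py  (heavily abridged sketch of the omitted step)
class ComputeTaskManager(object):
    def schedule_and_build_instances(self, context, build_requests,
                                     request_specs, **kwargs):
        # ask the scheduler for one destination host per requested instance
        host_lists = self._schedule_instances(context, request_specs[0])
        for build_request, host_list in zip(build_requests, host_lists):
            # turn the build request into a real Instance DB record,
            # then hand it to the selected compute node over the queue
            instance = build_request.get_new_instance(context)
            selected_host = host_list[0]
            self.compute_rpcapi.build_and_run_instance(
                context, instance=instance,
                host=selected_host.service_host,
                node=selected_host.nodename,
                request_spec=request_specs[0], **kwargs)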
# nova/compute/manager.py
class ComputeManager(object):
    @wrap_exception()
    @reverts_task_state
    @wrap_instance_fault
    def build_and_run_instance(self, *args, **kwargs):
        # lock on the instance to avoid races
        @utils.synchronized(instance.uuid)
        def _locked_do_build_and_run_instance(*args, **kwargs):
            with self._build_semaphore:
                try:
                    result = self._do_build_and_run_instance(*args, **kwargs)
                except Exception:
                    # handle exceptions
                    pass

        # building a VM can take a long time, so to avoid blocking the
        # process the task is handed off to a worker
        utils.spawn_n(_locked_do_build_and_run_instance,
                      context, instance, ...)

    def _do_build_and_run_instance(self, *args, **kwargs):
        # update the VM and task state
        # decode the injected files
        try:
            with timeutils.StopWatch() as timer:
                self._build_and_run_instance(*args)
        except Exception:
            # handle exceptions
            pass

    def _build_and_run_instance(self, *args, **kwargs):
        # get the image ref
        try:
            scheduler_hints = self._get_scheduler_hints(filter_properties,
                                                        request_spec)
            rt = self._get_resource_tracker()
            with rt.instance_claim(context, instance, node, limits):
                # get the server group policy and the image metadata
                # create the network and volumes by calling _build_resources
                with self._build_resources(*args) as resources:
                    # handle vm and task state
                    # spawn instance on hypervisor
                    with timeutils.StopWatch() as timer:
                        # build the guest definition (libvirt xml) via the
                        # driver and actually start the virtual machine
                        self.driver.spawn(*args, **kwargs)
        except Exception:
            # handle exceptions
            pass
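For context, self.driver above is the configured hypervisor driver (libvirt by default). Every driver implements the spawn() contract declared in nova/virt/driver.py, roughly of the following shape; the parameter list is abridged here and varies slightly between releases.

# nova/virt/driver.py  (abridged sketch of the driver contract)
class ComputeDriver(object):
    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, network_info=None, block_device_info=None):
        """Create and start a new guest on the hypervisor.

        For the libvirt driver this is where the guest's libvirt XML is
        generated from the instance, image metadata, network_info and
        block_device_info, and the domain is defined and launched.
        """
        raise NotImplementedError()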