OpenStack VM Disk File Types and Storage Methods
NOTE: The discussion here is limited to the native OpenStack Libvirt driver (QEMU-KVM hypervisor).
Depending on how a VM is booted and which disk types it uses, instances can be divided into:
Boot from image
Boot from volume
NOTE: For more on OpenStack VM disk file types and storage methods, see "OpenStack VM Disk File Types and Storage Methods".
Typical file storage is NAS (Network Attached Storage) and NFS (Network File System). Multiple compute nodes can share instances_path over NFS to store disk files, and Cinder also supports an NFS backend that keeps volumes in a shared directory. In a shared-storage scenario, therefore, the only thing that needs to be migrated is the VM's memory data.
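As a rough illustration, sharing instances_path over NFS can be as simple as the sketch below; the NFS server address and export path are assumptions made for the example, not values taken from this environment:

# On each compute node, mount the shared export at Nova's instances_path
# (the same entry can be added to /etc/fstab to make it persistent)
mount -t nfs 192.168.0.10:/export/nova_instances /var/lib/nova/instances

# nova.conf on each compute node then points at the shared directory
[DEFAULT]
instances_path = /var/lib/nova/instances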
Typical block storage is a SAN (Storage Area Network). Block storage attaches block devices to application servers over protocols such as iSCSI and FC; the devices are then taken over by the file system layer, and data is stored in volumes as blocks. Likewise, Nova, Glance and Cinder can all use a block storage backend. In that case the VM's system disk and data disks are shared, and the local instances directory only holds files such as the ephemeral disk, swap file, console.log and disk.info. If the VM uses neither ephemeral nor swap disks, then once again only memory data needs to be migrated.
In a sense, we can group file storage and block storage together under the umbrella of shared storage. Strictly speaking, however, pure block storage is not shared storage, because a block device cannot be written from multiple mount points at the same time; that requires support from a higher layer, e.g. Cinder multi-attach. As a result, file storage and block storage are handled differently during migration: with block storage, the block device has to be detached and then re-attached to the destination, whereas with file storage the destination node simply accesses the mounted directory directly.
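For reference, the detach/re-attach pattern for an iSCSI-backed volume corresponds roughly to the open-iscsi commands below, which os-brick drives automatically during a migration; the portal address and IQN are placeholders used only for illustration:

# Destination node: discover the target and log in, which makes the block device appear locally
iscsiadm -m discovery -t sendtargets -p 172.17.3.18:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-xxxx -p 172.17.3.18:3260 --login

# Source node: log out of the target once the instance no longer uses it
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-xxxx -p 172.17.3.18:3260 --logout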
It is worth mentioning that NAS and SAN, as traditional storage solutions, provide file storage and block storage respectively, while Ceph, as a unified storage solution, provides file, block and object storage at the same time and is now widely used in production.
In a non-shared-storage scenario the VM's disk files are stored locally, so both the memory data and the local disk files have to be migrated. The migration is a block-level copy of the data, known as block migration for short. Obviously this scenario is not friendly to live migration: the longer the copy takes, the higher the risk of losing data (e.g. a network problem during the copy).
The familiar migration types are cold migration and live migration. The two are easy to tell apart: the criterion is whether the guest running the workload has to be shut down during the migration.
Cold migration: the VM is shut down and its data is then migrated. Only the system disk and data disk contents need to be moved, not the memory, and block migration is used.
Live migration: also called dynamic or online migration, a migration that is transparent to the user. The VM does not need to be powered off and the workload is not interrupted, but it is correspondingly a more complex way to migrate.
Non-live migration, also known as cold migration or simply migration. The instance is shut down, then moved to another hypervisor and restarted. The instance recognizes that it was rebooted, and the application running on the instance is disrupted.
Live migration, The instance keeps running throughout the migration. This is useful when it is not possible or desirable to stop the application running on the instance. Live migrations can be classified further by the way they treat instance storage:
- Shared storage-based live migration. The instance has ephemeral disks that are located on storage shared between the source and destination hosts.
- Block live migration, or simply block migration. The instance has ephemeral disks that are not shared between the source and destination hosts. Block migration is incompatible with read-only devices such as CD-ROMs and Configuration Drive (config_drive).
- Volume-backed live migration. Instances use volumes rather than ephemeral disks.
Depending on the VM's data type and the storage scenario, migrations can be divided into:
Cold migration
Live migration
Migration scenario information:
Step 1. Make sure the nova user on the source and destination compute nodes can SSH into each other without a password. nova-compute.service runs as the nova user by default, and the service process uses scp between the source and destination compute nodes to copy data (a minimal key setup is sketched after the error output below). Otherwise you will hit the following exception:
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command.
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server Command: ssh -o BatchMode=yes 172.17.1.16 mkdir -p /var/lib/nova/instances/1365380a-a532-4811-8784-57f507acac46
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server Exit code: 255
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server Stdout: u''
2019-03-15 03:33:21.428 10639 ERROR oslo_messaging.rpc.server Stderr: u'Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n'
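A minimal way to set up the passwordless login, assuming the default nova home directory /var/lib/nova on both nodes (the destination IP is the one from the error above), is sketched here:

# On the source compute node, switch to the nova user (its login shell is usually /sbin/nologin)
su -s /bin/bash nova
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# Append ~/.ssh/id_rsa.pub to /var/lib/nova/.ssh/authorized_keys on the destination node,
# repeat in the opposite direction, then verify that a non-interactive login works:
ssh -o BatchMode=yes 172.17.1.16 true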
Step 2. Disable SELinux, or run the command below so that the authorized_keys file used for passwordless SSH login can be accessed:
chcon -R -t ssh_home_t /var/lib/nova/.ssh/authorized_keys
NOTE: The SELinux-related logs are in /var/log/audit/audit.log.
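If it is unclear whether SELinux is what is blocking the key file, the denial records can be checked first, for example:

# Look for recent AVC denials involving sshd and the nova .ssh directory
ausearch -m avc -ts recent | grep ssh
grep denied /var/log/audit/audit.log | grep authorized_keys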
Step 3. Perform the migration
[stack@undercloud (overcloudrc) ~]$ openstack server create --image c9debff2-cd87-4688-b712-87a2948461ce --flavor a0dd32df-8c1b-47ed-9b7c-88612a5dd78d --nic net-id=11d8d379-dcd9-46ff-9cd1-25d2737affb4 tst-block-migrate-vm +--------------------------------------+-----------------------------------------------+ | Field | Value | +--------------------------------------+-----------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-STS:power_state | NOSTATE | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | None | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | adminPass | WswHKEmcPnV3 | | config_drive | | | created | 2019-03-15T09:32:17Z | | flavor | 1U2G (a0dd32df-8c1b-47ed-9b7c-88612a5dd78d) | | hostId | | | id | 80996760-0c30-4e2a-847a-b9d882182df5 | | image | cirros (c9debff2-cd87-4688-b712-87a2948461ce) | | key_name | None | | name | tst-block-migrate-vm | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | a6c78435075246f3aa5ab946b87086c5 | | properties | | | security_groups | [{u'name': u'default'}] | | status | BUILD | | updated | 2019-03-15T09:32:17Z | | user_id | 4fe574569664493bbd660abfe762a630 | +--------------------------------------+-----------------------------------------------+ [stack@undercloud (overcloudrc) ~]$ openstack server show tst-block-migrate-vm +--------------------------------------+----------------------------------------------------------+ | Field | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | ovs | | OS-EXT-SRV-ATTR:host | overcloud-ovscompute-1.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-ovscompute-1.localdomain | | OS-EXT-SRV-ATTR:instance_name | instance-0000008e | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2019-03-15T09:32:31.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | net1=10.0.1.14 | | config_drive | | | created | 2019-03-15T09:32:17Z | | flavor | 1U2G (a0dd32df-8c1b-47ed-9b7c-88612a5dd78d) | | hostId | 9f1230901ddf3fe0e1a41e1c650a784c122b791f89fdf66a40cff3d6 | | id | 80996760-0c30-4e2a-847a-b9d882182df5 | | image | cirros (c9debff2-cd87-4688-b712-87a2948461ce) | | key_name | None | | name | tst-block-migrate-vm | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | a6c78435075246f3aa5ab946b87086c5 | | properties | | | security_groups | [{u'name': u'default'}] | | status | ACTIVE | | updated | 2019-03-15T09:32:32Z | | user_id | 4fe574569664493bbd660abfe762a630 | +--------------------------------------+----------------------------------------------------------+ [stack@undercloud (overcloudrc) ~]$ openstack server migrate --block-migration --wait tst-block-migrate-vm Complete [stack@undercloud (overcloudrc) ~]$ openstack server show tst-block-migrate-vm +--------------------------------------+----------------------------------------------------------+ | Field | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | ovs | | OS-EXT-SRV-ATTR:host | 
overcloud-ovscompute-0.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-ovscompute-0.localdomain | | OS-EXT-SRV-ATTR:instance_name | instance-0000008e | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2019-03-15T09:33:52.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | net1=10.0.1.14 | | config_drive | | | created | 2019-03-15T09:32:17Z | | flavor | 1U2G (a0dd32df-8c1b-47ed-9b7c-88612a5dd78d) | | hostId | 0f2ec590cd73fe0e9522f1ba715dae7a7d4b884e15aa8254defe85d0 | | id | 80996760-0c30-4e2a-847a-b9d882182df5 | | image | cirros (c9debff2-cd87-4688-b712-87a2948461ce) | | key_name | None | | name | tst-block-migrate-vm | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | a6c78435075246f3aa5ab946b87086c5 | | properties | | | security_groups | [{u'name': u'default'}] | | status | ACTIVE | | updated | 2019-03-15T09:34:53Z | | user_id | 4fe574569664493bbd660abfe762a630 | +--------------------------------------+----------------------------------------------------------+
NOTE: The migration above does not explicitly choose a destination compute node; the choice is left to nova-scheduler.service.
From the scenario information above we can see that Nova copies this VM's disk files using block migration, which is also reflected in the operation logs.
Source host log analysis:
# Enter the migration logic.
Starting migrate_disk_and_power_off
# Try to create a tmp file on the destination compute node to determine whether the source and destination share the same storage
Creating file /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/8ac1bb9977bc4b4b948c4c8fdad9f1f6.tmp on remote host 172.17.1.16 create_file
# Creating the tmp file fails, which means shared storage is not in use, because the instance directory does not exist on the destination
'ssh -o BatchMode=yes 172.17.1.16 touch /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/8ac1bb9977bc4b4b948c4c8fdad9f1f6.tmp' failed. Not Retrying.
# Create the instance directory on the destination compute node
Creating directory /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5 on remote host 172.17.1.16
# Power off the instance
Shutting down instance
Instance shutdown successfully after 35 seconds.
# Destroy the instance at the hypervisor level
Instance destroyed successfully.
# Rename the instance directory
mv /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5 /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5_resize
# Copy the instance's disk files and config files to the destination host
scp -r /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5_resize/disk 172.17.1.16:/var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/disk
scp -r /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5_resize/disk.info 172.17.1.16:/var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/disk.info
# The instance is now stopped at the Nova level
VM Stopped (Lifecycle Event)
# Finally remove the instance directory on the source host
rm -rf /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5_resize
# Remove the instance's virtual network devices
Unplugging vif VIFBridge(active=True,address=fa:16:3e:d0:f6:a4,bridge_name='qbr15c7b577-89',has_traffic_filtering=True,id=15c7b577-89f5-46f6-8111-5f4e0c8ebaa1,network=Network(11d8d379-dcd9-46ff-9cd1-25d2737affb4),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap15c7b577-89')
brctl delif qbr15c7b577-89 qvb15c7b577-89
ip link set qbr15c7b577-89 down
brctl delbr qbr15c7b577-89
ovs-vsctl --timeout=120 -- --if-exists del-port br-int qvo15c7b577-89
ip link delete qvo15c7b577-89
Destination host log analysis:
# Create the instance at the Nova level and claim its resources
Claim successful
# Migrating
Migrating
# Update the binding:host_id of the instance vNIC's port
Updating port 15c7b577-89f5-46f6-8111-5f4e0c8ebaa1 with attributes {'binding:host_id': u'overcloud-ovscompute-0.localdomain'}
# Create the instance image
Creating image
# Check whether the instance disk file can be resized
Checking if we can resize image /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/disk.
Cannot resize image /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/disk
# Make sure the instance's console.log exists
Ensure instance console log exists: /var/lib/nova/instances/80996760-0c30-4e2a-847a-b9d882182df5/console.log
# Assemble the guest OS XML definition
End _get_guest_xml
# Plug the instance's virtual network devices
Plugging vif VIFBridge(active=False,address=fa:16:3e:d0:f6:a4,bridge_name='qbr15c7b577-89',has_traffic_filtering=True,id=15c7b577-89f5-46f6-8111-5f4e0c8ebaa1,network=Network(11d8d379-dcd9-46ff-9cd1-25d2737affb4),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap15c7b577-89')
brctl addbr qbr15c7b577-89
brctl setfd qbr15c7b577-89 0
brctl stp qbr15c7b577-89 off
brctl setageing qbr15c7b577-89 0
tee /sys/class/net/qbr15c7b577-89/bridge/multicast_snooping
tee /proc/sys/net/ipv6/conf/qbr15c7b577-89/disable_ipv6
ip link add qvb15c7b577-89 type veth peer name qvo15c7b577-89
ip link set qvb15c7b577-89 up
ip link set qvb15c7b577-89 promisc on
ip link set qvb15c7b577-89 mtu 1450
ip link set qvo15c7b577-89 up
ip link set qvo15c7b577-89 promisc on
ip link set qvo15c7b577-89 mtu 1450
ip link set qbr15c7b577-89 up
brctl addif qbr15c7b577-89 qvb15c7b577-89
ovs-vsctl -- --may-exist add-br br-int -- set Bridge br-int datapath_type=system
ovs-vsctl --timeout=120 -- --if-exists del-port qvo15c7b577-89 -- add-port br-int qvo15c7b577-89 -- set Interface qvo15c7b577-89 external-ids:iface-id=15c7b577-89f5-46f6-8111-5f4e0c8ebaa1 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:d0:f6:a4 external-ids:vm-uuid=80996760-0c30-4e2a-847a-b9d882182df
ip link set qvo15c7b577-89 mtu 1450
# The instance is now running at the Nova level
Instance running successfully.
VM Started (Lifecycle Event)
NOTE: Although the source and destination host logs are listed separately above, the two nova-compute.service processes actually interleave their work; the destination host does not wait for the source host to finish all of its migration steps before starting its own.
Migration scenario information:
From the migration scenario information we can see that the VM's local disk files and memory data are migrated through Libvirt live migration, the shared block devices are migrated by re-attaching them to the destination, and the ports are handled by Neutron, which creates and deletes the corresponding virtual network devices.
To make sure OpenStack VM live migration works properly, a few prerequisites must be met:
Configure Libvirt to transfer data over the SSH protocol:
[libvirt]
...
live_migration_uri=qemu+ssh://nova_migration@%s/system?keyfile=/etc/nova/migration/identity
qemu+ssh: use the SSH protocol for the transport
nova_migration: the user that the SSH connection runs as
%s: the compute node hostname, e.g. nova_migration@cpu01
keyfile: the SSH private key used to secure the connection

Besides SSH, the TCP protocol can also be used for data transfer:
live_migration_uri=qemu+tcp://nova_migration@%s/system
If TCP is used, the Libvirt TCP remote listening service must also be enabled on both the source and destination nodes:
# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

# /etc/init/libvirt-bin.conf
# options passed to libvirtd, add "-l" to listen on tcp
env libvirtd_opts="-d -l"

# /etc/default/libvirt-bin
libvirtd_opts="-d -l"
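Once libvirtd has been restarted with these options, connectivity between the nodes can be verified directly with virsh before attempting a migration; the hostname below is just an example taken from this environment:

# From the source compute node, open a remote libvirt connection to the destination
virsh -c qemu+tcp://overcloud-ovscompute-1.localdomain/system list --all
# The SSH transport can be tested the same way
virsh -c qemu+ssh://nova_migration@overcloud-ovscompute-1.localdomain/system list --all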
NOTE: Block migration can also be used together with live migration, but it is not the recommended approach.
Block live migration requires copying disks from the source to the destination host. It takes more time and puts more load on the network. Shared-storage and volume-backed live migration does not copy disks.
[stack@undercloud (overcloudrc) ~]$ openstack server show VM1 +--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ | OS-DCF:diskConfig | AUTO | | OS-EXT-AZ:availability_zone | ovs | | OS-EXT-SRV-ATTR:host | overcloud-ovscompute-0.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-ovscompute-0.localdomain | | OS-EXT-SRV-ATTR:instance_name | instance-000000a0 | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2019-03-19T08:04:50.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | net1=10.0.1.17, 10.0.1.8, 10.0.1.16, 10.0.1.10, 10.0.1.18, 10.0.1.19 | | config_drive | | | created | 2019-03-19T08:04:04Z | | flavor | Flavor1 (2ff09ec5-19e4-40b9-a52e-6026652c0788) | | hostId | 0f2ec590cd73fe0e9522f1ba715dae7a7d4b884e15aa8254defe85d0 | | id | a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37 | | image | CentOS-7-x86_64-GenericCloud (0aff2888-47f8-4133-928a-9c54414b3afb) | | key_name | stack | | name | VM1 | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | a6c78435075246f3aa5ab946b87086c5 | | properties | | | security_groups | [{u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}] | | status | ACTIVE | | updated | 2019-03-19T08:04:50Z | | user_id | 4fe574569664493bbd660abfe762a630 | +--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ [stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume1 [stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume2 [stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume3 [stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume4 [stack@undercloud (overcloudrc) ~]$ openstack server add volume VM1 volume5 [stack@undercloud (overcloudrc) ~]$ openstack server migrate --block-migration --live overcloud-ovscompute-1.localdomain --wait VM1 Complete [stack@undercloud (overcloudrc) ~]$ openstack server show VM1 +--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | OS-DCF:diskConfig | AUTO | | OS-EXT-AZ:availability_zone | ovs | | OS-EXT-SRV-ATTR:host | overcloud-ovscompute-1.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-ovscompute-1.localdomain | | OS-EXT-SRV-ATTR:instance_name | instance-000000a0 | | OS-EXT-STS:power_state | Running | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2019-03-19T08:04:50.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 
| | | accessIPv6 | | | addresses | net1=10.0.1.17, 10.0.1.8, 10.0.1.16, 10.0.1.10, 10.0.1.18, 10.0.1.19 | | config_drive | | | created | 2019-03-19T08:04:04Z | | flavor | Flavor1 (2ff09ec5-19e4-40b9-a52e-6026652c0788) | | hostId | 9f1230901ddf3fe0e1a41e1c650a784c122b791f89fdf66a40cff3d6 | | id | a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37 | | image | CentOS-7-x86_64-GenericCloud (0aff2888-47f8-4133-928a-9c54414b3afb) | | key_name | stack | | name | VM1 | | os-extended-volumes:volumes_attached | [{u'id': u'afbe0783-50b8-4036-b59a-69b94dbdb630'}, {u'id': u'27fc8950-6e98-4ba7-9366-907e8fd2a90a'}, {u'id': u'df8b33a8-6d8c-4e0e-a742-869fec4ff923'}, {u'id': u'534bb675-4d8c-4380-8bd2-4aeaedbcda40'}, {u'id': u'623a513a-2cca- | | | 47e5-9426-71a154cbe0c0'}] | | progress | 0 | | project_id | a6c78435075246f3aa5ab946b87086c5 | | properties | | | security_groups | [{u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}] | | status | ACTIVE | | updated | 2019-03-19T08:18:39Z | | user_id | 4fe574569664493bbd660abfe762a630 | +--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
NUMA affinity and CPU pinning across the migration:
# Source host
[root@overcloud-ovscompute-0 nova]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8
node 0 size: 4095 MB
node 0 free: 1273 MB
node 1 cpus: 9 10 11 12 13 14 15
node 1 size: 4096 MB
node 1 free: 2410 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
[root@overcloud-ovscompute-0 nova]# virsh list
 Id    Name                           State
----------------------------------------------------
 11    instance-000000a0              running
[root@overcloud-ovscompute-0 nova]# virsh vcpuinfo instance-000000a0
VCPU:           0
CPU:            0
State:          running
CPU time:       50.1s
CPU Affinity:   y---------------
VCPU:           1
CPU:            1
State:          running
CPU time:       26.8s
CPU Affinity:   -y--------------
[root@overcloud-ovscompute-0 nova]# virsh vcpupin instance-000000a0
VCPU: CPU Affinity
----------------------------------
   0: 0
   1: 1

# Destination host
[root@overcloud-ovscompute-1 nova]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8
node 0 size: 4095 MB
node 0 free: 1420 MB
node 1 cpus: 9 10 11 12 13 14 15
node 1 size: 4096 MB
node 1 free: 2270 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
[root@overcloud-ovscompute-1 nova]# virsh vcpuinfo instance-000000a0
VCPU:           0
CPU:            0
State:          running
CPU time:       2.9s
CPU Affinity:   y---------------
VCPU:           1
CPU:            1
State:          running
CPU time:       1.2s
CPU Affinity:   -y--------------
[root@overcloud-ovscompute-1 nova]# virsh vcpupin instance-000000a0
VCPU: CPU Affinity
----------------------------------
   0: 0
   1: 1
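The flavor definition is not shown in this article, but dedicated pinning like the one observed above is normally requested through flavor extra specs along these lines (a sketch; the flavor name is the one used in the example, and the properties are standard Nova extra specs):

# Request dedicated pCPUs and a single NUMA node for instances of this flavor;
# the NUMATopologyFilter must be enabled in nova-scheduler for this to take effect
openstack flavor set Flavor1 --property hw:cpu_policy=dedicated --property hw:numa_nodes=1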
Source host log analysis:
# Check whether shared storage is in use by creating a tmp file
Check if temp file /var/lib/nova/instances/tmpZ0Bj8s exists to indicate shared storage is being used for migration. Exists? False
# Start the live migration
Starting monitoring of live migration _live_migration /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:6566
# Poll libvirtd for the live migration status, log the progress and dynamically raise the downtime (maximum pause time)
# Because the VM's data keeps changing during the copy, the final migrated size is usually larger than data_gb
Current None elapsed 0 steps [(0, 46), (300, 47), (600, 48), (900, 51), (1200, 57), (1500, 66), (1800, 84), (2100, 117), (2400, 179), (2700, 291), (3000, 500)] update_downtime /usr/lib/python2.7/site-packages/nova/virt/libvirt/migration.py:348
Increasing downtime to 46 ms after 0 sec elapsed time
Migration running for 0 secs, memory 100% remaining; (bytes processed=0, remaining=0, total=0)
# Pause the instance at the Nova level
VM Paused (Lifecycle Event)
# The instance's data migration has completed
Migration operation has completed
Migration operation thread has finished
# Detach the instance's shared block devices
calling os-brick to detach iSCSI Volume
disconnect_volume
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-afbe0783-50b8-4036-b59a-69b94dbdb630 -p 172.17.3.18:3260 --op delete
Checking to see if SCSI volumes sdc have been removed.
SCSI volumes sdc have been removed.
# The instance is now stopped at the Nova level
VM Stopped (Lifecycle Event)
# Unplug the instance's network
Unplugging vif VIFBridge
# Delete the instance's local disk files
mv /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37 /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37_del
Deleting instance files /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37_del
Deletion of /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37_del complete
# Migration finished
Migrating instance to overcloud-ovscompute-1.localdomain finished successfully.
Live migration monitoring is all done
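The downtime step list printed above is derived from Nova's [libvirt] live-migration options; a sketch of the relevant nova.conf settings with their default values (not site-specific tuning) is:

[libvirt]
# maximum pause time (ms) allowed for the final switch-over
live_migration_downtime = 500
# number of increments used to reach that maximum
live_migration_downtime_steps = 10
# delay (seconds, scaled by guest RAM in GB) between increments
live_migration_downtime_delay = 75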
Destination host log analysis:
# Create a tmp file to check whether shared storage is in use
Creating tmpfile /var/lib/nova/instances/tmpZ0Bj8s to notify to other compute nodes that they should mount the same storage.
# Create the instance's local disk files
Creating instance directory: /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37
touch -c /var/lib/nova/instances/_base/ff34147b1062cd454ae2a8959f069e2e18691ec9
qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ff34147b1062cd454ae2a8959f069e2e18691ec9 /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk
Creating disk.info with the contents: {u'/var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk': u'qcow2'}
Checking to make sure images and backing files are present before live migration.
# Check whether the disk file can be resized
Checking if we can resize image /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk. size=10737418240
qemu-img resize /var/lib/nova/instances/a2855dfd-c6e5-4cbf-9fdf-4b083cc8ec37/disk 10737418240
# Attach the instance's shared block devices
Connecting volumes before live migration.
Calling os-brick to attach iSCSI Volume
connect_volume /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/iscsi.py:63
Trying to connect to iSCSI portal 172.17.3.18:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-afbe0783-50b8-4036-b59a-69b94dbdb630 -p 172.17.3.18:3260
Attached iSCSI volume {'path': u'/dev/sda', 'scsi_wwn': '360014052de14ef00f124a939740ba645', 'type': 'block'}
# Plug the instance's network
Plugging VIFs before live migration.
# Update the port information
Port 35f7ede8-2a78-44b6-8c65-108e6f1080aa updated with migration profile {'migrating_to': 'overcloud-ovscompute-1.localdomain'} successfully
# The instance is now running at the Nova level
VM Started (Lifecycle Event)
VM Resumed (Lifecycle Event)
As the logs show, the critical work of live migration is carried out by the hypervisor layer; Nova merely wraps the hypervisor's live migration capability and handles scheduling and management around it. In the example above, Libvirt live migration transfers the VM's local disk files and memory data to the destination host over SSH.
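For intuition, what Nova drives through the libvirt API in this block live migration is roughly equivalent to the virsh command below; this is only a sketch of the underlying mechanism (the domain name and destination host are the ones from the example, and the exact flag set Nova passes may differ):

# Live-migrate the domain peer-to-peer over SSH, incrementally copying its non-shared local disks
virsh migrate --live --p2p --copy-storage-inc --persistent --undefinesource instance-000000a0 qemu+ssh://nova_migration@overcloud-ovscompute-1.localdomain/system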
https://developers.redhat.com/blog/2015/03/24/live-migrating-qemu-kvm-virtual-machines/
http://www.javashuo.com/article/p-scpajatm-mr.html
https://docs.openstack.org/nova/pike/admin/configuring-migrations.html
https://docs.openstack.org/nova/pike/admin/live-migration-usage.html
https://blog.csdn.net/lemontree1945/article/details/79901874
https://www.ibm.com/developerworks/cn/linux/l-cn-mgrtvm1/index.html
https://blog.csdn.net/hawkerou/article/details/53482268