08. Storage Cinder → 5. Scenario Study → 12. Ceph Volume Provider → 5. Detach Volume

Background:

vol-1:

c1-1:


Description / Details
  1. Attach a volume of the ceph volume type to instance c1.
  1. Here we focus on how nova-compute detaches vol-1 from c1. Check the nova-compute log:
    1. Fetch the volume via a curl request.
Jun 27 19:41:54 controller nova-compute[7060]: 
INFO nova.compute.manager
[None req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7 admin admin]
[instance: f092dfe4-365b-4dd6-8867-76a311399782]
Detaching volume 823dfaa9-a41c-4df7-a165-30db20b5b1c7
Jun 27 19:41:54 controller nova-compute[7060]: 
DEBUG cinderclient.v3.client
[None req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7 admin admin]
REQ: curl -g -i -X GET http://172.16.1.17/volume/v3/c1e57b427f934cadbe32750b1b8ccfd8/volumes
/823dfaa9-a41c-4df7-a165-30db20b5b1c7
-H "OpenStack-API-Version: volume 3.48" -H
"User-Agent: python-cinderclient" -H "X-OpenStack-Request-ID:
req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7" -H "Accept: application/json"
-H "X-Auth-Token: {SHA1}47e5ca14354428cab7b8095abe1d6420c8df9553"
{{(pid=7060) _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:372}}
Jun 27 19:41:54 controller nova-compute[7060]: 
RESP BODY: {"volume": {"migration_status": null, "provider_id": null, "attachments":
[{"server_id": "f092dfe4-365b-4dd6-8867-76a311399782", "attachment_id":
"b030252b-a04d-4cc7-b682-b3a38c1e8937",
"attached_at": "2019-06-27T07:34:09.000000", "host_name": "controller", "volume_id":
"823dfaa9-a41c-4df7-a165-30db20b5b1c7",
"device": "/dev/vdb", "id": "823dfaa9-a41c-4df7-a165-30db20b5b1c7"}],
"links": [{"href": "http://172.16.1.17/volume/v3/c1e57b427f934cadbe32750b1b8ccfd8/volumes
/823dfaa9-a41c-4df7-a165-30db20b5b1c7", "rel": "self"}, {"href": "http://172.16.1.17/volume/
c1e57b427f934cadbe32750b1b8ccfd8/volumes/823dfaa9-a41c-4df7-a165-30db20b5b1c7", "rel": "bookmark"}],
"availability_zone": "nova", "os-vol-host-attr:host": "controller@ceph#ceph", "encrypted": false,
"updated_at": "2019-06-27T11:41:54.000000", "replication_status": null, "snapshot_id": null, "id":
"823dfaa9-a41c-4df7-a165-30db20b5b1c7", "size": 1, "user_id": "7a2243af10174653931f76c748209551",
"os-vol-tenant-attr:tenant_id": "c1e57b427f934cadbe32750b1b8ccfd8", "os-vol-mig-status-attr:migstat":
null, "metadata": {"attached_mode": "rw"}, "status": "detaching", "description": "", "multiattach":
false, "service_uuid": "a01f1424-3912-4b36-88e0-7fd8e1d5310a", "source_volid": null, "consistencygroup_id":
null, "os-vol-mig-status-attr:name_id": null, "name": "vol-1", "bootable": "false", "created_at":
"2019-06-27T07:31:09.000000", "volume_type": "ceph", "group_id": null, "shared_targets": false
  1. Raise DeviceNotFound if the device isn't found during detach
Jun 27 19:41:54 controller nova-compute[7060]: 
INFO nova.virt.block_device
[None req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7 admin admin]
[instance: f092dfe4-365b-4dd6-8867-76a311399782]
Attempting to driver detach volume 823dfaa9-a41c-4df7-a165-30db20b5b1c7 from mountpoint /dev/vdb
Jun 27 19:41:54 controller nova-compute[7060]: 
DEBUG nova.virt.libvirt.guest
[None req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7 admin admin]
Attempting initial detach for device vdb
{{(pid=7060) detach_device_with_retry /opt/stack/nova/nova/virt/libvirt/guest.py:430}}
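Live detach is asynchronous in libvirt, which is why nova's detach_device_with_retry keeps polling the domain and re-issuing the detach until the device is gone. A simplified, self-contained sketch of that retry idea (not nova's actual implementation; the function and parameter names here are illustrative):

```python
import time

class DeviceNotFound(Exception):
    """Raised when the device never disappears from the domain."""

def detach_with_retry(get_device, do_detach, attempts=8, sleep=0.01):
    """Illustrative sketch of the idea behind nova's
    detach_device_with_retry: re-request the detach and poll the
    domain until the device vanishes, or give up after `attempts`."""
    for _ in range(attempts):
        if get_device() is None:
            return True          # device gone: detach completed
        do_detach()              # (re-)issue the asynchronous detach
        time.sleep(sleep)
    raise DeviceNotFound("device still present after retries")

# Demo: a fake guest whose device disappears after two polls.
_polls = {"n": 0}
def _fake_get_device():
    _polls["n"] += 1
    return None if _polls["n"] > 2 else "vdb"

assert detach_with_retry(_fake_get_device, lambda: None, sleep=0)
```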
  1. Detach the device XML; the volume is successfully detached.
Jun 27 19:41:54 controller nova-compute[7060]: DEBUG nova.virt.libvirt.guest 
[None req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7 admin admin]
detach device xml:
<disk type="network" device="disk">
<driver name="qemu" type="raw" cache="writeback" discard="unmap"/>
<source protocol="rbd" name="volumes/volume-823dfaa9-a41c-4df7-a165-30db20b5b1c7">
<host name="172.16.1.17" port="6789"/>
</source>
<target bus="virtio" dev="vdb"/>
<serial>823dfaa9-a41c-4df7-a165-30db20b5b1c7</serial>
<address type="pci" domain="0x0000" bus="0x00" slot="0x06" function="0x0"/>
</disk>
{{(pid=7060) detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:481}}
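The `<disk>` XML logged above is what gets passed to libvirt's device-detach call. A sketch of building an equivalent rbd disk element with the standard library (nova itself renders this from its own config objects, not like this):

```python
import xml.etree.ElementTree as ET

def rbd_disk_xml(volume_id, mon_host, mon_port="6789", dev="vdb"):
    """Build a <disk> element equivalent to the one nova logs above.
    Illustrative sketch; attribute values mirror the logged XML."""
    disk = ET.Element("disk", type="network", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="raw",
                  cache="writeback", discard="unmap")
    # Ceph RBD volumes are addressed as volumes/volume-<cinder id>.
    source = ET.SubElement(disk, "source", protocol="rbd",
                           name="volumes/volume-%s" % volume_id)
    ET.SubElement(source, "host", name=mon_host, port=mon_port)
    ET.SubElement(disk, "target", bus="virtio", dev=dev)
    # Nova stores the cinder volume id in <serial>.
    ET.SubElement(disk, "serial").text = volume_id
    return ET.tostring(disk, encoding="unicode")

xml = rbd_disk_xml("823dfaa9-a41c-4df7-a165-30db20b5b1c7", "172.16.1.17")
```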
Jun 27 19:41:54 controller nova-compute[7060]: 
DEBUG nova.virt.libvirt.guest
[None req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7 admin admin]
Successfully detached device vdb from guest. Persistent? 1. Live? True
{{(pid=7060) _try_detach_device /opt/stack/nova/nova/virt/libvirt/guest.py:403}}
Since the detach was applied to both the live and persistent domains, the vdb disk is also gone from the persistent domain XML:

root@controller:~# vim /etc/libvirt/qemu/instance-00000001.xml
  1. Delete the attachment via a curl request.
Jun 27 19:41:54 controller nova-compute[7060]: 
DEBUG cinderclient.v3.client
[None req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7 admin admin]
REQ: curl -g -i -X DELETE http://172.16.1.17/volume/v3/c1e57b427f934cadbe32750b1b8ccfd8/attachments
/b030252b-a04d-4cc7-b682-b3a38c1e8937

-H "OpenStack-API-Version: volume 3.44" -H "User-Agent: python-cinderclient" -H "X-OpenStack-Request-ID:
req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7"
-H "Accept: application/json" -H "X-Auth-Token: {SHA1}47e5ca14354428cab7b8095abe1d6420c8df9553"
{{(pid=7060) _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py:372}}
Jun 27 19:41:55 controller nova-compute[7060]: 
RESP BODY: {"attachments": []}
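The DELETE above targets the attachment_id, not the volume id; the attachments API is a microversioned addition to Cinder (3.27+), which is why the OpenStack-API-Version header is sent. A sketch that builds (but does not send) an equivalent request with the standard library; the token is a placeholder:

```python
import urllib.request

base = "http://172.16.1.17/volume/v3/c1e57b427f934cadbe32750b1b8ccfd8"
attachment_id = "b030252b-a04d-4cc7-b682-b3a38c1e8937"

# Equivalent of the logged curl -X DELETE .../attachments/<id>; built
# only for inspection, never sent anywhere in this sketch.
req = urllib.request.Request(
    url="%s/attachments/%s" % (base, attachment_id),
    method="DELETE",
    headers={
        "OpenStack-API-Version": "volume 3.44",
        "Accept": "application/json",
        "X-Auth-Token": "<token>",  # placeholder, not a real token
    },
)
print(req.get_method())  # DELETE
```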
  1. Check the cinder-volume log:
    1. Delete the attachment.
Jun 27 19:41:55 controller cinder-volume[9796]: 
INFO cinder.volume.manager
[req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7 req-04307c40-9d7e-417a-8325-c63efc6ee332 admin None]
Terminate volume connection completed successfully.
Jun 27 19:41:55 controller cinder-volume[9796]: 
DEBUG cinder.volume.manager
[req-6073abe1-b6a6-4f69-b940-8e69d66a6ad7 req-04307c40-9d7e-417a-8325-c63efc6ee332 admin None]
Deleting attachment b030252b-a04d-4cc7-b682-b3a38c1e8937.
{{(pid=9947) _do_attachment_delete /opt/stack/cinder/cinder/volume/manager.py:4543}}
  1. At this point the detach volume operation is complete, and the GUI will also show the volume as detached.