Block storage management article series
(1) RBD basics - Storage6
(2) iSCSI gateway management - Storage6
(3) Running virtual machines on Ceph RBD with librbd
(4) RBD Mirror disaster recovery
Ceph block devices allow physical resources to be shared and can be resized. They store data striped evenly across multiple OSDs in the Ceph cluster. Ceph block devices build on RADOS capabilities such as snapshots, replication, and consistency. Ceph's RADOS Block Device (RBD) interacts with the OSDs through a kernel module or the librbd library.
Ceph block devices deliver high performance and virtually unlimited scalability to kernel clients. They support virtualization solutions such as QEMU and cloud computing stacks that rely on libvirt, such as OpenStack. You can use the same cluster to operate the Object Gateway, CephFS, and RADOS block devices at the same time.
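The steps below drive RBD through the kernel client (rbd map), but the same operations are available programmatically through librbd. A minimal sketch of opening the cluster with the Python bindings, assuming the python3-rados and python3-rbd packages are installed and the configuration file and admin keyring from step (3) below are in place:

import rados
import rbd

# Connect using the cluster configuration and the default client.admin keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')      # the pool created in step (4) below
    try:
        print(rbd.RBD().list(ioctx))       # image names in the pool, like `rbd ls`
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

Later sketches in this article reuse this connection pattern.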
(1) Install the rbd client package
# zypper se ceph-common
# zypper in ceph-common
(2) Show help information
# rbd help <command> <subcommand>
# rbd help snap list
(3) Copy the configuration file and keyring
# scp admin:/etc/ceph/ceph.conf .
# scp admin:/etc/ceph/ceph.client.admin.keyring .
(4) Create a block device
# ceph osd pool create rbd 128 128 replicated
# rbd create test001 --size 1024 --pool rbd
# rbd ls
test001
(5) Map the block device
# rbd map test001
/dev/rbd0
(6) List mapped devices
# rbd showmapped
id pool image   snap device
0  rbd  test001 -    /dev/rbd0
(7) Format and mount the block device
# mkfs.xfs -q /dev/rbd0
# mkdir /mnt/ceph-test001
# mount /dev/rbd/rbd/test001 /mnt/ceph-test001/
Note: if an rbd command omits the -p / --pool option, it defaults to -p rbd; this rbd pool is the default one.
# df -Th
Filesystem Type  Size  Used Avail Use% Mounted on
/dev/rbd0  xfs  1014M   33M  982M   4% /mnt/ceph-test001
# rbd --image test001 info
rbd image 'test001':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 408ea3f9d1c3b
block_name_prefix: rbd_data.408ea3f9d1c3b
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Sep 23 15:47:00 2019
access_timestamp: Mon Sep 23 15:47:00 2019
modify_timestamp: Mon Sep 23 15:47:00 2019
size: the size of the image, i.e. 1024 MB = 1 GiB. 1024 MB / 256 = 4 MiB, so the image is divided into 256 objects of 4 MiB each.
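In other words, the object size follows from the order field (object size = 2^order bytes), and the object count is the image size divided by the object size. A quick check of the arithmetic in Python:

order = 22
size = 1 * 1024**3                 # 1 GiB image, as reported by rbd info
obj_size = 2 ** order              # 2^22 bytes = 4 MiB
num_objects = size // obj_size     # 1 GiB / 4 MiB = 256 objects
print(obj_size, num_objects)       # 4194304 256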
(8) Set up automatic mapping and mounting
# vim /etc/ceph/rbdmap
rbd/test001 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
# systemctl start rbdmap.service
# systemctl enable rbdmap.service
# systemctl status rbdmap.service
# vim /etc/fstab
/dev/rbd/rbd/test001 /mnt/ceph-test001 xfs defaults,noatime,_netdev 0 0
# mount -a
(1) Check that the original size of rbd0 is 1024 MB
# df -Th
Filesystem Type  Size  Used Avail Use% Mounted on
/dev/rbd0  xfs  1014M   33M  982M   4% /mnt/ceph-test001
# rbd --image test001 info
rbd image 'test001':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 408ea3f9d1c3b
block_name_prefix: rbd_data.408ea3f9d1c3b
format: 2
features: layering
op_features:
flags:
(2) Grow the image online
# rbd resize rbd/test001 --size 4096
Resizing image: 100% complete...done.
# rbd --image test001 info
rbd image 'test001':
size 4 GiB in 1024 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 408ea3f9d1c3b
block_name_prefix: rbd_data.408ea3f9d1c3b
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Sep 23 15:47:00 2019
access_timestamp: Mon Sep 23 15:47:00 2019
modify_timestamp: Mon Sep 23 15:47:00 2019
(3) Grow the file system
# xfs_growfs /mnt/ceph-test001/
# df -Th
Filesystem Type  Size  Used Avail Use% Mounted on
/dev/rbd0  xfs   4.0G   34M  4.0G   1% /mnt/ceph-test001
(4) Shrink the image online (the XFS file system does not support shrinking)
# rbd -p rbd resize test001 --size 1024 --allow-shrink
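The same online resize can also be driven through the librbd Python bindings instead of the CLI; a minimal sketch (sizes are given in bytes, and the file system on top still has to be grown separately with xfs_growfs as shown above):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
with rbd.Image(ioctx, 'test001') as image:
    image.resize(4 * 1024**3)          # grow to 4 GiB, like `rbd resize --size 4096`
ioctx.close()
cluster.shutdown()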
1) Create a file on the block device
# echo "Ceph This is snapshot test" > /mnt/ceph-test001/snapshot_test_file
# cat /mnt/ceph-test001/snapshot_test_file
Ceph This is snapshot test
2) Create a snapshot
Syntax: rbd snap create <pool-name>/<image-name>@<snap-name>
# rbd snap create rbd/test001@test001_snap
3) List the snapshot
# rbd snap ls rbd/test001
SNAPID NAME         SIZE TIMESTAMP
     4 test001_snap 1GiB Mon Feb 11 11:12:40 2019
4) Delete the file
# rm -rf /mnt/ceph-test001/snapshot_test_file
5) Restore from the snapshot
Syntax: rbd snap rollback <pool-name>/<image-name>@<snap-name>
# rbd snap rollback rbd/test001@test001_snap
# umount /mnt/ceph-test001
# mount -a
# ll /mnt/ceph-test001/
total 4
-rw-r--r-- 1 root root 33 Jul 16 09:19 snapshot_test_file
6) Purge snapshots (deletes all snapshots of the image)
Syntax: rbd --pool {pool-name} snap purge {image-name}
rbd snap purge {pool-name}/{image-name}
# rbd snap purge rbd/test001
7) Delete a snapshot (removes only the specified snapshot)
Syntax: rbd snap rm <pool-name>/<image-name>@<snap-name>
# rbd snap rm rbd/test001@test001_snap
# rbd snap ls rbd/test001        # listing again shows the snapshot is gone
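The snapshot operations above are also exposed by the librbd Python bindings; a minimal sketch of the same create / list / rollback / delete cycle (a rollback should only be run while the image is not actively mounted):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
with rbd.Image(ioctx, 'test001') as image:
    image.create_snap('test001_snap')                  # rbd snap create
    print([s['name'] for s in image.list_snaps()])     # rbd snap ls
    image.rollback_to_snap('test001_snap')             # rbd snap rollback
    image.remove_snap('test001_snap')                  # rbd snap rm
ioctx.close()
cluster.shutdown()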
(1) Cloning requires the RBD image to be format 2
# rbd info test001
rbd image 'test001':
size 4 GiB in 1024 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 408ea3f9d1c3b
block_name_prefix: rbd_data.408ea3f9d1c3b
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Sep 23 15:47:00 2019
(2) SUSE defaults to format 2; if not, add the --image-format 2 option when creating the image
# rbd create test001 --size 1024 --image-format 2
# rbd create test002 --size 1024 --image-format 1
# rbd info test002
rbd image 'test002':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.b3dd.74b0dc51
format: 1
(3) To create a COW clone, first protect the snapshot
A clone accesses its parent snapshot. If a user accidentally deletes the parent snapshot, all clones break. To prevent data loss, you must protect the snapshot before you can clone it.
# rbd snap create rbd/test001@test001_snap        # create the snapshot
# rbd snap protect rbd/test001@test001_snap       # protect it
# rbd -p rbd ls -l
NAME                 SIZE  PARENT FMT PROT LOCK
test001              4 GiB          2
test001@test001_snap 4 GiB          2 yes        <=== protected
test002              4 GiB          2
(4) Clone the snapshot
Syntax: rbd clone {pool-name}/{parent-image}@{snap-name} {pool-name}/{child-image-name}
# rbd clone rbd/test001@test001_snap rbd/test001_snap_clone
Note: a snapshot can be cloned into an image in a different pool. For example, you can maintain read-only images and snapshots as templates in one pool and keep writable clones in another.
# ceph osd pool create rbd_clone 128 128 replicated
# rbd clone rbd/test001@test001_snap rbd_clone/test001_snap_rbd_clone
(5) Check the new image
# rbd ls
test001
test001_snap_clone
test002
# rbd ls -p rbd_clone
test001_snap_rbd_clone
# rbd info rbd/test001_snap_clone
rbd image 'test001_snap_clone':
size 4 GiB in 1024 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 4153d12df2e2e
block_name_prefix: rbd_data.4153d12df2e2e
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Sep 23 20:23:01 2019
access_timestamp: Mon Sep 23 20:23:01 2019
modify_timestamp: Mon Sep 23 20:23:01 2019
parent: rbd/test001@test001_snap
overlap: 4 GiB
# rbd children rbd/test001@test001_snap
rbd/test001_snap_clone
rbd_clone/test001_snap_rbd_clone
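The protect-then-clone sequence can also be expressed with the librbd Python bindings; a minimal sketch that clones into the same pool (to clone into another pool such as rbd_clone, open a second ioctx for it and pass it as the child ioctx):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

with rbd.Image(ioctx, 'test001') as parent:
    if not parent.is_protected_snap('test001_snap'):
        parent.protect_snap('test001_snap')            # rbd snap protect

rbd.RBD().clone(ioctx, 'test001', 'test001_snap',      # parent pool / image / snapshot
                ioctx, 'test001_snap_clone',           # child pool / image
                features=rbd.RBD_FEATURE_LAYERING)     # layering is required for clones

ioctx.close()
cluster.shutdown()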
(6) Flatten the cloned image
Flattening copies the data so the clone no longer references the parent snapshot: it becomes a full clone instead of a linked clone, which improves performance.
Syntax: rbd --pool {pool-name} flatten --image {image-name}
# rbd flatten rbd/test001_snap_clone
Image flatten: 100% complete...done.
# rbd flatten rbd_clone/test001_snap_rbd_clone
# rbd info rbd/test001_snap_clone
rbd image 'test001_snap_clone':
size 4 GiB in 1024 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 4153d12df2e2e
block_name_prefix: rbd_data.4153d12df2e2e
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Sep 23 20:23:01 2019
access_timestamp: Mon Sep 23 20:23:01 2019
modify_timestamp: Mon Sep 23 20:23:01 2019
(7) Unprotect the snapshot
# rbd info rbd/test001@test001_snap
rbd image 'test001':
size 4 GiB in 1024 objects
order 22 (4 MiB objects)
snapshot_count: 1
......
protected: True
# rbd snap unprotect rbd/test001@test001_snap        # remove the protection
# rbd info rbd/test001@test001_snap
rbd image 'test001':
size 4 GiB in 1024 objects
order 22 (4 MiB objects)
......
protected: False
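Flattening and unprotecting map directly onto the librbd Python bindings as well; a minimal sketch (unprotecting only succeeds once no clone still depends on the snapshot):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

with rbd.Image(ioctx, 'test001_snap_clone') as clone:
    clone.flatten()                                    # rbd flatten

with rbd.Image(ioctx, 'test001') as parent:
    parent.unprotect_snap('test001_snap')              # rbd snap unprotect

ioctx.close()
cluster.shutdown()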
(1) Export an RBD image
Syntax: rbd export [image-name] [dest-path]
# rbd export test001 /tmp/test001_rbd_image
Exporting image: 100% complete...done
(2) Import an RBD image
Syntax: rbd import [path] [dest-image]
# rbd import /tmp/test001_rbd_image test002
Importing image: 100% complete...done.
# rbd ls
test001
test002
# rbd info test002
rbd image 'test002':
size 4 GiB in 1024 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 40b02f91b3a9d
block_name_prefix: rbd_data.40b02f91b3a9d
format: 2
features: layering
op_features:
flags:
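There is no single librbd call that corresponds to rbd export / rbd import, but the effect can be approximated by streaming the image with Image.read() and Image.write(). A simplified sketch; unlike the CLI it copies zeroed regions too, so it does not preserve sparseness:

import rados
import rbd

CHUNK = 4 * 1024 * 1024                                # copy in 4 MiB steps

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

# "Export": stream the image contents into a local file.
with rbd.Image(ioctx, 'test001') as src, open('/tmp/test001_rbd_image', 'wb') as out:
    size = src.size()
    for off in range(0, size, CHUNK):
        out.write(src.read(off, min(CHUNK, size - off)))

# "Import": create a new image of the same size and write the file back
# (fails if test002 already exists).
rbd.RBD().create(ioctx, 'test002', size)
with rbd.Image(ioctx, 'test002') as dst, open('/tmp/test001_rbd_image', 'rb') as fin:
    off = 0
    while True:
        buf = fin.read(CHUNK)
        if not buf:
            break
        dst.write(buf, off)
        off += len(buf)

ioctx.close()
cluster.shutdown()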
(1) Use the rbd status command to find out which client is using an image
# rbd -p rbd ls
test001
test002
# rbd status rbd/test001
Watchers:
    watcher=192.168.2.39:0/3075193743 client.264380 cookie=18446462598732840961
The output shows that this image (or instance) is in use by the client at IP address 192.168.2.39.
# rbd info test001
rbd image 'test001':
size 4 GiB in 1024 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 408ea3f9d1c3b
block_name_prefix: rbd_data.408ea3f9d1c3b
format: 2
features: layering
# rados -p rbd listwatchers rbd_header.408ea3f9d1c3b
watcher=192.168.2.39:0/3075193743 client.264380 cookie=18446462598732840961
# for each in `rados -p rbd ls | grep rbd_header`; do echo $each: && rados -p \
rbd listwatchers $each && echo -e '\n';done
rbd_header.408ea3f9d1c3b:
watcher=192.168.2.39:0/3075193743 client.264380 cookie=18446462598732840961
1) Comment out the mapping entry in /etc/ceph/rbdmap
# vim /etc/ceph/rbdmap
rbd/test001 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
2) Comment out the mount entry in /etc/fstab
# vim /etc/fstab
/dev/rbd/rbd/test001 /mnt/ceph-test001 xfs defaults,noatime,_netdev 0 0
3) Show the mapping
# rbd showmapped
id pool image   snap device
0  rbd  test001 -    /dev/rbd0
4) Unmap the device and delete the test001 image
# rbd unmap /dev/rbd0
# rbd rm rbd/test001
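For completeness, the final image removal can also be done through the librbd Python bindings once the device is unmapped and all snapshots of the image have been deleted; a minimal sketch:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
rbd.RBD().remove(ioctx, 'test001')                     # rbd rm rbd/test001
ioctx.close()
cluster.shutdown()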