Ceph Block Device Operations

 

Using a Ceph block device involves the following three steps (a condensed command sketch follows the list):

1. Create a Block Device image in a pool on the Ceph cluster.

2. On the Ceph client, map an RBD device to the Block Device image in the Ceph cluster.

3. From the Ceph client's user space, mount the RBD device.

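Taken together, and using the testpool/bar example that runs through this article, the whole workflow condenses to the following sketch (names, sizes, and paths are examples, not requirements):

# ceph osd pool create testpool 512
# rbd create --size 1024 testpool/bar
# sudo rbd map testpool/bar --id admin --keyring /etc/ceph/ceph.client.admin.keyring
# sudo mkfs.ext4 -m0 /dev/rbd/testpool/bar
# sudo mkdir /mnt/ceph-block-device
# sudo mount /dev/rbd/testpool/bar /mnt/ceph-block-device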

 

Step 1: Create a Block Device Image

First, create a pool. If you do not want to create a new pool, you can use the default pool, rbd.

Command: ceph osd pool create <pool_name> <pg_num>
Parameters: pool_name : name of the pool to create
            pg_num    : number of Placement Groups

    # ceph osd pool create testpool 512
    pool 'testpool' created
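To confirm that the pool exists and that its pg_num took effect, you can query it back (a quick sanity check; the output shown assumes the testpool created above):

# ceph osd lspools
# ceph osd pool get testpool pg_num
pg_num: 512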

 

Next, create a Block Device image in the Ceph cluster. (To review the available rbd commands, run "man rbd".)

Command: rbd create --size {MegaBytes} {pool-name}/{image-name}

For example, create an image named "bar" in the pool "testpool", 1024 MB in size:

# rbd create --size 1024 testpool/bar

List the Block Device images:

# rbd ls testpool
bar

You can also view detailed information about a single Block Device image:

# rbd info testpool/bar
rbd image 'bar':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.5e3b248a65f6
        format: 2
        features: layering
        flags:
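If the image turns out to be too small, it can be grown in place with rbd resize (an aside: shrinking also works, but additionally requires --allow-shrink, and data beyond the new size is lost). Output below is abbreviated:

# rbd resize --size 2048 testpool/bar
# rbd info testpool/bar | grep size
size 2048 MB in 512 objects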

 

Step 2: Map the RBD Device to the Ceph Cluster's Block Device Image on the Ceph Client

Command: sudo rbd map rbd/myimage --id admin --keyring /path/to/keyring

For example:

# sudo rbd map testpool/bar --id admin --keyring /etc/ceph/ceph.client.admin.keyring
/dev/rbd0

 

View the Block Devices that are already mapped:

# rbd showmapped
id  pool      image  snap  device
0   testpool  bar    -     /dev/rbd0
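At this point /dev/rbd0 behaves like any other kernel block device; lsblk, for instance, should list it (exact output varies by system):

# lsblk /dev/rbd0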

Step 3: Mount the RBD Device from the Ceph Client's User Space

First, create a filesystem on the block device on the client node.

# sudo mkfs.ext4 -m0 /dev/rbd/testpool/bar
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done                           
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
65536 inodes, 262144 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
         32768, 98304, 163840, 229376

Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Next, mount the filesystem:

# sudo mkdir /mnt/ceph-block-device
# sudo mount /dev/rbd/testpool/bar /mnt/ceph-block-device

 

View the mount information:

# mount

...
/dev/rbd0 on /mnt/ceph-block-device type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
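A mapping made with rbd map does not survive a reboot. On systemd-based systems, Ceph ships an rbdmap service that re-maps images listed in /etc/ceph/rbdmap at boot; a sketch of its use, assuming the admin keyring path from Step 2 (check your distribution's rbdmap documentation before relying on this):

# cat /etc/ceph/rbdmap
testpool/bar id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
# sudo systemctl enable rbdmap.service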

 

* Other Ceph Block Device Operations

To create a new rbd image that is 100 GB:

rbd create mypool/myimage --size 102400

To use a non-default object size (8 MB):

rbd create mypool/myimage --size 102400 --object-size 8M

To delete an rbd image (be careful!):

rbd rm mypool/myimage

To create a new snapshot:

rbd snap create mypool/myimage@mysnap
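
To list the snapshots of an image (handy for verifying the snapshot just created):

rbd snap ls mypool/myimage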

To create a copy-on-write clone of a protected snapshot:

rbd clone mypool/myimage@mysnap otherpool/cloneimage
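
Note that cloning only works against a protected snapshot, so protect it first (the full clone walkthrough further below repeats this):

rbd snap protect mypool/myimage@mysnap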

To see which clones of a snapshot exist:

rbd children mypool/myimage@mysnap

To delete a snapshot:

rbd snap rm mypool/myimage@mysnap
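
A protected snapshot must be unprotected before it can be removed, and unprotection fails while clones of it still exist:

rbd snap unprotect mypool/myimage@mysnap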

To map an image via the kernel with cephx enabled:

rbd map mypool/myimage --id admin --keyfile secretfile

To map an image via the kernel using a cluster name other than the default ceph:

rbd map mypool/myimage --cluster <cluster-name>

To unmap an image:

rbd unmap /dev/rbd0

To create an image and a clone from it:

rbd import --image-format 2 image mypool/parent
rbd snap create mypool/parent@snap
rbd snap protect mypool/parent@snap
rbd clone mypool/parent@snap otherpool/child
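
A clone keeps a copy-on-write dependency on its parent snapshot. To sever that link, copying all remaining parent data into the child (after which the parent snapshot can be unprotected and removed):

rbd flatten otherpool/child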

To create an image with a smaller stripe_unit (to better distribute small writes in some workloads):

rbd create mypool/myimage --size 102400 --stripe-unit 65536B --stripe-count 16

To change an image from one image format to another, export it and then import it as the desired image format:

rbd export mypool/myimage@snap /tmp/img
rbd import --image-format 2 /tmp/img mypool/myimage2

To lock an image for exclusive use:

rbd lock add mypool/myimage mylockid

To release a lock:

rbd lock remove mypool/myimage mylockid client.2485
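
The locker ID passed to lock remove (client.2485 above) can be read off the lock listing:

rbd lock list mypool/myimage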