(1) Prerequisites:
a) The cluster environment has been set up successfully
b) The cluster status is active+clean.
c) Node configuration: admnode is also used as the client-node
Hostname    Role                        Disk
================================================================
a) admnode  deploy-node, client-node
b) node1    mon1, osd.2, mds            Disk(/dev/sdb capacity:10G)
c) node2    osd.0, mon2                 Disk(/dev/sdb capacity:10G)
d) node3    osd.1, mon3                 Disk(/dev/sdb capacity:10G)
(2) Procedure http://docs.ceph.com/docs/master/start/quick-rbd/
a) Create a block device image on the client-node; the default rbd pool is used here (list pools with ceph osd lspools)
# ceph osd lspools
0 rbd,
# rbd create --size 1024 blockDevImg
# rbd ls rbd
blockDevImg
# rbd info blockDevImg
rbd image 'blockDevImg':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.1041.74b0dc51
format: 1
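The numbers in this output are consistent with each other: order 22 means each RADOS object is 2^22 bytes (4 MB), so the 1024 MB image is striped over 1024 / 4 = 256 objects. A quick check of that arithmetic in plain shell (no cluster needed):

```shell
size_mb=1024                                 # image size from `rbd create --size 1024`
order=22                                     # object size exponent from `rbd info`
obj_mb=$(( (1 << order) / (1024 * 1024) ))   # 2^22 bytes = 4 MB per object
objects=$(( size_mb / obj_mb ))              # 1024 MB / 4 MB = 256 objects
echo "${obj_mb} MB per object, ${objects} objects"
```

This prints "4 MB per object, 256 objects", matching the `rbd info` output above.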
b) On the client-node, map the image to a block device.
# sudo rbd map blockDevImg --name client.admin
/dev/rbd0
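Besides the raw /dev/rbd0 node, udev also creates a stable symlink named after the pool and image, which is the path the mkfs and mount steps use. A sketch of the naming convention (illustrative only, no cluster needed):

```shell
pool=rbd                           # pool chosen in the create step (default rbd pool)
image=blockDevImg                  # image name chosen above
dev="/dev/rbd/${pool}/${image}"    # udev symlink pointing at /dev/rbd0
echo "$dev"
```

On the client-node, `rbd showmapped` confirms which image backs each /dev/rbdN device.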
c) Create a filesystem on the block device on the client-node.
# sudo mkfs.ext4 -m0 /dev/rbd/rbd/blockDevImg
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
65536 inodes, 262144 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
d) Mount the filesystem
# sudo mkdir /mnt/ceph-block-device
# sudo mount /dev/rbd/rbd/blockDevImg /mnt/ceph-block-device
# cd /mnt/ceph-block-device
# mount
...
/dev/rbd0 on /mnt/ceph-block-device type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
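To undo the steps above, reverse the order: unmount the filesystem, unmap the device, then delete the image. A hedged sketch; `run` is a helper added here so the sequence can be previewed with DRY_RUN=1 instead of executed:

```shell
# Print commands when DRY_RUN=1, execute them otherwise.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi
}

# Tear down in the reverse order of the setup steps.
teardown() {
    run sudo umount /mnt/ceph-block-device   # undo step d) first
    run sudo rbd unmap /dev/rbd0             # undo step b): release the kernel mapping
    run rbd rm blockDevImg                   # undo step a): delete the image from the rbd pool
}
```

`DRY_RUN=1 teardown` only prints the three commands; run plain `teardown` on the client-node to actually clean up.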