Ceph Block Storage

Any ordinary Linux host can act as a Ceph client. The client talks to the Ceph storage cluster over the network to store and retrieve user data. RBD support has been part of the mainline Linux kernel since version 2.6.34.

 

=================== On the admin node 192.168.20.181 ================================

su - ceph-admin

cd my-cluster

Create the Ceph block client user (client.rbd) and its authentication keyring:

ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' | tee ./ceph.client.rbd.keyring
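
To double-check that the user was created with the intended capabilities:

ceph auth get client.rbd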

 

 

 

 

 

=================== On the client 192.168.20.184 ================================

Edit the hosts file:

cat /etc/hosts

192.168.20.181 c720181

192.168.20.182 c720182

192.168.20.183 c720183

192.168.20.184 c720184

 

Create a directory on the client for the keyring and configuration file:

mkdir -p /etc/ceph

 

 

=================== On the admin node 192.168.20.181 ================================

Copy the keyring and the configuration file to the client:

scp ceph.client.rbd.keyring /etc/ceph/ceph.conf root@192.168.20.184:/etc/ceph/

=================== On the client 192.168.20.184 ================================

 

Check that the client meets the requirements for using RBD block devices:

[root@c720184 ~]# uname -r

3.10.0-957.el7.x86_64                 # any kernel newer than 2.6.34 will do

[root@c720184 ~]# modprobe rbd

[root@c720184 ~]# echo $?

0                                            # an exit status of 0 means the rbd module loaded successfully
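
The loaded module can also be confirmed with lsmod:

lsmod | grep rbd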

 

Install the Ceph client:

yum -y install ceph    # if ceph.repo and epel.repo are not configured yet, set them up first, otherwise the installation will fail
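
If the repositories are missing, a minimal /etc/yum.repos.d/ceph.repo along the following lines works for a Luminous client on CentOS 7 (the baseurl below assumes the upstream download.ceph.com mirror; point it at a local mirror if you have one, and EPEL itself can usually be installed with yum -y install epel-release):

[ceph]
name=Ceph packages
baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc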

 

Connect to the Ceph cluster:

ceph -s --name client.rbd


(1) Create a storage pool on the cluster

By default, block device images are created in the rbd pool, but after a ceph-deploy installation (Ceph Luminous) that pool does not exist yet and has to be created first.

 

# Create the pool

=================== On the admin node 192.168.20.181 ================================

[ceph-admin@c720181 my-cluster]$ ceph osd lspools   # list the cluster's existing storage pools

[ceph-admin@c720181 my-cluster]$ ceph osd pool create rbd 128   # 128 is the number of placement groups (PGs)

pool 'rbd' created

[ceph-admin@c720181 my-cluster]$ ceph osd lspools

1 rbd,
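
A common rule of thumb for choosing the PG count is (number of OSDs x 100) / replica size, rounded up to the nearest power of two; with 3 OSDs and a replica size of 3, for example, 3 x 100 / 3 = 100, which rounds up to 128.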

 

 

(2) Create a block device from the client

[root@c720184 ~]# rbd create rbd1 --size 10240 --name client.rbd

[root@c720184 ~]# rbd ls -p rbd --name client.rbd

rbd1

[root@c720184 ~]# rbd list --name client.rbd

rbd1

[root@c720184 ~]# rbd --image rbd1 info --name client.rbd

rbd image 'rbd1':

size 10GiB in 2560 objects

order 22 (4MiB objects)

block_name_prefix: rbd_data.106b6b8b4567

format: 2

features: layering, exclusive-lock, object-map, fast-diff, deep-flatten

flags:

create_timestamp: Sun Aug 18 18:56:02 2019
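
The numbers above are consistent: order 22 means 2^22-byte (4 MiB) objects, and 2560 objects x 4 MiB = 10240 MiB = 10 GiB, matching the --size 10240 passed to rbd create (sizes are given in MB).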

 

 

 

(3) Map the block device on the client

[root@c720184 ~]# rbd map --image rbd1 --name client.rbd

Mapping it directly fails, because the image was created with features that this kernel version does not support:

features: layering, exclusive-lock, object-map, fast-diff, deep-flatten

There are three ways to work around this:

a. Dynamically disable the unsupported features (recommended):

rbd feature disable rbd1 exclusive-lock object-map deep-flatten fast-diff --name client.rbd

 

b. Enable only the layering feature when creating the RBD image:

rbd create rbd1 --size 10240 --image-feature layering --name client.rbd

 

c. Disable the features by default in the Ceph configuration file:

rbd_default_features = 1
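
For option c, a minimal sketch of the client's /etc/ceph/ceph.conf (the value 1 enables only the layering feature; the setting must be in place before the image is created):

[global]
rbd_default_features = 1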

 

Here option a (dynamic disable) is used. Note that --name client.rbd is required, otherwise the command fails:

rbd feature disable rbd1 exclusive-lock object-map deep-flatten fast-diff --name client.rbd

 

Map the block device again:

[root@c720184 ~]# rbd map --image rbd1 --name client.rbd
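
On success, rbd map prints the kernel device it created, /dev/rbd0 in this case; currently mapped images can be listed with rbd showmapped --name client.rbd.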

 

 

 

 

(4) Create a filesystem and mount it

[root@c720184 ~]# fdisk -l /dev/rbd0

 

Disk /dev/rbd0: 10.7 GB, 10737418240 bytes, 20971520 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

 

[root@c720184 ~]# mkfs.xfs /dev/rbd0

meta-data=/dev/rbd0              isize=512    agcount=16, agsize=163840 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=1        finobt=0, sparse=0

data     =                       bsize=4096   blocks=2621440, imaxpct=25

         =                       sunit=1024   swidth=1024 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=1

log      =internal log           bsize=4096   blocks=2560, version=2

         =                       sectsz=512   sunit=8 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

 

[root@c720184 ~]# mkdir /mnt/ceph-disk1

[root@c720184 ~]# mount /dev/rbd0 /mnt/ceph-disk1

[root@c720184 ~]# df -h

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/centos-root   17G  1.5G   16G   9% /

devtmpfs                 908M     0  908M   0% /dev

tmpfs                    920M     0  920M   0% /dev/shm

tmpfs                    920M  8.5M  911M   1% /run

tmpfs                    920M     0  920M   0% /sys/fs/cgroup

/dev/sda1               1014M  145M  870M  15% /boot

tmpfs                    184M     0  184M   0% /run/user/0

/dev/rbd0                 10G   33M   10G   1% /mnt/ceph-disk1

 

(5) Write some test data

[root@c720184 ~]# dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M

100+0 records in

100+0 records out

104857600 bytes (105 MB) copied, 1.04015 s, 101 MB/s

[root@c720184 ~]# ll -h /mnt/ceph-disk1/

total 100M

-rw-r--r--. 1 root root 100M Aug 18 19:11 file1

 

(6) Create a systemd service so the device is mapped and mounted automatically at boot

wget -O /usr/local/bin/rbd-mount https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount

# Note: adjust the variables in the script (pool, image, mount point) to match your setup

#!/bin/bash

# Pool name where block device image is stored
export poolname=rbd

# Disk image name
export rbdimage=rbd1

# Mounted Directory
export mountpoint=/mnt/ceph-disk1

# Image mount/unmount and pool are passed from the systemd service as arguments
# Are we are mounting or unmounting
if [ "$1" == "m" ]; then
   modprobe rbd
   rbd feature disable $rbdimage object-map fast-diff deep-flatten
   rbd map $rbdimage --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
   mkdir -p $mountpoint
   mount /dev/rbd/$poolname/$rbdimage $mountpoint
fi
if [ "$1" == "u" ]; then
   umount $mountpoint
   rbd unmap /dev/rbd/$poolname/$rbdimage
fi

# Make the script executable

chmod +x /usr/local/bin/rbd-mount

wget -O /etc/systemd/system/rbd-mount.service https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount.service
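
The contents of rbd-mount.service are not reproduced above; a unit that drives the script typically looks something like the following sketch (assuming the script path /usr/local/bin/rbd-mount used above):

[Unit]
Description=Map RBD image and mount the filesystem
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount m
ExecStop=/usr/local/bin/rbd-mount u

[Install]
WantedBy=multi-user.target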

[root@c720184 ~]# systemctl daemon-reload

[root@c720184 ~]# systemctl enable rbd-mount.service

 

 

 

Test whether the device is mounted automatically after a reboot:

reboot -f

df -h

To save time, the machine is not actually rebooted here; instead the filesystem is unmounted first and the mount service is started to check that it mounts the device again.

[root@c720184 ~]# umount /mnt/ceph-disk1/

[root@c720184 ~]# systemctl start rbd-mount.service

 

 [root@client ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/cl_-root   50G  1.8G   49G   4% /
devtmpfs              910M     0  910M   0% /dev
tmpfs                 920M     0  920M   0% /dev/shm
tmpfs                 920M   17M  904M   2% /run
tmpfs                 920M     0  920M   0% /sys/fs/cgroup
/dev/sda1            1014M  184M  831M  19% /boot
/dev/mapper/cl_-home  196G   33M  195G   1% /home
tmpfs                 184M     0  184M   0% /run/user/0
ceph-fuse              54G  1.0G   53G   2% /mnt/cephfs
c720182:/              54G  1.0G   53G   2% /mnt/cephnfs
/dev/rbd1              20G   33M   20G   1% /mnt/ceph-disk1
