Deploying BlueStore with ceph-deploy

Environment: two servers, ceph1 and ceph2 (ceph-deploy is already installed on ceph1).

Each server has one S3500 SSD and two Hitachi 1 TB HDDs.

The public, cluster, and management networks are combined on a single network: 128.128.128.9 for ceph1 and 128.128.128.10 for ceph2.

1. Add the following entries to /etc/hosts:

128.128.128.9  ceph1
128.128.128.10 ceph2

2. Use ssh-keygen and ssh-copy-id to set up passwordless SSH between the nodes (see the example below).
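A minimal sketch, assuming the root account is used on both nodes:

ssh-keygen                  # accept the defaults to generate a key pair
ssh-copy-id root@ceph1
ssh-copy-id root@ceph2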

3. Create a directory on ceph1 to hold the configuration files:

mkdir /root/mycluster

cd /root/mycluster

4. Install Ceph:

ceph-deploy install ceph1 ceph2
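If a specific release is wanted rather than the ceph-deploy default, the --release option can be passed; the release name below is only an example:

ceph-deploy install --release jewel ceph1 ceph2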

5. Create the cluster with ceph1 as the initial monitor:

ceph-deploy new ceph1

6. Edit the ceph.conf configuration file:

public_network = 128.128.128.0/24
cluster_network = 128.128.128.0/24

enable experimental unrecoverable data corrupting features = bluestore rocksdb debug_white_box_testing_ec_overwrites
bluestore block db size = 10737418240  #10G
bluestore block wal size = 10737418240  #10G
osd objectstore = bluestore
mon_allow_pool_delete = true
rbd_cache = false

[osd]
bluestore = true
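After editing, the updated ceph.conf can be pushed to both nodes so that the local copies stay in sync (a sketch, run from /root/mycluster):

ceph-deploy --overwrite-conf config push ceph1 ceph2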

7. Initialize the monitor(s):

ceph-deploy mon create-initial

If you need to add another monitor:

ceph-deploy mon add ceph2
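Once the monitor(s) are up, a quick sanity check:

ceph -s
ceph mon stat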

8. Zap (wipe) the OSD disks:

ceph-deploy disk zap {ceph-node}:{dest-disk}

For example:
ceph-deploy disk zap ceph1:/dev/sdb
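With one SSD and two HDDs per server as described above, the remaining disks are zapped the same way; the device names below are assumptions, so check lsblk on each node first:

ceph-deploy disk zap ceph1:/dev/sdc
ceph-deploy disk zap ceph1:/dev/sdd
ceph-deploy disk zap ceph2:/dev/sdb
ceph-deploy disk zap ceph2:/dev/sdc
ceph-deploy disk zap ceph2:/dev/sdd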

9. Copy ceph.bootstrap-osd.keyring to /var/lib/ceph/bootstrap-osd/ and rename it to ceph.keyring:

cp /root/mycluster/ceph.bootstrap-osd.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

or
scp /root/mycluster/ceph.bootstrap-osd.keyring ceph2:/var/lib/ceph/bootstrap-osd/ceph.keyring
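The bootstrap-osd keyring has to exist on every node that will run OSDs; a quick check on both nodes:

ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring
ssh ceph2 ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring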

10. Add the OSDs:

ceph-disk prepare  --bluestore --block.db /dev/sdb  --block.wal /dev/sdb /dev/sdc

where /dev/sdb is the SSD and /dev/sdc is an HDD.
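ceph-disk prepare is usually followed by activation (on many systems udev triggers it automatically). A sketch for bringing up both HDDs on ceph1 against the shared SSD, assuming /dev/sdd is the second HDD:

ceph-disk prepare --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdd
ceph-disk activate /dev/sdc1    # data partition created by prepare
ceph-disk activate /dev/sdd1
ceph osd tree                   # verify the new OSDs are up and in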