The cluster status is the same as in the previous article.
# ceph -s
  cluster:
    id:     4c7ec5af-cbd3-40fd-8c96-0615c77660d4
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum luminous0,luminous1,luminous2
    mgr: luminous0(active)
    mds: 1/1/1 up {0=luminous0=up:active}
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   7 pools, 112 pgs
    objects: 240 objects, 3359 bytes
    usage:   9245 MB used, 51587 MB / 60833 MB avail
    pgs:     112 active+clean
Without device classes, a custom CRUSH layout has to be maintained by hand:
1. Turn off the hook in the ceph.conf configuration
osd_crush_update_on_start = false
2. Deploy the OSDs
3. Manually create all the CRUSH buckets
4. Manually place each OSD into the right bucket
Whenever a node is added or removed, or an OSD is moved from one host to another, the CRUSH map must also be updated by hand.
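As a rough sketch of what steps 3 and 4 look like on the command line (the rack name rack1 is made up for illustration; the host name and weight are taken from the test cluster below):

# Step 3: create buckets by hand and hang them under the default root
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 root=default
ceph osd crush add-bucket luminous0 host
ceph osd crush move luminous0 rack=rack1

# Step 4: place an OSD into its host bucket with an explicit weight
ceph osd crush set osd.1 0.00969 root=default rack=rack1 host=luminous0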
Alternatively, osd_crush_location_hook lets you define the path to a script that automates the placement described above.
It is invoked as:
myhook --cluster <cluster_name> --id <id> --type osd
The cluster name is usually ceph, and the id is the daemon identifier (the OSD number).
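A minimal sketch of such a hook, assuming a hypothetical path /usr/local/bin/crush-location-hook.sh; the script is expected to print the daemon's CRUSH location to stdout as key=value pairs:

#!/bin/sh
# Hypothetical /usr/local/bin/crush-location-hook.sh, referenced from ceph.conf
# via osd_crush_location_hook. Ceph calls it with --cluster/--id/--type as shown
# above and reads the CRUSH location from stdout.

# Here the location is derived from the local hostname only; a real hook could
# also branch on the OSD id or on the device type.
echo "root=default host=$(hostname -s)"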
The point of device classes is to give Ceph a sensible default for the different device types (HDD, SSD, NVMe), so that users do not have to edit the CRUSH map by hand. It amounts to giving a group of disks a common class label: a rule is created from the class, and a pool from the rule, and the whole workflow needs no manual crushmap changes.
# ceph osd crush class ls
[]
# ceph osd crush class create hdd
created class hdd with id 0 to crush map
# ceph osd crush class create ssd
created class ssd with id 1 to crush map
# ceph osd crush class ls
[
    "hdd",
    "ssd"
]
Based on its class, an OSD can be handled in either of the following two ways (a brief sketch follows the list):
1. Specify the class when deploying the OSD, for example to place the OSD created on a disk into a given class:
ceph-disk prepare --crush-device-class <class> /dev/XXX
2. Add an existing OSD to a given class with the following command:
ceph osd crush set-device-class <class> osd.<id>
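A brief sketch of both approaches; /dev/sdb and osd.0 are hypothetical here, and if an OSD already carries a class it may need to be cleared with ceph osd crush rm-device-class before a new one can be set:

# Approach 1: label the device class at deployment time (hypothetical disk /dev/sdb)
ceph-disk prepare --crush-device-class ssd /dev/sdb

# Approach 2: re-label an existing OSD, clearing any previous class first
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class hdd osd.0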
* 如下對第二種操做進行實驗,也是使用最多的。*
# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05814 root default
-2 0.01938     host luminous0
 1 0.00969         osd.1            up  1.00000          1.00000
 5 0.00969         osd.5            up  1.00000          1.00000
-3 0.01938     host luminous2
 0 0.00969         osd.0            up  1.00000          1.00000
 4 0.00969         osd.4            up  1.00000          1.00000
-4 0.01938     host luminous1
 2 0.00969         osd.2            up  1.00000          1.00000
 3 0.00969         osd.3            up  1.00000          1.00000
Assign osd.0, osd.1 and osd.2 to the hdd class, and osd.3, osd.4 and osd.5 to the ssd class.
# for i in 0 1 2; do ceph osd crush set-device-class hdd osd.$i; done
set-device-class item id 0 name 'osd.0' device_class hdd
set-device-class item id 1 name 'osd.1' device_class hdd
set-device-class item id 2 name 'osd.2' device_class hdd
# for i in 3 4 5; do ceph osd crush set-device-class ssd osd.$i; done
set-device-class item id 3 name 'osd.3' device_class ssd
set-device-class item id 4 name 'osd.4' device_class ssd
set-device-class item id 5 name 'osd.5' device_class ssd
# ceph osd tree
ID  WEIGHT  TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
-12 0.02907 root default~ssd
 -9 0.00969     host luminous0~ssd
  5 0.00969         osd.5                up  1.00000          1.00000
-10 0.00969     host luminous2~ssd
  4 0.00969         osd.4                up  1.00000          1.00000
-11 0.00969     host luminous1~ssd
  3 0.00969         osd.3                up  1.00000          1.00000
 -8 0.02907 root default~hdd
 -5 0.00969     host luminous0~hdd
  1 0.00969         osd.1                up  1.00000          1.00000
 -6 0.00969     host luminous2~hdd
  0 0.00969         osd.0                up  1.00000          1.00000
 -7 0.00969     host luminous1~hdd
  2 0.00969         osd.2                up  1.00000          1.00000
 -1 0.05814 root default
 -2 0.01938     host luminous0
  1 0.00969         osd.1                up  1.00000          1.00000
  5 0.00969         osd.5                up  1.00000          1.00000
 -3 0.01938     host luminous2
  0 0.00969         osd.0                up  1.00000          1.00000
  4 0.00969         osd.4                up  1.00000          1.00000
 -4 0.01938     host luminous1
  2 0.00969         osd.2                up  1.00000          1.00000
  3 0.00969         osd.3                up  1.00000          1.00000
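Depending on the exact Luminous build, the membership of each class can also be listed directly (these subcommands may not exist in early release candidates):

# List the OSDs bound to a given class
ceph osd crush class ls-osd hdd
ceph osd crush class ls-osd ssd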
# ceph osd crush rule create-simple hdd-rule default~ssd host firstn
Invalid command: invalid chars ~ in default~ssd
osd crush rule create-simple <name> <root> <type> {firstn|indep} :  create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools)
Error EINVAL: invalid command
This fails, so I wondered whether the class name should be given without the default~ prefix and tried:
# ceph osd crush rule create-simple hdd-rule ssd host firstn
Error ENOENT: root item ssd does not exist
Skip this way of creating a rule bound to a class directly for now; we can come back to it once the bug is fixed.
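For reference only: later Luminous releases add ceph osd crush rule create-replicated, which builds a rule restricted to a device class in one step and makes the manual crushmap editing below unnecessary. It was not available on the cluster used here, so treat this as a sketch:

# ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>
# "osd" as the failure domain matches the hand-written rules below
ceph osd crush rule create-replicated hdd_rule default osd hdd
ceph osd crush rule create-replicated ssd_rule default osd ssd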
First, check the current rules:
# ceph osd crush rule ls [ "replicated_rule" ]
There is only the default rule.
* Step 1: get the crushmap *
# ceph osd getcrushmap -o c1
11
Step 2: decompile the crushmap
# crushtool -d c1 -o c2.txt
Edit the crushmap:
# vim c2.txt
In the # rules section, add hdd_rule and ssd_rule after replicated_rule:
# rules
rule replicated_rule {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule hdd_rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type osd
        step emit
}
rule ssd_rule {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type osd
        step emit
}
Step 3: compile the crushmap
# crushtool -c c2.txt -o c1.new
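Before injecting the new map, the rules can be sanity-checked offline with crushtool; rule numbers 1 and 2 are the rulesets of hdd_rule and ssd_rule above:

# Show which OSDs each rule would pick for a 3-replica placement
crushtool -i c1.new --test --rule 1 --num-rep 3 --show-mappings
crushtool -i c1.new --test --rule 2 --num-rep 3 --show-mappings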
Step 4: inject the crushmap
# ceph osd setcrushmap -i c1.new
12
Now list the rules again:
# ceph osd crush rule ls [ "replicated_rule", "hdd_rule", "ssd_rule" ]
The two newly created rules are there.
1. Create a pool on ssd_rule
# ceph osd pool create testpool 64 64 ssd_rule
pool 'testpool' created
2. Write an object
# rados -p testpool put object1 c2.txt
3. Look up the object's OSD mapping
# ceph osd map testpool object1
osdmap e46 pool 'testpool' (7) object 'object1' -> pg 7.bac5debc (7.3c) -> up ([5,3,4], p5) acting ([5,3,4], p5)
The object is indeed written only to the three OSDs belonging to the ssd class (osd.3, osd.4, osd.5), so the rule binding works.
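The binding can also be checked from the pool metadata; depending on the version the rule is reported by name or by id (2 here):

# Confirm which CRUSH rule the pool uses
ceph osd pool get testpool crush_rule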