Ceph: creating a pool on specified OSDs

    A Ceph cluster can mix disk types, for example some disks are SSD and others are SATA. If some workloads need fast SSD disks while others only need SATA, you can specify which OSDs a pool is placed on when creating it.

    The basic procedure takes 8 steps.

        The current cluster only has SATA disks and no SSDs, but this does not affect the result of the experiment.
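    In a real deployment you would first need to know which OSDs sit on SSDs before building the buckets below. A minimal check, assuming the OSD's data device is /dev/sdb (the device name is only an example):

# rotational = 0 means an SSD/NVMe device, 1 means a spinning SATA/SAS disk
cat /sys/block/sdb/queue/rotational
# then match the device back to its OSD id and host with the cluster tree
ceph osd tree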

1    Get the CRUSH map

[root@ceph-admin getcrushmap]# ceph osd getcrushmap -o /opt/getcrushmap/crushmap
got crush map from osdmap epoch 2482

2    Decompile the CRUSH map

[root@ceph-admin getcrushmap]# crushtool -d crushmap -o decrushmap

3    Modify the CRUSH map

    Add the following two buckets after root default:

root ssd {
	id -5
	alg straw
	hash 0
	item osd.0 weight 0.01
}
root stat {
	id -6
	alg straw
	hash 0
	item osd.1 weight 0.01
}

    Add the following rules in the rules section:

rule ssd {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take ssd
	step chooseleaf firstn 0 type osd
	step emit
}
rule stat {
	ruleset 2
	type replicated
	min_size 1
	max_size 10
	step take stat
	step chooseleaf firstn 0 type osd
	step emit
}
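    One point worth checking before recompiling: the ids chosen for the new buckets (-5 and -6) must not collide with ids already defined in the map. A quick look at the decompiled file, as a sketch:

# list every bucket id already defined in the decompiled map
grep -w 'id' decrushmap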

4    Compile the CRUSH map

[root@ceph-admin getcrushmap]# crushtool -c decrushmap -o newcrushmap
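
    Before injecting the new map, the rules can be simulated offline with crushtool to confirm they map onto the intended OSDs; a sketch using the ruleset numbers defined above (each bucket holds a single OSD, so one replica is enough for the test):

# ruleset 1 (ssd): every mapping should land on osd.0
crushtool -i newcrushmap --test --rule 1 --num-rep 1 --show-mappings
# ruleset 2 (stat): every mapping should land on osd.1
crushtool -i newcrushmap --test --rule 2 --num-rep 1 --show-mappings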

5    Inject the CRUSH map

[root@ceph-admin getcrushmap]# ceph osd setcrushmap -i /opt/getcrushmap/newcrushmap 
set crush map
[root@ceph-admin getcrushmap]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-6 0.00999 root stat                                             
 1 0.00999     osd.1                up  1.00000          1.00000 
-5 0.00999 root ssd                                              
 0 0.00999     osd.0                up  1.00000          1.00000 
-1 0.58498 root default                                          
-2 0.19499     host ceph-admin                                   
 2 0.19499         osd.2            up  1.00000          1.00000 
-3 0.19499     host ceph-node1                                   
 0 0.19499         osd.0            up  1.00000          1.00000 
-4 0.19499     host ceph-node2                                   
 1 0.19499         osd.1            up  1.00000          1.00000 
# Looking at the osd tree again, the tree has changed: two new buckets named stat and ssd have been added.

6    Create the pools

[root@ceph-admin getcrushmap]# ceph osd pool create ssd_pool 8 8
pool 'ssd_pool' created
[root@ceph-admin getcrushmap]# ceph osd pool create stat_pool 8 8
pool 'stat_pool' created
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2484 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2486 flags hashpspool stripe_width 0

Note: the crush_ruleset of the two newly created pools ssd_pool and stat_pool is 0; it needs to be changed below.

7    Change the pools' placement rules

[root@ceph-admin getcrushmap]# ceph osd pool set ssd_pool crush_ruleset 1
set pool 28 crush_ruleset to 1
[root@ceph-admin getcrushmap]# ceph osd pool set stat_pool crush_ruleset 2
set pool 29 crush_ruleset to 2
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2488 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2491 flags hashpspool stripe_width 0

# In Luminous, the syntax for setting a pool's rule is:
[root@ceph-admin ceph]# ceph osd pool set ssd crush_rule ssd
set pool 2 crush_rule to ssd
[root@ceph-admin ceph]# ceph osd pool set stat crush_rule stat
set pool 1 crush_rule to stat
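
    To confirm the change took effect, the value can be read back; a sketch with the pool names used in this example (crush_ruleset on pre-Luminous releases, crush_rule on Luminous and later):

ceph osd pool get ssd_pool crush_ruleset
ceph osd pool get stat_pool crush_ruleset
# Luminous and later
ceph osd pool get ssd crush_rule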

8    Verify

    Before verifying, check whether ssd_pool and stat_pool already contain any objects.

[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
# Neither pool contains any objects yet

    Use the rados command to add an object to each pool.

[root@ceph-admin getcrushmap]# rados -p ssd_pool put test_object1 /etc/hosts
[root@ceph-admin getcrushmap]# rados -p stat_pool put test_object2 /etc/hosts
[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
test_object1
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
test_object2
# The objects were added successfully
[root@ceph-admin getcrushmap]# ceph osd map ssd_pool test_object1
osdmap e2493 pool 'ssd_pool' (28) object 'test_object1' -> pg 28.d5066e42 (28.2) -> up ([0], p0) acting ([0,1,2], p0)
[root@ceph-admin getcrushmap]# ceph osd map stat_pool test_object2
osdmap e2493 pool 'stat_pool' (29) object 'test_object2' -> pg 29.c5cfe5e9 (29.1) -> up ([1], p1) acting ([1,0,2], p1)

The verification above shows that test_object1 is stored on osd.0 and test_object2 on osd.1, which is the intended result.
