This article describes how to configure cache pool tiering. A cache pool provides a scalable cache for a Ceph pool: it can hold Ceph's hot data, or serve directly as a high-speed pool. Creating one takes three steps: build a virtual bucket tree out of SSD disks, create the cache pool and set its CRUSH rule and related parameters, and finally attach the pool that needs caching to the cache pool.
Below is the OSD tree after adding the SSD bucket (vrack); osd.0, osd.1, and osd.2 are on SSD disks. Building the tree is straightforward: adjust or add the OSDs under the new bucket (a sketch of the commands follows the tree output).
# ceph osd tree
ID  WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1  6.00000 root default
-2  6.00000     room test
-3  3.00000         rack r1
-7  1.00000             host H09
 3  1.00000                 osd.3   up  1.00000          1.00000
-9  1.00000             host H07
 5  1.00000                 osd.5   up  1.00000          1.00000
-10 1.00000             host H06
 6  1.00000                 osd.6   up  1.00000          1.00000
-4  3.00000         rack vrack
-6  1.00000             host vh06
 1  1.00000                 osd.1   up  1.00000          1.00000
-8  1.00000             host vh07
 2  1.00000                 osd.2   up  1.00000          1.00000
-5  1.00000             host vh09
 0  1.00000                 osd.0   up  1.00000          1.00000
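For reference, this is a minimal sketch of how the vrack bucket tree above might be built; the bucket and host names match the tree, but the weights and exact command sequence are assumptions, not taken from the original post:

# ceph osd crush add-bucket vrack rack
# ceph osd crush move vrack room=test
# ceph osd crush add-bucket vh06 host
# ceph osd crush move vh06 rack=vrack
# ceph osd crush set osd.1 1.0 host=vh06

Repeat the last three commands for vh07/osd.2 and vh09/osd.0.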
# ceph osd getcrushmap -o map
# crushtool -d map -o map.txt
# vi map.txt

Add a replicated_ruleset_cache CRUSH rule that selects OSDs from the vrack rack:

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take r1
        step chooseleaf firstn 0 type host
        step emit
}
rule replicated_ruleset_cache {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take vrack
        step chooseleaf firstn 0 type host
        step emit
}

# crushtool -c map.txt -o map.new
# ceph osd setcrushmap -i map.new
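To confirm the new rule is installed, it can be listed and dumped (an optional check, not part of the original steps):

# ceph osd crush rule ls
# ceph osd crush rule dump replicated_ruleset_cache

On releases that support it, a simple rule like this can also be created without decompiling the map, e.g. ceph osd crush rule create-simple replicated_ruleset_cache vrack host.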
Set the CRUSH rule of the newly created pool to replicated_ruleset_cache:
# ceph osd pool create rbd.cache 128 128
# ceph osd pool set rbd.cache crush_ruleset 1
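To verify that rbd.cache places data only on the SSD OSDs (osd.0/1/2), the mapping of a test object can be inspected; test-obj here is just a placeholder name:

# ceph osd pool get rbd.cache crush_ruleset
# ceph osd map rbd.cache test-obj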
# ceph osd tier add rbd rbd.cache
# ceph osd tier cache-mode rbd.cache writeback
# ceph osd tier set-overlay rbd rbd.cache
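A quick smoke test of the overlay (an assumed example; obj1 and file.bin are placeholders): write an object through the base pool and list the cache pool. In writeback mode the object should land in rbd.cache first and be flushed to rbd later by the tiering agent.

# rados -p rbd put obj1 file.bin
# rados -p rbd.cache ls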
For the meaning of these parameters, see the official documentation linked below.
# ceph osd pool set rbd.cache hit_set_type bloom
# ceph osd pool set rbd.cache hit_set_count 1
# ceph osd pool set rbd.cache hit_set_period 1800
# ceph osd pool set rbd.cache target_max_bytes 30000000000
# ceph osd pool set rbd.cache min_read_recency_for_promote 1
# ceph osd pool set rbd.cache min_write_recency_for_promote 1
# ceph osd pool set rbd.cache cache_target_dirty_ratio .4
# ceph osd pool set rbd.cache cache_target_dirty_high_ratio .6
# ceph osd pool set rbd.cache cache_target_full_ratio .8
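As a sanity check on the numbers above: the ratio settings are relative to target_max_bytes, so with a 30000000000-byte (~30 GB) cache, flushing of dirty objects starts around 12 GB (.4), aggressive flushing around 18 GB (.6), and eviction around 24 GB (.8). Each value can be read back with ceph osd pool get (an optional check, not in the original post):

# ceph osd pool get rbd.cache hit_set_period
# ceph osd pool get rbd.cache target_max_bytes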
【CACHE POOL】http://docs.ceph.com/docs/master/dev/cache-pool/