Storage is a key component of any cloud infrastructure, so this article describes how I use Ceph in my environment.
The cloud platform is OpenStack Mitaka, the deployment OS is CentOS 7.1, and the Ceph version is 10.2.2.
I chose Ceph because it is free, open source and widely supported, and because most cloud deployments on the market use Ceph for storage.
This article also draws on http://www.vpsee.com/2015/07/install-ceph-on-centos-7/
Table of contents
1. Ceph installation
2. Using the Ceph cluster in OpenStack
3. Using Ceph with Glance
4. Removing an OSD node
5. Using mixed disks in Ceph
The installation steps follow.
You can also refer to the official guide: http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
1. Ceph installation
Host environment
One admin node, three monitor nodes and three OSD nodes, with 2 replicas.
Here is the hosts configuration (present on every host):
10.10.128.18 ck-ceph-adm
10.10.128.19 ck-ceph-mon1
10.10.128.20 ck-ceph-mon2
10.10.128.21 ck-ceph-mon3
10.10.128.22 ck-ceph-osd1
10.10.128.23 ck-ceph-osd2
10.10.128.24 ck-ceph-osd3
In addition, apply some tuning on the mon and OSD nodes:
# Pin the drive letters via udev
ll /sys/block/sd*|awk '{print $NF}'|sed 's/..//'|awk -F '/' '{print "DEVPATH==\""$0"\", NAME=\""$NF"\", MODE=\"0660\""}'>/etc/udev/rules.d/90-ceph-disk.rules
# Disable CPU power saving
for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do [ -f $CPUFREQ ] || continue; echo -n performance > $CPUFREQ; done
# Increase the maximum number of PIDs
echo "kernel.pid_max = 4194303"|tee -a /etc/sysctl.conf
# Increase the maximum number of open files
echo "fs.file-max = 26234859"|tee -a /etc/sysctl.conf
# Increase read-ahead for sequential reads
for READ_KB in /sys/block/sd*/queue/read_ahead_kb; do [ -f $READ_KB ] || continue; echo 8192 > $READ_KB; done
# Increase the I/O request queue depth
for REQUEST in /sys/block/sd*/queue/nr_requests; do [ -f $REQUEST ] || continue; echo 20480 > $REQUEST; done
# Set the I/O scheduler
for SCHEDULER in /sys/block/sd*/queue/scheduler; do [ -f $SCHEDULER ] || continue; echo deadline > $SCHEDULER; done
# Disable swapping
echo "vm.swappiness = 0" | tee -a /etc/sysctl.conf
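The sysctl entries above only take effect after a reload; a minimal sketch, assuming the default /etc/sysctl.conf is used:
# reload sysctl so kernel.pid_max, fs.file-max and vm.swappiness apply immediately
sysctl -p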
It is also best to set each host's hostname to match its entry in the hosts file.
1) Create a user
useradd -m ceph-admin
su - ceph-admin
mkdir -p ~/.ssh
chmod 700 ~/.ssh
cat << EOF > ~/.ssh/config
Host *
    Port 50020
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF
chmod 600 ~/.ssh/config
Set up SSH trust:
ssh-keygen -t rsa -b 2048
Just press Enter at every prompt.
Copy id_rsa.pub into /home/ceph-admin/.ssh/authorized_keys on each of the other nodes.
chmod 600 .ssh/authorized_keys
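To push the key to every node in one pass, here is a minimal sketch; it assumes the ceph-admin user already exists on all hosts and that sshd listens on port 50020 as configured above:
for host in ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3; do
    # the Port 50020 entry in ~/.ssh/config is picked up automatically by ssh
    ssh-copy-id ceph-admin@${host}
done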
Then give ceph-admin sudo privileges.
Edit /etc/sudoers and add:
ceph-admin ALL=(root) NOPASSWD:ALL
Then, in the same file, disable
Defaults requiretty
by putting a # in front of that line.
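This can also be scripted; a minimal sketch, run as root and assuming the stock /etc/sudoers layout:
echo 'ceph-admin ALL=(root) NOPASSWD:ALL' >> /etc/sudoers
sed -i 's/^Defaults[[:space:]]\+requiretty/#Defaults requiretty/' /etc/sudoers
visudo -c   # sanity-check the file afterwards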
Format the disks on the OSD servers.
If this is only a test, you can simply use a directory; for production use, format the raw devices directly.
cat auto_parted.sh
#!/bin/bash
name="b c d e f g h i"
for i in ${name}; do
    echo "Creating partitions on /dev/sd${i} ..."
    parted -a optimal --script /dev/sd${i} -- mktable gpt
    parted -a optimal --script /dev/sd${i} -- mkpart primary xfs 0% 100%
    sleep 1
    mkfs.xfs -f /dev/sd${i}1 &
done
Then run it on each OSD node, as sketched below.
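A minimal run sketch, assuming the script above is saved as auto_parted.sh on each OSD node:
sudo bash auto_parted.sh
# mkfs.xfs runs in the background for each disk; confirm that sdb1 through sdi1
# all show an xfs filesystem before moving on
lsblk -f /dev/sd[b-i]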
2) Install EPEL (all nodes)
yum -y install epel-release
3) Install the Ceph repository (all nodes; only needed if you are not installing with ceph-deploy, which otherwise installs it automatically)
yum -y install yum-plugin-priorities
rpm --import https://download.ceph.com/keys/release.asc
rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
rpm -Uvh --replacepkgs http://mirrors.ustc.edu.cn/ceph/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
cd /etc/yum.repos.d/
sed -i 's@download.ceph.com@mirrors.ustc.edu.cn/ceph@g' ceph.repo
yum -y install ceph ceph-radosgw
4) Admin node configuration
Install the deployment tool:
yum install ceph-deploy -y
Initialize the cluster:
su - ceph-admin
mkdir ck-ceph-cluster
cd ck-ceph-cluster
ceph-deploy new ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3
List as many mon nodes as you have.
Configure:
echo "osd pool default size = 2">>ceph.conf echo "osd pool default min size = 2">>ceph.conf echo "public network = 10.10.0.0/16">>ceph.conf echo "cluster network = 172.16.0.0/16">>ceph.conf
Note: if the hosts have multiple NICs, it is best to separate the public and cluster networks; the cluster network carries intra-cluster communication and replication traffic, while the public network serves the monitors and client connections.
Install Ceph on all nodes (do this if you want ceph-deploy to perform the installation; if you already did step 3, skip it):
ceph-deploy install ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3
Initialize the monitor nodes:
ceph-deploy mon create-initial
Initialize the data disks on the OSD nodes (a loop sketch follows the commands below):
ceph-deploy disk zap ck-ceph-osd1:sdb ck-ceph-osd1:sdc ck-ceph-osd1:sdd ck-ceph-osd1:sde ck-ceph-osd1:sdf ck-ceph-osd1:sdg ck-ceph-osd1:sdh ck-ceph-osd1:sdi
ceph-deploy osd create ck-ceph-osd1:sdb ck-ceph-osd1:sdc ck-ceph-osd1:sdd ck-ceph-osd1:sde ck-ceph-osd1:sdf ck-ceph-osd1:sdg ck-ceph-osd1:sdh ck-ceph-osd1:sdi
ceph-deploy disk zap ck-ceph-osd2:sdb ck-ceph-osd2:sdc ck-ceph-osd2:sdd ck-ceph-osd2:sde ck-ceph-osd2:sdf ck-ceph-osd2:sdg ck-ceph-osd2:sdh ck-ceph-osd2:sdi
ceph-deploy osd create ck-ceph-osd2:sdb ck-ceph-osd2:sdc ck-ceph-osd2:sdd ck-ceph-osd2:sde ck-ceph-osd2:sdf ck-ceph-osd2:sdg ck-ceph-osd2:sdh ck-ceph-osd2:sdi
ceph-deploy disk zap ck-ceph-osd3:sdb ck-ceph-osd3:sdc ck-ceph-osd3:sdd ck-ceph-osd3:sde ck-ceph-osd3:sdf ck-ceph-osd3:sdg ck-ceph-osd3:sdh ck-ceph-osd3:sdi
ceph-deploy osd create ck-ceph-osd3:sdb ck-ceph-osd3:sdc ck-ceph-osd3:sdd ck-ceph-osd3:sde ck-ceph-osd3:sdf ck-ceph-osd3:sdg ck-ceph-osd3:sdh ck-ceph-osd3:sdi
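The repetition above can be generated with a small loop; a minimal sketch run as ceph-admin from the ck-ceph-cluster directory, assuming every OSD host exposes sdb through sdi:
for host in ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3; do
    disks=""
    for d in sdb sdc sdd sde sdf sdg sdh sdi; do disks="${disks} ${host}:${d}"; done
    ceph-deploy disk zap ${disks}
    ceph-deploy osd create ${disks}
done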
Push the admin configuration and keys:
ceph-deploy admin ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3
ceph-deploy --overwrite-conf admin ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Fix the permissions on /etc/ceph on all nodes:
sudo chown -R ceph:ceph /etc/ceph
Check the cluster status:
[ceph-admin@ck-ceph-adm ~]$ ceph -s
    cluster 2aafe304-2dd1-48be-a0fa-cb9c911c7c3b
     health HEALTH_OK
     monmap e1: 3 mons at {ck-ceph-mon1=10.10.128.19:6789/0,ck-ceph-mon2=10.10.128.20:6789/0,ck-ceph-mon3=10.10.128.21:6789/0}
            election epoch 6, quorum 0,1,2 ck-ceph-mon1,ck-ceph-mon2,ck-ceph-mon3
     osdmap e279: 40 osds: 40 up, 40 in
            flags sortbitwise
      pgmap v96866: 2112 pgs, 3 pools, 58017 MB data, 13673 objects
            115 GB used, 21427 GB / 21543 GB avail
                2112 active+clean
2. Using the Ceph cluster in OpenStack
You can refer to the official documentation: http://docs.ceph.com/docs/master/rbd/rbd-openstack/
1) Create a pool
ceph osd pool create volumes 1024 1024
For the pg_num and pgp_num value of 1024, see http://docs.ceph.com/docs/master/rados/operations/placement-groups/ (a rough sizing sketch follows).
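As a rough sketch of the sizing heuristic from that page (about 100 PGs per OSD, divided by the replica count and rounded to a power of two; treat the exact numbers as an assumption to check against the link above):
# 40 OSDs, 2 replicas: 40 * 100 / 2 = 2000 -> round to 2048 PGs in total,
# then split that budget across all pools (1024 is used here for volumes)
python -c 'import math; osds, size = 40, 2; print(2 ** int(math.ceil(math.log(osds * 100.0 / size, 2))))'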
2) Install the Ceph client tools
Install them on all Cinder nodes and compute nodes:
rpm -Uvh --replacepkgs http://mirrors.ustc.edu.cn/ceph/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
yum install ceph-common
3) Sync the configuration
Sync /etc/ceph/ceph.conf.
Copy it from the admin node to the Cinder and compute nodes (a sketch is below).
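A minimal sketch run from the admin node, using hypothetical host names (replace cinder1/compute1/compute2 with your real nodes):
for node in cinder1 compute1 compute2; do
    scp /etc/ceph/ceph.conf root@${node}:/etc/ceph/ceph.conf   # add -P <port> if sshd is not on 22
done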
4) Authentication (on the Ceph admin node)
Allow the cinder user to access Ceph:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'
5) Add the key to the nodes (admin node)
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
6) Key file management (admin node)
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
Add the key to libvirt.
Generate a UUID:
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
Log in to the compute node and replace the UUID below with the one generated above:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Restart the service:
systemctl restart openstack-nova-compute.service
Use virsh secret-list to check that the secret is present.
If this is not done on every compute node, attaching a volume to an instance will fail with errors like the following in /var/log/nova/nova-compute.log:
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher     rv = meth(*args, **kwargs)
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 554, in attachDeviceFlags
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher libvirtError: Secret not found: rbd no secret matches uuid '9c0e4528-bd0f-4fe8-a3cd-7b1b9bb21d63'
7) Configure Cinder (on the Cinder node)
Edit /etc/cinder/cinder.conf:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
and also set the following (shown in context below):
enabled_backends = ceph
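For context, a minimal sketch of where this setting usually lives in cinder.conf; the [DEFAULT] placement is the common convention, so treat it as an assumption for your own layout:
[DEFAULT]
enabled_backends = ceph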
Restart the services:
systemctl restart openstack-cinder-volume.service target.service
3. Using Ceph with Glance
1) Create a pool (on the Ceph admin node)
ceph osd pool create images 128
2) Set permissions (on the Ceph admin node)
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
3) Install Ceph on the Glance host
yum install ceph-common
4) Copy the Ceph configuration file to the Glance node
Sync /etc/ceph/ceph.conf, as in the Cinder section.
5) Configure authentication
ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
6) Configure Glance
Edit /etc/glance/glance-api.conf:
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
7) Restart the services
systemctl restart openstack-glance-api.service openstack-glance-registry.service
8) Upload an image and test
glance image-create --name centos64-test1 --disk-format qcow2 --container-format bare --visibility public --file /tmp/CentOS-6.4-x86_64.qcow2 --progress
[root@ceph-mon ceph]# rados -p images ls
rbd_header.7eca70122ade
rbd_data.7eca70122ade.0000000000000000
rbd_directory
rbd_data.7eca70122ade.0000000000000001
rbd_data.7ee831dac577.0000000000000000
rbd_header.7ee831dac577
rbd_id.c7a81292-773f-457a-859c-2784d780544c
rbd_data.7ee831dac577.0000000000000001
rbd_data.7ee831dac577.0000000000000002
rbd_id.a5ae8722-698a-4a84-aa29-500144616001
4. Removing an OSD node
1) Take the OSD out of the cluster (run on the admin node)
ceph osd out 7    (in ceph osd tree, its REWEIGHT value becomes 0)
2) Stop the service (run on the target node)
systemctl stop ceph-osd@7    (in ceph osd tree, its status becomes down)
3) Remove it from the CRUSH map
ceph osd crush remove osd.7
4) Delete its key
ceph auth del osd.7
5) Remove the OSD
ceph osd rm 7
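A quick sanity check after the removal (a verification sketch, not part of the original procedure):
ceph osd tree | grep "osd\.7 " || echo "osd.7 is gone"
ceph -s    # should return to HEALTH_OK once rebalancing finishes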
6) Check whether the host still has any OSDs; if it does, go to step 7, otherwise remove the host bucket as well:
ceph osd crush remove `hostname`
7) Edit and sync the ceph.conf file
vi /etc/ceph/ceph.conf
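If ceph.conf has a section for the removed OSD, delete it and push the updated file to the other nodes; a minimal sketch from the admin node, assuming the ck-ceph-cluster working directory created earlier:
# after removing any [osd.7] section from ceph.conf
cd ~/ck-ceph-cluster
ceph-deploy --overwrite-conf config push ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3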
8) Delete the data directory
rm -rf /var/lib/ceph/osd/ceph-7
5. Using mixed disks in Ceph
Below, 600 GB SAS 15k disks and 4 TB SAS 7.2k disks are combined as mixed storage.
Here is the state before the changes:
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44436G     44434G        1844M             0
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0         0         0        22216G           0
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ rados df
pool name          KB    objects    clones    degraded    unfound    rd    rd KB    wr    wr KB
rbd                 0          0         0           0          0     0        0     0        0
  total used    1888268           0
  total avail   46592893204
  total space   46594781472
1) Get the current CRUSH map and decompile it
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd getcrushmap -o default-crushmapdump got crush map from osdmap epoch 238 [ceph-admin@ck-ceph-adm ck-ceph-cluster]$ crushtool -d default-crushmapdump -o default-crushmapdump-decompiled [ceph-admin@ck-ceph-adm ck-ceph-cluster]$ cat default-crushmapdump-decompiled # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1 tunable chooseleaf_vary_r 1 tunable straw_calc_version 1 # devices device 0 osd.0 device 1 osd.1 device 2 osd.2 device 3 osd.3 device 4 osd.4 device 5 osd.5 device 6 osd.6 device 7 osd.7 device 8 osd.8 device 9 osd.9 device 10 osd.10 device 11 osd.11 device 12 osd.12 device 13 osd.13 device 14 osd.14 device 15 osd.15 device 16 osd.16 device 17 osd.17 device 18 device18 device 19 osd.19 device 20 osd.20 device 21 osd.21 device 22 osd.22 device 23 osd.23 device 24 osd.24 device 25 osd.25 device 26 osd.26 device 27 osd.27 device 28 osd.28 device 29 osd.29 device 30 osd.30 device 31 osd.31 device 32 osd.32 device 33 osd.33 device 34 osd.34 device 35 osd.35 device 36 osd.36 device 37 osd.37 device 38 osd.38 device 39 osd.39 device 40 osd.40 device 41 osd.41 device 42 osd.42 device 43 osd.43 device 44 osd.44 device 45 osd.45 device 46 osd.46 # types type 0 osd type 1 host type 2 chassis type 3 rack type 4 row type 5 pdu type 6 pod type 7 room type 8 datacenter type 9 region type 10 root # buckets host ck-ceph-osd1 { id -2 # do not change unnecessarily # weight 6.481 alg straw hash 0 # rjenkins1 item osd.0 weight 0.540 item osd.1 weight 0.540 item osd.2 weight 0.540 item osd.3 weight 0.540 item osd.4 weight 0.540 item osd.5 weight 0.540 item osd.6 weight 0.540 item osd.7 weight 0.540 item osd.8 weight 0.540 item osd.9 weight 0.540 item osd.10 weight 0.540 item osd.11 weight 0.540 } host ck-ceph-osd2 { id -3 # do not change unnecessarily # weight 8.641 alg straw hash 0 # rjenkins1 item osd.12 weight 0.540 item osd.13 weight 0.540 item osd.14 weight 0.540 item osd.15 weight 0.540 item osd.16 weight 0.540 item osd.17 weight 0.540 item osd.19 weight 0.540 item osd.20 weight 0.540 item osd.21 weight 0.540 item osd.22 weight 0.540 item osd.23 weight 0.540 item osd.24 weight 0.540 item osd.25 weight 0.540 item osd.26 weight 0.540 item osd.27 weight 0.540 item osd.28 weight 0.540 } host ck-ceph-osd3 { id -4 # do not change unnecessarily # weight 6.481 alg straw hash 0 # rjenkins1 item osd.29 weight 0.540 item osd.30 weight 0.540 item osd.31 weight 0.540 item osd.32 weight 0.540 item osd.33 weight 0.540 item osd.34 weight 0.540 item osd.35 weight 0.540 item osd.36 weight 0.540 item osd.37 weight 0.540 item osd.38 weight 0.540 item osd.39 weight 0.540 item osd.40 weight 0.540 } host ck-ceph-osd4 { id -5 # do not change unnecessarily # weight 21.789 alg straw hash 0 # rjenkins1 item osd.41 weight 3.631 item osd.42 weight 3.631 item osd.43 weight 3.631 item osd.44 weight 3.631 item osd.45 weight 3.631 item osd.46 weight 3.631 } root default { id -1 # do not change unnecessarily # weight 43.392 alg straw hash 0 # rjenkins1 item ck-ceph-osd1 weight 6.481 item ck-ceph-osd2 weight 8.641 item ck-ceph-osd3 weight 6.481 item ck-ceph-osd4 weight 21.789 } # rules rule replicated_ruleset { ruleset 0 type replicated min_size 1 max_size 10 step take default step chooseleaf firstn 0 type host step emit } # end crush map
2) Edit the crushmap file: after root default, create two new OSD root buckets, sas-15 (for the SAS 15k disks) and sas-7 (for the SAS 7.2k disks).
root sas-15 {
        id -6
        alg straw
        hash 0
        item osd.0 weight 0.540
        item osd.1 weight 0.540
        item osd.2 weight 0.540
        item osd.3 weight 0.540
        item osd.4 weight 0.540
        item osd.5 weight 0.540
        item osd.6 weight 0.540
        item osd.7 weight 0.540
        item osd.8 weight 0.540
        item osd.9 weight 0.540
        item osd.10 weight 0.540
        item osd.11 weight 0.540
        item osd.12 weight 0.540
        item osd.13 weight 0.540
        item osd.14 weight 0.540
        item osd.15 weight 0.540
        item osd.16 weight 0.540
        item osd.17 weight 0.540
        item osd.19 weight 0.540
        item osd.20 weight 0.540
        item osd.21 weight 0.540
        item osd.22 weight 0.540
        item osd.23 weight 0.540
        item osd.24 weight 0.540
        item osd.25 weight 0.540
        item osd.26 weight 0.540
        item osd.27 weight 0.540
        item osd.28 weight 0.540
        item osd.29 weight 0.540
        item osd.30 weight 0.540
        item osd.31 weight 0.540
        item osd.32 weight 0.540
        item osd.33 weight 0.540
        item osd.34 weight 0.540
        item osd.35 weight 0.540
        item osd.36 weight 0.540
        item osd.37 weight 0.540
        item osd.38 weight 0.540
        item osd.39 weight 0.540
        item osd.40 weight 0.540
}
For id, just continue the sequence of bucket ids used above; alg and hash do not need to change. Then add every OSD backed by a SAS 15k disk to sas-15.
Next, add all of the SAS 7.2k OSDs to sas-7:
root sas-7 {
        id -7
        alg straw
        hash 0
        item osd.41 weight 3.631
        item osd.42 weight 3.631
        item osd.43 weight 3.631
        item osd.44 weight 3.631
        item osd.45 weight 3.631
        item osd.46 weight 3.631
}
3) Add new CRUSH rules. These rules do the matching, i.e. they define which OSDs a pool will use. Add them after rule replicated_ruleset:
rule sas-15-pool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take sas-15
        step chooseleaf firstn 0 type osd
        step emit
}
rule sas-7-pool {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take sas-7
        step chooseleaf firstn 0 type osd
        step emit
}
4) Inject the new map into the cluster
Here is the complete file:
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ cat default-crushmapdump-decompiled # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1 tunable chooseleaf_vary_r 1 tunable straw_calc_version 1 # devices device 0 osd.0 device 1 osd.1 device 2 osd.2 device 3 osd.3 device 4 osd.4 device 5 osd.5 device 6 osd.6 device 7 osd.7 device 8 osd.8 device 9 osd.9 device 10 osd.10 device 11 osd.11 device 12 osd.12 device 13 osd.13 device 14 osd.14 device 15 osd.15 device 16 osd.16 device 17 osd.17 device 18 device18 device 19 osd.19 device 20 osd.20 device 21 osd.21 device 22 osd.22 device 23 osd.23 device 24 osd.24 device 25 osd.25 device 26 osd.26 device 27 osd.27 device 28 osd.28 device 29 osd.29 device 30 osd.30 device 31 osd.31 device 32 osd.32 device 33 osd.33 device 34 osd.34 device 35 osd.35 device 36 osd.36 device 37 osd.37 device 38 osd.38 device 39 osd.39 device 40 osd.40 device 41 osd.41 device 42 osd.42 device 43 osd.43 device 44 osd.44 device 45 osd.45 device 46 osd.46 # types type 0 osd type 1 host type 2 chassis type 3 rack type 4 row type 5 pdu type 6 pod type 7 room type 8 datacenter type 9 region type 10 root # buckets host ck-ceph-osd1 { id -2 # do not change unnecessarily # weight 6.481 alg straw hash 0 # rjenkins1 item osd.0 weight 0.540 item osd.1 weight 0.540 item osd.2 weight 0.540 item osd.3 weight 0.540 item osd.4 weight 0.540 item osd.5 weight 0.540 item osd.6 weight 0.540 item osd.7 weight 0.540 item osd.8 weight 0.540 item osd.9 weight 0.540 item osd.10 weight 0.540 item osd.11 weight 0.540 } host ck-ceph-osd2 { id -3 # do not change unnecessarily # weight 8.641 alg straw hash 0 # rjenkins1 item osd.12 weight 0.540 item osd.13 weight 0.540 item osd.14 weight 0.540 item osd.15 weight 0.540 item osd.16 weight 0.540 item osd.17 weight 0.540 item osd.19 weight 0.540 item osd.20 weight 0.540 item osd.21 weight 0.540 item osd.22 weight 0.540 item osd.23 weight 0.540 item osd.24 weight 0.540 item osd.25 weight 0.540 item osd.26 weight 0.540 item osd.27 weight 0.540 item osd.28 weight 0.540 } host ck-ceph-osd3 { id -4 # do not change unnecessarily # weight 6.481 alg straw hash 0 # rjenkins1 item osd.29 weight 0.540 item osd.30 weight 0.540 item osd.31 weight 0.540 item osd.32 weight 0.540 item osd.33 weight 0.540 item osd.34 weight 0.540 item osd.35 weight 0.540 item osd.36 weight 0.540 item osd.37 weight 0.540 item osd.38 weight 0.540 item osd.39 weight 0.540 item osd.40 weight 0.540 } host ck-ceph-osd4 { id -5 # do not change unnecessarily # weight 21.789 alg straw hash 0 # rjenkins1 item osd.41 weight 3.631 item osd.42 weight 3.631 item osd.43 weight 3.631 item osd.44 weight 3.631 item osd.45 weight 3.631 item osd.46 weight 3.631 } root default { id -1 # do not change unnecessarily # weight 43.392 alg straw hash 0 # rjenkins1 item ck-ceph-osd1 weight 6.481 item ck-ceph-osd2 weight 8.641 item ck-ceph-osd3 weight 6.481 item ck-ceph-osd4 weight 21.789 } root sas-15 { id -6 alg straw hash 0 item osd.0 weight 0.540 item osd.1 weight 0.540 item osd.2 weight 0.540 item osd.3 weight 0.540 item osd.4 weight 0.540 item osd.5 weight 0.540 item osd.6 weight 0.540 item osd.7 weight 0.540 item osd.8 weight 0.540 item osd.9 weight 0.540 item osd.10 weight 0.540 item osd.11 weight 0.540 item osd.12 weight 0.540 item osd.13 weight 0.540 item osd.14 weight 0.540 item osd.15 weight 0.540 item osd.16 weight 0.540 item osd.17 weight 0.540 item osd.19 weight 0.540 item osd.20 weight 0.540 item osd.21 weight 
0.540 item osd.22 weight 0.540 item osd.23 weight 0.540 item osd.24 weight 0.540 item osd.25 weight 0.540 item osd.26 weight 0.540 item osd.27 weight 0.540 item osd.28 weight 0.540 item osd.29 weight 0.540 item osd.30 weight 0.540 item osd.31 weight 0.540 item osd.32 weight 0.540 item osd.33 weight 0.540 item osd.34 weight 0.540 item osd.35 weight 0.540 item osd.36 weight 0.540 item osd.37 weight 0.540 item osd.38 weight 0.540 item osd.39 weight 0.540 item osd.40 weight 0.540 } root sas-7 { id -7 alg straw hash 0 item osd.41 weight 3.631 item osd.42 weight 3.631 item osd.43 weight 3.631 item osd.44 weight 3.631 item osd.45 weight 3.631 item osd.46 weight 3.631 } # rules rule replicated_ruleset { ruleset 0 type replicated min_size 1 max_size 10 step take default step chooseleaf firstn 0 type host step emit } rule sas-15-pool { ruleset 1 type replicated min_size 1 max_size 10 step take sas-15 step chooseleaf firstn 0 type osd step emit } rule sas-7-pool { ruleset 2 type replicated min_size 1 max_size 10 step take sas-7 step chooseleaf firstn 0 type osd step emit } # end crush map
Compile it and inject it into the cluster:
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ crushtool -c default-crushmapdump-decompiled -o default-crushmapdump-compiled
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd setcrushmap -i default-crushmapdump-compiled
set crush map
After applying it, check the OSD tree:
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd tree ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -7 21.78598 root sas-7 41 3.63100 osd.41 up 1.00000 1.00000 42 3.63100 osd.42 up 1.00000 1.00000 43 3.63100 osd.43 up 1.00000 1.00000 44 3.63100 osd.44 up 1.00000 1.00000 45 3.63100 osd.45 up 1.00000 1.00000 46 3.63100 osd.46 up 1.00000 1.00000 -6 21.59973 root sas-15 0 0.53999 osd.0 up 1.00000 1.00000 1 0.53999 osd.1 up 1.00000 1.00000 2 0.53999 osd.2 up 1.00000 1.00000 3 0.53999 osd.3 up 1.00000 1.00000 4 0.53999 osd.4 up 1.00000 1.00000 5 0.53999 osd.5 up 1.00000 1.00000 6 0.53999 osd.6 up 1.00000 1.00000 7 0.53999 osd.7 up 1.00000 1.00000 8 0.53999 osd.8 up 1.00000 1.00000 9 0.53999 osd.9 up 1.00000 1.00000 10 0.53999 osd.10 up 1.00000 1.00000 11 0.53999 osd.11 up 1.00000 1.00000 12 0.53999 osd.12 up 1.00000 1.00000 13 0.53999 osd.13 up 1.00000 1.00000 14 0.53999 osd.14 up 1.00000 1.00000 15 0.53999 osd.15 up 1.00000 1.00000 16 0.53999 osd.16 up 1.00000 1.00000 17 0.53999 osd.17 up 1.00000 1.00000 19 0.53999 osd.19 up 1.00000 1.00000 20 0.53999 osd.20 up 1.00000 1.00000 21 0.53999 osd.21 up 1.00000 1.00000 22 0.53999 osd.22 up 1.00000 1.00000 23 0.53999 osd.23 up 1.00000 1.00000 24 0.53999 osd.24 up 1.00000 1.00000 25 0.53999 osd.25 up 1.00000 1.00000 26 0.53999 osd.26 up 1.00000 1.00000 27 0.53999 osd.27 up 1.00000 1.00000 28 0.53999 osd.28 up 1.00000 1.00000 29 0.53999 osd.29 up 1.00000 1.00000 30 0.53999 osd.30 up 1.00000 1.00000 31 0.53999 osd.31 up 1.00000 1.00000 32 0.53999 osd.32 up 1.00000 1.00000 33 0.53999 osd.33 up 1.00000 1.00000 34 0.53999 osd.34 up 1.00000 1.00000 35 0.53999 osd.35 up 1.00000 1.00000 36 0.53999 osd.36 up 1.00000 1.00000 37 0.53999 osd.37 up 1.00000 1.00000 38 0.53999 osd.38 up 1.00000 1.00000 39 0.53999 osd.39 up 1.00000 1.00000 40 0.53999 osd.40 up 1.00000 1.00000 -1 43.39195 root default -2 6.48099 host ck-ceph-osd1 0 0.53999 osd.0 up 1.00000 1.00000 1 0.53999 osd.1 up 1.00000 1.00000 2 0.53999 osd.2 up 1.00000 1.00000 3 0.53999 osd.3 up 1.00000 1.00000 4 0.53999 osd.4 up 1.00000 1.00000 5 0.53999 osd.5 up 1.00000 1.00000 6 0.53999 osd.6 up 1.00000 1.00000 7 0.53999 osd.7 up 1.00000 1.00000 8 0.53999 osd.8 up 1.00000 1.00000 9 0.53999 osd.9 up 1.00000 1.00000 10 0.53999 osd.10 up 1.00000 1.00000 11 0.53999 osd.11 up 1.00000 1.00000 -3 8.64099 host ck-ceph-osd2 12 0.53999 osd.12 up 1.00000 1.00000 13 0.53999 osd.13 up 1.00000 1.00000 14 0.53999 osd.14 up 1.00000 1.00000 15 0.53999 osd.15 up 1.00000 1.00000 16 0.53999 osd.16 up 1.00000 1.00000 17 0.53999 osd.17 up 1.00000 1.00000 19 0.53999 osd.19 up 1.00000 1.00000 20 0.53999 osd.20 up 1.00000 1.00000 21 0.53999 osd.21 up 1.00000 1.00000 22 0.53999 osd.22 up 1.00000 1.00000 23 0.53999 osd.23 up 1.00000 1.00000 24 0.53999 osd.24 up 1.00000 1.00000 25 0.53999 osd.25 up 1.00000 1.00000 26 0.53999 osd.26 up 1.00000 1.00000 27 0.53999 osd.27 up 1.00000 1.00000 28 0.53999 osd.28 up 1.00000 1.00000 -4 6.48099 host ck-ceph-osd3 29 0.53999 osd.29 up 1.00000 1.00000 30 0.53999 osd.30 up 1.00000 1.00000 31 0.53999 osd.31 up 1.00000 1.00000 32 0.53999 osd.32 up 1.00000 1.00000 33 0.53999 osd.33 up 1.00000 1.00000 34 0.53999 osd.34 up 1.00000 1.00000 35 0.53999 osd.35 up 1.00000 1.00000 36 0.53999 osd.36 up 1.00000 1.00000 37 0.53999 osd.37 up 1.00000 1.00000 38 0.53999 osd.38 up 1.00000 1.00000 39 0.53999 osd.39 up 1.00000 1.00000 40 0.53999 osd.40 up 1.00000 1.00000 -5 21.78899 host ck-ceph-osd4 41 3.63100 osd.41 up 1.00000 1.00000 42 3.63100 osd.42 up 1.00000 1.00000 43 3.63100 osd.43 up 1.00000 1.00000 44 
3.63100 osd.44 up 1.00000 1.00000 45 3.63100 osd.45 up 1.00000 1.00000 46 3.63100 osd.46 up 1.00000 1.00000
5) Create the pools
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd pool create sas-15-pool 1024 1024
pool 'sas-15-pool' created
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd dump|grep sas
pool 1 'sas-15-pool' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 240 flags hashpspool stripe_width 0
The pool name should match the rule name defined in the CRUSH map above. Next, set the pool's CRUSH rule so that sas-15-pool actually uses that rule:
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd pool set sas-15-pool crush_ruleset 1
set pool 1 crush_ruleset to 1
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd dump|grep sas
pool 1 'sas-15-pool' replicated size 2 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 242 flags hashpspool stripe_width 0
The SAS 15k pool is now configured; next configure the SAS 7.2k pool:
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd pool create sas-7-pool 256 256
pool 'sas-7-pool' created
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd dump|grep sas-7
pool 2 'sas-7-pool' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 244 flags hashpspool stripe_width 0
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd pool set sas-7-pool crush_ruleset 2
set pool 2 crush_ruleset to 2
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph osd dump|grep sas-7
pool 2 'sas-7-pool' replicated size 2 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 256 pgp_num 256 last_change 246 flags hashpspool stripe_width 0
Check the cluster storage capacity.
The cluster uses 40 SAS 15k 600 GB disks and 6 SAS 7.2k 4 TB disks, with 2 replicas.
So the SAS 15k tier has about 12 TB usable (40 x 600 GB is roughly 24 TB raw, divided by 2 replicas), and the SAS 7.2k tier also about 12 TB (6 x 4 TB = 24 TB raw, divided by 2).
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44436G     44434G        1904M             0
POOLS:
    NAME            ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd             0         0         0        22216G           0
    sas-15-pool     1         0         0        11061G           0
    sas-7-pool      2         0         0        11155G           0
[ceph-admin@ck-ceph-adm ck-ceph-cluster]$ rados df
pool name          KB    objects    clones    degraded    unfound    rd    rd KB    wr    wr KB
rbd                 0          0         0           0          0     0        0     0        0
sas-15-pool         0          0         0           0          0     0        0     0        0
sas-7-pool          0          0         0           0          0     0        0     0        0
  total used    1950412           0
  total avail   46592831060
  total space   46594781472
If you have any questions, leave a comment on the blog and I will reply as soon as I see it.