1. On each node that will run Ceph daemons, create a regular user. ceph-deploy installs packages on the nodes, so this user needs passwordless sudo. If you deploy as root, skip this step.
To grant the user full privileges, add the following to /etc/sudoers.d/ceph:
echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph sudo chmod 0440 /etc/sudoers.d/ceph
2. Configure the admin host so it can reach every node over SSH without a password.
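A minimal sketch of one way to set this up, assuming the ceph user from step 1 and the node hostnames used later in this post:

```bash
# On the admin host, as the deploy user: generate a key pair
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to every node
for host in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    ssh-copy-id ceph@"$host"
done

# Tell ssh (and therefore ceph-deploy) which user to log in as
cat >> ~/.ssh/config <<'EOF'
Host qd01-stop-k8s-node*
    User ceph
EOF
chmod 600 ~/.ssh/config
```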
3. Configure the Ceph yum repository (ceph.repo). Point it at the 163 mirror to speed up installation:
```ini
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
```
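The repo file has to land on every node before installing. A sketch of one way to push it out, assuming it was saved locally as ceph.repo and using the node names from the later steps:

```bash
# Copy the repo file to each node and rebuild the yum metadata cache
for host in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    scp ceph.repo "$host":/tmp/ceph.repo
    ssh "$host" "sudo mv /tmp/ceph.repo /etc/yum.repos.d/ceph.repo && sudo yum clean all && sudo yum makecache"
done
```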
4. Install ceph-deploy (needed only on the admin node):
```bash
sudo yum update && sudo yum install ceph-deploy

# Install Ceph and its dependencies on the nodes
sudo ceph-deploy install qd01-stop-cloud001 qd01-stop-cloud002 qd01-stop-cloud003
```
Note: the Ceph packages can also be installed directly with yum.
Run the following on every node; if you do this, the ceph-deploy install step above can be skipped:
```bash
sudo yum -y install ceph ceph-common rbd-fuse ceph-release python-ceph-compat python-rbd librbd1-devel ceph-radosgw
```
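An optional quick check, assuming the same node list as above, to confirm every node ended up on the same Ceph release:

```bash
# Print the installed Ceph version on each node
for host in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    ssh "$host" "ceph --version"
done
```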
5. Create the cluster:
```bash
sudo ceph-deploy new qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003
```
Edit the generated configuration file ceph.conf:
```ini
[global]
fsid = ec7ee19a-f7c6-4ed0-9307-f48af473352c
mon_initial_members = qd01-stop-k8s-node001, qd01-stop-k8s-node002, qd01-stop-k8s-node003
mon_host = 10.26.22.105,10.26.22.80,10.26.22.85
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
cluster_network = 10.0.0.0/0
public_network = 10.0.0.0/0
filestore_xattr_use_omap = true
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 520
osd_pool_default_pgp_num = 520
osd_recovery_op_priority = 10
osd_client_op_priority = 100
osd_op_threads = 20
osd_recovery_max_active = 2
osd_max_backfills = 2
osd_scrub_load_threshold = 1
osd_deep_scrub_interval = 604800000
osd_deep_scrub_stride = 4096

[client]
rbd_cache = true
rbd_cache_size = 134217728
rbd_cache_max_dirty = 125829120

[mon]
mon_allow_pool_delete = true
```
Note: when you later add a Mon on a host that was not named in the ceph-deploy new command, the public network must be present in the ceph.conf configuration file.
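For example, adding a fourth monitor later might look like this (the hostname qd01-stop-k8s-node004 is purely illustrative):

```bash
# ceph.conf on the admin node must already contain a public_network line, e.g.
#   public_network = 10.0.0.0/0
ceph-deploy --overwrite-conf config push qd01-stop-k8s-node004
ceph-deploy mon add qd01-stop-k8s-node004
```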
6. Create the initial mon(s):
```bash
ceph-deploy mon create-initial
```
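Once create-initial has gathered the keys into the working directory, it is common to push the config and admin keyring out with ceph-deploy so that ceph commands work on the nodes without extra flags; a sketch:

```bash
# Distribute ceph.conf and the client.admin keyring to the nodes
ceph-deploy admin qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003

# On each node, only if the keyring is not readable by your user:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
```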
7. Create the OSDs. Only the commands for one host are listed here; for additional OSD hosts, swap in the hostname and repeat (or use the loop sketched after the create/activate commands below).
Zap the disks:
```bash
ceph-deploy disk zap qd01-stop-k8s-node001 /dev/sdb
ceph-deploy disk zap qd01-stop-k8s-node001 /dev/sdc
ceph-deploy disk zap qd01-stop-k8s-node001 /dev/sdd
```
Create and activate the OSDs:
```bash
ceph-deploy osd create --data /dev/sdb qd01-stop-k8s-node001
ceph-deploy osd create --data /dev/sdc qd01-stop-k8s-node001
ceph-deploy osd create --data /dev/sdd qd01-stop-k8s-node001
```
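If every OSD host carries the same three data disks, the zap and create steps can be driven by a small loop instead (hostnames and device names assumed to match the ones above):

```bash
for host in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        ceph-deploy disk zap "$host" "$dev"
        ceph-deploy osd create --data "$dev" "$host"
    done
done
```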
8. Create the manager (mgr) daemons:
```bash
ceph-deploy mgr create qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003
```
9. Check the cluster status:
```
[root@qd01-stop-k8s-node001 ~]# ceph -s
  cluster:
    id:     ec7ee19a-f7c6-4ed0-9307-f48af473352c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum qd01-stop-k8s-node002,qd01-stop-k8s-node003,qd01-stop-k8s-node001
    mgr: qd01-stop-k8s-node001(active), standbys: qd01-stop-k8s-node002, qd01-stop-k8s-node003
    osd: 24 osds: 24 up, 24 in

  data:
    pools:   1 pools, 256 pgs
    objects: 5 objects, 325 B
    usage:   24 GiB used, 44 TiB / 44 TiB avail
    pgs:     256 active+clean
```
10. Enable the mgr dashboard:
```bash
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin admin
ceph mgr services

ceph config-key put mgr/dashboard/server_addr 0.0.0.0   # bind address
ceph config-key put mgr/dashboard/server_port 7000      # listening port
systemctl restart ceph-mgr@qd01-stop-k8s-node001.service
```
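The server_addr and server_port keys are only picked up once the dashboard reloads; instead of restarting the whole mgr daemon as above, reloading just the module is another option (a sketch, either way works):

```bash
ceph mgr module disable dashboard
ceph mgr module enable dashboard
ceph mgr services   # should now report the dashboard URL on port 7000
```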
11. Create a pool:
```bash
# Create the pool
ceph osd pool create k8s 256 256

# Allow rbd to use the pool
ceph osd pool application enable k8s rbd --yes-i-really-mean-it

# List pools
ceph osd pool ls
```
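As a rough sanity check on the PG count (the usual rule of thumb, not anything Ceph enforces): total PGs across all pools ≈ OSDs × 100 / replica size, rounded to a power of two. With the 24 OSDs and size 3 shown above:

```bash
# Rule-of-thumb PG budget for the whole cluster (approximate guidance only)
num_osds=24; pool_size=3
echo $(( num_osds * 100 / pool_size ))   # 800 -> round to 512 or 1024, shared across all pools
```

So 256 PGs for this single k8s pool is on the conservative side and leaves headroom for additional pools.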
12. Test:
```
[root@qd01-stop-k8s-node001 ~]# rbd create docker_test --size 4096 -p k8s
[root@qd01-stop-k8s-node001 ~]# rbd info docker_test -p k8s
rbd image 'docker_test':
        size 4 GiB in 1024 objects
        order 22 (4 MiB objects)
        id: 11ed6b8b4567
        block_name_prefix: rbd_data.11ed6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Wed Nov 11 17:19:38 2020
```
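To take the test one step further and mount the image on a client, something like the following works (a sketch: the stock CentOS 7 kernel rbd client generally cannot handle the object-map, fast-diff and deep-flatten features shown above, so they are disabled first, and /dev/rbd0 is whatever device rbd map actually prints):

```bash
rbd feature disable k8s/docker_test object-map fast-diff deep-flatten
rbd map k8s/docker_test        # prints a block device such as /dev/rbd0
mkfs.xfs /dev/rbd0             # device name assumed from the map output
mount /dev/rbd0 /mnt
```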
13. Remove a node and purge the Ceph packages:
```bash
ceph-deploy purgedata qd01-stop-k8s-node008
ceph-deploy purge qd01-stop-k8s-node008
```
**Health checks**

```bash
ceph -s --conf /etc/ceph/ceph.conf --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring
ceph health
ceph quorum_status --format json-pretty
ceph osd dump
ceph osd stat
ceph mon dump
ceph mon stat
ceph mds dump
ceph mds stat
ceph pg dump
ceph pg stat
```

**OSD / pool**

```bash
ceph osd tree
ceph osd pool ls detail
ceph osd pool set rbd crush_ruleset 1
ceph osd pool create sata-pool 256 rule-sata
ceph osd pool create ssd-pool 256 rule-ssd
ceph osd pool set data min_size 2
```

**Configuration**

```bash
ceph daemon osd.0 config show                              # run on the OSD node
ceph daemon osd.0 config set mon_allow_pool_delete true    # run on the OSD node; lost after restart
ceph tell osd.0 config set mon_allow_pool_delete false     # run from any node; lost after restart
ceph config set osd.0 mon_allow_pool_delete true           # 13.x only; run from any node; survives restart,
                                                           # but the option must not also be set in the config
                                                           # file, otherwise the mon ignores this setting
```

**Logs**

```bash
ceph log last 100
```

**Map**

```bash
ceph osd map
ceph pg dump
ceph pg map x.yz
ceph pg x.yz query
```

**Auth**

```bash
ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-$hostname/keyring
ceph auth get osd.0
ceph auth get mon.
ceph auth ls
```

**CRUSH**

```bash
ceph osd crush add-bucket root-sata root
ceph osd crush add-bucket ceph-1-sata host
ceph osd crush add-bucket ceph-2-sata host
ceph osd crush move ceph-1-sata root=root-sata
ceph osd crush move ceph-2-sata root=root-sata
ceph osd crush add osd.0 2 host=ceph-1-sata
ceph osd crush add osd.1 2 host=ceph-1-sata
ceph osd crush add osd.2 2 host=ceph-2-sata
ceph osd crush add osd.3 2 host=ceph-2-sata
ceph osd crush add-bucket root-ssd root
ceph osd crush add-bucket ceph-1-ssd host
ceph osd crush add-bucket ceph-2-ssd host

# Edit the CRUSH map by hand
ceph osd getcrushmap -o /tmp/crush
crushtool -d /tmp/crush -o /tmp/crush.txt
# update /tmp/crush.txt as needed, then recompile and inject it
crushtool -c /tmp/crush.txt -o /tmp/crush.bin
ceph osd setcrushmap -i /tmp/crush.bin
```