ceph-deploy deployment process

[root@ceph-1 my_cluster]# ceph-deploy --overwrite-conf osd create ceph-1 --data data_vg1/data_lv1 --block-db block_db_vg1/block_db_lv1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf osd create ceph-1 --data data_vg1/data_lv1 --block-db block_db_vg1/block_db_lv1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x132d170>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-1
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x12b7a28>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : data_vg1/data_lv1
[ceph_deploy.cli][INFO ] block_db : block_db_vg1/block_db_lv1
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device data_vg1/data_lv1
[ceph-1][DEBUG ] connected to host: ceph-1
[ceph-1][DEBUG ] detect platform information from remote host
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data data_vg1/data_lv1 --block.db block_db_vg1/block_db_lv1
[ceph-1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph-1][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e9d0e462-08f9-4cb4-99de-ae360feeb5d8
[ceph-1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph-1][DEBUG ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-1][DEBUG ] Running command: restorecon /var/lib/ceph/osd/ceph-0
[ceph-1][DEBUG ] Running command: chown -h ceph:ceph /dev/data_vg1/data_lv1
[ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph-1][DEBUG ] Running command: ln -s /dev/data_vg1/data_lv1 /var/lib/ceph/osd/ceph-0/block
[ceph-1][DEBUG ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-1][DEBUG ] stderr: got monmap epoch 1
[ceph-1][DEBUG ] Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQDo2p5cB9vsIxAAIOyJUSvxxvhWxpmoMkqg/g==
[ceph-1][DEBUG ] stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-1][DEBUG ] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQDo2p5cB9vsIxAAIOyJUSvxxvhWxpmoMkqg/g== with 0 caps)
[ceph-1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-1][DEBUG ] Running command: chown -h ceph:ceph /dev/block_db_vg1/block_db_lv1
[ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-3
[ceph-1][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --bluestore-block-db-path /dev/block_db_vg1/block_db_lv1 --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid e9d0e462-08f9-4cb4-99de-ae360feeb5d8 --setuser ceph --setgroup ceph
[ceph-1][DEBUG ] --> ceph-volume lvm prepare successful for: data_vg1/data_lv1
[ceph-1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-1][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/data_vg1/data_lv1 --path /var/lib/ceph/osd/ceph-0
[ceph-1][DEBUG ] Running command: ln -snf /dev/data_vg1/data_lv1 /var/lib/ceph/osd/ceph-0/block
[ceph-1][DEBUG ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph-1][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-1][DEBUG ] Running command: ln -snf /dev/block_db_vg1/block_db_lv1 /var/lib/ceph/osd/ceph-0/block.db
[ceph-1][DEBUG ] Running command: chown -h ceph:ceph /dev/block_db_vg1/block_db_lv1
[ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-3
[ceph-1][DEBUG ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block.db
[ceph-1][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-3
[ceph-1][DEBUG ] Running command: systemctl enable ceph-volume@lvm-0-e9d0e462-08f9-4cb4-99de-ae360feeb5d8
[ceph-1][DEBUG ] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-e9d0e462-08f9-4cb4-99de-ae360feeb5d8.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph-1][DEBUG ] Running command: systemctl enable --runtime ceph-osd@0
[ceph-1][DEBUG ] Running command: systemctl start ceph-osd@0
[ceph-1][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-1][DEBUG ] --> ceph-volume lvm create successful for: data_vg1/data_lv1
[ceph-1][INFO ] checking OSD status...
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-1 is now ready for osd use.
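Once ceph-deploy reports the host ready, the new OSD can be sanity-checked from the admin node. A minimal sketch follows; `CEPH="echo ceph"` makes it a dry run that only prints the commands, so set `CEPH=ceph` to actually query a real cluster:

```shell
#!/bin/sh
# Post-deployment sanity checks (dry run: commands are printed, not executed).
CEPH="echo ceph"

check_osd_up() {
    $CEPH -s                       # overall health; OSD up/in counts
    $CEPH osd tree                 # osd.0 should show "up" under host ceph-1
    $CEPH osd stat --format=json   # the same check ceph-deploy runs last
}

check_osd_up
```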


Removing an OSD from Ceph

Removing an OSD

The normal procedure is:
stop the OSD process -> mark the node out -> remove the node from CRUSH -> delete the node -> delete the node's authentication key.
This order triggers two data migrations: one after the OSD is marked out, and another after the crush remove. Following 磨渣's post "刪除OSD的正確方式" (the correct way to delete an OSD), adjusting the order of the steps avoids one of the two migrations.
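The reduced-migration order (drain via CRUSH reweight first, then stop/out/remove) can be sketched as one script. The OSD id is an example; `CEPH` and `SYSTEMCTL` are set to `echo` so this is a dry run that only prints the commands:

```shell
#!/bin/sh
# Reduced-migration OSD removal: drain via CRUSH reweight first, so the
# later out/remove steps trigger no second rebalance.
# Dry run: replace echo with the real binaries on an actual cluster.
CEPH="echo ceph"
SYSTEMCTL="echo systemctl"

remove_osd() {
    id=$1
    $CEPH osd crush reweight osd.$id 0   # the only step that moves data
    # on a real cluster: wait here until backfill finishes (`ceph -s`)
    $SYSTEMCTL stop ceph-osd@$id         # weight is 0, so no migration
    $CEPH osd out osd.$id                # likewise no migration
    $CEPH osd crush remove osd.$id       # drop it from the CRUSH map
    $CEPH auth del osd.$id               # delete its cephx key
    $CEPH osd rm osd.$id                 # delete the OSD record
}

remove_osd 0
```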

For replacing a node in a Ceph cluster, the steps are handled as follows:

Adjust the OSD's CRUSH weight

ceph osd crush reweight osd.0 0.1 

Note: if you want to drain the OSD gradually, lower its CRUSH weight to 0 in several steps. This slowly moves the data off this OSD and redistributes it to the other nodes, until nothing remains on it and the migration is complete.
This adjusts not only the OSD's CRUSH weight but also the host's weight, so it changes the cluster-wide CRUSH distribution. Once the OSD's CRUSH weight reaches 0, any subsequent deletion-related operation on it no longer affects the cluster's data distribution.
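The stepwise drain described above can be sketched as a small loop. The intermediate weights are illustrative, and `CEPH="echo ceph"` keeps it a dry run:

```shell
#!/bin/sh
# Lower an OSD's CRUSH weight in stages rather than one jump, spreading
# the data migration over several smaller moves.
# Dry run: the ceph commands are printed, not executed.
CEPH="echo ceph"

drain_osd() {
    id=$1; shift
    for w in "$@"; do
        $CEPH osd crush reweight osd.$id $w
        # on a real cluster: wait until `ceph -s` reports HEALTH_OK
    done
}

drain_osd 0 0.08 0.05 0.02 0
```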

Stop the OSD process

systemctl stop ceph-osd@0

Stopping the OSD process tells the cluster that this OSD is gone and no longer serving. Because it no longer has any weight, this does not affect the overall distribution, so there is no migration.

Mark the node out

ceph osd out osd.0 

Marking the OSD out tells the cluster that this OSD no longer maps any data and no longer serves. Because it no longer has any weight, this does not affect the overall distribution, so there is no migration.

Remove the node from CRUSH

ceph osd crush remove osd.0 

This removes the OSD from the CRUSH map. Because its weight is already 0, the host's weight is unaffected, so there is no migration.

Delete the node

ceph osd rm osd.0 

This deletes the OSD's record from the cluster.

Delete the node's authentication key (otherwise the OSD id stays occupied)

ceph auth del osd.0 

Delete the host node
After all of its OSDs have been deleted, if the host itself also needs to be removed from the cluster, use this command.

ceph osd crush rm test5