Quick Deployment of Ceph Distributed Storage on CentOS 7 with ceph-deploy - Operation Notes

Ceph distributed storage fundamentals were covered in detail earlier; below is a quick record of deploying a Ceph environment on CentOS 7 with ceph-deploy.
1) Basic environment
192.168.10.220 ceph-admin (ceph-deploy) mds1, mon1 (the monitor node can also be placed on a separate machine)
192.168.10.239 ceph-node1 osd1
192.168.10.212 ceph-node2 osd2
192.168.10.213 ceph-node3 osd3
-------------------------------------------------
Set the hostname on each node:

# hostnamectl set-hostname ceph-admin
# hostnamectl set-hostname ceph-node1
# hostnamectl set-hostname ceph-node2
# hostnamectl set-hostname ceph-node3

Add hostname mappings to /etc/hosts on each node:

# cat /etc/hosts

192.168.10.220 ceph-admin
192.168.10.239 ceph-node1
192.168.10.212 ceph-node2
192.168.10.213 ceph-node3
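Since every node needs the same mappings, a hedged convenience sketch is to push /etc/hosts out from ceph-admin (root password prompts are expected here, as ssh trust is only set up later):

for h in ceph-node1 ceph-node2 ceph-node3; do
    scp /etc/hosts root@$h:/etc/hosts      # overwrite each node's hosts file with the admin node's copy
done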
-------------------------------------------------
Verify connectivity from each node:

# ping -c 3 ceph-admin
# ping -c 3 ceph-node1
# ping -c 3 ceph-node2
# ping -c 3 ceph-node3

Disable the firewall and SELinux on each node:

# systemctl stop firewalld
# systemctl disable firewalld
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# setenforce 0

Install and configure NTP on each node (the official recommendation is to install and configure NTP on all cluster nodes so their system clocks stay consistent; with no local NTP server deployed here, the nodes sync against public servers online):

# yum install ntp ntpdate ntp-doc -y
# systemctl restart ntpd
# systemctl status ntpd
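To confirm the clocks actually converged, a hedged check (ntpq comes with the ntp package just installed; the ssh loop assumes the nodes accept root logins at this stage):

# ntpq -p          # the upstream peer actually selected for sync is marked with "*"
# for h in ceph-node1 ceph-node2 ceph-node3; do ssh $h date; done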

Prepare the yum repositories on each node.
Remove the default repositories (the overseas mirrors are slow):

# yum clean all
# mkdir /mnt/bak
# mv /etc/yum.repos.d/* /mnt/bak/

Download the Aliyun base and epel repositories:

# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Add the ceph repository:

# vim /etc/yum.repos.d/ceph.repo

[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
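All three repo files are needed on every node; rather than repeating the steps above by hand, a hedged convenience sketch pushes them out from ceph-admin (root password prompts are expected, since ssh trust comes later):

for h in ceph-node1 ceph-node2 ceph-node3; do
    scp /etc/yum.repos.d/{CentOS-Base.repo,epel.repo,ceph.repo} root@$h:/etc/yum.repos.d/
done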
------------------------------------------------------------
Create the cephuser account on each node and grant it sudo privileges:

# useradd -d /home/cephuser -m cephuser
# echo "cephuser"|passwd --stdin cephuser
# echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
# chmod 0440 /etc/sudoers.d/cephuser
# sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers

Test cephuser's sudo privileges:

# su - cephuser
$ sudo su -
#

Set up mutual ssh trust between the nodes.
First generate a key pair on the ceph-admin node, then copy its .ssh directory to the other nodes:
[root@ceph-admin ~]# su - cephuser
[cephuser@ceph-admin ~]$ ssh-keygen -t rsa      # press Enter at every prompt
[cephuser@ceph-admin ~]$ cd .ssh/
[cephuser@ceph-admin .ssh]$ ls
id_rsa id_rsa.pub
[cephuser@ceph-admin .ssh]$ cp id_rsa.pub authorized_keys

[cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node1:/home/cephuser/
[cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node2:/home/cephuser/
[cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node3:/home/cephuser/

Then verify the mutual ssh trust as cephuser from each node:
$ ssh -p22 cephuser@ceph-admin
$ ssh -p22 cephuser@ceph-node1
$ ssh -p22 cephuser@ceph-node2
$ ssh -p22 cephuser@ceph-node3
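Copying the whole .ssh directory also ships the private key to every node. An alternative, hedged sketch that distributes only the public key (same hosts assumed, one password prompt per node):

[cephuser@ceph-admin ~]$ for h in ceph-node1 ceph-node2 ceph-node3; do ssh-copy-id cephuser@$h; done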
2) Prepare the disks (on the ceph-node1, ceph-node2 and ceph-node3 nodes)

Do not use disks that are too small for testing, or adding them later will fail; 20G or larger is recommended.
All three nodes are virtual machines created in WebvirtMgr; see https://www.cnblogs.com/kevingrace/p/8387999.html for creating and attaching VM disks.
A 20G raw disk is attached to each of the three nodes, as shown below.

Check the disk:
$ sudo fdisk -l /dev/vdb
Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Partition and format the disk:
$ sudo parted -s /dev/vdb mklabel gpt mkpart primary xfs 0% 100%
$ sudo mkfs.xfs /dev/vdb -f

Verify the filesystem type (xfs):
$ sudo blkid -o value -s TYPE /dev/vdb
3) Deployment (quick deployment from the ceph-admin node with ceph-deploy)

[root@ceph-admin ~]# su - cephuser

Install ceph-deploy:
[cephuser@ceph-admin ~]$ sudo yum update -y && sudo yum install ceph-deploy -y

Create the cluster directory:
[cephuser@ceph-admin ~]$ mkdir cluster
[cephuser@ceph-admin ~]$ cd cluster/

Create the cluster (pass the monitor node's hostname; here the monitor node and the admin node are the same machine, ceph-admin):
[cephuser@ceph-admin cluster]$ ceph-deploy new ceph-admin
.........
[ceph-admin][DEBUG ] IP addresses found: [u'192.168.10.220']
[ceph_deploy.new][DEBUG ] Resolving host ceph-admin
[ceph_deploy.new][DEBUG ] Monitor ceph-admin at 192.168.10.220
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-admin']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.10.220']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

Edit the ceph.conf file (note: mon_host must be inside the same subnet as the public network!)
[cephuser@ceph-admin cluster]$ vim ceph.conf      # append the two lines below
......
public network = 192.168.10.220/24
osd pool default size = 3
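For reference, the resulting ceph.conf should look roughly like this sketch: the first six settings are what ceph-deploy new generates, and the fsid shown is this cluster's (taken from the later ceph -s output; yours will differ):

[global]
fsid = 33bfa421-8a3b-40fa-9f14-791efca9eb96
mon_initial_members = ceph-admin
mon_host = 192.168.10.220
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.10.220/24
osd pool default size = 3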

Install ceph (this step takes quite a while, be patient...):
[cephuser@ceph-admin cluster]$ ceph-deploy install ceph-admin ceph-node1 ceph-node2 ceph-node3

Initialize the monitor node and gather all the keys:
[cephuser@ceph-admin cluster]$ ceph-deploy mon create-initial
[cephuser@ceph-admin cluster]$ ceph-deploy gatherkeys ceph-admin

Add OSDs to the cluster.
List all usable disks on the OSD nodes:
[cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3

Wipe all partitions on the osd disks with the zap option:
[cephuser@ceph-admin cluster]$ ceph-deploy disk zap ceph-node1:/dev/vdb ceph-node2:/dev/vdb ceph-node3:/dev/vdb

Prepare the OSDs (prepare command):
[cephuser@ceph-admin cluster]$ ceph-deploy osd prepare ceph-node1:/dev/vdb ceph-node2:/dev/vdb ceph-node3:/dev/vdb

Activate the OSDs (note that ceph partitioned the disks, so the data partition of /dev/vdb is /dev/vdb1):
[cephuser@ceph-admin cluster]$ ceph-deploy osd activate ceph-node1:/dev/vdb1 ceph-node2:/dev/vdb1 ceph-node3:/dev/vdb1
---------------------------------------------------------------------------------------------
The following error may appear:
[ceph-node1][WARNIN] ceph_disk.main.Error: Error: /dev/vdb1 is not a directory or block device
[ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/vdb1

This error does not affect the deployment, though; lsblk on the three osd nodes shows the disks mounted successfully:
[cephuser@ceph-node1 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 4.2G 0 rom
vda 252:0 0 70G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 69G 0 part
├─centos-root 253:0 0 43.8G 0 lvm /
├─centos-swap 253:1 0 3.9G 0 lvm [SWAP]
└─centos-home 253:2 0 21.4G 0 lvm /home
vdb 252:16 0 20G 0 disk
├─vdb1 252:17 0 15G 0 part /var/lib/ceph/osd/ceph-0      # mounted successfully
└─vdb2 252:18 0 5G 0 part

[cephuser@ceph-node2 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 4.2G 0 rom
vda 252:0 0 70G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 69G 0 part
├─centos-root 253:0 0 43.8G 0 lvm /
├─centos-swap 253:1 0 3.9G 0 lvm [SWAP]
└─centos-home 253:2 0 21.4G 0 lvm /home
vdb 252:16 0 20G 0 disk
├─vdb1 252:17 0 15G 0 part /var/lib/ceph/osd/ceph-1      # mounted successfully
└─vdb2 252:18 0 5G 0 part

[cephuser@ceph-node3 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 4.2G 0 rom
vda 252:0 0 70G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 69G 0 part
├─centos-root 253:0 0 43.8G 0 lvm /
├─centos-swap 253:1 0 3.9G 0 lvm [SWAP]
└─centos-home 253:2 0 21.4G 0 lvm /home
vdb 252:16 0 20G 0 disk
├─vdb1 252:17 0 15G 0 part /var/lib/ceph/osd/ceph-2      # mounted successfully
└─vdb2 252:18 0 5G 0 part
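The 5G vdb2 partition in each listing is the OSD journal that ceph-disk creates by default (jewel's osd journal size defaults to 5120 MB, leaving 15G for data on a 20G disk). If a different split is wanted, the value can presumably be set in ceph.conf on the admin node before osd prepare; a hedged sketch:

[osd]
osd journal size = 10240      # in MB; the 5120 default produced the 5G vdb2 journals above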

List the OSDs:
[cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
........
[ceph-node1][DEBUG ] /dev/vdb2 ceph journal, for /dev/vdb1      # these two partitions showing up means it worked
[ceph-node1][DEBUG ] /dev/vdb1 ceph data, active, cluster ceph, osd.0, journal /dev/vdb2
........
[ceph-node2][DEBUG ] /dev/vdb2 ceph journal, for /dev/vdb1
[ceph-node2][DEBUG ] /dev/vdb1 ceph data, active, cluster ceph, osd.1, journal /dev/vdb2
.......
[ceph-node3][DEBUG ] /dev/vdb2 ceph journal, for /dev/vdb1
[ceph-node3][DEBUG ] /dev/vdb1 ceph data, active, cluster ceph, osd.2, journal /dev/vdb2

Use ceph-deploy to copy the config file and admin key to the admin node and the Ceph nodes, so ceph CLI commands no longer need the monitor address and ceph.client.admin.keyring specified every time:
[cephuser@ceph-admin cluster]$ ceph-deploy admin ceph-admin ceph-node1 ceph-node2 ceph-node3

Fix the keyring permissions:
[cephuser@ceph-admin cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

Check ceph health:
[cephuser@ceph-admin cluster]$ sudo ceph health
HEALTH_OK
[cephuser@ceph-admin cluster]$ sudo ceph -s
cluster 33bfa421-8a3b-40fa-9f14-791efca9eb96
health HEALTH_OK
monmap e1: 1 mons at {ceph-admin=192.168.10.220:6789/0}
election epoch 3, quorum 0 ceph-admin
osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
100 MB used, 45946 MB / 46046 MB avail
64 active+clean

Check the ceph osd status:
[cephuser@ceph-admin ~]$ ceph osd stat
osdmap e19: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds

View the OSD tree:
[cephuser@ceph-admin ~]$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.04376 root default
-2 0.01459 host ceph-node1
0 0.01459 osd.0 up 1.00000 1.00000
-3 0.01459 host ceph-node2
1 0.01459 osd.1 up 1.00000 1.00000
-4 0.01459 host ceph-node3
2 0.01459 osd.2 up 1.00000 1.00000

Check the monitor node's service:
[cephuser@ceph-admin cluster]$ sudo systemctl status ceph-mon@ceph-admin
[cephuser@ceph-admin cluster]$ ps -ef|grep ceph|grep 'cluster'
ceph 28190 1 0 11:44 ? 00:00:01 /usr/bin/ceph-mon -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph

Check the osd service on each of ceph-node1, ceph-node2 and ceph-node3; they are all running.
[cephuser@ceph-node1 ~]$ sudo systemctl status ceph-osd@0.service      # use "start" to start it, "restart" to restart
[cephuser@ceph-node1 ~]$ sudo ps -ef|grep ceph|grep "cluster"
ceph 28749 1 0 11:44 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
cephuser 29197 29051 0 11:54 pts/2 00:00:00 grep --color=auto cluster

[cephuser@ceph-node2 ~]$ sudo systemctl status ceph-osd@1.service
[cephuser@ceph-node2 ~]$ sudo ps -ef|grep ceph|grep "cluster"
ceph 28749 1 0 11:44 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
cephuser 29197 29051 0 11:54 pts/2 00:00:00 grep --color=auto cluster

[cephuser@ceph-node3 ~]$ sudo systemctl status ceph-osd@2.service
[cephuser@ceph-node3 ~]$ sudo ps -ef|grep ceph|grep "cluster"
ceph 28749 1 0 11:44 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
cephuser 29197 29051 0 11:54 pts/2 00:00:00 grep --color=auto cluster
4) Create the filesystem

First check the MDS status; by default there is no metadata server.
[cephuser@ceph-admin ~]$ ceph mds stat
e1:

Create the metadata server (ceph-admin doubles as the MDS node).
Note: without an MDS node, clients cannot mount the ceph filesystem!
[cephuser@ceph-admin ~]$ pwd
/home/cephuser
[cephuser@ceph-admin ~]$ cd cluster/
[cephuser@ceph-admin cluster]$ ceph-deploy mds create ceph-admin

Check the MDS status again; it is now coming up:
[cephuser@ceph-admin cluster]$ ceph mds stat
e2:, 1 up:standby

[cephuser@ceph-admin cluster]$ sudo systemctl status ceph-mds@ceph-admin
[cephuser@ceph-admin cluster]$ ps -ef|grep cluster|grep ceph-mds
ceph 29093 1 0 12:46 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph

Create the pools. A pool is the logical partition Ceph stores data in; it acts as a namespace.
[cephuser@ceph-admin cluster]$ ceph osd lspools      # list the existing pools first
0 rbd,

A newly created ceph cluster has only the rbd pool, so create new ones:
[cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_data 10      # the trailing number is the PG count
pool 'cephfs_data' created

[cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_metadata 10      # create the metadata pool
pool 'cephfs_metadata' created

[cephuser@ceph-admin cluster]$ ceph fs new myceph cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
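A common rule of thumb sizes total PGs at (100 × OSD count) / replica count, rounded up to a power of two; with 3 OSDs and size 3 that works out to roughly 128 across all pools, so a value like 10 only suits a small lab. pg_num can be raised later (pgp_num should follow it); a hedged sketch:

[cephuser@ceph-admin cluster]$ ceph osd pool set cephfs_data pg_num 32
[cephuser@ceph-admin cluster]$ ceph osd pool set cephfs_data pgp_num 32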

List the pools again:
[cephuser@ceph-admin cluster]$ ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,

Check the MDS status:
[cephuser@ceph-admin cluster]$ ceph mds stat
e5: 1/1/1 up {0=ceph-admin=up:active}

Check the ceph cluster status:
[cephuser@ceph-admin cluster]$ sudo ceph -s
cluster 33bfa421-8a3b-40fa-9f14-791efca9eb96
health HEALTH_OK
monmap e1: 1 mons at {ceph-admin=192.168.10.220:6789/0}
election epoch 3, quorum 0 ceph-admin
fsmap e5: 1/1/1 up {0=ceph-admin=up:active}      # this fsmap line is new
osdmap e19: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v48: 84 pgs, 3 pools, 2068 bytes data, 20 objects
101 MB used, 45945 MB / 46046 MB avail
84 active+clean

Check the ceph cluster port:
[cephuser@ceph-admin cluster]$ sudo lsof -i:6789
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ceph-mon 28190 ceph 10u IPv4 70217 0t0 TCP ceph-admin:smc-https (LISTEN)
ceph-mon 28190 ceph 19u IPv4 70537 0t0 TCP ceph-admin:smc-https->ceph-node1:41308 (ESTABLISHED)
ceph-mon 28190 ceph 20u IPv4 70560 0t0 TCP ceph-admin:smc-https->ceph-node2:48516 (ESTABLISHED)
ceph-mon 28190 ceph 21u IPv4 70583 0t0 TCP ceph-admin:smc-https->ceph-node3:44948 (ESTABLISHED)
ceph-mon 28190 ceph 22u IPv4 72643 0t0 TCP ceph-admin:smc-https->ceph-admin:51474 (ESTABLISHED)
ceph-mds 29093 ceph 8u IPv4 72642 0t0 TCP ceph-admin:51474->ceph-admin:smc-https (ESTABLISHED)
5) Mount the ceph storage on a client (fuse method)

Install ceph-fuse (the client machine here runs CentOS 6):
[root@centos6-02 ~]# rpm -Uvh https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@centos6-02 ~]# yum install -y ceph-fuse

Create the mount point:
[root@centos6-02 ~]# mkdir /cephfs

Copy the config file.
Copy ceph.conf from the admin node (192.168.10.220) to the client:
[root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/etc/ceph/ceph.conf /etc/ceph/
or
[root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/home/cephuser/cluster/ceph.conf /etc/ceph/      # both paths hold identical files

Copy the keyring.
Copy ceph.client.admin.keyring from the admin node to the client:
[root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
or
[root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/home/cephuser/cluster/ceph.client.admin.keyring /etc/ceph/

List the ceph auth entries:
[root@centos6-02 ~]# ceph auth list
installed auth entries:

mds.ceph-admin
key: AQAZZxdbH6uAOBAABttpSmPt6BXNtTJwZDpSJg==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
osd.0
key: AQCuWBdbV3TlBBAA4xsAE4QsFQ6vAp+7pIFEHA==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQC6WBdbakBaMxAAsUllVWdttlLzEI5VNd/41w==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQDJWBdbz6zNNhAATwzL2FqPKNY1IvQDmzyOSg==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQCNWBdbf1QxAhAAkryP+OFy6wGnKR8lfYDkUA==
caps: [mds] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQCNWBdbnjLILhAAT1hKtLEzkCrhDuTLjdCJig==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQCOWBdbmxEANBAAiTMJeyEuSverXAyOrwodMQ==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQCNWBdbiO1bERAARLZaYdY58KLMi4oyKmug4Q==
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
key: AQCNWBdboBLXIBAAVTsD2TPJhVSRY2E9G7eLzQ==
caps: [mon] allow profile bootstrap-rgw

Mount the ceph cluster storage at /cephfs on the client:
[root@centos6-02 ~]# ceph-fuse -m 192.168.10.220:6789 /cephfs
2018-06-06 14:28:54.149796 7f8d5c256760 -1 init, newargv = 0x4273580 newargc=11
ceph-fuse[16107]: starting ceph client
ceph-fuse[16107]: starting fuse

[root@centos6-02 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_centos602-lv_root
50G 3.5G 44G 8% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/vda1 477M 41M 412M 9% /boot
/dev/mapper/vg_centos602-lv_home
45G 52M 43G 1% /home
/dev/vdb1 20G 5.1G 15G 26% /data/osd1
ceph-fuse 45G 100M 45G 1% /cephfs

As shown above, the ceph storage is mounted successfully: three osd nodes with 15G each (the ceph data partition size can be checked with "lsblk" on each node), 45G in total!

Unmount the ceph storage:
[root@centos6-02 ~]# umount /cephfs
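To remount automatically at boot, ceph-fuse supports an fstab entry; a hedged sketch of the jewel-era /etc/fstab syntax (id=admin refers to the client.admin keyring copied above):

#DEVICE    PATH      TYPE       OPTIONS            DUMP  PASS
id=admin   /cephfs   fuse.ceph  defaults,_netdev   0     0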

Tip:
Once more than half of the OSD nodes are down, remote clients can no longer use the mounted Ceph storage (it effectively pauses). In this case there are 3 OSD nodes: with one OSD node down (e.g. powered off), the client mount still works normally; with two OSD nodes down, the mount stops working (reads and writes under the Ceph mount point simply hang), and it returns to normal once the OSD nodes recover. After an OSD node reboots, the osd daemon comes up automatically (brought back via the monitor node).
============================ Other notes ===========================


Tear down the ceph storage.
Purge the packages:
[cephuser@ceph-admin ~]$ ceph-deploy purge ceph-admin ceph-node1 ceph-node2 ceph-node3

Purge the configuration data:
[cephuser@ceph-admin ~]$ ceph-deploy purgedata ceph-admin ceph-node1 ceph-node2 ceph-node3
[cephuser@ceph-admin ~]$ ceph-deploy forgetkeys

Remove the leftover files on every node (a loop sketch follows the commands below):
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/osd/
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/mon/

[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/mds/
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/bootstrap-mds/

[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/bootstrap-osd/
[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/bootstrap-mon/

[cephuser@ceph-admin ~]$ sudo rm -rf /var/lib/ceph/tmp/
[cephuser@ceph-admin ~]$ sudo rm -rf /etc/ceph/

[cephuser@ceph-admin ~]$ sudo rm -rf /var/run/ceph/*
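These rm commands must run on every node, not just ceph-admin; a hedged loop sketch using the cephuser ssh trust set up earlier (NOPASSWD sudo keeps it non-interactive):

[cephuser@ceph-admin ~]$ for h in ceph-admin ceph-node1 ceph-node2 ceph-node3; do
    ssh $h "sudo rm -rf /var/lib/ceph/{osd,mon,mds,bootstrap-mds,bootstrap-osd,bootstrap-mon,tmp} /etc/ceph /var/run/ceph/*"
done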


Browse ceph's built-in help for the ceph osd, ceph mds, ceph mon and ceph pg commands:
[cephuser@ceph-admin ~]$ ceph --help


If you hit the following error:
[cephuser@ceph-admin ~]$ ceph osd tree
2018-06-06 14:56:27.843841 7f8a0b6dd700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2018-06-06 14:56:27.843853 7f8a0b6dd700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2018-06-06 14:56:27.843854 7f8a0b6dd700 0 librados: client.admin initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound
[cephuser@ceph-admin ~]$ ceph osd stat
2018-06-06 14:55:58.165882 7f377a1c9700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2018-06-06 14:55:58.165894 7f377a1c9700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2018-06-06 14:55:58.165896 7f377a1c9700 0 librados: client.admin initialization error (2) No such file or directory

Fix:
[cephuser@ceph-admin ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

[cephuser@ceph-admin ~]$ ceph osd stat
osdmap e35: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds

[cephuser@ceph-admin ~]$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.04376 root default
-2 0.01459 host ceph-node1
0 0.01459 osd.0 up 1.00000 1.00000
-3 0.01459 host ceph-node2
1 0.01459 osd.1 up 1.00000 1.00000
-4 0.01459 host ceph-node3
2 0.01459 osd.2 up 1.00000 1.00000
