1. Error 1
[root@ct ceph]# ceph -s
  cluster:
    id:     dfb110f9-e0e0-4544-9f13-9141750ee9f6
    health: HEALTH_WARN
            Degraded data redundancy: 192 pgs undersized
  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: ct(active), standbys: c2, c1
    osd: 2 osds: 2 up, 2 in
  data:
    pools:   3 pools, 192 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     102 active+undersized
             90  stale+active+undersized

Check the OSD status; the OSD on c2 has not joined the cluster:

[root@ct ceph]# ceph osd status
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| id | host | used  | avail | wr ops | wr data | rd ops | rd data | state     |
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ct   | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
| 1  | c1   | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
+----+------+-------+-------+--------+---------+--------+---------+-----------+
Solution:
Restarting the OSD service on c2 resolves the issue:

[root@c2 ~]# systemctl restart ceph-osd.target
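To spot this condition quickly, you can compare the total and "up" counts in the `osd:` summary line of `ceph -s`. The following is a minimal illustrative sketch, not a Ceph tool; the `count_down` function name and the sample line are assumptions for demonstration.

```shell
#!/bin/sh
# count_down LINE: print how many OSDs are registered but not up, given the
# "osd:" summary line from `ceph -s` (shape: "osd: N osds: M up, K in").
count_down() {
  total=$(printf '%s\n' "$1" | awk '{print $2}')  # N = registered OSDs
  up=$(printf '%s\n' "$1" | awk '{print $4}')     # M = OSDs currently up
  echo $((total - up))
}

# On a live cluster you would feed it the real line:
#   count_down "$(ceph -s | grep 'osds:')"
# Here, a hypothetical line where one of three OSDs is down:
count_down "osd: 3 osds: 2 up, 2 in"   # prints 1
```

If the printed count is nonzero, `ceph osd status` (as above) shows which host's OSD is missing, and that is the node where `ceph-osd.target` should be restarted.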
2. Error 2
[root@ct ceph]# ceph -s
  cluster:
    id:     44d72edb-4085-4cfc-8652-eb670472f169
    health: HEALTH_WARN
            clock skew detected on mon.c1, mon.c2
  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: c1(active), standbys: c2, ct
    osd: 3 osds: 1 up, 1 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 1023 GiB / 1024 GiB avail
    pgs:
Solution:
(1) Restart the NTP service on the controller node:

[root@ct ceph]# systemctl restart ntpd
(2) Re-synchronize the compute nodes' clocks against the controller node:

[root@c2 ~]# ntpdate 192.168.100.10
(3) Restart the mon service on the controller node:

[root@ct ceph]# systemctl restart ceph-mon.target
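The warning appears when a monitor's clock drifts more than `mon_clock_drift_allowed` (default 0.05 s) from the quorum leader, which is why even a sub-second offset triggers HEALTH_WARN. As a sketch of that check (the `skew_ok` helper is illustrative, not part of Ceph; feed it an offset measured with, e.g., `ntpdate -q <server>`):

```shell
#!/bin/sh
# skew_ok OFFSET [MAX]: report whether a measured clock offset (seconds)
# is within the allowed drift. MAX defaults to 0.05 s, the Ceph default
# for mon_clock_drift_allowed.
skew_ok() {
  awk -v off="$1" -v max="${2:-0.05}" 'BEGIN {
    if (off < 0) off = -off;       # skew in either direction counts
    if (off <= max) print "OK"; else print "SKEW";
  }'
}

skew_ok 0.003    # prints OK   -- within the 50 ms tolerance
skew_ok -0.8     # prints SKEW -- this much drift would raise HEALTH_WARN
```

After step (3), the skew warning should clear from `ceph -s` once the restarted mons re-check their clocks against the freshly synchronized time.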