Ceph Luminous new feature: the built-in dashboard

 

Set up a 3-node environment; the output of ceph -s is as follows:

[root@clove16 ~]# ceph -s
  cluster:
    id:     a57fb0a6-9528-11e7-84c0-ecf4bbdc70f8
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum 10.118.203.14,10.118.203.15,10.118.203.16
    mgr: 10.118.203.16(active), standbys: 10.118.203.14, 10.118.203.15
    osd: 30 osds: 30 up, 30 in
 
  data:
    pools:   2 pools, 728 pgs
    objects: 25609 objects, 100255 MB
    usage:   13568 MB used, 27903 GB / 27916 GB avail
    pgs:     728 active+clean
 
  io:
    client:   4435 B/s rd, 0 B/s wr, 4 op/s rd, 0 op/s wr

Enable the monitoring module

The following operations only need to be run on the node where the mgr is active (as shown above, the IP of the active mgr node is 10.118.203.16).
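If you are unsure which node currently hosts the active mgr, either of the following checks will tell you (a minimal sketch, run from any node with admin credentials; the grep patterns assume output shaped like the ceph -s above):

[root@clove16 ~]# ceph -s | grep mgr
    mgr: 10.118.203.16(active), standbys: 10.118.203.14, 10.118.203.15
[root@clove16 ~]# ceph mgr dump | grep active_addr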

Add the following to /etc/ceph/ceph.conf:

[mgr]
mgr_modules = dashboard
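On Luminous the dashboard module can also be enabled at runtime from the CLI, instead of (or in addition to) the ceph.conf entry above; this is an alternative, not a step this article depends on:

[root@clove16 ~]# ceph mgr module enable dashboard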

Set the dashboard's IP and port:

[root@clove16 ~]# ceph config-key put mgr/dashboard/server_addr 10.118.203.16

[root@clove16 ~]# ceph config-key put mgr/dashboard/server_port 7000
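If you would rather have the dashboard listen on all interfaces instead of one fixed IP, the module also accepts a wildcard address (optional; binding to all addresses is the module's default behaviour when server_addr is unset):

[root@clove16 ~]# ceph config-key put mgr/dashboard/server_addr ::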

View the configuration:

[root@clove117 ~]# ceph config-key dump 
{
    "mgr/dashboard/server_addr": "10.118.203.16",
    "mgr/dashboard/server_port": "7000"

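To read back a single key rather than dumping the whole store (an optional spot check):

[root@clove16 ~]# ceph config-key get mgr/dashboard/server_addr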
Restart the mgr service:

[root@clove16 ~]# systemctl restart ceph-mgr@10.118.203.16
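After the restart, you can confirm that the dashboard module is actually loaded (a quick sanity check; the exact shape of the output varies between Luminous point releases):

[root@clove16 ~]# ceph mgr module ls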

Check the port:

[root@clove16 ~]# netstat -anp|grep 7000
tcp        0      0 10.118.203.16:7000      0.0.0.0:*               LISTEN      24217/ceph-mgr      
tcp        0      0 10.118.203.16:7000      10.90.96.21:62756       TIME_WAIT   -                   
tcp        0      0 10.118.203.16:7000      10.90.96.21:62754       TIME_WAIT   -                   
tcp        0      0 10.118.203.16:7000      10.90.96.21:62755       TIME_WAIT   -                   
tcp        0      0 10.118.203.16:7000      10.90.96.21:62748       TIME_WAIT   -                   
tcp        0      0 10.118.203.16:7000      10.90.96.21:62749       TIME_WAIT   -                   
tcp        0      0 10.118.203.16:7000      10.90.96.21:62753       TIME_WAIT   -                   
tcp        0      0 10.118.203.16:7000      10.90.96.21:62775       ESTABLISHED 24217/ceph-mgr
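Optionally, verify that the dashboard answers HTTP rather than just that the socket is listening; an HTTP 200 status indicates it is serving pages (a minimal check, assuming the address and port configured above):

[root@clove16 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://10.118.203.16:7000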

Log in to the web interface to view the monitoring dashboard
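The dashboard is now reachable in a browser at the address and port configured earlier:

http://10.118.203.16:7000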
