08. Cinder Storage → 5. Scenario Learning → 12. Ceph Volume Provider → 2. Common Commands

Check the Ceph version
root@controller:~# ceph --version
ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
Check Ceph-related processes
  1.  The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to provide additional monitoring and interfaces to external monitoring and management systems.
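The original process listings below were screenshots; a minimal way to reproduce them on a systemd-managed host (an assumption, not confirmed by the screenshots) is:
root@controller:~# ps -ef | grep ceph- | grep -v grep
root@controller:~# systemctl list-units 'ceph*' --type=service
The first command shows the daemon processes themselves (ceph-mon, ceph-mgr, ceph-osd); the second lists the corresponding systemd services.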
My devstack:
[Screenshot: ceph process list on my devstack]

R&D environment:
[Screenshot: ceph process list on the R&D environment]
Check the storage pools (a listing command is sketched after the list)
  1. images: glance's rbd_store_pool, which holds the image files
  2. vms: nova's images_rbd_pool; the image-based boot disks live here, while the backing files are still kept in the _base directory
  3. volumes: cinder's rbd_pool, which holds the volume files
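The pool names themselves can be listed directly; a minimal check (command shown without its output, which was not captured in the original notes):
root@controller:~# ceph osd lspools
rados lspools prints the same list.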



Check the number of PGs in each pool
root@controller:~# ceph osd pool get images pg_num
pg_num: 8
root@controller:~# ceph osd pool get vms pg_num
pg_num: 8
root@controller:~# ceph osd pool get volumes pg_num
pg_num: 8
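pgp_num can be queried the same way if needed; for example (a symmetric check, not part of the original capture):
root@controller:~# ceph osd pool get images pgp_num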
Check the cluster status
  1. If an error appears:
    root@controller:~# ceph -s
      cluster:
        id:     eab37548-7aef-466a-861c-3757a12ce9e8
        health: HEALTH_WARN
                application not enabled on 1 pool(s)
                too few PGs per OSD (24 < min 30)
     
      services:
        mon: 1 daemons, quorum controller
        mgr: x(active)
        osd: 1 osds: 1 up, 1 in
     
      data:
        pools:   3 pools, 24 pgs
        objects: 2 objects, 19B
        usage:   249MiB used, 23.7GiB / 24.0GiB avail
        pgs:     24 active+clean
    1. Calculation: if pg_num were 64, the replica count 3, and the number of OSDs 9, each OSD would hold on average 64 × 3 / 9 ≈ 21 PGs, which is also below the configured minimum of 30 and would likewise trigger the warning; this is just to illustrate the calculation
    2. After the devstack installation there are 24 PGs in total but only one OSD, so the replica count can only be 1; each OSD therefore holds 24 × 1 / 1 = 24 PGs, and the warning says the number of PGs per OSD is below the minimum of 30, hence HEALTH_WARN
    3. Fix:
      1. Increase the PG count of the pool
        root@controller:~# ceph osd pool set images pg_num 32
        set pool 1 pg_num to 32
        root@controller:~# ceph osd pool set images pgp_num 32
        set pool 1 pgp_num to 32
        root@controller:~# ceph osd pool get images pg_num
        pg_num: 32
      2. ceph -s then still reports: application not enabled on 1 pool(s)
        1. The detailed information can be viewed with ceph health detail
      3. Enable the application on the pool:
        root@controller:~# ceph osd pool application enable images rbd
        enabled application 'rbd' on pool 'images'
        The syntax is ceph osd pool application enable <pool-name> <app-name>, where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
  2. After the cirros image has been created, observe the Ceph cluster status again
1. Cluster status after the errors have been fixed
root@controller:~# ceph -s
  cluster:
    id:     eab37548-7aef-466a-861c-3757a12ce9e8
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum controller
    mgr: x(active)
    osd: 1 osds: 1 up, 1 in
 
  data:
    pools:   3 pools, 48 pgs
    objects: 2 objects, 19B
    usage:   249MiB used, 23.7GiB / 24.0GiB avail
    pgs:     48 active+clean
48 = 32 + 8 + 8 (images + vms + volumes)
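A single command that confirms this per-pool breakdown (not captured in the original notes) is:
root@controller:~# ceph osd pool ls detail
which prints pg_num and pgp_num for images, vms and volumes in one listing.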

2. Cluster status after the cirros image has been created
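The corresponding output was not captured; a minimal way to observe the effect of the new image on the cluster (assuming glance writes to the images pool, as noted in the pool list above) is:
root@controller:~# ceph df
root@controller:~# rados -p images ls
ceph df shows the per-pool object count and usage growing, and rados -p images ls lists the objects that back the uploaded image.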

Check the OSD tree
  1. After installing the devstack environment with the ceph plugin, there is only one OSD
  2. The eight-node HA environment has 8 OSDs, each corresponding to one volume
[Figure: OSD tree on my devstack; controller is the hostname]
[Figure: OSD tree on the eight-node HA environment]
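The screenshots are not reproduced here; the tree itself is printed with (output omitted):
root@controller:~# ceph osd tree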
Check the mon status
  1. My devstack has only one mon
  2. The eight-node HA environment has 3 mons. A Ceph storage cluster can run with a single monitor, but that monitor then becomes a single point of failure; to improve reliability and fault tolerance, Ceph supports clusters of monitors
My devstack
root@controller:~# ceph mon stat
e1: 1 mons at {controller=172.16.1.17:6789/0}, election epoch 5, leader 0 controller, quorum 0 
controller
Eight-node HA environment
root@osd-1:~# ceph mon stat
e1: 3 mons at {osd-1=172.16.1.46:6789/0,osd-2=172.16.1.78:6789/0,osd-3=172.16.1.62:6789/0}, 
election epoch 1298, leader 0 osd-1, quorum 0,1,2 osd-1,osd-3,osd-2
Check the authentication status
  1. Check the keyring files (my devstack)
    root@controller:~# ll /etc/ceph/  
    total 24
    drwxr-xr-x   2 root  root  4096 Jun 26 16:53 ./
    drwxr-xr-x 112 root  root  4096 Jun 25 20:36 ../
    -rw-------   1 ceph  ceph    63 Jun 25 19:31 ceph.client.admin.keyring
    -rw-r--r--   1 stack stack   64 Jun 25 20:19 ceph.client.cinder.keyring
    -rw-r--r--   1 stack stack   64 Jun 25 20:19 ceph.client.glance.keyring
    -rw-r--r--   1 root  root   335 Jun 25 19:31 ceph.conf
  2. Check the keyring files (eight-node HA environment)
    root@ctl-1:~# ll /etc/ceph/
    total 28
    drwxr-xr-x   2 root   root   4096 Jun  1 10:47 ./
    drwxr-xr-x 108 root   root   4096 Jun 17 14:59 ../
    -rw-------   1 root   root    151 May 31 16:49 ceph.client.admin.keyring
    -rw-r--r--   1 cinder cinder   64 Jun  1 10:47 ceph.client.cinder.keyring
    -rw-r--r--   1 glance glance   64 Jun  1 10:41 ceph.client.glance.keyring
    -rw-r--r--   1 root   root    297 May 31 16:49 ceph.conf
    -rw-r--r--   1 root   root     92 Mar 20 03:51 rbdmap
    -rw-------   1 root   root      0 May 31 16:49 tmpbq23nn
    
    root@cmp-1:~# ll /etc/ceph/
    total 24
    drwxr-xr-x   2 root root 4096 Jun  1 13:11 ./
    drwxr-xr-x 104 root root 4096 Jun  1 11:16 ../
    -rw-------   1 root root  151 May 31 16:53 ceph.client.admin.keyring
    -rw-r--r--   1 root root   64 Jun  1 10:51 ceph.client.cinder.keyring
    -rw-r--r--   1 root root  582 Jun  1 13:11 ceph.conf
    -rw-r--r--   1 root root   92 Mar 20 03:51 rbdmap
    -rw-------   1 root root    0 May 31 16:53 tmpKGhaY5
    
    root@osd-1:~# ll /etc/ceph/
    total 20
    drwxr-xr-x  2 root root 4096 May 31 17:03 ./
    drwxr-xr-x 93 root root 4096 May 31 15:30 ../
    -rw-------  1 root root  151 May 31 15:55 ceph.client.admin.keyring
    -rw-r--r--  1 root root  297 Jun  1 09:40 ceph.conf
    -rw-r--r--  1 root root   92 Mar 13 01:46 rbdmap
    -rw-------  1 root root    0 May 31 15:37 tmp56mnHo


root@controller:~# ceph auth get-or-create client.admin
[client.admin]
  key = AQAkBhJdEXpOBxAA4N3g2mW41kxk0I0hd0EF/A==
root@controller:~# ceph auth ls
Only part of the command output is shown here
osd.0
  key: AQAmBhJd2AdSCRAA7RscIgX7OB20WVjbPYDlcw==
  caps: [mon] allow profile osd 
  caps: [osd] allow *
The OSD disk (osd.0)
client.admin
  key: AQAkBhJdEXpOBxAA4N3g2mW41kxk0I0hd0EF/A==
  caps: [mds] allow *
  caps: [mgr] allow *
  caps: [mon] allow *
  caps: [osd] allow *
The admin user
There are also client.bootstrap-mds (metadata), client.bootstrap-osd,
client.bootstrap-rbd, client.bootstrap-rgw,
client.cinder, client.glance and mgr.x,
where x is the mgr id (it shows up as mgr: x(active) in the ceph -s output above)
List rbd images
  1. For example, after creating the virtual machine c1
    1. root@controller:~# ceph osd pool application enable vms rbd

root@controller:~# rbd ls images
709e0da6-197d-4d0f-a9d3-4e78552137e9



root@controller:~# rbd ls vms 
f092dfe4-365b-4dd6-8867-76a311399782_disk
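An individual image can also be inspected rather than just listed; for example, using the UUID returned by rbd ls images above (output omitted, not part of the original capture):
root@controller:~# rbd info images/709e0da6-197d-4d0f-a9d3-4e78552137e9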

Check the mon map
My devstack
[Figure: mon map output on my devstack; controller is the node name]
Eight-node HA environment
[Figure: mon map output on the eight-node HA environment]
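The screenshots are not reproduced here; the mon map itself can be dumped with (output omitted; this is presumably the command behind the screenshots, given the gloss of "dump" below):
root@controller:~# ceph mon dump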

dump (verb): to tip out, to unload; (countable noun): a rubbish dump, a garbage heap