Cephx Hands-On Practice

After reading 徐小胖's 大話Cephx, I ran hands-on experiments around my guesses and doubts, verified some of the statements and conclusions in the original article, and followed up with a series of extended speculation and notes. The payoff was big: not only were some of the original conclusions confirmed, a few problems in them were also found, and, more importantly, the hands-on work surfaced some interesting scenarios and discoveries of its own.

The hands-on tasks for this article and their completion status are as follows:

  • [x] Delete client.admin.keyring
  • [x] Modify the cephx configuration
  • [x] Modify the Monitor keyring
  • [x] Modify the OSD keyring
  • [x] Modify client.admin.keyring and recover the correct keyring via the Mon
  • [x] Mon Cap
  • [x] OSD Cap
  • [x] Delete all keyring files and recover them
  • [x] Delete ceph.conf and recover it
  • [ ] Disable cephx without restarting the OSDs
  • [x] Access the cluster via the osd keyring
  • [ ] Configure a user whose access is limited to a single RBD

Deleting client.admin.keyring

At the start the admin node has the keyring and can access the cluster normally:

[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log  rbdmap
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring
[root@node1 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
            no active mgr
            Reduced data availability: 281 pgs inactive, 65 pgs down, 58 pgs incomplete
            Degraded data redundancy: 311/771 objects degraded (40.337%), 439 pgs unclean, 316 pgs degraded, 316 pgs undersized
            application not enabled on 3 pool(s)
            clock skew detected on mon.node2, mon.node3
 
  services:
    mon:     3 daemons, quorum node1,node2,node3
    mgr:     no daemons active
    osd:     6 osds: 5 up, 5 in
    rgw:     1 daemon active
    rgw-nfs: 1 daemon active
 
  data:
    pools:   10 pools, 444 pgs
    objects: 257 objects, 36140 kB
    usage:   6256 MB used, 40645 MB / 46901 MB avail
    pgs:     63.288% pgs not active
             311/771 objects degraded (40.337%)
             158 undersized+degraded+peered
             158 active+undersized+degraded
             65  down
             58  incomplete
             5   active+clean+remapped

Move the keyring file somewhere else, which is equivalent to deleting it; accessing the cluster now fails:

[root@node1 ceph]# mv ceph.client.admin.keyring /tmp/
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring  rbdmap
[root@node1 ceph]# ceph -s
2017-11-23 18:07:48.685028 7f63f6935700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2017-11-23 18:07:48.685094 7f63f6935700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2017-11-23 18:07:48.685098 7f63f6935700  0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster

Copy it back and the cluster can be accessed again:

[root@node1 ceph]# mv /tmp/ceph.client.admin.keyring ./
[root@node1 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
            no active mgr
            Reduced data availability: 281 pgs inactive, 65 pgs down, 58 pgs incomplete
            Degraded data redundancy: 311/771 objects degraded (40.337%), 439 pgs unclean, 316 pgs degraded, 316 pgs undersized
            application not enabled on 3 pool(s)
            clock skew detected on mon.node2, mon.node3

node3 has no keyring file under /etc/ceph/, so it cannot connect to the cluster either:

[root@node3 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  rbdmap
[root@node3 ceph]# ceph -s
2017-11-23 17:59:16.659034 7fbe34678700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2017-11-23 17:59:16.659085 7fbe34678700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2017-11-23 17:59:16.659089 7fbe34678700  0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster

Conclusion:

When auth in ceph.conf is set to cephx, a key file is required to access the cluster.
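
As a small aside (not part of the original run), the keyring does not have to sit in the default path; while the file was still parked under /tmp, the cluster should also have been reachable by pointing the CLI at it explicitly, along these lines:

# hypothetical check while the keyring still lives in /tmp (paths are assumptions)
ceph -s --name client.admin --keyring /tmp/ceph.client.admin.keyring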

Modifying the cephx Configuration

Working in the /etc/ceph/ directory on node3: first delete the ceph.client.admin.keyring file, then change the auth settings from cephx to none, restart the monitor first and then the osd. At this point the cluster still cannot be accessed, because cephx applies to the whole cluster, not to a single node. Next, do the same on the other nodes: change cephx to none and restart the monitor and osd. After that, the cluster can be accessed without any keyring file.

# Delete the keyring file
[root@node3 ~]# cd /etc/ceph/
[root@node3 ceph]# ls
ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  rbdmap
[root@node3 ceph]# mv ceph.client.admin.keyring /tmp/
# Change the cephx settings
[root@node3 ceph]# cat ceph.conf 
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.58,192.168.1.61,192.168.1.62
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 192.168.1.0/24
mon clock drift allowed = 2
mon clock drift warn backoff = 30
[root@node3 ceph]# vim ceph.conf 
[root@node3 ceph]# cat ceph.conf 
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.58,192.168.1.61,192.168.1.62
auth_cluster_required = none
auth_service_required = none
auth_client_required = none

public network = 192.168.1.0/24
mon clock drift allowed = 2
mon clock drift warn backoff = 30
[root@node3 ceph]# systemctl restart ceph-mon
ceph-mon@               ceph-mon@node3.service  ceph-mon.target         
[root@node3 ceph]# systemctl restart ceph-mon
ceph-mon@               ceph-mon@node3.service  ceph-mon.target         
[root@node3 ceph]# systemctl restart ceph-mon.target
[root@node3 ceph]# systemctl restart ceph-osd.target
# After changing the config on a single node, the cluster is still inaccessible
[root@node3 ceph]# ceph -s
2017-11-27 23:05:23.022571 7f5200c2f700  0 librados: client.admin authentication error (95) Operation not supported
[errno 95] error connecting to the cluster
# After making the same change on the other nodes and restarting, the cluster is accessible again
[root@node3 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
    ...

Conclusion:

When auth is set to cephx, accessing the cluster requires a key file; when auth is set to none, no key file is needed. (The change only takes effect when it is made on all nodes of the cluster, not on a single node.)
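
If the cluster was deployed with ceph-deploy, one way to push the edited ceph.conf to every node before restarting the daemons is a config push; this is only a sketch, assuming the ceph-deploy working directory is still available on the admin node:

# sketch: push the modified ceph.conf to all nodes, then restart mon and osd on each node
ceph-deploy --overwrite-conf config push node1 node2 node3
systemctl restart ceph-mon.target
systemctl restart ceph-osd.target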

Deleting the Monitor Keyring

Both /etc/ceph/ and /var/lib/ceph/mon/ceph-node1/ contain a mon keyring:

[root@node1 ceph-node1]# cd /etc/ceph/
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log  rbdmap
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring
[root@node1 ceph]# cd /var/lib/ceph/mon/ceph-node1/
[root@node1 ceph-node1]# ls
done  keyring  kv_backend  store.db  systemd

First delete /etc/ceph/ceph.mon.keyring; the cluster is still accessible:

[root@node1 ceph]# rm ceph.mon.keyring 
rm: remove regular file ‘ceph.mon.keyring’? y
[root@node1 ceph]# systemctl restart ceph-mon@node1.service 
[root@node1 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
            no active mgr
            Reduced data availability: 281 pgs inactive, 65 pgs down, 58 pgs incomplete
            Degraded data redundancy: 311/771 objects degraded (40.337%), 439 pgs unclean, 316 pgs degraded, 316 pgs undersized
            application not enabled on 3 pool(s)
            clock skew detected on mon.node2
...
...

Then delete /var/lib/ceph/mon/ceph-node1/keyring as well:

[root@node1 ceph-node1]# rm keyring 
rm: remove regular file ‘keyring’? y
[root@node1 ceph-node1]# systemctl restart ceph-mon@node1.service 
[root@node1 ceph-node1]# ceph -s

Access to the cluster keeps timing out; the log file shows that the Mon failed to initialize:

2017-11-24 00:33:55.812955 7fa16f995e40 -1 auth: error reading file: /var/lib/ceph/mon/ceph-node1/keyring: can't open /var/lib/ceph/mon/ceph-node1/keyring: (2) No such file or directory
2017-11-24 00:33:55.812991 7fa16f995e40 -1 mon.node1@-1(probing) e1 unable to load initial keyring /etc/ceph/ceph.mon.node1.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,
2017-11-24 00:33:55.812999 7fa16f995e40 -1 failed to initialize

OK, let's try the other combination: delete /var/lib/ceph/mon/ceph-node1/keyring but restore /etc/ceph/ceph.mon.keyring. Surprisingly, the mon still fails to initialize.

Conclusion:

The Monitor needs a keyring file for authentication at startup, and it must be the one under /var/lib/ceph/mon/ceph-node1/; the ceph.mon.keyring under /etc/ceph/ plays no part in it.

[root@node1 ceph-node1]# rm keyring 
rm: remove regular file ‘keyring’? y
[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log  rbdmap
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring  
[root@node1 ceph]# ceph -s
// timeout
...

What shows up in mon.log:

2017-11-24 00:44:26.534865 7ffaf5117e40 -1 auth: error reading file: /var/lib/ceph/mon/ceph-node1/keyring: can't open /var/lib/ceph/mon/ceph-node1/keyring: (2) No such file or directory
2017-11-24 00:44:26.534901 7ffaf5117e40 -1 mon.node1@-1(probing) e1 unable to load initial keyring /etc/ceph/ceph.mon.node1.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,
2017-11-24 00:44:26.534916 7ffaf5117e40 -1 failed to initialize

At this point we can conclude that the file the monitor depends on at initialization is /var/lib/ceph/mon/ceph-node1/keyring, not /etc/ceph/ceph.mon.keyring.
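
To get the mon back on its feet after this experiment, it should be enough to put a keyring back into the mon data directory. Since the cluster was deployed with ceph-deploy, /etc/ceph/ceph.mon.keyring normally carries the same [mon.] key, so a hedged repair sketch looks like:

# assumed repair: put the mon key back into the mon data directory and restart the mon
cp /etc/ceph/ceph.mon.keyring /var/lib/ceph/mon/ceph-node1/keyring
chown ceph:ceph /var/lib/ceph/mon/ceph-node1/keyring
systemctl restart ceph-mon@node1.service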

Modifying the Mon keyring

The original keyring:

[root@node1 ceph-node1]# cat keyring 
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph-node1]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"

Replace the five A's in the middle with five C's:

[root@node1 ceph-node1]# vim keyring 
[root@node1 ceph-node1]# cat keyring 
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"

Restart and check the Mon keyring

The expected result would be:

[root@node1 ceph-node1]# systemctl restart ceph-mon.target
[root@node1 ceph-node1]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"

The puzzling reality:

[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZCCCCCBAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
[root@node1 ceph]# ceph auth get mon.
exported keyring for mon.
[mon.]
    key = AQCo7fdZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"

You can see that sometimes the keyring from before the modification is returned and sometimes the modified one. To understand what is going on, let's look at the logs to see how the keyring is fetched.

Log entries in node1's mon.log:

2017-11-24 09:30:08.697047 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:08.697106 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/1169357136' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:10.020571 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:10.020641 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/2455152702' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:11.393391 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:11.393452 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/1704778092' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:12.669987 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:12.670049 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/275069695' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:14.113077 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:14.113147 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/3800873459' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:15.742038 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:15.742106 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/1908944728' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:30:17.629681 7f9b73e09700  0 mon.node1@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:30:17.629729 7f9b73e09700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/2193002591' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch

Log entries in node2's mon.log:

2017-11-24 09:29:23.799402 7fdb3c0ae700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/4284881078' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:29:26.030516 7fdb3c0ae700  0 mon.node2@1(peon) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:29:26.030588 7fdb3c0ae700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/4157525590' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2017-11-24 09:29:38.637677 7fdb3c0ae700  0 mon.node2@1(peon) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
2017-11-24 09:29:38.637748 7fdb3c0ae700  0 log_channel(audit) log [INF] : from='client.? 192.168.1.58:0/4028820259' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch

Conclusion:

  • Even if the Monitor's key has been modified, the Monitor still starts normally; in other words, at startup the Monitor only needs the keyring file to exist, and its content is effectively not checked.
  • The keyring returned by ceph auth get mon. comes from whichever monitor happens to handle the request, not necessarily the local node's, which is why the output flips between the old and the modified key; the exact selection mechanism will have to wait for a look at the source code.

Modifying and Repairing the OSD keyring

An OSD needs a key to log in to the cluster when it starts. That key is stored in the Monitor's database, so at login the local keyring is matched against the one stored in the Monitor; only if they match does the OSD start successfully.

Next we deliberately corrupt the local OSD keyring and restart the OSD to see what happens:

# Modify the key file
[root@node3 ceph]# cd /var/lib/ceph/osd/ceph-2
[root@node3 ceph-2]# ls
activate.monmap  active  block  bluefs  ceph_fsid  fsid  keyring  kv_backend  magic  mkfs_done  ready  systemd  type  whoami
[root@node3 ceph-2]# cat keyring 
[osd.2]
    key = AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
[root@node3 ceph-2]# vim keyring 
[root@node3 ceph-2]# cat keyring 
[osd.2]
    key = BBBp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
[root@node3 ceph-2]# systemctl restart ceph-osd
ceph-osd@           ceph-osd@2.service  ceph-osd@5.service  ceph-osd.target     
[root@node3 ceph-2]# systemctl restart ceph-osd
ceph-osd@           ceph-osd@2.service  ceph-osd@5.service  ceph-osd.target     
[root@node3 ceph-2]# systemctl restart ceph-osd@2.service
# After the restart the OSD is down
[root@node3 ceph-2]# ceph osd tree | grep osd.2
 2   hdd 0.00980         osd.2    down  1.00000 1.00000

The log shows that init failed because the auth check failed:

2017-11-27 23:52:18.069207 7fae1e8d2d00 -1 auth: error parsing file /var/lib/ceph/osd/ceph-2/keyring
2017-11-27 23:52:18.069285 7fae1e8d2d00 -1 auth: failed to load /var/lib/ceph/osd/ceph-2/keyring: (5) Input/output error
...
2017-11-27 23:52:41.232803 7f58d15ded00 -1  ** ERROR: osd init failed: (5) Input/output error

We can fetch the correct keyring from the Monitor database, fix the broken one, and restart the OSD:

# Query the osd keyring stored in the Monitor database
[root@node3 ceph-2]# ceph auth get osd.2
exported keyring for osd.2
[osd.2]
    key = AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
    caps mgr = "allow profile osd"
    caps mon = "allow profile osd"
    caps osd = "allow *"
# Fix the keyring
[root@node3 ceph-2]# vim keyring 
[root@node3 ceph-2]# cat keyring 
[osd.2]
    key = AQCp8/dZ4BHbHxAA/GXihrjCOB+7kZJfgnSy+Q==
[root@node3 ceph-2]# systemctl restart ceph-osd@2.service 
# After restarting the OSD, osd.2 is up again
[root@node3 ceph-2]# ceph osd tree | grep osd.2
 2   hdd 0.00980         osd.2      up  1.00000 1.00000

Conclusion:

An OSD needs the correct keyring to start; with a wrong one it cannot start. The correct keyring is stored in the Monitor's database.
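
Rather than editing the file by hand as above, the keyring kept in the Monitor can also be written straight back into the OSD's data directory; a sketch (the extra caps lines that ceph auth get emits are harmless in this file):

# sketch: dump the keyring stored in the Monitor directly into the OSD data directory
ceph auth get osd.2 -o /var/lib/ceph/osd/ceph-2/keyring
systemctl restart ceph-osd@2.service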

Modifying and Repairing the Client keyring

Earlier, by deleting the client keyring, we verified that with auth=cephx the client needs a keyring to access the cluster. Is the content ignored, as with the Monitor, or does it have to match exactly, as with the OSD?

# Modify ceph.client.admin.keyring
[root@node3 ceph-2]# cd /etc/ceph/
[root@node3 ceph]# ls
ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  rbdmap
[root@node3 ceph]# cat ceph.client.admin.keyring 
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
[root@node3 ceph]# vim ceph.client.admin.keyring 
[root@node3 ceph]# cat ceph.client.admin.keyring 
[client.admin]
    key = BBBB7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
# Accessing the cluster fails
[root@node3 ceph]# ceph -s
2017-11-28 00:06:05.771604 7f3a69ccf700 -1 auth: error parsing file /etc/ceph/ceph.client.admin.keyring
2017-11-28 00:06:05.771622 7f3a69ccf700 -1 auth: failed to load /etc/ceph/ceph.client.admin.keyring: (5) Input/output error
2017-11-28 00:06:05.771634 7f3a69ccf700  0 librados: client.admin initialization error (5) Input/output error
[errno 5] error connecting to the cluster

Clearly a correct keyring is required to access the cluster. How do we repair it now? As you can probably guess, the principle is the same as for the OSD: the correct keyring is also stored in the Monitor's database.

# Fetching client.admin directly fails
[root@node3 ceph]# ceph auth get client.admin
2017-11-28 00:08:19.159073 7fcabb297700 -1 auth: error parsing file /etc/ceph/ceph.client.admin.keyring
2017-11-28 00:08:19.159079 7fcabb297700 -1 auth: failed to load /etc/ceph/ceph.client.admin.keyring: (5) Input/output error
2017-11-28 00:08:19.159090 7fcabb297700  0 librados: client.admin initialization error (5) Input/output error
[errno 5] error connecting to the cluster
# The monitor keyring has to be supplied in order to fetch client.admin's keyring
[root@node3 ceph]# ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-node3/keyring
exported keyring for client.admin
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
# Fix the keyring
[root@node3 ceph]# vim ceph
ceph.client.admin.keyring  ceph.conf                  ceph-deploy-ceph.log       
[root@node3 ceph]# vim ceph.client.admin.keyring 
[root@node3 ceph]# cat ceph.client.admin.keyring 
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
# Accessing the cluster succeeds
[root@node3 ceph]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_WARN
    ...

One surprising detail: earlier, fetching the OSD keyring with ceph auth worked directly, while fetching client.admin's keyring required adding the monitor keyring. The error message explains why: ceph auth itself has to connect to the cluster as a client first.

Conclusion:

A client accessing the cluster behaves like an OSD: it needs a keyring that matches the corresponding one stored in the Monitor database. And when client.admin's keyring is wrong, reading keyrings with ceph auth requires passing the monitor keyring options.
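
The same shortcut works here: since the mon keyring can authenticate, the admin keyring can be regenerated in one step instead of being edited by hand (a sketch, assuming node3's mon keyring is intact):

# sketch: rewrite the admin keyring straight from the Monitor database
ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-node3/keyring -o /etc/ceph/ceph.client.admin.keyring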

Mon Caps

The r capability

The r capability on the Monitor means read permission. What operations does this read permission cover? Here it is the permission to read the information in the Monitor's database. As the maintainer of cluster state, the MON keeps a series of cluster maps in its database (/var/lib/ceph/mon/ceph-$hostname/store.db), including but not limited to:

  • CRUSH Map
  • OSD Map
  • MON Map
  • MDS Map
  • PG Map

So next we can create a new user with only read permission and use it to verify exactly what the read capability allows:

ceph auth get-or-create client.mon_r mon 'allow r' >> /root/key
[root@node3 ceph]# ceph auth get client.mon_r
exported keyring for client.mon_r
[client.mon_r]
    key = AQABvRxaBS6BBhAAz9uwjYCT4xKavJhobIK3ig==
    caps mon = "allow r"
    
ceph --name client.mon_r --keyring /root/key -s      // ok

ceph --name client.mon_r --keyring /root/key osd crush dump     // ok
ceph --name client.mon_r --keyring /root/key osd getcrushmap -o crushmap.map        // ok

ceph --name client.mon_r --keyring /root/key osd dump       // ok
ceph --name client.mon_r --keyring /root/key osd tree       // ok
ceph --name client.mon_r --keyring /root/key osd stat       // ok

ceph --name client.mon_r --keyring /root/key pg dump        // ok
ceph --name client.mon_r --keyring /root/key pg stat        // ok

I tried two write operations; both were rejected with permission denied:

[root@node3 ceph]# rados --name client.mon_r --keyring /root/key -p testpool put crush crushmap.map
error putting testpool/crush: (1) Operation not permitted

[root@node3 ceph]# ceph --name client.mon_r --keyring /root/key osd out osd.0
Error EACCES: access denied

Note:

Although the commands above return osd and pg information, all of that is cluster-map state maintained by the Monitor, so the data is read from the Monitor.

Conclusion:

The Monitor's read capability corresponds to fetching the various maps from the Monitor database, as detailed above. It can only read state information, not actual object data, and it cannot perform write operations against daemons such as OSDs.

The w capability

The w capability only works together with r; running commands with w alone always ends in access denied. So when testing w we have to grant r as well:

ceph auth get-or-create client.mon_rw mon 'allow rw' >> /root/key
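
To see the denial for yourself, a w-only user can be created as well; this check is hypothetical and was not run in the original article:

# hypothetical: a user with only w on the mon should be denied even a status read
ceph auth get-or-create client.mon_w mon 'allow w' >> /root/key_w
ceph --name client.mon_w --keyring /root/key_w -s       // expected: access denied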

With w you can perform non-read operations on the components, for example:

# Mark an OSD out
ceph osd out
# Remove an OSD
ceph osd rm
# Repair a PG
ceph pg repair
# Replace the CRUSH map
ceph osd setcrushmap
# Remove a MON
ceph mon rm
...
# and many more operations, not listed one by one

Conclusion:

The Mon's r capability can read the state of the cluster's components but cannot modify it; the w capability can.

Note:

The write access granted by w here only covers modifying component state; it does not include read/write access to the cluster's objects. Component state lives in the Mon while object data lives in the OSDs, and this w is only a write capability on the Mon, so this is easy to understand.

The x capability

The MON's x capability is very limited: it only relates to auth, i.e. commands such as ceph auth list and ceph auth get. Like w, the x capability only takes effect when combined with r:

# Using the rw-capability user created above, auth list fails
[root@node3 ~]# ceph --name client.mon_rw --keyring /root/key auth list
2017-11-28 21:28:10.620537 7f0d15967700  0 librados: client.mon_rw authentication error (22) Invalid argument
InvalidArgumentError does not take keyword arguments
# With the user that has r and x capability, auth list succeeds
[root@node3 ~]# ceph --name client.mon_rx --keyring /root/key auth list
installed auth entries:

osd.0
    key: AQDaTgBav2MgDBAALE1GEEfbQN73xh8V7ISvFA==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
...
...
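
The original does not show how client.mon_rx was created; presumably it was done the same way as the earlier users, along these lines:

# assumed creation of the rx user used above
ceph auth get-or-create client.mon_rx mon 'allow rx' >> /root/key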

Note that the original article by 徐小胖 appears to contain a typo here: he ran the command as client.mon.rw. Which goes to show that hands-on practice reveals many things that reading alone does not.

Conclusion:

The x capability also only takes effect when paired with r, and it only covers auth-related operations.

The * capability

Nothing much to say here; as you can guess, * simply means having all of r, w and x.

OSD Caps

This chapter still needs some more research and will be published later.
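
Until that write-up is done, here is only a rough sketch of what an OSD cap looks like: allow clauses that can be scoped to a pool (the pool name testpool below is just an assumption):

# sketch only: read-only on the mon, full access restricted to a single pool
ceph auth get-or-create client.pool_user mon 'allow r' osd 'allow rwx pool=testpool'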

Recovering After Losing All Keyrings

If all the keyrings are deleted, can they really be recovered? "All the keyrings" here means:

  • MON: /var/lib/ceph/mon/ceph-$hostname/keyring
  • OSD: /var/lib/ceph/osd/ceph-$id/keyring
  • Client: /etc/ceph/ceph.client.admin.keyring

# Delete the mon keyring
[root@node1 ceph-node1]# mv keyring /root/
# Delete ceph.conf
[root@node1 ceph-node1]# mv /etc/ceph/ceph.conf /root/
# Delete client.admin.keyring
[root@node1 ceph-node1]# mv /etc/ceph/ceph.client.admin.keyring /root
# Trying to access the cluster fails
[root@node1 ceph-node1]# ceph -s
2017-11-29 23:57:14.195467 7f25dc4cc700 -1 Errors while parsing config file!
2017-11-29 23:57:14.195571 7f25dc4cc700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:14.195579 7f25dc4cc700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:14.195580 7f25dc4cc700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
# Trying to get the auth list fails
[root@node1 ceph-node1]# ceph auth list
2017-11-29 23:57:27.037435 7f162c5a7700 -1 Errors while parsing config file!
2017-11-29 23:57:27.037450 7f162c5a7700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:27.037452 7f162c5a7700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2017-11-29 23:57:27.037453 7f162c5a7700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)

OK, let's start the repair:

Forging a Mon keyring

In Ceph, the keys of every account except mon. are stored in the Mon's leveldb database, but the mon. user's key is not kept in the database; it is read from the keyring file in the Mon's data directory when the MON starts, which is exactly the conclusion we verified earlier. So we can forge an arbitrary keyring, drop it into the Mon data directory, sync it to each Mon node, and restart the three Mons.

[root@node1 ceph-node1]# cd /var/lib/ceph/mon/ceph-node1/
[root@node1 ceph-node1]# ls
done  kv_backend  store.db  systemd
[root@node1 ceph-node1]# vim keyring
# Forged keyring; it even contains the string 'tony', so it is obviously fake
[root@node1 ceph-node1]# cat keyring 
[mon.]
    key = AQCtonyZAAAAABAAQOysx+Yxbno/2N8W1huZFA==
    caps mon = "allow *"
# Restart the mon
[root@node1 ceph-node1]# service ceph-mon@node1 restart
Redirecting to /bin/systemctl restart  ceph-mon@node1.service

The effect can be seen:

# The monitor log shows that mon.node1@0 initialized successfully and was elected monitor leader
2017-11-30 00:15:04.042157 7f8c4e28a700  0 log_channel(cluster) log [INF] : mon.node1 calling new monitor election
2017-11-30 00:15:04.042299 7f8c4e28a700  1 mon.node1@0(electing).elector(934) init, last seen epoch 934
2017-11-30 00:15:04.048498 7f8c4e28a700  0 log_channel(cluster) log [INF] : mon.node1 calling new monitor election
2017-11-30 00:15:04.048605 7f8c4e28a700  1 mon.node1@0(electing).elector(937) init, last seen epoch 937, mid-election, bumping
2017-11-30 00:15:04.078454 7f8c4e28a700  0 log_channel(cluster) log [INF] : mon.node1@0 won leader election with quorum 0,1,2

Note (important):

Although the mon reads its keyring at startup without caring whether the content is correct, that does not mean the keyring can be modified arbitrarily; it still has to follow a certain format. In practice I found that the key must start with the three uppercase characters AQC, and there are probably other format requirements: must it end with ==? Is the length fixed? There may be many such rules, and I did not have time to brute-force them one by one; they can be checked in the source code later, and interested readers may find some interesting behaviour there. Does all this make forging difficult? Not really: the easiest approach is simply to copy a Mon keyring from another cluster. Forging one carelessly makes startup fail like this:

2017-11-29 23:49:50.134137 7fcab3e23700 -1 cephx: cephx_build_service_ticket_blob failed with error invalid key
2017-11-29 23:49:50.134140 7fcab3e23700  0 mon.node1@0(probing) e1 ms_get_authorizer failed to build service ticket
2017-11-29 23:49:50.134393 7fcab3e23700  0 -- 192.168.1.58:6789/0 >> 192.168.1.61:6789/0 conn(0x7fcacd15d800 :-1 s=STATE_CONNECTING_WAIT_CONNECT_REPLY_AUTH pgs=0 cs=0 l=0).handle_connect_reply connect got BADAUTHORIZER
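
Instead of hand-crafting a key that satisfies these format rules, a safer alternative (not what was done here) is to let ceph-authtool generate a structurally valid keyring:

# alternative sketch: generate a valid mon keyring rather than typing one by hand
ceph-authtool --create-keyring /var/lib/ceph/mon/ceph-node1/keyring --gen-key -n mon. --cap mon 'allow *'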

Restoring ceph.conf

Without /etc/ceph/ceph.conf we cannot run any ceph commands, so we need to reconstruct it as faithfully as possible. The fsid can be obtained by reading the ceph_fsid file in any OSD directory (/var/lib/ceph/osd/ceph-$num/), and mon_initial_members and mon_host are simply the hostnames and IPs of the cluster nodes, which we already know.

# Restore ceph.conf
[root@node1 ceph-node1]# cat /var/lib/ceph/osd/ceph-0/ceph_fsid 
99480db2-f92f-481f-b958-c03c261918c6
[root@node1 ceph-node1]# vim /etc/ceph/ceph.conf
[root@node1 ceph-node1]# cat /etc/ceph/ceph.conf
[global]
fsid = 99480db2-f92f-481f-b958-c03c261918c6
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.58,192.168.1.61,192.168.1.62
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 192.168.1.0/24

# Accessing the cluster status with the mon keyring succeeds
[root@node1 ceph-node1]# ceph -s --name mon. --keyring /var/lib/ceph/mon/ceph-node1/keyring
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1_mgr(active)
    osd: 6 osds: 6 up, 6 in

Recovering ceph.client.admin.keyring

With the Mon keyring in place and ceph commands working again, we can use ceph auth get to fetch any keyring from the Monitor's leveldb:

# Fetch client.admin's keyring via the Mon
[root@node1 ceph-node1]# ceph --name mon. --keyring /var/lib/ceph/mon/ceph-node1/keyring auth get client.admin
exported keyring for client.admin
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
# Create /etc/ceph/ceph.client.admin.keyring and put the content above into it
[root@node1 ceph-node1]# vim /etc/ceph/ceph.client.admin.keyring
[root@node1 ceph-node1]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
    key = AQDL7fdZWaQkIBAAsFhvFVQYqSeM/FVSY6o8TQ==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"

# Test with a plain ceph -s; the cluster is accessible again

[root@node1 ceph-node1]# ceph -s
  cluster:
    id:     99480db2-f92f-481f-b958-c03c261918c6
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1_mgr(active)
    osd: 6 osds: 6 up, 6 in

Summary

First of all, thanks to 徐小胖 for giving me ideas about cephx; I hope he keeps producing good articles, and I keep reading them. This article took a long time: as the timestamps in the logs show, the work spans several days. Much of the hands-on work is not done in one sitting; it takes repeated attempts and thinking to reach the final result. With Ceph you really have to get your hands dirty. Reading other people's articles is good, but remember to put them into practice, otherwise even the best article stays theoretical and you just follow the author's train of thought. You never know how much probing and experimenting went into a short sentence or conclusion, and a command that looks like it succeeds in one step may be the distillation of countless failures. So verify things yourself: besides confirming whether the original's points are correct, you often discover other useful knowledge along the way.

This round of experiments was very rewarding, and my understanding of cephx has gone up a level. This article sorted out the role cephx plays in each component and their dependencies, then studied the caps of each component, and finally gave detailed steps for recovering each keyring. Two tasks remain unfinished and will be completed when I find the time!
