How to restore an OSD's permissions in the auth table

Background: if you accidentally delete an OSD's auth entry and then restart the OSD service, ceph -s will show that OSD as down.

For example:

[root@ceph ~]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.02719 root default                                      
-2 0.01849     host ceph58                                   
 0 0.01849         osd.0        up  1.00000          1.00000 
-3 0.00870     host ceph28                                   
 1 0.00870         osd.1        up  1.00000          1.00000

# All of the cluster's OSDs are up

[root@ceph ~]# ceph auth list
installed auth entries:

osd.0
        key: AQDZ7T5ZmLx3MBAAR8Vhqt1UvreMUwSSmdfeSw==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQDAFkRZEHhnGxAAjfbGRNNNT5kWvGl4jpKjYg==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQBA7T5ZAAAAABAAlJhtiG0oJVOeXlBc0Mzokw==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-osd
        key: AQDA7T5ZBzemGhAAwQgt7wU3kVJps7IoLAg0TA==
        caps: [mon] allow profile bootstrap-osd


# At this point the auth table shows normal entries for both osd.0 and osd.1

[root@ceph ~]# ceph auth del osd.1    # delete osd.1's entry from the auth table
updated
[root@ceph ~]# ceph auth list     
installed auth entries:

osd.0
        key: AQDZ7T5ZmLx3MBAAR8Vhqt1UvreMUwSSmdfeSw==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQBA7T5ZAAAAABAAlJhtiG0oJVOeXlBc0Mzokw==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-osd
        key: AQDA7T5ZBzemGhAAwQgt7wU3kVJps7IoLAg0TA==
        caps: [mon] allow profile bootstrap-osd

# osd.1's entry has now been cleared from the auth table
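
To spot-check a single entity instead of scanning the whole list, ceph auth get can be used; after the deletion it should fail for osd.1 with an ENOENT error along these lines (exact wording may vary by release):

[root@ceph ~]# ceph auth get osd.1
Error ENOENT: failed to find osd.1 in keyring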

[root@ceph ~]# systemctl restart ceph-osd@1  # restart the OSD
[root@ceph ~]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.02719 root default                                      
-2 0.01849     host ceph58                                   
 0 0.01849         osd.0        up  1.00000          1.00000 
-3 0.00870     host ceph28                                   
 1 0.00870         osd.1      down  1.00000          1.00000

# osd.1 is now down
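
The ceph -s mentioned at the top reports the same condition at the cluster level; on a Luminous-era cluster the relevant line looks roughly like this (output format varies by release):

[root@ceph ~]# ceph -s | grep osd
    osd: 2 osds: 1 up, 2 in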

 

How do we recover?

The steps are as follows:

  • Go into /var/lib/ceph/osd/ceph-*
[root@ceph ~]# cd /var/lib/ceph/osd/ceph-1
[root@ceph ceph-1]# ls -l
total 60
-rw-r--r--. 1 root root 202 Jun 17 01:34 activate.monmap
-rw-r--r--. 1 ceph ceph   3 Jun 17 01:44 active
lrwxrwxrwx. 1 ceph ceph  58 Jun 17 01:34 block -> /dev/disk/by-partuuid/87f73ff4-7add-4e83-94e9-29869c7c0123
lrwxrwxrwx. 1 ceph ceph  58 Jun 17 01:34 block.db -> /dev/disk/by-partuuid/f70d731a-666b-4828-8cb7-59c4aa498a91
-rw-r--r--. 1 ceph ceph  37 Jun 17 01:34 block.db_uuid
-rw-r--r--. 1 ceph ceph  37 Jun 17 01:34 block_uuid
lrwxrwxrwx. 1 ceph ceph  58 Jun 17 01:34 block.wal -> /dev/disk/by-partuuid/fbe8751d-c2ae-4db4-8a35-7ab699401b58
-rw-r--r--. 1 ceph ceph  37 Jun 17 01:34 block.wal_uuid
-rw-r--r--. 1 ceph ceph   2 Jun 17 01:34 bluefs
-rw-r--r--. 1 ceph ceph  37 Jun 17 01:34 ceph_fsid
-rw-r--r--. 1 ceph ceph  37 Jun 17 01:34 fsid
-rw-------. 1 ceph ceph 124 Jun 17 18:10 keyring
-rw-r--r--. 1 ceph ceph   8 Jun 17 01:34 kv_backend
-rw-r--r--. 1 ceph ceph  21 Jun 17 01:34 magic
-rw-r--r--. 1 ceph ceph   4 Jun 17 01:34 mkfs_done
-rw-r--r--. 1 ceph ceph   6 Jun 17 01:34 ready
-rw-r--r--. 1 ceph ceph   0 Jun 17 01:44 systemd
-rw-r--r--. 1 ceph ceph  10 Jun 17 01:34 type
-rw-r--r--. 1 ceph ceph   2 Jun 17 01:34 whoami

# the keyring file here is what records this OSD's partial auth entry (just the key, without the caps)
  • Edit the keyring file
[root@ceph ceph-1]# cat keyring 
[osd.1]
        key = AQDAFkRZEHhnGxAAjfbGRNNNT5kWvGl4jpKjYg==              # default
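
The file needs the two caps lines added so that the import below restores full permissions, not just the key. A minimal sketch of the edit using a heredoc append (any editor works just as well; the caps values are the standard OSD profile shown in the auth list earlier):

[root@ceph ceph-1]# cat >> keyring <<'EOF'
        caps mon = "allow profile osd"
        caps osd = "allow *"
EOF

After the append the file should read: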

[root@ceph ceph-1]# cat keyring 
[osd.1]
        key = AQDAFkRZEHhnGxAAjfbGRNNNT5kWvGl4jpKjYg==
        caps mon = "allow profile osd"                              #增長
        caps osd = "allow *"                                        #增長
  • Import the keyring file's contents into the mon
[root@ceph ceph-1]# ceph auth import -i keyring 
imported keyring
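
Before restarting, it is worth confirming that the mon now holds the complete entry; ceph auth get should print the key together with both caps:

[root@ceph ceph-1]# ceph auth get osd.1
exported keyring for osd.1
[osd.1]
        key = AQDAFkRZEHhnGxAAjfbGRNNNT5kWvGl4jpKjYg==
        caps mon = "allow profile osd"
        caps osd = "allow *"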

[root@ceph ceph-1]# systemctl restart ceph-osd@1

[root@ceph ceph-1]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.02719 root default                                      
-2 0.01849     host ceph58                                   
 0 0.01849         osd.0        up  1.00000          1.00000 
-3 0.00870     host ceph28                                   
 1 0.00870         osd.1        up  1.00000          1.00000
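
As an aside, the edit-and-import steps can be collapsed into a single command: ceph auth add reads the key from the on-disk keyring file and takes the caps on the command line, so no manual edit of the file is needed. A sketch, assuming the default keyring path:

[root@ceph ~]# ceph auth add osd.1 mon 'allow profile osd' osd 'allow *' \
      -i /var/lib/ceph/osd/ceph-1/keyring
[root@ceph ~]# systemctl restart ceph-osd@1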