Configuring Active-Active (Multi-Site) Services on Ceph Distributed Storage

Ceph natively supports the "two sites, three data centers" disaster-recovery concept; the active-active setup described here uses two data centers (multi-site). The two Ceph data centers can sit in a single cluster or in separate clusters. The architecture diagram (borrowed from another source) is shown below:

[Figure: Ceph multi-site architecture diagram]

1. Environment information

[Figure: environment information]

2. Creating the master zone

In a multi-site configuration, every RGW receives its configuration from the ceph-radosgw instance of the master zone in the master zone group. Therefore a master zone group and a master zone must be configured first.

2.1 Create the realm

A realm contains the zone groups and zones of the multi-site configuration and acts as a globally unique namespace within it. Run the following command on any node of the primary cluster to create one:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 realm create --rgw-realm=xzxj --default
{
    "id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a",
    "name": "xzxj",
    "current_period": "02a14536-a455-4063-a990-24acaf504099",
    "epoch": 1
}

If the realm is only used by this cluster, add the --default option so that radosgw-admin uses this realm by default.

2.2 Create the master zonegroup

A realm must contain at least one master zone group.

[root@ceph01 ~]# radosgw-admin --cluster ceph1 zonegroup create --rgw-zonegroup=all --endpoints=http://192.168.120.53:8080,http://192.168.120.54:8080,http://192.168.120.55:8080,http://192.168.120.56:8080 --rgw-realm=xzxj --master --default

If the realm has only one zone group, pass the --default option so that new zones are added to this zone group by default.

2.3 Create the master zone

Add a new master zone, z1, to the multi-site configuration:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 zone create --rgw-zonegroup=all --rgw-zone=z1 --endpoints=http://192.168.120.53:8080,http://192.168.120.54:8080,http://192.168.120.55:8080,http://192.168.120.56:8080 --default

Note that --access-key and --secret are not specified here; they will be added to the zone automatically when the system user is created in the step below.

2.4 Create the system account

The ceph-radosgw daemons must authenticate before they can pull realm and period information. In the master zone, create a system user so that the daemons can authenticate with one another:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 user create --uid="sync-user" --display-name="sync user" --system
{
    "user_id": "sync-user",
    "display_name": "sync user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "sync-user",
            "access_key": "ZA4TXA65C5TGCPX4B8V6",
            "secret_key": "BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "system": "true",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

The secondary zones need this system account's access_key and secret_key to authenticate against the master zone. Finally, add the system user's keys to the master zone and update the period:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 zone modify --rgw-zone=z1 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24
[root@ceph01 ~]# radosgw-admin --cluster ceph1 period update --commit

2.5 Update the Ceph configuration file

Edit ceph.conf and add the rgw_zone option, whose value is the name of the master zone (here rgw_zone=z1). Add one entry per RGW node, as follows:

[root@ceph01 ~]# vi /etc/ceph/ceph1.conf
[client.rgw.ceph01.rgw0]
host = ceph01
keyring = /var/lib/ceph/radosgw/ceph1-rgw.ceph01.rgw0/keyring
log file = /var/log/ceph/ceph1-rgw-ceph01.rgw0.log
rgw frontends = beast endpoint=192.168.120.53:8080
rgw thread pool size = 512
rgw_zone=z1

[client.rgw.ceph02.rgw0]
host = ceph02
keyring = /var/lib/ceph/radosgw/ceph1-rgw.ceph02.rgw0/keyring
log file = /var/log/ceph/ceph1-rgw-ceph02.rgw0.log
rgw frontends = beast endpoint=192.168.120.54:8080
rgw thread pool size = 512
rgw_zone=z1

[client.rgw.ceph03.rgw0]
host = ceph03
keyring = /var/lib/ceph/radosgw/ceph1-rgw.ceph03.rgw0/keyring
log file = /var/log/ceph/ceph1-rgw-ceph03.rgw0.log
rgw frontends = beast endpoint=192.168.120.55:8080
rgw thread pool size = 512
rgw_zone=z1

[client.rgw.ceph04.rgw0]
host = ceph04
keyring = /var/lib/ceph/radosgw/ceph1-rgw.ceph04.rgw0/keyring
log file = /var/log/ceph/ceph1-rgw-ceph04.rgw0.log
rgw frontends = beast endpoint=192.168.120.56:8080
rgw thread pool size = 512
rgw_zone=z1
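The four per-node sections above differ only in the hostname and endpoint IP. A short generator script (a hypothetical helper, not part of the official tooling) can emit them and avoid copy-paste mistakes:

```python
# Render the per-node [client.rgw.*] sections of ceph.conf from a template.
# Hostnames and IPs match the example cluster above; adjust as needed.

TEMPLATE = """[client.rgw.{host}.rgw0]
host = {host}
keyring = /var/lib/ceph/radosgw/ceph1-rgw.{host}.rgw0/keyring
log file = /var/log/ceph/ceph1-rgw-{host}.rgw0.log
rgw frontends = beast endpoint={ip}:8080
rgw thread pool size = 512
rgw_zone = z1
"""

NODES = {
    "ceph01": "192.168.120.53",
    "ceph02": "192.168.120.54",
    "ceph03": "192.168.120.55",
    "ceph04": "192.168.120.56",
}

def render(nodes):
    """Return the concatenated config sections, one per RGW node."""
    return "\n".join(TEMPLATE.format(host=h, ip=ip) for h, ip in nodes.items())

if __name__ == "__main__":
    print(render(NODES))
```

Swapping the zone name and node map produces the matching sections for the secondary cluster in section 3.5.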

After editing, sync the Ceph configuration file to the other cluster nodes, then restart the RGW service on every RGW node:

[root@ceph01 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

3. Creating the secondary (slave) zone

3.1 Pull the realm from the master zone

Pull the realm to the hosts of the secondary zone, using the URL, access key, and secret key of the master zone in the master zone group. To pull a non-default realm, add the --rgw-realm or --realm-id option:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 realm pull --url=http://192.168.120.53:8080 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24
{
    "id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a",
    "name": "xzxj",
    "current_period": "913e666c-57fb-4992-8839-53fe447d8427",
    "epoch": 2
}

Note: the access key and secret here are those of the system account on the master zone.

3.2 Pull the period from the master zone

Pull the period to the hosts of the secondary zone, again using the URL, access key, and secret key of the master zone. To pull a period from a non-default realm, add the --rgw-realm or --realm-id option:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 period pull --url=http://192.168.120.53:8080 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24

3.3 Create the secondary zone

By default, all zones run in an active-active configuration: an RGW client can write data to any zone, and that zone replicates the data to the other zones in the same zone group. If the secondary zone should not accept write operations, add the --read-only option to create an active-passive configuration instead. The access key and secret key of the master zone's system user must also be supplied:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 zone create --rgw-zonegroup=all --rgw-zone=z2 --endpoints=http://192.168.120.57:8080,http://192.168.120.58:8080,http://192.168.120.59:8080,http://192.168.120.60:8080 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24

3.4 Update the period

[root@ceph05 ~]# radosgw-admin --cluster ceph2 period update --commit
{
    "id": "913e666c-57fb-4992-8839-53fe447d8427",
    "epoch": 4,
    "predecessor_uuid": "02a14536-a455-4063-a990-24acaf504099",
    "sync_status": [],
    "period_map": {
        "id": "913e666c-57fb-4992-8839-53fe447d8427",
        "zonegroups": [
            {
                "id": "8259119d-4ed7-4cfc-af28-9a8e6678c5f7",
                "name": "all",
                "api_name": "all",
                "is_master": "true",
                "endpoints": [
                    "http://192.168.120.53:8080",
                    "http://192.168.120.54:8080",
                    "http://192.168.120.55:8080",
                    "http://192.168.120.56:8080"
                ],
                "hostnames": [],
                "hostnames_s3website": [],
                "master_zone": "91d15c30-f785-4bd1-8e80-d63ab939b259",
                "zones": [
                    {
                        "id": "04231ccf-bb2b-4eff-aba7-a7cb9a3505cf",
                        "name": "z2",
                        "endpoints": [
                            "http://192.168.120.57:8080",
                            "http://192.168.120.58:8080",
                            "http://192.168.120.59:8080",
                            "http://192.168.120.60:8080"
                        ],
                        "log_meta": "false",
                        "log_data": "true",
                        "bucket_index_max_shards": 0,
                        "read_only": "false",
                        "tier_type": "",
                        "sync_from_all": "true",
                        "sync_from": [],
                        "redirect_zone": ""
                    },
                    {
                        "id": "91d15c30-f785-4bd1-8e80-d63ab939b259",
                        "name": "z1",
                        "endpoints": [
                            "http://192.168.120.53:8080",
                            "http://192.168.120.54:8080",
                            "http://192.168.120.55:8080",
                            "http://192.168.120.56:8080"
                        ],
                        "log_meta": "false",
                        "log_data": "true",
                        "bucket_index_max_shards": 0,
                        "read_only": "false",
                        "tier_type": "",
                        "sync_from_all": "true",
                        "sync_from": [],
                        "redirect_zone": ""
                    }
                ],
                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": [],
                        "storage_classes": [
                            "STANDARD"
                        ]
                    }
                ],
                "default_placement": "default-placement",
                "realm_id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a"
            }
        ],
        "short_zone_ids": [
            {
                "key": "04231ccf-bb2b-4eff-aba7-a7cb9a3505cf",
                "val": 1058646688
            },
            {
                "key": "91d15c30-f785-4bd1-8e80-d63ab939b259",
                "val": 895340584
            }
        ]
    },
    "master_zonegroup": "8259119d-4ed7-4cfc-af28-9a8e6678c5f7",
    "master_zone": "91d15c30-f785-4bd1-8e80-d63ab939b259",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        }
    },
    "realm_id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a",
    "realm_name": "xzxj",
    "realm_epoch": 2
}
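The period document above is plain JSON, so the key facts (which zone is the master, and its endpoints) can be extracted programmatically rather than read by eye. A minimal standard-library sketch, using a trimmed sample of the output above:

```python
import json

# Trimmed sample of the `period update --commit` output shown above.
PERIOD = json.loads("""
{
  "period_map": {
    "zonegroups": [
      {
        "id": "8259119d-4ed7-4cfc-af28-9a8e6678c5f7",
        "name": "all",
        "master_zone": "91d15c30-f785-4bd1-8e80-d63ab939b259",
        "zones": [
          {"id": "04231ccf-bb2b-4eff-aba7-a7cb9a3505cf", "name": "z2",
           "endpoints": ["http://192.168.120.57:8080"]},
          {"id": "91d15c30-f785-4bd1-8e80-d63ab939b259", "name": "z1",
           "endpoints": ["http://192.168.120.53:8080"]}
        ]
      }
    ]
  }
}
""")

def master_zone(period):
    """Return (zone_name, endpoints) for the master zone of the zonegroups."""
    for zg in period["period_map"]["zonegroups"]:
        for zone in zg["zones"]:
            if zone["id"] == zg["master_zone"]:
                return zone["name"], zone["endpoints"]
    raise ValueError("no master zone found in period")

name, endpoints = master_zone(PERIOD)
print(name, endpoints)  # z1 is the master in this period
```

In practice the input would come from `radosgw-admin period get`; the structure shown here follows the output captured above.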

3.5 Update the Ceph configuration file and restart the RGW services

Edit ceph.conf and add rgw_zone=z2:

[root@ceph05 ~]# vi /etc/ceph/ceph2.conf 
[client.rgw.ceph05.rgw0]
host = ceph05
keyring = /var/lib/ceph/radosgw/ceph2-rgw.ceph05.rgw0/keyring
log file = /var/log/ceph/ceph2-rgw-ceph05.rgw0.log
rgw frontends = beast endpoint=192.168.120.57:8080
rgw thread pool size = 512
rgw_zone=z2

[client.rgw.ceph06.rgw0]
host = ceph06
keyring = /var/lib/ceph/radosgw/ceph2-rgw.ceph06.rgw0/keyring
log file = /var/log/ceph/ceph2-rgw-ceph06.rgw0.log
rgw frontends = beast endpoint=192.168.120.58:8080
rgw thread pool size = 512
rgw_zone=z2

[client.rgw.ceph07.rgw0]
host = ceph07
keyring = /var/lib/ceph/radosgw/ceph2-rgw.ceph07.rgw0/keyring
log file = /var/log/ceph/ceph2-rgw-ceph07.rgw0.log
rgw frontends = beast endpoint=192.168.120.59:8080
rgw thread pool size = 512
rgw_zone=z2

[client.rgw.ceph08.rgw0]
host = ceph08
keyring = /var/lib/ceph/radosgw/ceph2-rgw.ceph08.rgw0/keyring
log file = /var/log/ceph/ceph2-rgw-ceph08.rgw0.log
rgw frontends = beast endpoint=192.168.120.60:8080
rgw thread pool size = 512
rgw_zone=z2

After editing, sync the Ceph configuration file to the other cluster nodes, then restart the RGW service on every RGW node:

[root@ceph05 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

4. Checking sync status

Once the secondary zone is up and running, the sync status can be checked. Synchronization copies the users and buckets created in the master zone to the secondary zone. Create a user candon on the master, then list users on the slave:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 user create --uid="candon" --display-name="First User"
{
    "user_id": "candon",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "candon",
            "access_key": "Y9WJW2H2N4CLDOOE8FN7",
            "secret_key": "CsWtWc40R2kJSi0BEesIjcJ2BroY8sVv821c95ZD"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
[root@ceph01 ~]# radosgw-admin --cluster ceph1 user list  
[
    "sync-user",
    "candon"
]
[root@ceph05 ~]# radosgw-admin --cluster ceph2 user list  
[
    "sync-user",
    "candon"
]

View the sync status:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 sync status
          realm 0f13bb55-68f6-4489-99fb-d79ba8ca959a (xzxj)
      zonegroup 8259119d-4ed7-4cfc-af28-9a8e6678c5f7 (all)
           zone 91d15c30-f785-4bd1-8e80-d63ab939b259 (z1)
  metadata sync no sync (zone is master)
      data sync source: 04231ccf-bb2b-4eff-aba7-a7cb9a3505cf (z2)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
[root@ceph05 ~]# radosgw-admin --cluster ceph2 sync status 
          realm 0f13bb55-68f6-4489-99fb-d79ba8ca959a (xzxj)
      zonegroup 8259119d-4ed7-4cfc-af28-9a8e6678c5f7 (all)
           zone 04231ccf-bb2b-4eff-aba7-a7cb9a3505cf (z2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 91d15c30-f785-4bd1-8e80-d63ab939b259 (z1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
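For monitoring, the plain-text `sync status` output above can be checked by a script. A minimal sketch (assuming the text format shown above; the exact wording can vary between Ceph releases, so treat the keywords as assumptions to verify against your version):

```python
def is_caught_up(status_text):
    """Heuristic check of `radosgw-admin sync status` output: fail on any
    line mentioning lagging shards, otherwise require the caught-up marker."""
    bad_words = ("behind", "recovering", "failed")
    for line in status_text.splitlines():
        if any(word in line for word in bad_words):
            return False
    return "data is caught up with source" in status_text

# Sample taken from the master-side output captured above.
SAMPLE = """\
          realm 0f13bb55-68f6-4489-99fb-d79ba8ca959a (xzxj)
      zonegroup 8259119d-4ed7-4cfc-af28-9a8e6678c5f7 (all)
           zone 91d15c30-f785-4bd1-8e80-d63ab939b259 (z1)
  metadata sync no sync (zone is master)
      data sync source: 04231ccf-bb2b-4eff-aba7-a7cb9a3505cf (z2)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
"""

print(is_caught_up(SAMPLE))  # True
```

Piping the live command output into this check (e.g. via subprocess or cron) gives a simple replication-health probe for both clusters.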

Note: although the secondary zone can accept bucket operations, it actually forwards them to the master zone for processing and then syncs the result back. If the master zone is down, bucket operations issued against the secondary zone will fail, but object operations will still succeed.

5. Client tests

Both the S3 client and the Swift client are used for testing here.

5.1 S3 client test

[root@client1 ~]# yum -y install python-boto
[root@client1 ~]# vi s3test.py
import boto
import boto.s3.connection

access_key = 'Y9WJW2H2N4CLDOOE8FN7'
secret_key = 'CsWtWc40R2kJSi0BEesIjcJ2BroY8sVv821c95ZD'

boto.config.add_section('s3')

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'ceph01',
    port = 8080,
    is_secure=False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
    )

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print("{name}\t{created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    ))
[root@client1 ~]# python s3test.py 
my-new-bucket   2020-04-30T07:27:23.270Z

5.2 Swift client test

Create a Swift subuser:
[root@ceph01 ~]# radosgw-admin --cluster ceph1 subuser create --uid=candon --subuser=candon:swift --access=full           
{
    "user_id": "candon",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [
        {
            "id": "candon:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "candon",
            "access_key": "Y9WJW2H2N4CLDOOE8FN7",
            "secret_key": "CsWtWc40R2kJSi0BEesIjcJ2BroY8sVv821c95ZD"
        }
    ],
    "swift_keys": [
        {
            "user": "candon:swift",
            "secret_key": "VZaiUF8DzLJYtT67Jg5tWZStDWHsmAi6K6KDuGQc"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
On the client (the swift CLI is provided by the python-swiftclient package):
[root@client1 ~]# yum -y install python-setuptools
[root@client1 ~]# easy_install pip
[root@client1 ~]# pip install --upgrade setuptools
[root@client1 ~]# pip install python-swiftclient
[root@client1 ~]# swift -A http://192.168.120.53:8080/auth/1.0 -U candon:swift -K 'VZaiUF8DzLJYtT67Jg5tWZStDWHsmAi6K6KDuGQc' list
my-new-bucket

6. Failover verification

Promote z2 to master:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 zone modify --rgw-zone=z2 --master --default
{
    "id": "04231ccf-bb2b-4eff-aba7-a7cb9a3505cf",
    "name": "z2",
    "domain_root": "z2.rgw.meta:root",
    "control_pool": "z2.rgw.control",
    "gc_pool": "z2.rgw.log:gc",
    "lc_pool": "z2.rgw.log:lc",
    "log_pool": "z2.rgw.log",
    "intent_log_pool": "z2.rgw.log:intent",
    "usage_log_pool": "z2.rgw.log:usage",
    "reshard_pool": "z2.rgw.log:reshard",
    "user_keys_pool": "z2.rgw.meta:users.keys",
    "user_email_pool": "z2.rgw.meta:users.email",
    "user_swift_pool": "z2.rgw.meta:users.swift",
    "user_uid_pool": "z2.rgw.meta:users.uid",
    "otp_pool": "z2.rgw.otp",
    "system_key": {
        "access_key": "ZA4TXA65C5TGCPX4B8V6",
        "secret_key": "BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24"
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "z2.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "z2.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "z2.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "metadata_heap": "",
    "realm_id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a"
}

Update the period:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 period update --commit

Finally, restart every gateway service on the cluster nodes:

[root@ceph05 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

7. Disaster recovery

When the old master zone has recovered, run the following commands to make the original zone master again:

[root@ceph01 ~]# radosgw-admin --cluster ceph1 realm pull --url=http://192.168.120.57:8080 --access-key=ZA4TXA65C5TGCPX4B8V6 --secret=BEYnz6QdAvTbt36L7FhwGF2F5rHWeH66cb0eSO24
{
    "id": "0f13bb55-68f6-4489-99fb-d79ba8ca959a",
    "name": "xzxj",
    "current_period": "21a6550a-3236-4b99-9bc0-25268bf1a5c6",
    "epoch": 3
}
[root@ceph01 ~]# radosgw-admin --cluster ceph1 zone modify --rgw-zone=z1 --master --default
[root@ceph01 ~]# radosgw-admin --cluster ceph1 period update --commit

Then restart each gateway service on the recovered master's nodes:

[root@ceph01 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

To set the standby zone to read-only, run the following on a standby node:

[root@ceph05 ~]# radosgw-admin --cluster ceph2 zone modify --rgw-zone=z2 --read-only
[root@ceph05 ~]# radosgw-admin --cluster ceph2 period update --commit

Finally, restart the gateway services on the standby nodes:

[root@ceph05 ~]# systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0

8. Troubleshooting the Object Gateway error in the web dashboard

If the web dashboard is enabled in a multi-site deployment, clicking Object Gateway in the dashboard raises an error, because the default ceph-dashboard account has been deleted:

[root@ceph01 ~]# radosgw-admin user info --uid=ceph-dashboard
could not fetch user info: no user info saved

Recreate this user on the master node:

[root@ceph01 ~]# radosgw-admin user create --uid=ceph-dashboard --display-name=ceph-dashboard --system

Record the user's access_key and secret_key, then update rgw-api-access-key and rgw-api-secret-key:

[root@ceph01 ~]# ceph dashboard set-rgw-api-access-key FX1L1DAY3JXI5J88VZLP
Option RGW_API_ACCESS_KEY updated
[root@ceph01 ~]# ceph dashboard set-rgw-api-secret-key UHArzi8B82sAMwMxUnkWH4dKy2O1iOCK25nV0rI1
Option RGW_API_SECRET_KEY updated

At this point Object Gateway is accessible again on the master, but the slave still needs its rgw-api-access-key and rgw-api-secret-key updated.
Finally, update them on any node of the slave cluster:

[root@ceph06 ~]# ceph dashboard set-rgw-api-access-key FX1L1DAY3JXI5J88VZLP
Option RGW_API_ACCESS_KEY updated
[root@ceph06 ~]# ceph dashboard set-rgw-api-secret-key UHArzi8B82sAMwMxUnkWH4dKy2O1iOCK25nV0rI1
Option RGW_API_SECRET_KEY updated

9. Deleting the default zonegroup and zone

If the default zonegroup and zone are not needed, simply delete them on both the primary and secondary clusters.

[root@ceph01 ~]# radosgw-admin --cluster ceph1 zonegroup delete --rgw-zonegroup=default
[root@ceph01 ~]# radosgw-admin --cluster ceph1 zone delete --rgw-zone=default
[root@ceph01 ~]# radosgw-admin --cluster ceph1 period update --commit

Then edit /etc/ceph/ceph.conf and add the following:

[mon]
mon allow pool delete = true

After syncing the Ceph configuration file to the other nodes, restart all mon services and then delete the default pools.

[root@ceph01 ~]# systemctl restart ceph-mon.target
[root@ceph01 ~]# ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
[root@ceph01 ~]# ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it          
[root@ceph01 ~]# ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it