(10) Scaling out a Ceph cluster

When the cluster is approaching its capacity or compute limits, it has to be expanded. There are two main ways to do this:
1. Scale up (vertical): add disks to the existing nodes; capacity grows, but the cluster's compute capacity stays the same.
2. Scale out (horizontal): add new nodes, bringing additional disks, memory, and CPU, which increases both capacity and performance.

1、Set flags so that adding a node does not hurt production performance

In production, you normally do not want data backfill to start the moment a new node joins the cluster, because it would compete with client traffic and degrade performance. Setting the following flags prevents that:
[root@node140 ~]# ceph osd set noin
[root@node140 ~]# ceph osd set nobackfill
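
To double-check that both flags are active before the new node joins, look at the cluster flag line (a quick sanity check; ceph -s also lists them under the osd section, as seen later in this post):
[root@node140 ~]# ceph osd dump | grep flags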

During off-peak hours, unset the flags and the cluster will start rebalancing:
[root@node140 ~]# ceph osd unset noin
[root@node140 ~]# ceph osd unset nobackfill

2、Install Ceph on the new node

(1) Install the Ceph packages manually with yum

[root@node143 ~]# yum -y install ceph ceph-radosgw
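
The install above assumes a Ceph nautilus yum repository is already configured on node143 (matching the one used on the existing nodes). If it is not, a repo file along these lines can be added first (the download.ceph.com baseurl is an assumption; substitute a local mirror if you have one):
[root@node143 ~]# cat /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph x86_64 packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc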

(2) Check the installed packages

[root@node143 ~]# rpm -qa | egrep -i "ceph|rados|rbd"

(3) Check the installed Ceph version; every node must run the same release

[root@node143 ~]# ceph -v    # all nodes are on nautilus
ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be) nautilus (stable)
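
To confirm the whole cluster really is on a single release, ceph versions (run from any node with admin access) summarizes the version reported by every running daemon:
[root@node140 ~]# ceph versions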

3、Add the new node to the Ceph cluster

Ceph scales seamlessly: OSD and monitor nodes can be added while the cluster stays online.

(1) Start from a healthy cluster

[root@node140 ~]# ceph -s 
  cluster:
    id:     58a12719-a5ed-4f95-b312-6efd6e34e558
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum node140,node142 (age 8d)
    mgr: admin(active, since 8d), standbys: node140
    mds: cephfs:1 {0=node140=up:active} 1 up:standby
    osd: 16 osds: 16 up (since 5m), 16 in (since 2w)

  data:
    pools:   5 pools, 768 pgs
    objects: 2.65k objects, 9.9 GiB
    usage:   47 GiB used, 8.7 TiB / 8.7 TiB avail
    pgs:     768 active+clean

(2) The cluster currently has 3 nodes

[root@node140 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       8.71826 root default                             
-2       3.26935     host node140                         
 0   hdd 0.54489         osd.0        up  1.00000 1.00000 
 1   hdd 0.54489         osd.1        up  1.00000 1.00000 
 2   hdd 0.54489         osd.2        up  1.00000 1.00000 
 3   hdd 0.54489         osd.3        up  1.00000 1.00000 
 4   hdd 0.54489         osd.4        up  1.00000 1.00000 
 5   hdd 0.54489         osd.5        up  1.00000 1.00000 
-3       3.26935     host node141                         
12   hdd 0.54489         osd.12       up  1.00000 1.00000 
13   hdd 0.54489         osd.13       up  1.00000 1.00000 
14   hdd 0.54489         osd.14       up  1.00000 1.00000 
15   hdd 0.54489         osd.15       up  1.00000 1.00000 
16   hdd 0.54489         osd.16       up  1.00000 1.00000 
17   hdd 0.54489         osd.17       up  1.00000 1.00000 
-4       2.17957     host node142                         
 6   hdd 0.54489         osd.6        up  1.00000 1.00000 
 9   hdd 0.54489         osd.9        up  1.00000 1.00000 
10   hdd 0.54489         osd.10       up  1.00000 1.00000 
11   hdd 0.54489         osd.11       up  1.00000 1.00000

(3) Copy the configuration file and admin keyring from an existing node to the new node node143
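
The files can be copied over with scp, for example (a minimal sketch, run on node140 and assuming root ssh access to node143):
[root@node140 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@node143:/etc/ceph/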

[root@node143 ceph]# ls
ceph.client.admin.keyring   ceph.conf

(4) The new node can now access the cluster

[root@node143 ceph]# ceph -s 
  cluster:
    id:     58a12719-a5ed-4f95-b312-6efd6e34e558
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum node140,node142 (age 8d)
    mgr: admin(active, since 8d), standbys: node140
    mds: cephfs:1 {0=node140=up:active} 1 up:standby
    osd: 16 osds: 16 up (since 25m), 16 in (since 2w)

  data:
    pools:   5 pools, 768 pgs
    objects: 2.65k objects, 9.9 GiB
    usage:   47 GiB used, 8.7 TiB / 8.7 TiB avail
    pgs:     768 active+clean

(5) Prepare the disks

[root@node143 ceph]# lsblk 
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 557.9G  0 disk 
├─sda1            8:1    0   200M  0 part /boot
└─sda2            8:2    0 519.4G  0 part 
  └─centos-root 253:0    0 519.4G  0 lvm  /
sdb               8:16   0 558.9G  0 disk 
sdc               8:32   0 558.9G  0 disk 
sdd               8:48   0 558.9G  0 disk 
sde               8:64   0 558.9G  0 disk 
sdf               8:80   0 558.9G  0 disk 
sdg               8:96   0 558.9G  0 disk

(6) Label the disks that will become OSDs with a GPT partition table

[root@node143 ]# parted /dev/sdc mklabel GPT
[root@node143 ]# parted /dev/sdd mklabel GPT
[root@node143 ]# parted /dev/sdf mklabel GPT
[root@node143 ]# parted /dev/sdg mklabel GPT
[root@node143 ]# parted /dev/sdb mklabel GPT
[root@node143 ]# parted /dev/sde mklabel GPT

(7) Format them with an xfs filesystem

[root@node143 ]# mkfs.xfs -f /dev/sdc
[root@node143 ]# mkfs.xfs -f /dev/sdd 
[root@node143 ]# mkfs.xfs -f /dev/sdb
[root@node143 ]# mkfs.xfs -f /dev/sdf
[root@node143 ]# mkfs.xfs -f /dev/sdg
[root@node143 ]# mkfs.xfs -f /dev/sde
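
Steps (6) and (7) can also be collapsed into a single loop over the six data disks (an equivalent shorthand; adjust the device list to match your lsblk output):
[root@node143 ~]# for d in sdb sdc sdd sde sdf sdg; do parted -s /dev/$d mklabel gpt && mkfs.xfs -f /dev/$d; done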

(8) Create the OSDs

[root@node143 ~]# ceph-volume lvm create --data /dev/sdb 
--> ceph-volume lvm activate successful for osd ID: 0
--> ceph-volume lvm create successful for: /dev/sdb
[root@node143 ~]# ceph-volume lvm create --data /dev/sdc
[root@node143 ~]# ceph-volume lvm create --data /dev/sdd
[root@node143 ~]# ceph-volume lvm create --data /dev/sdf
[root@node143 ~]# ceph-volume lvm create --data /dev/sdg
[root@node143 ~]# ceph-volume lvm create --data /dev/sde
[root@node143 ~]# blkid
/dev/mapper/centos-root: UUID="7616a088-d812-456b-8ae8-38d600eb9f8b" TYPE="xfs" 
/dev/sda2: UUID="6V8bFT-ylA6-bifK-gmob-ah3I-zZ4G-N7EYwD" TYPE="LVM2_member" 
/dev/sda1: UUID="eee4c9af-9f12-44d9-a386-535bde734678" TYPE="xfs" 
/dev/sdb: UUID="TcjeCg-YsBQ-RHbm-UNYT-UoQv-iLFs-f1st2X" TYPE="LVM2_member" 
/dev/sdd: UUID="aSLPmt-ohdJ-kG7W-JOB1-dzOD-D0zp-krWW5m" TYPE="LVM2_member" 
/dev/sdc: UUID="7ARhbT-S9sC-OdZw-kUCq-yp97-gSpY-hfoPFa" TYPE="LVM2_member" 
/dev/sdg: UUID="9MDhh1-bXIX-DwVf-RkIt-IUVm-fPEH-KSbsDd" TYPE="LVM2_member" 
/dev/sde: UUID="oc2gSZ-j3WO-pOUs-qJk6-ZZS0-R8V7-1vYaZv" TYPE="LVM2_member" 
/dev/sdf: UUID="jxQjNS-8xpV-Hc4p-d2Vd-1Q8O-U5Yp-j1Dn22" TYPE="LVM2_member"

(9) Check the newly created OSDs

[root@node143 ~]# ceph-volume lvm list
[root@node143 ~]# lsblk

(10) The OSD daemons start automatically

[root@node143 ~]# ceph osd tree
ID CLASS WEIGHT   TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       11.98761 root default                             
-2        3.26935     host node140                         
 0   hdd  0.54489         osd.0        up  1.00000 1.00000 
 1   hdd  0.54489         osd.1        up  1.00000 1.00000 
 2   hdd  0.54489         osd.2        up  1.00000 1.00000 
 3   hdd  0.54489         osd.3        up  1.00000 1.00000 
 4   hdd  0.54489         osd.4        up  1.00000 1.00000 
 5   hdd  0.54489         osd.5        up  1.00000 1.00000 
-3        3.26935     host node141                         
12   hdd  0.54489         osd.12       up  1.00000 1.00000 
13   hdd  0.54489         osd.13       up  1.00000 1.00000 
14   hdd  0.54489         osd.14       up  1.00000 1.00000 
15   hdd  0.54489         osd.15       up  1.00000 1.00000 
16   hdd  0.54489         osd.16       up  1.00000 1.00000 
17   hdd  0.54489         osd.17       up  1.00000 1.00000 
-4        2.17957     host node142                         
 6   hdd  0.54489         osd.6        up  1.00000 1.00000 
 9   hdd  0.54489         osd.9        up  1.00000 1.00000 
10   hdd  0.54489         osd.10       up  1.00000 1.00000 
11   hdd  0.54489         osd.11       up  1.00000 1.00000 
-9        3.26935     host node143                         
 7   hdd  0.54489         osd.7        up  1.00000 1.00000 
 8   hdd  0.54489         osd.8        up  1.00000 1.00000 
18   hdd  0.54489         osd.18       up        0 1.00000 
19   hdd  0.54489         osd.19       up        0 1.00000 
20   hdd  0.54489         osd.20       up        0 1.00000 
21   hdd  0.54489         osd.21       up        0 1.00000

In the ceph-volume lvm list output, every OSD appears under a header of the form ====== osd.<num> =======; note the number, because it is used below when enabling the OSD services.
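
On the new node those headers can be pulled out in one go (a small convenience):
[root@node143 ~]# ceph-volume lvm list | grep '======'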

[root@node143 ~]# systemctl enable ceph-osd@7
[root@node143 ~]# systemctl enable ceph-osd@8
[root@node143 ~]# systemctl enable ceph-osd@18
[root@node143 ~]# systemctl enable ceph-osd@19
[root@node143 ~]# systemctl enable ceph-osd@20
[root@node143 ~]# systemctl enable ceph-osd@21
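
To verify that the units are enabled and the daemons are actually running, spot-check one of them:
[root@node143 ~]# systemctl is-enabled ceph-osd@7
[root@node143 ~]# systemctl status ceph-osd@7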

(11) Check the cluster; the expansion has taken effect (the HEALTH_WARN below only reflects the noin,nobackfill flags that are still set)

[root@node143 ~]# ceph -s 
  cluster:
    id:     58a12719-a5ed-4f95-b312-6efd6e34e558
    health: HEALTH_WARN
            noin,nobackfill flag(s) set

  services:
    mon: 2 daemons, quorum node140,node142 (age 8d)
    mgr: admin(active, since 8d), standbys: node140
    mds: cephfs:1 {0=node140=up:active} 1 up:standby
    osd: 22 osds: 22 up (since 4m), 18 in (since 9m); 2 remapped pgs
         flags noin,nobackfill

  data:
    pools:   5 pools, 768 pgs
    objects: 2.65k objects, 9.9 GiB
    usage:   54 GiB used, 12 TiB / 12 TiB avail
    pgs:     766 active+clean
             1   active+remapped+backfilling
             1   active+remapped+backfill_wait

(12) Remember to unset the flags during off-peak hours

During off-peak hours, unset the flags and the cluster will start rebalancing data onto the new OSDs:
[root@node140 ~]# ceph osd unset noin
[root@node140 ~]# ceph osd unset nobackfill
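
Once the flags are cleared, the backfill onto the new OSDs can be followed until all placement groups return to active+clean, for example with:
[root@node140 ~]# ceph -s
[root@node140 ~]# ceph -w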
