Deploying a Ceph Environment on CentOS 7.6


Test environment:

Node name    Node IP          Disk        Node role
Node-1       10.10.1.10/24    /dev/sdb    Monitor node
Node-2       10.10.1.20/24    /dev/sdb    OSD node
Node-3       10.10.1.30/24    /dev/sdb    OSD node

Steps:

  1. Host information configuration

1.1. Set the hostname on each of the three hosts

[root@Node-1 ~]# hostnamectl set-hostname Node-1

[root@Node-2 ~]# hostnamectl set-hostname Node-2

[root@Node-3 ~]# hostnamectl set-hostname Node-3

1.2. Edit the hosts file on all three hosts and add the following entries:

[root@Node-1 ~]# vi /etc/hosts

10.10.1.10  Node-1

10.10.1.20  Node-2

10.10.1.30  Node-3

1.3. Disable the firewall and SELinux on all three hosts

[root@Node-1 ~]# systemctl stop firewalld.service

[root@Node-1 ~]# systemctl disable firewalld.service

[root@Node-1 ~]# vi /etc/sysconfig/selinux

SELINUX=disabled
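
The SELINUX=disabled setting only takes effect after a reboot; to also disable SELinux for the running system right away, the following can be run on each host:

[root@Node-1 ~]# setenforce 0      // switch SELinux to permissive mode for the current boot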

1.4. Create the cluster user cephd

[root@Node-1 ~]# useradd cephd
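
ceph-deploy later logs in to the nodes as this user and runs privileged commands, so cephd normally also needs a password (for ssh-copy-id below) and passwordless sudo on every node. A typical setup, assuming the cephd user from above:

[root@Node-1 ~]# passwd cephd

[root@Node-1 ~]# echo "cephd ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/cephd

[root@Node-1 ~]# chmod 0440 /etc/sudoers.d/cephd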

1.5. Configure passwordless SSH access for cephd from the admin node

[root@Node-1 ~]# su - cephd

[cephd@node-1 ~]$ ssh-keygen -t rsa

[cephd@node-1 ~]$ ssh-copy-id cephd@Node-2

[cephd@node-1 ~]$ ssh-copy-id cephd@Node-3

[cephd@node-1 ~]$ cd .ssh/

[cephd@node-1 .ssh]$ vi config

Host Node-1

   Hostname Node-1

   User     cephd

Host  Node-2

   Hostname Node-2

   User     cephd

Host  Node-3

   Hostname  Node-3

   User     cephd
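
ssh refuses to use a config file that other users can write to, so it is worth tightening its permissions after editing:

[cephd@node-1 .ssh]$ chmod 600 config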

1.6. Switch to the domestic Aliyun yum mirror

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

yum clean all

yum makecache
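
Note: depending on the repositories available, the ceph packages may not be installable from the base/EPEL repos alone. In that case a Ceph yum repository is usually added as well; a minimal sketch using the Aliyun mirror (the rpm-mimic release path is an assumption, adjust it to the release actually wanted):

cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
EOF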

1.7. Install ceph

[root@Node-1 ~]# yum -y install ceph

1.8. Install ceph-deploy

[root@Node-1 ~]# yum -y install ceph-deploy

1.9. Create the cluster directory and initialize the ceph cluster

[cephd@node-1 ~]$ mkdir cluster

[cephd@node-1 ~]$ cd cluster

[cephd@node-1 cluster]$ ceph-deploy new Node-1 Node-2 Node-3

[cephd@node-1 cluster]$ vi ceph.conf

[global]

fsid = 77472f89-02d6-4424-8635-67482f090b09 

mon_initial_members = Node-1

mon_host = 10.10.1.10

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd pool default size = 2

public network = 10.10.1.0/24
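
If ceph.conf is modified again after the nodes have been installed, the updated file can be pushed out from the cluster directory, for example:

[cephd@node-1 cluster]$ ceph-deploy --overwrite-conf config push Node-1 Node-2 Node-3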

2.0. Install ceph on all nodes with ceph-deploy

[cephd@node-1 cluster]$ sudo ceph-deploy install Node-1 Node-2 Node-3

2.1. Configure the initial monitor

[cephd@node-1 cluster]$ sudo ceph-deploy mon create-initial

[cephd@node-1 cluster]$ ls -l

total 164

-rw------- 1 cephd cephd     71 Jun 21 10:31 ceph.bootstrap-mds.keyring

-rw------- 1 cephd cephd     71 Jun 21 10:31 ceph.bootstrap-mgr.keyring

-rw------- 1 cephd cephd     71 Jun 21 10:31 ceph.bootstrap-osd.keyring

-rw------- 1 cephd cephd     71 Jun 21 10:31 ceph.bootstrap-rgw.keyring

-rw------- 1 cephd cephd     63 Jun 21 10:31 ceph.client.admin.keyring

-rw-rw-r-- 1 cephd cephd    249 Jun 21 10:20 ceph.conf

-rw-rw-r-- 1 cephd cephd 139148 Jul  5 19:20 ceph-deploy-ceph.log

-rw------- 1 cephd cephd     73 Jun 21 10:18 ceph.mon.keyring

[cephd@node-1 cluster]$
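
So that ceph -s (and the other ceph commands below) can be run on the nodes without specifying the keyring every time, the admin keyring is normally distributed and made readable, roughly as follows:

[cephd@node-1 cluster]$ ceph-deploy admin Node-1 Node-2 Node-3

[cephd@node-1 cluster]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring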

2.2. Check the cluster status

[cephd@node-1 cluster]$ ceph -s

  cluster:

    id:     77472f89-02d6-4424-8635-67482f090b09

    health: HEALTH_OK

 

  services:

    mon: 1 daemons, quorum Node-1

    mgr: Node-1(active), standbys: Node-2, Node-3

    mds: bjdocker-1/1/1 up  {0=Node-1=up:active}, 2 up:standby

    osd: 3 osds: 3 up, 3 in

 

  data:

    pools:   2 pools, 128 pgs

    objects: 23 objects, 5.02MiB

    usage:   3.07GiB used, 207GiB / 210GiB avail

    pgs:     128 active+clean

 

[cephd@node-1 cluster]$

2.3. Create pools

[cephd@node-1 cluster]$ ceph osd pool create  store 64

[cephd@node-1 cluster]$ ceph osd pool create  app 64

[root@node-1 ~]# rados df

POOL_NAME USED    OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD      WR_OPS WR     

app            0B       0      0      0                  0       0        0      0      0B  47077 91.8GiB

store     5.02MiB      23      0     46                  0       0        0    126 13.9MiB   3698 6.78MiB

total_objects    23

total_used       3.07GiB

total_avail      207GiB

total_space      210GiB

[root@node-1 ~]#

2.4. Create the OSDs

ceph-deploy osd create --data /dev/sdb Node-1

ceph-deploy osd create --data /dev/sdb Node-2

ceph-deploy osd create --data /dev/sdb Node-3
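
Once the three OSDs have been created they should all show as up and in; this can be verified with:

ceph osd tree

ceph -s          // should report "3 osds: 3 up, 3 in", as in the status output shown above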

2.5. Create the /data mount point on each host

[root@node-1 ~]# mkdir /data

2.6. Create the CephFS file system (ceph fs new takes the metadata pool first, then the data pool)

[cephd@node-1 cluster]$ sudo ceph-deploy mds create Node-1 Node-2 Node-3

[cephd@node-1 cluster]$ sudo ceph fs new bjdocker app store

[cephd@node-1 cluster]$ ceph mds stat

bjdocker-1/1/1 up  {0=Node-1=up:active}, 2 up:standby

[cephd@node-1 cluster]$

2.7. Mount the CephFS file system

mount -t ceph 10.10.1.10:6789,10.10.1.20:6789,10.10.1.30:6789:/ /data -o name=admin,secret=AQBO6gxdoWbLMBAAJlpIoLRpHlBFNCyVAejV+g==

[cephd@node-1 cluster]$ cat ceph.client.admin.keyring

[client.admin]

        key = AQBO6gxdoWbLMBAAJlpIoLRpHlBFNCyVAejV+g==

[cephd@node-1 cluster]$

[cephd@node-1 cluster]$ df -h

Filesystem                                         Size  Used Avail Use% Mounted on

/dev/mapper/centos-root                             50G  2.8G   48G   6% /

devtmpfs                                           3.9G     0  3.9G   0% /dev

tmpfs                                              3.9G     0  3.9G   0% /dev/shm

tmpfs                                              3.9G  8.9M  3.9G   1% /run

tmpfs                                              3.9G     0  3.9G   0% /sys/fs/cgroup

/dev/mapper/centos-home                             67G   33M   67G   1% /home

/dev/sda1                                         1014M  163M  852M  17% /boot

tmpfs                                              799M     0  799M   0% /run/user/0

10.10.1.10:6789,10.10.1.20:6789,10.10.1.30:6789:/   99G     0   99G   0% /data

[cephd@node-1 cluster]$
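
Passing the key directly on the command line works for a quick test, but for a persistent mount the key is usually stored in a secret file and referenced from /etc/fstab instead; a sketch (file paths are illustrative):

echo "AQBO6gxdoWbLMBAAJlpIoLRpHlBFNCyVAejV+g==" > /etc/ceph/admin.secret

chmod 600 /etc/ceph/admin.secret

and the corresponding /etc/fstab entry:

10.10.1.10:6789,10.10.1.20:6789,10.10.1.30:6789:/  /data  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0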

Calculating the cluster's PG count

Total PGs = (Total OSDs * 100) / max replica count

PG count per pool in the cluster:

PGs per pool = ((Total OSDs * 100) / max replica count) / number of pools
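
Applied to this cluster (3 OSDs, replica size 2, 2 pools), and rounding to a power of two as the Ceph documentation suggests:

Total PGs = (3 * 100) / 2 = 150

PGs per pool = 150 / 2 = 75, nearest power of two = 64

which matches the two pools of 64 PGs (128 PGs in total) created above.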

Cleanup after a failed installation:

ceph-deploy purgedata [HOST] [HOST...]

ceph-deploy forgetkeys
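
ceph-deploy purgedata only removes data and configuration; to also uninstall the Ceph packages before retrying, ceph-deploy purge can be run as well:

ceph-deploy purge [HOST] [HOST...]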

Commands:

[root@node-1 ceph]# ceph -s          // cluster health status

[root@node-1 ceph]# ceph osd tree    // list the OSDs
