Single-node Ceph Jewel deployment on CentOS 7

Server / VM preparation

1. Configure DNS so the machine can reach public package mirrors.
2. Attach a few extra disks to serve as data disks; here I added three 100 GB disks: /dev/vdb, /dev/vdc and /dev/vdd (a quick check follows below).
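
A quick way to confirm the data disks are visible before continuing (an illustrative check; device names depend on your environment):

// list whole disks only; the three empty data disks should appear alongside the system disk
[root@dev179 ~] lsblk -d -o NAME,SIZE,TYPE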

Prepare the package repositories

Add the CentOS and EPEL repositories; this article uses Aliyun mirrors throughout.

[root@dev179 ~] rm /etc/yum.repos.d/* -rf
[root@dev179 ~] curl http://mirrors.aliyun.com/repo/Centos-7.repo > /etc/yum.repos.d/Centos-7.repo 
[root@dev179 ~] curl http://mirrors.aliyun.com/repo/epel-7.repo > /etc/yum.repos.d/epel.repo

Add the Ceph repository: create a ceph.repo file under /etc/yum.repos.d/ and paste in the following content.

[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
 
[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md

Refresh the repository cache

[root@dev179 ~] yum clean all
[root@dev179 ~] yum makecache
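
Optionally, run a quick sanity check that the Ceph repositories added above are active:

// the four Ceph-* repos should show up in the enabled list
[root@dev179 ~] yum repolist enabled | grep -i ceph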

Server configuration

// Disable SELinux
[root@dev179 ~] sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@dev179 ~] setenforce 0

// Stop and disable the firewall
[root@dev179 ~] systemctl stop firewalld 
[root@dev179 ~] systemctl disable firewalld

// Map the server IP to its hostname in /etc/hosts; replace 1.1.1.1 with the server's actual IP address
[root@dev179 ~] echo 1.1.1.1 $HOSTNAME >> /etc/hosts
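
ceph-deploy addresses the node by hostname, so it is worth confirming the entry took effect (an extra check, assuming the hosts entry above):

// should print the IP you just added, followed by the hostname
[root@dev179 ~] getent hosts $HOSTNAME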

// Prepare a temporary deployment directory
[root@dev179 ~] rm -rf /root/ceph-cluster && mkdir -p /root/ceph-cluster && cd /root/ceph-cluster

Deployment

// Install the deployment packages
[root@dev179 ceph-cluster] yum install ceph ceph-radosgw ceph-deploy -y

// Initialize the configuration; all subsequent steps must be run from inside /root/ceph-cluster
[root@dev179 ceph-cluster] ceph-deploy new $HOSTNAME
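
ceph-deploy new drops the initial cluster files into the current directory, which is why the remaining commands must be run from /root/ceph-cluster (exact file names may vary slightly by ceph-deploy version):

// the directory should now contain ceph.conf, ceph.mon.keyring and a ceph-deploy log
[root@dev179 ceph-cluster] ls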

// Update the default configuration file
[root@dev179 ceph-cluster] echo osd pool default size = 1 >> ceph.conf
[root@dev179 ceph-cluster] echo osd crush chooseleaf type = 0 >> ceph.conf
[root@dev179 ceph-cluster] echo osd max object name len = 256 >> ceph.conf
[root@dev179 ceph-cluster] echo osd journal size = 128 >> ceph.conf
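
For context: osd pool default size = 1 stores a single copy of each object, and osd crush chooseleaf type = 0 lets CRUSH place data across OSDs rather than separate hosts, which is what allows a one-node cluster to become healthy; osd max object name len = 256 is the usual workaround for OSD filesystems with short name/xattr limits (e.g. ext4), and osd journal size = 128 keeps the journal small on test disks. After the four echo lines, ceph.conf should look roughly like this (the fsid and addresses will differ in your environment):

[root@dev179 ceph-cluster]# cat ceph.conf
[global]
fsid = 7775a3a4-7315-41fe-b192-2655b11a83a1
mon_initial_members = dev179
mon_host = 172.24.8.179
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
osd crush chooseleaf type = 0
osd max object name len = 256
osd journal size = 128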

// Initialize the monitor node
[root@dev179 ceph-cluster] ceph-deploy mon create-initial
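
Once the monitor forms quorum, this step also gathers the admin and bootstrap keyrings into the deployment directory; a quick way to confirm it succeeded:

// the admin and bootstrap-* keyrings should now be present
[root@dev179 ceph-cluster] ls *.keyring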

// Prepare the disks; adjust /dev/vdb /dev/vdc /dev/vdd to match your environment
[root@dev179 ceph-cluster] ceph-deploy osd prepare $HOSTNAME:/dev/vdb $HOSTNAME:/dev/vdc $HOSTNAME:/dev/vdd

After the prepare step above, the three disks are automatically formatted with a filesystem and mounted; use df to see the mount points.

// Check the mount points with df
[root@dev179 ceph-cluster]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  1.7G   49G   4% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.4M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda1               1014M  143M  872M  15% /boot
/dev/mapper/centos-home   36G   33M   36G   1% /home
tmpfs                    380M     0  380M   0% /run/user/0
/dev/vdb1                100G  108M  100G   1% /var/lib/ceph/osd/ceph-0
/dev/vdc1                100G  108M  100G   1% /var/lib/ceph/osd/ceph-1
/dev/vdd1                100G  108M  100G   1% /var/lib/ceph/osd/ceph-2

// Activate the OSDs
[root@dev179 ceph-cluster] ceph-deploy osd activate $HOSTNAME:/var/lib/ceph/osd/ceph-0 $HOSTNAME:/var/lib/ceph/osd/ceph-1 $HOSTNAME:/var/lib/ceph/osd/ceph-2
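
Before moving on, it is worth confirming that all three OSDs registered and came up (extra verification, not required by the deployment itself):

// all three OSDs should be listed under the host and reported as up/in
[root@dev179 ceph-cluster] ceph osd tree
[root@dev179 ceph-cluster] ceph osd stat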

Verification

The first check shows the cluster in HEALTH_WARN state.

[root@dev179 ceph-cluster]# ceph -s
    cluster 7775a3a4-7315-41fe-b192-2655b11a83a1
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {dev179=172.24.8.179:6789/0}
            election epoch 3, quorum 0 dev179
     osdmap e15: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v26: 64 pgs, 1 pools, 0 bytes data, 0 objects
            322 MB used, 299 GB / 299 GB avail
                  64 active+clean

The warning comes from the single default rbd pool having only 64 PGs: spread over 3 OSDs that is roughly 21 PGs each, below the default minimum of 30. Raising pg_num and pgp_num to 128 gives about 42 PGs per OSD, and the cluster status becomes HEALTH_OK.

[root@dev179 ceph-cluster]# ceph osd pool set rbd pg_num 128
[root@dev179 ceph-cluster]# ceph osd pool set rbd pgp_num 128
[root@dev179 ceph-cluster]# ceph -s
    cluster 7775a3a4-7315-41fe-b192-2655b11a83a1
     health HEALTH_OK
     monmap e1: 1 mons at {dev179=172.24.8.179:6789/0}
            election epoch 3, quorum 0 dev179
     osdmap e19: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v34: 128 pgs, 1 pools, 0 bytes data, 0 objects
            322 MB used, 299 GB / 299 GB avail
                 128 active+clean
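
As a final, optional smoke test, you can write and read back an object through the rados CLI to confirm the cluster actually stores data (test-object is just an arbitrary name):

// store a small test object in the rbd pool, list it, then clean up
[root@dev179 ceph-cluster] rados -p rbd put test-object /etc/hosts
[root@dev179 ceph-cluster] rados -p rbd ls
[root@dev179 ceph-cluster] rados -p rbd rm test-object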