Preparation stage
Remove the default repos; the overseas mirrors are slow
yum clean all
rm -rf /etc/yum.repos.d/*.repo
Download the Aliyun base repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Download the Aliyun EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Change the release version inside the repo file to 7.3.1611; the yum repo for the CentOS point release currently in use may already have been emptied on the mirror
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7.3.1611/g' /etc/yum.repos.d/CentOS-Base.repo
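To confirm the edits took effect (a quick check, assuming the standard layout of these repo files), the remaining baseurl lines can be inspected:
grep -n 'aliyuncs\|baseurl' /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/epel.repo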
Add the Ceph repo
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph aliyun
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
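After saving ceph.repo, the yum cache can be rebuilt to confirm the new repos resolve (mirror contents may change over time):
yum clean all
yum makecache
yum repolist | grep -i ceph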
Set up the /etc/hosts file on the deploy host
192.168.0.39 ceph-admin
192.168.0.40 mon1
192.168.0.41 osd1
192.168.0.42 osd2
192.168.0.43 osd3
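To confirm the entries resolve to the addresses above, a quick lookup on the deploy host:
getent hosts ceph-admin mon1 osd1 osd2 osd3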
Edit the ~/.ssh/config file on the deploy host
Host ceph-admin
    Hostname ceph-admin
    User cephuser
Host mon1
    Hostname mon1
    User cephuser
Host osd1
    Hostname osd1
    User cephuser
Host osd2
    Hostname osd2
    User cephuser
Host osd3
    Hostname osd3
    User cephuser
Set the permissions
chmod 644 ~/.ssh/config
Add the cephuser user
useradd -d /home/cephuser -m cephuser
passwd cephuser
Make sure the new user has sudo privileges
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
chmod 0440 /etc/sudoers.d/cephuser
sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
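A quick sanity check that the sudoers drop-in works as intended (run as root; cephuser is the account created above):
su - cephuser -c 'sudo whoami'    # should print "root" without asking for a password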
Set up passwordless access from the deploy host to the other nodes
su - cephuser
ssh-keygen
ssh-copy-id ceph-admin
ssh-copy-id mon1
ssh-copy-id osd1
ssh-copy-id osd2
ssh-copy-id osd3
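Passwordless access can then be verified from the deploy host; each command should return the remote hostname without prompting for a password:
ssh mon1 hostname
ssh osd1 hostname
ssh osd2 hostname
ssh osd3 hostname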
Install the NTP service
yum install -y ntp ntpdate ntp-doc
ntpdate 0.us.pool.ntp.org
hwclock --systohc
systemctl enable ntpd.service
systemctl start ntpd.service
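To confirm the daemon is running and synchronizing (the peer list depends on which NTP servers are reachable):
systemctl status ntpd.service
ntpq -p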
Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
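The sed above only takes effect on the next boot; to turn SELinux off for the running system as well, and to verify the result:
setenforce 0
getenforce    # reports "Permissive" now, "Disabled" after a reboot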
Turn off the firewall
ssh root@ceph-admin
systemctl stop firewalld
systemctl disable firewalld
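The result can be checked on each node (firewalld may not be installed at all on minimal images):
systemctl is-active firewalld
systemctl is-enabled firewalld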
Note: do not use disks that are too small when testing, otherwise adding the disks later will fail with an error; 20 GB or larger is recommended.
Check the disk
sudo fdisk -l /dev/vdb
Format the disk
sudo parted -s /dev/vdb mklabel gpt mkpart primary xfs 0% 100%
sudo mkfs.xfs /dev/vdb -f
Check the filesystem type; it should report xfs
sudo blkid -o value -s TYPE /dev/vdb
Deployment stage
Install ceph-deploy
sudo yum update -y && sudo yum install ceph-deploy -y
Create the cluster directory
su - cephuser
mkdir cluster
cd cluster/
Create the cluster
ceph-deploy new mon1
Edit the ceph.conf file
vim ceph.conf
# Your network address
public network = 192.168.0.0/24
osd pool default size = 3
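For reference, a sketch of what the resulting ceph.conf typically looks like for this setup; the fsid, mon, and auth lines are written by ceph-deploy new, so the values below are placeholders rather than the ones your run will generate:
[global]
fsid = <generated-by-ceph-deploy>
mon_initial_members = mon1
mon_host = 192.168.0.40
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.0.0/24
osd pool default size = 3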
Install Ceph
ceph-deploy install ceph-admin mon1 osd1 osd2 osd3
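The installed release can be confirmed on any node (the jewel packages come from the Aliyun mirror configured earlier):
ssh mon1 ceph --version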
Initialize the monitor and gather all keys
ceph-deploy mon create-initial
ceph-deploy gatherkeys mon1
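After gatherkeys completes, the cluster directory should contain the collected keyrings; the exact set can vary by release, but for jewel it is usually:
ls ~/cluster/*.keyring
# ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring
# ceph.client.admin.keyring   ceph.mon.keyring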
Add OSDs to the cluster
List all available disks on the OSD nodes
ceph-deploy disk list osd1 osd2 osd3
Use the zap option to wipe the partitions on all OSD nodes
ceph-deploy disk zap osd1:/dev/vdb osd2:/dev/vdb osd3:/dev/vdb
Prepare the OSDs
ceph-deploy osd prepare osd1:/dev/vdb osd2:/dev/vdb osd3:/dev/vdb
Activate the OSDs
ceph-deploy osd activate osd1:/dev/vdb1 osd2:/dev/vdb1 osd3:/dev/vdb1
List the OSDs
ceph-deploy disk list osd1 osd2 osd3
Each disk now shows two partitions
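The same result can be checked directly on an OSD node (an illustrative check; with the layout used above, ceph-disk creates a data and a journal partition):
ssh osd1 lsblk /dev/vdb
# vdb1 is the OSD data partition (xfs), vdb2 the journal partition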
Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes, so you no longer need to specify the monitor address and ceph.client.admin.keyring each time you run a Ceph command
ceph-deploy admin ceph-admin mon1 osd1 osd2 osd3
Change the keyring permissions
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
Done!
Check Ceph
Check the Ceph status
sudo ceph health
sudo ceph -s
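ceph health should report HEALTH_OK once all three OSDs are up and in. As a further check (an extra command, not part of the original write-up), the OSD tree shows whether every OSD is mapped to its host and marked up:
sudo ceph osd tree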