一. Environment Preparation:
1. Disable the firewall and SELinux (all nodes):
# systemctl stop firewalld ; systemctl disable firewalld
# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config //note: some guides warn that disabling SELinux this way can prevent re-enabling it temporarily later; the more cautious method below works too
# vim /etc/sysconfig/selinux //set SELINUX=disabled, then reboot the VM
# init 6
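After the reboot, confirm SELinux is off (a quick check with the standard getenforce tool):
# getenforce //should print Disabled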
2. Set the hostname (all nodes):
# hostnamectl set-hostname ceph01
# hostnamectl set-hostname ceph02
# hostnamectl set-hostname ceph03
On any one of the VMs, say ceph01, edit the hosts file:
# vim /etc/hosts
# scp /etc/hosts 192.168.10.11:/etc/
# scp /etc/hosts 192.168.10.12:/etc/
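For reference, a sketch of the /etc/hosts entries this guide assumes; the 192.168.10.10 address for ceph01 is inferred from the dashboard URL at the end, the other two appear in the scp commands above:
192.168.10.10 ceph01
192.168.10.11 ceph02
192.168.10.12 ceph03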
3. Passwordless SSH login (ceph01 control node):
# ssh-keygen //press Enter through all prompts to generate an RSA key pair
# ssh-copy-id ceph02
# ssh-copy-id ceph03
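A quick check that passwordless login now works from ceph01:
# ssh ceph02 hostname
# ssh ceph03 hostname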
4. Configure the YUM repositories (all nodes):
CentOS 7 base repo:
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
EPEL repo:
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Create a new Ceph.repo:
# vim /etc/yum.repos.d/Ceph.repo
[ceph-nautilus]
name=ceph-nautilus
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
[ceph-nautilus-noarch]
name=ceph-nautilus-noarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0
# yum clean all
# yum makecache
5. Install the NTP service (all nodes):
# yum install -y chrony
On the ceph01 control node:
# vim /etc/chrony.conf
Point the server directive at an NTP source (mine is an AD domain server; any public NTP server will do), and allow the client subnet:
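The original config lines were not shown; a minimal sketch, assuming ntp.aliyun.com as the upstream server (substitute your own) and 192.168.10.0/24 as the cluster subnet:
server ntp.aliyun.com iburst
allow 192.168.10.0/24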
# systemctl restart chronyd ; systemctl enable chronyd
# chronyc sources
On the ceph02 and ceph03 nodes:
# vim /etc/chrony.conf
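Here the assumption is that ceph02/03 sync time from ceph01, so replace the default server lines with:
server ceph01 iburst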
# systemctl restart chronyd ; systemctl enable chronyd
# chronyc sources
二. Deploy Ceph:
1. Install ceph and ceph-deploy:
On all nodes:
# yum install -y ceph
On the ceph01 control node:
# yum install -y ceph-deploy
2. Deploy the MONs:
On the ceph01 control node:
# mkdir ceph ; cd ceph
# ceph-deploy new ceph01 ceph02 ceph03
# vim ceph.conf
Add overwrite_conf = true
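For reference, a sketch of what ceph-deploy new typically generates plus the added line; the fsid is a generated UUID, and the mon_host addresses assume the hosts-file mapping above:
[global]
fsid = <generated-uuid>
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.10.10,192.168.10.11,192.168.10.12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
overwrite_conf = true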
On all nodes:
# chown ceph:ceph -R /var/lib/ceph
Back on the ceph01 node (still working in the /root/ceph directory):
# ceph-deploy --overwrite-conf mon create-initial
On all nodes (run each command on its matching node):
# systemctl restart ceph-mon@ceph01
# systemctl restart ceph-mon@ceph02
# systemctl restart ceph-mon@ceph03
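Confirm the three monitors have formed a quorum:
# ceph -s //should show 3 mons in quorum (health may stay WARN until OSDs are added)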
3. Deploy the MGR:
On ceph01:
# ceph-deploy mgr create ceph01
# ps -ef | grep ceph
# systemctl restart ceph-mgr@ceph01
4. Deploy the OSDs:
In my setup the data disk on every node is /dev/sdb.
On the ceph01 node (still working in the /root/ceph directory):
# ceph-deploy --overwrite-conf osd create --data /dev/sdb ceph01
# ceph-deploy --overwrite-conf osd create --data /dev/sdb ceph02
# ceph-deploy --overwrite-conf osd create --data /dev/sdb ceph03
Copy all the .keyring files to /etc/ceph on every node:
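A minimal sketch of that copy, run from /root/ceph on ceph01 (filenames follow ceph-deploy's defaults):
# cp *.keyring /etc/ceph/
# scp *.keyring ceph02:/etc/ceph/
# scp *.keyring ceph03:/etc/ceph/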
Restart on all nodes (each OSD ID lives on the node where it was created):
# systemctl restart ceph-osd@0
# systemctl restart ceph-osd@1
# systemctl restart ceph-osd@2
On the ceph01 node:
# ceph osd tree
5. Deploy the RGW:
Here the object gateway runs standalone, installed only on the ceph01 node:
# yum install -y ceph-radosgw
# ceph-deploy --overwrite-conf rgw create ceph01
# ps aux | grep radosgw
# systemctl restart ceph-radosgw@rgw.ceph01
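The gateway listens on port 7480 by default; a quick check, assuming the ceph01 address from the hosts file:
# curl http://192.168.10.10:7480 //an anonymous request returns a short ListAllMyBuckets XML document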
Bucket index sharding:
If each bucket will hold relatively few objects, say under 10,000, this step can be skipped; above 100,000 objects per bucket, the parameters below must be set.
If the design calls for tens of millions of objects in a single bucket, disable dynamic resharding and set a maximum shard count.
# vim /etc/ceph/ceph.conf //add the following entries
Dynamic bucket resharding is enabled by default; turn it off:
rgw_dynamic_resharding = false
Maximum number of shards per bucket index:
rgw_override_bucket_index_max_shards = 16
# systemctl restart ceph-radosgw@rgw.ceph01 //restart the service
6. Create an S3 account:
# radosgw-admin user create --uid testid --display-name 'admin' --system
Save the access_key and secret_key from the output.
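The keys can be sanity-checked with any S3 client; a sketch using s3cmd (assuming it is installed), pointed at the default RGW port on ceph01:
# s3cmd --access_key=<access_key> --secret_key=<secret_key> --host=192.168.10.10:7480 --host-bucket=192.168.10.10:7480 --no-ssl mb s3://testbucket
# s3cmd --access_key=<access_key> --secret_key=<secret_key> --host=192.168.10.10:7480 --host-bucket=192.168.10.10:7480 --no-ssl ls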
7. Deploy the Dashboard (on ceph01):
# yum install -y ceph-mgr-dashboard
# ceph mgr module enable dashboard
# ceph dashboard create-self-signed-cert
# ceph dashboard set-login-credentials admin admin //create the login user and set its password
At this point the Object Gateway pages in the dashboard cannot show any bucket contents yet:
# ceph dashboard set-rgw-api-access-key <access_key> //register the access_key
# ceph dashboard set-rgw-api-secret-key <secret_key> //register the secret_key
Open https://192.168.10.10:8443 in a browser.
If you forget the S3 account's keys:
# radosgw-admin user info --uid=testid