ceph-admin and ceph-mon are on the same server; ceph-osd1 is one server and ceph-osd2 is another.
# systemctl stop firewalld.service
# systemctl disable firewalld.service
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# setenforce 0
Reboot the servers.
# yum install wget vim curl -y
# yum clean all
# mkdir /etc/yum.repos.d/repo
# cd /etc/yum.repos.d/
# mv *.repo repo/
Download the Aliyun Base repo:
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Download the Aliyun EPEL repo:
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
Add the Ceph repo:
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
Rebuild the yum metadata cache:
# yum makecache
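As an optional sanity check (a minimal sketch, assuming the repo files above were written as shown), confirm that yum can now see the Ceph repositories before continuing:
# yum repolist enabled | grep -i ceph
# yum info ceph-deploy
If the repolist shows the ceph and ceph-noarch repos and ceph-deploy resolves to a package, the repository setup is working.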
Synchronize the clocks on all nodes.
# yum install ntp ntpdate
The configuration is straightforward and is not covered in detail here; see the sketch below.
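A minimal sketch of the time sync that was skipped above, assuming the default CentOS pool servers in /etc/ntp.conf are reachable from your network (substitute your own NTP servers otherwise). Run on every node:
# ntpdate -u 0.centos.pool.ntp.org
# systemctl enable ntpd
# systemctl start ntpd
# ntpq -p
The one-off ntpdate sets the clock before ntpd takes over; ntpq -p should list reachable peers once the daemon is running.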
# cat /etc/hosts
192.168.203.100 ceph-admin
192.168.203.150 ceph-osd1
192.168.203.200 ceph-osd2
# ssh-keygen -t rsa
Press Enter at every prompt until it finishes.
Copy the public key to each of the other servers:
# ssh-copy-id ceph-admin
# ssh-copy-id ceph-osd1
# ssh-copy-id ceph-osd2
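To confirm passwordless login works before moving on, a quick check (hostnames as defined in /etc/hosts above):
# for h in ceph-admin ceph-osd1 ceph-osd2; do ssh $h hostname; done
Each hostname should print without a password prompt.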
# mkdir ceph-cluster
# cd ceph-cluster
# yum install ceph ceph-deploy
Note: if you run into problems during installation and need to start over, run the following commands to wipe the configuration (not needed for a fresh install).
The following command removes the installed packages:
# ceph-deploy purge ceph-admin ceph-osd1 ceph-osd2
The following command wipes the data:
# ceph-deploy purgedata ceph-admin ceph-osd1 ceph-osd2
The following command removes the keys:
# ceph-deploy forgetkeys
# ceph-deploy install ceph-admin ceph-osd1 ceph-osd2
# ceph-deploy new ceph-admin
After this command runs, a ceph.conf file is generated in the current directory. Open it and add one line, indicating there are two OSDs (a sketch of the full resulting file is shown after the next command):
osd pool default size = 2
# ceph-deploy --overwrite-conf mon create ceph-admin
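For reference, after adding that line the generated ceph.conf looks roughly like the sketch below. The fsid is generated per cluster, the hostname/IP here are taken from the hosts file above, and the auth lines may differ slightly between ceph-deploy versions, so treat these values as illustrative only:
[global]
fsid = 7fe7736b-3ea6-4c8a-b3bd-81f9355a51c6
mon_initial_members = ceph-admin
mon_host = 192.168.203.100
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2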
Note: if you have several monitor nodes, check carefully that the information displayed is correct.
# ceph-deploy mon create-initial
# ceph daemon mon.`hostname` mon_status
{
    "name": "adm",
    "rank": 0,
    "state": "leader",
    "election_epoch": 3,
    "quorum": [
        0
    ],
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
        "epoch": 1,
        "fsid": "7fe7736b-3ea6-4c8a-b3bd-81f9355a51c6",
        "modified": "2017-08-27 15:25:30.486560",
        "created": "2017-08-27 15:25:30.486560",
        "mons": [
            {
                "rank": 0,
                "name": "adm",
                "addr": "192.168.203.153:6789\/0"
            }
        ]
    }
}
Allocate disk space for the OSD storage nodes (create the directory on both osd1 and osd2 and set its ownership):
# mkdir /data
# chown ceph.ceph -R /data
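A quick way to apply and verify this on both OSD nodes from ceph-admin (a sketch, relying on the passwordless SSH set up earlier):
# for h in ceph-osd1 ceph-osd2; do ssh $h 'mkdir -p /data && chown ceph.ceph -R /data && ls -ld /data'; done
Each line of output should show ceph as both owner and group of /data.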
Use ceph-deploy from the ceph-admin node to prepare the OSDs and activate them:
# ceph-deploy gatherkeys ceph-admin ceph-osd1 ceph-osd2
# ceph-deploy --overwrite-conf osd prepare ceph-osd1:/data ceph-osd2:/data
# ceph-deploy osd activate ceph-osd1:/data ceph-osd2:/data
Sync the configuration file and keyring from the ceph-admin node to the other nodes:
# ceph-deploy admin ceph-admin ceph-osd1 ceph-osd2
# chmod +r /etc/ceph/ceph.client.admin.keyring
If none of the steps above reported errors, Ceph is essentially installed.
# ceph -s
    cluster 7fe7736b-3ea6-4c8a-b3bd-81f9355a51c6
     health HEALTH_OK
     monmap e1: 1 mons at {adm=192.168.203.153:6789/0}
            election epoch 3, quorum 0 adm
     osdmap e27: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v4466: 120 pgs, 8 pools, 105 MB data, 173 objects
            13743 MB used, 22012 MB / 35756 MB avail
                 120 active+clean
# ceph health
HEALTH_OK
mon-1 is the hostname of the node where the monitor runs.
# systemctl start ceph-mon@mon-1.service
# systemctl restart ceph-mon@mon-1.service
# systemctl stop ceph-mon@mon-1.service
0 is the id of the OSD on that node; you can look it up with `ceph osd tree`.
# systemctl start/stop/restart ceph-osd@0.service
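To see which Ceph units actually exist on a node and their current state (a quick sketch; the monitor unit is named after the local hostname, as in the mon_status command earlier):
# systemctl list-units 'ceph*'
# systemctl status ceph-mon@`hostname`.service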
View OSD information:
# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03400 root default
-2 0.01700     host ceph-osd1
 4 0.01700         osd.4           up  1.00000          1.00000
-3 0.01700     host ceph-osd2
 3 0.01700         osd.3           up  1.00000          1.00000
 1       0 osd.1                 down        0          1.00000
 2       0 osd.2                 down        0          1.00000
Mark the down OSDs as out:
# ceph osd out osd.1
osd.1 is already out.
# ceph osd out osd.2
osd.2 is already out.
Remove the OSDs from the cluster:
# ceph osd rm osd.2
removed osd.2
# ceph osd rm osd.1
removed osd.1
# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03400 root default
-2 0.01700     host ceph-osd1
 4 0.01700         osd.4           up  1.00000          1.00000
-3 0.01700     host ceph-osd2
 3 0.01700         osd.3           up  1.00000          1.00000
Remove from the CRUSH map (an introduction to CRUSH: http://www.cnblogs.com/chenxianpao/p/5568207.html):
# ceph osd crush rm osd.3
Delete the authentication info for osd.3:
# ceph auth del osd.3
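Putting the steps above together, the full sequence to retire a single OSD (shown with a placeholder id N; stop the daemon on the host that owns it if it is still running) looks roughly like:
# ceph osd out osd.N
Wait for any rebalancing to finish, then on the OSD's host:
# systemctl stop ceph-osd@N.service
Back on an admin node:
# ceph osd crush rm osd.N
# ceph auth del osd.N
# ceph osd rm osd.N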
To use Ceph object storage you need to deploy an RGW gateway. Run the following step to create a new RGW instance (again using ceph-admin as the example):
# ceph-deploy rgw create ceph-admin
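Once the gateway is up it can be checked from any node; in Jewel the RGW civetweb frontend listens on port 7480 by default (adjust if you changed the rgw frontends setting in ceph.conf):
# curl http://ceph-admin:7480
A short XML response for the anonymous user indicates the gateway is answering.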
Write data and verify it
Create an ordinary file and write some data into it, then create a pool. Format: rados mkpool pool-name
# rados mkpool data
Write the file into the pool as an object. Format: rados put object-name filename --pool=pool-name
# rados put test-object-0 /tmp/aaa --pool=data
Check that the object is in the pool. Format: rados -p pool-name ls
# rados -p data ls
Locate the object. Format: ceph osd map pool-name object-name
# ceph osd map data test-object-2
osdmap e27 pool 'data' (7) object 'test-object-2' -> pg 7.cbbef8c8 (7.0) -> up ([1,0], p1) acting ([1,0], p1)
Read the object back from the pool. Format: rados get object-name --pool=pool-name filename (filename is the file you want to save it to)
# rados get test-object-0 --pool=data /tmp/myfile
Delete the object from the pool. Format: rados rm object-name --pool=pool-name
# rados rm test-object-0 --pool=data
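The whole round trip above can be checked end to end with a short sketch; the pool and object/file names are the ones used above, and the echo line simply creates the test file mentioned earlier:
# echo "hello ceph" > /tmp/aaa
# rados put test-object-0 /tmp/aaa --pool=data
# rados get test-object-0 --pool=data /tmp/myfile
# md5sum /tmp/aaa /tmp/myfile
# rados rm test-object-0 --pool=data
Matching checksums confirm the object was stored and read back intact.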
Install salt and salt-minion on the ceph-admin, ceph-osd1, and ceph-osd2 nodes:
# yum localinstall salt-2015.8.1-1.el7.noarch.rpm
# rpm -ivh salt-minion-2015.8.1-1.el7.noarch.rpm
Install salt-master on ceph-admin:
# rpm -ivh salt-master-2015.8.1-1.el7.noarch.rpm
# yum localinstall calamari-server-1.3.3-jewel.el7.centos.x86_64.rpm
# yum install mod_wsgi -y
Initialize Calamari:
# calamari-ctl initialize
You will be prompted for a username, email address, and password.
To change the Calamari password:
Format: calamari-ctl change_password --password {password} {user-name}
# calamari-ctl change_password --password 1234567 root
# rpm -ivh diamond-3.4.68-jewel.noarch.rpm
# mv /etc/diamond/diamond.conf.example /etc/diamond/diamond.conf
You can change how often the data is refreshed; the following two files control the refresh interval. Edit /etc/graphite/storage-schemas.conf (the default is 60s):
[calamari]
pattern = .*
retentions = 60s:1d,15m:7d
For example, retentions = 60s:1d,15m:7d can be changed to retentions = 30s:1d,15m:7d (keep 30-second samples for one day and 15-minute samples for seven days).
Edit /etc/diamond/diamond.conf:
By default this line is commented out as #interval = 300; change it to interval = 120.
If Calamari has not been initialized yet, you can edit the template instead; note that initialization overwrites the configuration with the template file /opt/calamari/salt/salt/base/diamond.conf.
Edit the diamond configuration file /etc/diamond/diamond.conf:
# Graphite server host
host = adm
This host must be the hostname of your Calamari management server: diamond collects cluster and hardware metrics and sends them to the carbon process on the management machine, which stores them in the whisper database, so every machine that reports data needs this change. After editing, restart diamond:
# /etc/init.d/diamond restart
Edit the salt-minion configuration file /etc/salt/minion:
master: adm
Run the following on every node; the trailing arguments are the node hostnames.
# ceph-deploy calamari connect ceph-admin ceph-osd1 ceph-osd2
# cat /etc/salt/minion.d/calamari.conf
master: ceph-admin
Restart the service:
# systemctl restart salt-minion.service
Accept the minion keys on the salt-master (i.e. the server where calamari-server is installed). List the pending key requests:
# salt-key -L
Accept the key requests:
# salt-key -A
Check that they were accepted, then run a quick test:
# salt-key -L
# salt '*' test.ping
# salt '*' ceph.get_heartbeats
Set permissions on the calamari-server log files:
# cd /var/log/calamari
# chmod 777 -R *
# service supervisord restart
romana is the web management UI for the cluster; install it on the calamari-server node:
# rpm -ivh romana-1.2.2-36_gc62bb5b.el7.centos.x86_64.rpm
To access the web management platform, browse to this machine's IP address; the default port is 80.
Judging by the deployment process, the file-write tests, the monitoring UI, and the overall experience, this stack can be abandoned; frankly, it is awful.