Notes on Single-Node and Multi-Node Ceph Installation on Ubuntu
崔炳華
February 26, 2014
Ceph is a distributed file system that adds replication and fault tolerance while remaining POSIX-compatible. Its most distinctive feature is the distributed metadata service, which uses CRUSH (Controlled Replication Under Scalable Hashing), a pseudo-random, hash-like placement algorithm, to decide where data is stored. At the core of Ceph is RADOS (Reliable Autonomic Distributed Object Store), an object storage cluster that itself provides high availability, fault detection, and recovery for objects.
The Ceph ecosystem can be divided into four parts: clients, metadata servers (MDS), the object storage cluster (OSD), and the cluster monitors (MON).
Ceph scales to hundreds or even thousands of nodes, and ideally these four parts are spread across different nodes. For basic testing, however, you can put the mon and the mds on the same node, or even deploy all four parts on a single node.
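Once the daemons are running (after the installation steps below), a quick way to see which of these parts a given node is actually hosting is to look for the corresponding processes; this is just a convenience check, not a required step:
# ps aux | grep -E 'ceph-(mon|osd|mds)'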
If the network is slow or package installation fails, consider switching to a mirror in China:
# sudo sed -i 's#us.archive.ubuntu.com#mirrors.163.com#g' /etc/apt/sources.list
# sudo apt-get update
Ubuntu 12.04 ships Ceph 0.41 by default. To install a newer Ceph release, add the release key to APT and update sources.list:
# sudo wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
# sudo echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
# sudo apt-get update
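As an optional sanity check (not part of the original steps), you can confirm that APT now prefers the ceph.com repository before installing anything:
# apt-cache policy ceph # the candidate version should now come from ceph.com rather than the stock 0.41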
# date # check whether the system time is correct; if it is, skip the next two steps
# sudo date -s "2013-11-04 15:05:57" # set the system time
# sudo hwclock -w # write the time to the hardware clock
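Keeping the clocks in sync matters because Ceph monitors warn about clock skew. If you prefer not to set the time by hand, a sketch using NTP (assuming the ntp and ntpdate packages are acceptable on your nodes):
# sudo apt-get install ntp # keep the clock in sync continuously
# sudo ntpdate -u pool.ntp.org # or do a one-off sync (from the ntpdate package)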
Make sure SELinux is disabled (Ubuntu does not enable it by default).
It is also recommended to disable the firewall:
# sudo ufw disable # disable the firewall
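If disabling the firewall entirely is not an option, a possible alternative (not from the original write-up) is to open only the ports Ceph needs: the monitor listens on 6789, and OSD/MDS daemons use ports in roughly the 6800-7100 range:
# sudo ufw allow 6789/tcp
# sudo ufw allow 6800:7100/tcp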
Install the Ceph packages:
# apt-get install ceph ceph-common ceph-mds
# ceph -v # shows the ceph version and key information
Create the Ceph configuration file:
# vim /etc/ceph/ceph.conf
[global]
max open files = 131072
#For version 0.55 and beyond, you must explicitly enable
#or disable authentication with "auth" entries in [global].
auth cluster required = none
auth service required = none
auth client required = none
[osd]
osd journal size = 1000
#The following assumes ext4 filesystem.
filestore xattr use omap = true
#For Bobtail (v 0.56) and subsequent versions, you may
#add settings for mkcephfs so that it will create and mount
#the file system on a particular OSD for you. Remove the comment `#`
#character for the following settings and replace the values
#in braces with appropriate values, or leave the following settings
#commented out to accept the default values. You must specify the
#--mkfs option with mkcephfs in order for the deployment script to
#utilize the following settings, and you must define the 'devs'
#option for each osd instance; see below.
osd mkfs type = xfs
osd mkfs options xfs = -f # default for xfs is "-f"
osd mount options xfs = rw,noatime # default mount option is "rw,noatime"
#For example, for ext4, the mount option might look like this:
#osd mount options ext4 = user_xattr,rw,noatime
#Execute $ hostname to retrieve the name of your host,
#and replace {hostname} with the name of your host.
#For the monitor, replace {ip-address} with the IP
#address of your host.
[mon.a]
host = ceph1
mon addr = 192.168.73.129:6789
[osd.0]
host = ceph1
#For Bobtail (v 0.56) and subsequent versions, you may
#add settings for mkcephfs so that it will create and mount
#the file system on a particular OSD for you. Remove the comment `#`
#character for the following setting for each OSD and specify
#a path to the device if you use mkcephfs with the --mkfs option.
devs = /dev/sdb1
[osd.1]
host = ceph1
devs = /dev/sdb2
[mds.a]
host = ceph1
Note: for older Ceph versions (e.g. 0.42), you also need to add the line mon data = /data/$name under the [mon] section and the line osd data = /data/$name under the [osd] section to serve as the data directories; the later steps that deal with the data directories must then be adjusted accordingly.
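As an illustration of that note, the extra entries would look roughly like this in ceph.conf (the /data/$name path is just the example used above):
[mon]
mon data = /data/$name
[osd]
osd data = /data/$name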
Create the data directories:
# mkdir -p /var/lib/ceph/osd/ceph-0
# mkdir -p /var/lib/ceph/osd/ceph-1
# mkdir -p /var/lib/ceph/mon/ceph-a
# mkdir -p /var/lib/ceph/mds/ceph-a
Format the new partitions with xfs or btrfs:
# mkfs.xfs -f /dev/sdb1
# mkfs.xfs -f /dev/sdb2
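The commands above use xfs, matching the osd mkfs type setting in ceph.conf. If you prefer btrfs instead, the formatting step would look roughly like this (and the osd mkfs/mount options in ceph.conf would need to be adjusted to match):
# mkfs.btrfs /dev/sdb1
# mkfs.btrfs /dev/sdb2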
For the first run, each partition must be mounted first so that the initialization data can be written:
# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
# mount /dev/sdb2 /var/lib/ceph/osd/ceph-1
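These mounts do not persist across reboots. One optional way to make them permanent (an addition, not part of the original steps) is to record them in /etc/fstab, for example:
# echo '/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs rw,noatime 0 0' | sudo tee -a /etc/fstab
# echo '/dev/sdb2 /var/lib/ceph/osd/ceph-1 xfs rw,noatime 0 0' | sudo tee -a /etc/fstab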
Note: before every initialization, the Ceph service must be stopped first and the existing data directories cleared:
# /etc/init.d/ceph stop
# rm -rf /var/lib/ceph/*/ceph-*/*
Then run the initialization on the node hosting the mon:
# sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph1.keyring
Note: whenever the configuration file ceph.conf changes, it is best to rerun the initialization.
On the node hosting the mon, start the services:
# sudo service ceph -a start
Note: this step may produce the following message:
=== osd.0 ===
Mounting xfs on ceph4:/var/lib/ceph/osd/ceph-0
Error ENOENT: osd.0 does not exist. create it before updating the crush map
Running the following command and then re-running the start command above resolves it:
# ceph osd create
# sudo ceph health # you can also check the status with the ceph -s command
If it returns HEALTH_OK, the installation succeeded.
Note: if you see a message like the following:
HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds
or like this:
HEALTH_WARN 178 pgs peering; 178 pgs stuck inactive; 429 pgs stuck unclean; recovery 2/24 objects degraded (8.333%)
it can be resolved by running the following command:
# ceph pg dump_stuck stale && ceph pg dump_stuck inactive && ceph pg dump_stuck unclean
If you see the following:
HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42 degraded (50.000%)
it means there are not enough OSDs; by default Ceph needs at least two OSDs.
On the client, mount from the node hosting the mon:
# sudo mkdir /mnt/mycephfs
# sudo mount -t ceph 192.168.73.129:6789:/ /mnt/mycephfs
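If the kernel Ceph client is not available (for example the ceph module cannot be loaded), ceph-fuse from the ceph-fuse package is a possible alternative; a sketch:
# sudo apt-get install ceph-fuse
# sudo ceph-fuse -m 192.168.73.129:6789 /mnt/mycephfs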
Verify on the client:
# df -h # if /mnt/mycephfs shows up with usage information, Ceph was installed successfully.
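Beyond df, a simple read/write test (an extra check, not part of the original steps) can confirm the mount actually works:
# dd if=/dev/zero of=/mnt/mycephfs/testfile bs=1M count=100 # write 100 MB
# ls -lh /mnt/mycephfs
# rm /mnt/mycephfs/testfile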
For the multi-node case, Ceph requires that each node has its own hostname, that the nodes can reach one another by hostname, and that they can SSH to one another without a password; the steps below set this up.
Set the corresponding hostname on each node, for example:
# vim /etc/hostname
ceph1
Edit /etc/hosts and add the following lines:
192.168.73.129 ceph1
192.168.73.130 ceph2
192.168.73.131 ceph3
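To confirm that name resolution works, each node should be able to reach the others by hostname, for example:
# ping -c 1 ceph2
# ping -c 1 ceph3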
Create an RSA key pair on each node:
# ssh-keygen -t rsa # just press Enter at every prompt
# touch /root/.ssh/authorized_keys
Configure ceph1 first, so that ceph1 can access ceph2 and ceph3 without a password:
ceph1# scp /root/.ssh/id_rsa.pub ceph2:/root/.ssh/id_rsa.pub_ceph1
ceph1# scp /root/.ssh/id_rsa.pub ceph3:/root/.ssh/id_rsa.pub_ceph1
ceph1# ssh ceph2 "cat /root/.ssh/id_rsa.pub_ceph1 >> /root/.ssh/authorized_keys"
ceph1# ssh ceph3 "cat /root/.ssh/id_rsa.pub_ceph1 >> /root/.ssh/authorized_keys"
Nodes ceph2 and ceph3 must be configured in the same way, using the commands above.
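A quick way to confirm the passwordless access works (an extra check, not in the original steps):
ceph1# ssh ceph2 hostname # should print "ceph2" without prompting for a password
ceph1# ssh ceph3 hostname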
Install the Ceph packages on each node:
# apt-get install ceph ceph-common ceph-mds
# ceph -v # shows the ceph version and key information
Create the Ceph configuration file:
# vim /etc/ceph/ceph.conf
[global]
max open files = 131072
#For version 0.55 and beyond, you must explicitly enable
#or disable authentication with "auth" entries in [global].
auth cluster required = none
auth service required = none
auth client required = none
[osd]
osd journal size = 1000
#The following assumes ext4 filesystem.
filestore xattr use omap = true
#For Bobtail (v 0.56) and subsequent versions, you may
#add settings for mkcephfs so that it will create and mount
#the file system on a particular OSD for you. Remove the comment `#`
#character for the following settings and replace the values
#in braces with appropriate values, or leave the following settings
#commented out to accept the default values. You must specify the
#--mkfs option with mkcephfs in order for the deployment script to
#utilize the following settings, and you must define the 'devs'
#option for each osd instance; see below.
osd mkfs type = xfs
osd mkfs options xfs = -f # default for xfs is "-f"
osd mount options xfs = rw,noatime # default mount option is "rw,noatime"
#For example, for ext4, the mount option might look like this:
#osd mount options ext4 = user_xattr,rw,noatime
#Execute $ hostname to retrieve the name of your host,
#and replace {hostname} with the name of your host.
#For the monitor, replace {ip-address} with the IP
#address of your host.
[mon.a]
host = ceph3
mon addr = 192.168.73.131:6789
[osd.0]
host = ceph1
#For Bobtail (v 0.56) and subsequent versions, you may
#add settings for mkcephfs so that it will create and mount
#the file system on a particular OSD for you. Remove the comment `#`
#character for the following setting for each OSD and specify
#a path to the device if you use mkcephfs with the --mkfs option.
devs = /dev/sdb1
[osd.1]
host = ceph2
devs = /dev/sdb1
[mds.a]
host = ceph3
After the configuration file has been created, it must also be copied to every node except pure clients (and kept consistent across them from then on):
ceph1# scp /etc/ceph/ceph.conf ceph2:/etc/ceph/ceph.conf
ceph1# scp /etc/ceph/ceph.conf ceph3:/etc/ceph/ceph.conf
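To make sure the copies really are identical, one simple check (not part of the original steps) is to compare checksums:
ceph1# md5sum /etc/ceph/ceph.conf
ceph1# ssh ceph2 md5sum /etc/ceph/ceph.conf
ceph1# ssh ceph3 md5sum /etc/ceph/ceph.conf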
Create the data directories on each node:
# mkdir -p /var/lib/ceph/osd/ceph-0
# mkdir -p /var/lib/ceph/osd/ceph-1
# mkdir -p /var/lib/ceph/mon/ceph-a
# mkdir -p /var/lib/ceph/mds/ceph-a
On the OSD nodes ceph1 and ceph2, format the new partition with xfs or btrfs:
# mkfs.xfs -f /dev/sdb1
On ceph1 and ceph2, for the first run, each partition must first be mounted so that the initialization data can be written:
ceph1# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
ceph2# mount /dev/sdb1 /var/lib/ceph/osd/ceph-1
Note: before every initialization, the Ceph service must be stopped on every node and the existing data directories cleared:
# /etc/init.d/ceph stop
# rm -rf /var/lib/ceph/*/ceph-*/*
Then run the initialization on ceph3, the node hosting the mon:
# sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph3.keyring
Note: whenever the configuration file ceph.conf changes, it is best to rerun the initialization.
On ceph3, the node hosting the mon, run:
# sudo service ceph -a start
Note: this step may produce the following message:
=== osd.0 ===
Mounting xfs on ceph4:/var/lib/ceph/osd/ceph-0
Error ENOENT: osd.0 does not exist. create it before updating the crush map
Running the following command and then re-running the start command above resolves it:
# ceph osd create
# sudo ceph health # you can also check the status with the ceph -s command
If it returns HEALTH_OK, the installation succeeded.
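In the multi-node case it is also worth confirming that both OSDs came up on their respective hosts; one way to do so (an extra check, not in the original steps) is:
# ceph osd tree # osd.0 on ceph1 and osd.1 on ceph2 should both show as up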
Note: if you see a message like the following:
HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds
or like this:
HEALTH_WARN 178 pgs peering; 178 pgs stuck inactive; 429 pgs stuck unclean; recovery 2/24 objects degraded (8.333%)
it can be resolved by running the following command:
# ceph pg dump_stuck stale && ceph pg dump_stuck inactive && ceph pg dump_stuck unclean
If you see the following:
HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42 degraded (50.000%)
it means there are not enough OSDs; by default Ceph needs at least two OSDs.
On the client (node ceph3), mount from the node hosting the mon (also ceph3):
# sudo mkdir /mnt/mycephfs
# sudo mount -t ceph 192.168.73.131:6789:/ /mnt/mycephfs
Verify on the client:
# df -h # if /mnt/mycephfs shows up with usage information, Ceph was installed successfully.