Several ways to install Ceph, a distributed storage system: from source, with apt-get, and with the ceph-deploy tool (Ubuntu / CentOS)

I have recently been working with Ceph, a distributed PB-scale storage system.
I tried several different ways of installing and using it,
and ran into quite a few problems along the way, which I am sharing here.

Part 1: Installing from source
Note: installing from source lets you get to know every component of the system, but the process is quite laborious, mainly because of the large number of dependencies. I tried it on both CentOS and Ubuntu, and it can be completed successfully on either.

1. Download Ceph: http://ceph.com/download/


2. Install the build tools:
apt-get install automake autoconf libtool make

3. Extract the source and run autogen:
#tar zxvf ceph-0.72.tar.gz
#cd ceph-0.72
#./autogen.sh

4. Install the dependency packages first:

#apt-get install autotools-dev autoconf automake cdbs g++ gcc git libatomic-ops-dev libboost-dev \
libcrypto++-dev libcrypto++ libedit-dev libexpat1-dev libfcgi-dev libfuse-dev \
libgoogle-perftools-dev libgtkmm-2.4-dev libtool pkg-config uuid-dev libkeyutils-dev \
btrfs-tools python
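
The list above targets Ubuntu/Debian. Since the build was also tried on CentOS, here is a rough yum equivalent; the package names are my own mapping (several of them come from EPEL), not taken from the original notes:

yum install -y epel-release
yum install -y gcc gcc-c++ make automake autoconf libtool git pkgconfig \
boost-devel libedit-devel expat-devel fcgi-devel fuse-devel \
gperftools-devel libatomic_ops-devel libuuid-devel keyutils-libs-devel \
cryptopp-devel libaio-devel snappy-devel leveldb-devel btrfs-progs python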


5. Errors you may run into during configure/make:

5.1 fuse:
apt-get install libfuse-dev          (on CentOS: yum install fuse-devel)

5.2 tcmalloc:
wget https://gperftools.googlecode.com/files/gperftools-2.1.zip
Install google-perftools (gperftools) from this archive.
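
A minimal sketch of building gperftools from that archive, assuming it unpacks into a gperftools-2.1 directory (standard autotools flow):

unzip gperftools-2.1.zip
cd gperftools-2.1
./configure
make
make install      # installs libtcmalloc under /usr/local
ldconfig          # refresh the linker cache so Ceph's configure can find it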

5.3 libedit:
apt-get install libedit-dev          (on CentOS: yum install libedit-devel)

5.4 no libatomic-ops found:
apt-get install libatomic-ops-dev    (on CentOS: yum install libatomic_ops-devel)

5.5 snappy:
apt-get install libsnappy-dev        (on CentOS: yum install snappy-devel)

5.6 libleveldb not found (build LevelDB from source, then copy the library and headers into place):
make
cp libleveldb.* /usr/lib
cp -r include/leveldb /usr/local/include
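
The three commands above assume you are inside a LevelDB source tree. A minimal sketch of getting there, assuming today's GitHub location of the project (LevelDB of that era built with a plain Makefile):

git clone https://github.com/google/leveldb.git
cd leveldb
make              # produces libleveldb.a / libleveldb.so in the source directory

After that, the cp commands above put the library and headers where Ceph's configure script can find them.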

5.7 libaio:
apt-get install libaio-dev

5.8 boost:
apt-get install libboost-dev
apt-get install libboost-thread-dev
apt-get install  libboost-program-options-dev

5.9 g++:
apt-get install g++
6. Configure, compile and install:
#./configure --prefix=/opt/ceph/
#make
#make install
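
A quick sanity check after make install, assuming the --prefix used above (the layout under the prefix can differ slightly between Ceph versions):

export PATH=/opt/ceph/bin:$PATH   # make the freshly built binaries visible
ceph -v                           # should report the version you just compiled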







Part 2: Using the Ceph that ships with Ubuntu 12.04 (the version is probably ceph 0.41)

Setup:

Two machines: one server and one client, both running Ubuntu 12.04.

When installing the server, set aside two extra partitions to serve as the storage for osd0 and osd1. If you do not have them, you can also create two loop devices once the system is installed (see the sketch below).
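
A minimal sketch of faking those two partitions with loop devices; the file names, sizes and loop device numbers here are only examples, not from the original:

dd if=/dev/zero of=/srv/osd0.img bs=1M count=4096   # 4 GB backing file for osd0
dd if=/dev/zero of=/srv/osd1.img bs=1M count=4096   # backing file for osd1
losetup /dev/loop0 /srv/osd0.img                    # attach them as block devices
losetup /dev/loop1 /srv/osd1.img
mkfs.xfs -f /dev/loop0                              # format them like real disks (XFS, as used later)
mkfs.xfs -f /dev/loop1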


1. Install Ceph on the server (MON, MDS, OSD):
apt-cache search ceph
apt-get install ceph
apt-get install ceph-common

2. Add the release key to APT, update sources.list, and install Ceph from the ceph.com repository:

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -

echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

apt-get update && sudo apt-get install ceph


3. Check the version:

# ceph -v    // prints the Ceph version information

If nothing is shown, run the following:

# sudo apt-get update && apt-get upgrade


4. Configuration file:
# vim /etc/ceph/ceph.conf

[global] 
 
    # For version 0.55 and beyond, you must explicitly enable  
    # or disable authentication with "auth" entries in [global]. 
     
    auth cluster required = none 
    auth service required = none 
    auth client required = none 
 
[osd] 
    osd journal size = 1000 
     
    #The following assumes ext4 filesystem. 
    filestore xattr use omap = true 
 
 
    # For Bobtail (v 0.56) and subsequent versions, you may  
    # add settings for mkcephfs so that it will create and mount 
    # the file system on a particular OSD for you. Remove the comment `#`  
    # character for the following settings and replace the values  
    # in braces with appropriate values, or leave the following settings  
    # commented out to accept the default values. You must specify the  
    # --mkfs option with mkcephfs in order for the deployment script to  
    # utilize the following settings, and you must define the 'devs' 
    # option for each osd instance; see below. 
 
    osd mkfs type = xfs 
    osd mkfs options xfs = -f   # default for xfs is "-f"    
    osd mount options xfs = rw,noatime # default mount option is "rw,noatime" 
 
    # For example, for ext4, the mount option might look like this: 
     
    #osd mkfs options ext4 = user_xattr,rw,noatime 
 
    # Execute $ hostname to retrieve the name of your host, 
    # and replace {hostname} with the name of your host. 
    # For the monitor, replace {ip-address} with the IP 
    # address of your host. 
 
[mon.a] 
 
    host = ceph1 
    mon addr = 192.168.1.1:6789 
 
[osd.0] 
    host = ceph1 
     
    # For Bobtail (v 0.56) and subsequent versions, you may  
    # add settings for mkcephfs so that it will create and mount 
    # the file system on a particular OSD for you. Remove the comment `#`  
    # character for the following setting for each OSD and specify  
    # a path to the device if you use mkcephfs with the --mkfs option. 
     
    devs = /dev/sdb1 
 
[mds.a] 
    host = ceph1 

5. Initialize the cluster:

sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring

Note: before every initialization you need to wipe the old data directories and recreate them:

rm -rf /var/lib/ceph/osd/ceph-0/*

rm -rf /var/lib/ceph/osd/ceph-1/*

rm -rf /var/lib/ceph/mon/ceph-a/*

rm -rf /var/lib/ceph/mds/ceph-a/*


mkdir -p /var/lib/ceph/osd/ceph-0

mkdir -p /var/lib/ceph/osd/ceph-1

mkdir -p /var/lib/ceph/mon/ceph-a

mkdir -p /var/lib/ceph/mds/ceph-a

6. Start the services:

service ceph -a start

7. Run a health check:

 ceph health
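
If everything came up cleanly, the expected output is simply:

HEALTH_OK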

8. With ext4 on the data disk I hit a "mount error 5"; reformatting the partition with XFS fixed it:

mkfs.xfs -f /dev/sda7


9. On the client:

sudo mkdir /mnt/mycephfs

sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
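
With the monitor address from the sample ceph.conf above, and with cephx disabled there (auth ... = none), no secret is needed; for example:

sudo mount -t ceph 192.168.1.1:6789:/ /mnt/mycephfs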


Part 3: Installing with ceph-deploy

1. Download ceph-deploy:


https://github.com/ceph/ceph-deploy/archive/master.zip


2. Install python-virtualenv and run the bootstrap script inside the unpacked ceph-deploy directory:

apt-get install python-virtualenv

 ./bootstrap 
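
Spelled out, assuming the master.zip downloaded above (bootstrap builds a Python virtualenv and leaves a ceph-deploy wrapper in the source directory; details may vary between ceph-deploy versions):

unzip master.zip
cd ceph-deploy-master
./bootstrap
./ceph-deploy --version    # verify the wrapper works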

3. Install Ceph on the target node:

ceph-deploy install ubuntu1

4. Create a new cluster configuration:

ceph-deploy new ubuntu1

5. Create the monitor:

ceph-deploy mon create ubuntu1

6. Gather the keys:

ceph-deploy gatherkeys

If it complains that there is no keyring, run:

ceph-deploy forgetkeys

after which the following keyrings will be generated:

{cluster-name}.client.admin.keyring

{cluster-name}.bootstrap-osd.keyring

{cluster-name}.bootstrap-mds.keyring


7. Create the OSD (host:disk path):

ceph-deploy osd create ubuntu1:/dev/sdb1

Possible errors:

1. The disk is already mounted: unmount it with umount.

2. Disk formatting problems: partition the disk with fdisk, then format it with mkfs.xfs -f /dev/sdb1 (see the combined sketch below).
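
A combined sketch of working around both problems before retrying, assuming the same host and disk as above:

umount /dev/sdb1                          # in case the partition was auto-mounted
mkfs.xfs -f /dev/sdb1                     # repartition with fdisk first if needed, then format
ceph-deploy osd create ubuntu1:/dev/sdb1  # retry the OSD creation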

8. Check the cluster status:

ceph -s

A possible error:

it reports that there are no OSDs:

 health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds

In that case, run ceph osd create.


9. Typical ceph -s output at this point:

    cluster faf5e4ae-65ff-4c95-ad86-f1b7cbff8c9a

     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean

     monmap e1: 1 mons at {ubuntu1=12.0.0.115:6789/0}, election epoch 1, quorum 0 ubuntu1

     osdmap e10: 3 osds: 1 up, 1 in

      pgmap v17: 192 pgs, 3 pools, 0 bytes data, 0 objects

            1058 MB used, 7122 MB / 8181 MB avail

                 192 active+degraded


10. Mounting on the client

Note: you have to mount with a user name and secret key.

10.1 Look up the secret key:

cat /etc/ceph/ceph.client.admin.keyring 

ceph-authtool --print-key ceph.client.admin.keyring

AQDNE4xSyN1WIRAApD1H/glMB5VSLwmmnt7UDw==

10.2 Mount:
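
A sketch of the mount command, using the monitor address from the ceph -s output above and the admin key printed in 10.1 (the mount point and the admin user name are assumptions on my part):

sudo mkdir -p /mnt/mycephfs
sudo mount -t ceph 12.0.0.115:6789:/ /mnt/mycephfs -o name=admin,secret=AQDNE4xSyN1WIRAApD1H/glMB5VSLwmmnt7UDw==

To avoid exposing the key on the command line, you can also store it in a file and pass secretfile=/path/to/file instead of secret=.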


Other notes:

1. Set up passwordless SSH authentication between the machines with ssh-keygen (see the sketch after this list).

2. It is best to have a dedicated disk partition for the storage; there are also several different ways to format it.

3. You will always run into errors of one kind or another; each one has to be analyzed and solved on its own.
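
For note 1, a minimal sketch of setting up the passwordless SSH that ceph-deploy relies on (the host name is just an example):

ssh-keygen -t rsa              # accept the defaults, empty passphrase
ssh-copy-id root@ubuntu1       # repeat for every node ceph-deploy will talk to
ssh root@ubuntu1 true          # verify that no password prompt appears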

