A Detailed Ceph Installation and Deployment Tutorial (Multiple Monitor Nodes)

Part 1. Preparation: install the ceph-deploy tool

   All servers are logged in as the root user.

Step 1. Installation environment

   OS: CentOS 6.5

   Machines: 1 admin-node (running ceph-deploy), 1 monitor node, 2 OSD nodes

Step 2. Disable the firewall and SELinux on all nodes, then reboot.

 service iptables stop

 sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

 chkconfig iptables off
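The sed expression above can be sanity-checked on a scratch file before pointing it at the real /etc/selinux/config; a minimal sketch (the temp file is mine, not part of the tutorial):

```shell
# Try the SELinux sed expression on a scratch copy first.
cfg=$(mktemp)
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > "$cfg"
sed -i '/SELINUX/s/enforcing/disabled/' "$cfg"
grep '^SELINUX=' "$cfg"    # prints: SELINUX=disabled
rm -f "$cfg"
```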

 

Step 3. Configure the Ceph yum repository on the admin-node

vi /etc/yum.repos.d/ceph.repo 

[ceph-noarch]

name=Ceph noarch packages

baseurl=http://ceph.com/rpm/el6/noarch/

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

Step 4. Install the EPEL repository from the Sohu mirror

   rpm -ivh http://mirrors.sohu.com/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm

Step 5. Refresh the yum cache and update the admin-node

    yum clean all

    yum update -y

Step 6. Create a Ceph cluster directory on the admin-node

   mkdir /ceph

   cd  /ceph

Step 7. Install the ceph-deploy tool on the admin-node

    yum install ceph-deploy -y

Step 8. Configure the hosts file on the admin-node

  vi /etc/hosts

10.240.240.210 admin-node

10.240.240.211 node1

10.240.240.212 node2

10.240.240.213 node3
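The same four entries need to reach every node; a small sketch that appends them without duplicating lines on re-runs. `HOSTS_FILE` is my variable and defaults to a scratch file here, so point it at /etc/hosts on the real machines:

```shell
# Append each "IP name" pair to HOSTS_FILE unless the name is already there,
# so re-running the script never duplicates entries.
HOSTS_FILE=${HOSTS_FILE:-hosts.demo}   # use /etc/hosts on the real nodes
touch "$HOSTS_FILE"
while read -r ip name; do
  grep -q " $name\$" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
10.240.240.210 admin-node
10.240.240.211 node1
10.240.240.212 node2
10.240.240.213 node3
EOF
cat "$HOSTS_FILE"
```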


Part 2. Configure passwordless SSH from ceph-deploy to every Ceph node

Step 1. Install an SSH server on every Ceph node

   [ceph@node3 ~]$ yum install openssh-server -y

Step 2. Configure passwordless SSH access from the admin-node to every Ceph node. Generate a key pair on the admin-node:

[root@ceph-deploy ceph]# ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa): 

Enter passphrase (empty for no passphrase): 

Enter same passphrase again: 

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.


Step 3. Copy the admin-node's public key to every Ceph node

 ssh-copy-id root@admin-node

 ssh-copy-id root@node1

 ssh-copy-id root@node2

 ssh-copy-id root@node3

Step 4. Verify that every Ceph node can be logged in to without a password

 ssh root@node1

 ssh root@node2

 ssh root@node3

Step 5. Edit the ~/.ssh/config file on the admin-node so that ceph-deploy logs in to each Ceph node as the intended user:

Host admin-node

  Hostname admin-node

  User root   

Host node1

  Hostname node1

  User root

Host node2

  Hostname node2

  User root

Host node3

  Hostname node3

  User root
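Typing those stanzas by hand gets tedious as nodes are added; a sketch that generates them from a host list (redirect the output into ~/.ssh/config yourself):

```shell
# Emit one "Host" stanza per node, matching the format above.
for host in admin-node node1 node2 node3; do
  printf 'Host %s\n  Hostname %s\n  User root\n' "$host" "$host"
done
```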

Part 3. Deploy the Ceph cluster with the ceph-deploy tool

Step 1. Create a new Ceph cluster on the admin-node

[root@admin-node ceph]#  ceph-deploy new node1 node2 node3      (after this command node1, node2 and node3 all act as monitor nodes; multiple mon nodes back each other up)

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy new node1 node2 node3

[ceph_deploy.new][DEBUG ] Creating new cluster named ceph

[ceph_deploy.new][DEBUG ] Resolving host node1

[ceph_deploy.new][DEBUG ] Monitor node1 at 10.240.240.211

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[node1][DEBUG ] connected to host: admin-node 

[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1

[ceph_deploy.new][DEBUG ] Resolving host node2

[ceph_deploy.new][DEBUG ] Monitor node2 at 10.240.240.212

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[node2][DEBUG ] connected to host: admin-node 

[node2][INFO  ] Running command: ssh -CT -o BatchMode=yes node2

[ceph_deploy.new][DEBUG ] Resolving host node3

[ceph_deploy.new][DEBUG ] Monitor node3 at 10.240.240.213

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[node3][DEBUG ] connected to host: admin-node 

[node3][INFO  ] Running command: ssh -CT -o BatchMode=yes node3

[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1', 'node2', 'node3']

[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.240.240.211', '10.240.240.212', '10.240.240.213']

[ceph_deploy.new][DEBUG ] Creating a random mon key...

[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

List the generated files:

[root@admin-node ceph]# ls

ceph.conf  ceph.log  ceph.mon.keyring

Inspect the Ceph configuration file; all three nodes have become monitor nodes:

[root@admin-node ceph]# cat ceph.conf 

[global]

auth_service_required = cephx

filestore_xattr_use_omap = true

auth_client_required = cephx

auth_cluster_required = cephx

mon_host = 10.240.240.211,10.240.240.212,10.240.240.213

mon_initial_members = node1, node2, node3

fsid = 4dc38af6-f628-4c1f-b708-9178cf4e032b


[root@admin-node ceph]# 


Step 2. Before deploying, make sure no Ceph node holds leftover Ceph data (clear out all previous Ceph data first; skip this step on a fresh install, but run the commands below when redeploying):

[root@ceph-deploy ceph]# ceph-deploy purgedata admin-node node1 node2 node3  

[root@ceph-deploy ceph]# ceph-deploy forgetkeys

[root@ceph-deploy ceph]# ceph-deploy purge admin-node node1 node2 node3


  A fresh install has no data to purge.


Step 3. Edit the Ceph configuration file on the admin-node and add the following setting to ceph.conf:

   osd pool default size = 2
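With only two OSD hosts, the default of three replicas can never be satisfied, which is why the pool size is lowered to 2. The setting belongs in the [global] section; a sketch that splices it in on a scratch copy of ceph.conf (the demo file name is mine):

```shell
# Insert the replica setting right after the [global] header.
conf=ceph.conf.demo
printf '[global]\nfsid = 4dc38af6-f628-4c1f-b708-9178cf4e032b\n' > "$conf"
sed -i '/^\[global\]/a osd pool default size = 2' "$conf"
cat "$conf"
rm -f "$conf"
```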


Step 4. From the admin-node, install Ceph onto every node with ceph-deploy

[root@admin-node ceph]# ceph-deploy install admin-node node1 node2 node3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy install admin-node node1 node2 node3

[ceph_deploy.install][DEBUG ] Installing stable version firefly on cluster ceph hosts admin-node node1 node2 node3

[ceph_deploy.install][DEBUG ] Detecting platform for host admin-node ...

[admin-node][DEBUG ] connected to host: admin-node 

[admin-node][DEBUG ] detect platform information from remote host

[admin-node][DEBUG ] detect machine type

[ceph_deploy.install][INFO  ] Distro info: CentOS 6.5 Final

[admin-node][INFO  ] installing ceph on admin-node

[admin-node][INFO  ] Running command: yum clean all

[admin-node][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[admin-node][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates

[admin-node][DEBUG ] Cleaning up Everything

[admin-node][DEBUG ] Cleaning up list of fastest mirrors

[admin-node][INFO  ] Running command: yum -y install wget

[admin-node][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[admin-node][DEBUG ] Determining fastest mirrors

[admin-node][DEBUG ]  * base: mirrors.btte.net

[admin-node][DEBUG ]  * epel: mirrors.neusoft.edu.cn

[admin-node][DEBUG ]  * extras: mirrors.btte.net

[admin-node][DEBUG ]  * updates: mirrors.btte.net

[admin-node][DEBUG ] Setting up Install Process

[admin-node][DEBUG ] Package wget-1.12-1.11.el6_5.x86_64 already installed and latest version

[admin-node][DEBUG ] Nothing to do

[admin-node][INFO  ] adding EPEL repository

[admin-node][INFO  ] Running command: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[admin-node][WARNIN] --2014-06-07 22:05:34--  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[admin-node][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.24, 209.132.181.25, 209.132.181.26, ...

[admin-node][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.24|:80... connected.

[admin-node][WARNIN] HTTP request sent, awaiting response... 200 OK

[admin-node][WARNIN] Length: 14540 (14K) [application/x-rpm]

[admin-node][WARNIN] Saving to: `epel-release-6-8.noarch.rpm.1'

[admin-node][WARNIN] 

[admin-node][WARNIN]      0K .......... ....                                       100% 73.8K=0.2s

[admin-node][WARNIN] 

[admin-node][WARNIN] 2014-06-07 22:05:35 (73.8 KB/s) - `epel-release-6-8.noarch.rpm.1' saved [14540/14540]

[admin-node][WARNIN] 

[admin-node][INFO  ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm

[admin-node][DEBUG ] Preparing...                ##################################################

[admin-node][DEBUG ] epel-release                ##################################################

[admin-node][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[admin-node][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[admin-node][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[admin-node][DEBUG ] Preparing...                ##################################################

[admin-node][DEBUG ] ceph-release                ##################################################

[admin-node][INFO  ] Running command: yum -y -q install ceph

[admin-node][DEBUG ] Package ceph-0.80.1-2.el6.x86_64 already installed and latest version

[admin-node][INFO  ] Running command: ceph --version

[admin-node][DEBUG ] ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)

[ceph_deploy.install][DEBUG ] Detecting platform for host node1 ...

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final

[node1][INFO  ] installing ceph on node1

[node1][INFO  ] Running command: yum clean all

[node1][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[node1][DEBUG ] Cleaning repos: base extras updates

[node1][DEBUG ] Cleaning up Everything

[node1][DEBUG ] Cleaning up list of fastest mirrors

[node1][INFO  ] Running command: yum -y install wget

[node1][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[node1][DEBUG ] Determining fastest mirrors

[node1][DEBUG ]  * base: mirrors.btte.net

[node1][DEBUG ]  * extras: mirrors.btte.net

[node1][DEBUG ]  * updates: mirrors.btte.net

[node1][DEBUG ] Setting up Install Process

[node1][DEBUG ] Resolving Dependencies

[node1][DEBUG ] --> Running transaction check

[node1][DEBUG ] ---> Package wget.x86_64 0:1.12-1.8.el6 will be updated

[node1][DEBUG ] ---> Package wget.x86_64 0:1.12-1.11.el6_5 will be an update

[node1][DEBUG ] --> Finished Dependency Resolution

[node1][DEBUG ] 

[node1][DEBUG ] Dependencies Resolved

[node1][DEBUG ] 

[node1][DEBUG ] ================================================================================

[node1][DEBUG ]  Package       Arch            Version                   Repository        Size

[node1][DEBUG ] ================================================================================

[node1][DEBUG ] Updating:

[node1][DEBUG ]  wget          x86_64          1.12-1.11.el6_5           updates          483 k

[node1][DEBUG ] 

[node1][DEBUG ] Transaction Summary

[node1][DEBUG ] ================================================================================

[node1][DEBUG ] Upgrade       1 Package(s)

[node1][DEBUG ] 

[node1][DEBUG ] Total download size: 483 k

[node1][DEBUG ] Downloading Packages:

[node1][DEBUG ] Running rpm_check_debug

[node1][DEBUG ] Running Transaction Test

[node1][DEBUG ] Transaction Test Succeeded

[node1][DEBUG ] Running Transaction

  Updating   : wget-1.12-1.11.el6_5.x86_64                                  1/2 

  Cleanup    : wget-1.12-1.8.el6.x86_64                                     2/2 

  Verifying  : wget-1.12-1.11.el6_5.x86_64                                  1/2 

  Verifying  : wget-1.12-1.8.el6.x86_64                                     2/2 

[node1][DEBUG ] 

[node1][DEBUG ] Updated:

[node1][DEBUG ]   wget.x86_64 0:1.12-1.11.el6_5                                                 

[node1][DEBUG ] 

[node1][DEBUG ] Complete!

[node1][INFO  ] adding EPEL repository

[node1][INFO  ] Running command: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[node1][WARNIN] --2014-06-07 22:06:57--  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[node1][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.23, 209.132.181.24, 209.132.181.25, ...

[node1][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.23|:80... connected.

[node1][WARNIN] HTTP request sent, awaiting response... 200 OK

[node1][WARNIN] Length: 14540 (14K) [application/x-rpm]

[node1][WARNIN] Saving to: `epel-release-6-8.noarch.rpm'

[node1][WARNIN] 

[node1][WARNIN]      0K .......... ....                                       100% 69.6K=0.2s

[node1][WARNIN] 

[node1][WARNIN] 2014-06-07 22:06:58 (69.6 KB/s) - `epel-release-6-8.noarch.rpm' saved [14540/14540]

[node1][WARNIN] 

[node1][INFO  ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm

[node1][DEBUG ] Preparing...                ##################################################

[node1][DEBUG ] epel-release                ##################################################

[node1][WARNIN] warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY

[node1][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[node1][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[node1][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[node1][DEBUG ] Preparing...                ##################################################

[node1][DEBUG ] ceph-release                ##################################################

[node1][INFO  ] Running command: yum -y -q install ceph

[node1][WARNIN] warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY

[node1][WARNIN] Importing GPG key 0x0608B895:

[node1][WARNIN]  Userid : EPEL (6) <epel@fedoraproject.org>

[node1][WARNIN]  Package: epel-release-6-8.noarch (installed)

[node1][WARNIN]  From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[node1][WARNIN] Warning: RPMDB altered outside of yum.

[node1][INFO  ] Running command: ceph --version

[node1][WARNIN] Traceback (most recent call last):

[node1][WARNIN]   File "/usr/bin/ceph", line 53, in <module>

[node1][WARNIN]     import argparse

[node1][WARNIN] ImportError: No module named argparse

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph --version


To resolve the error above, run the following command on the node that reported it:

[root@admin-node ~]# yum install *argparse* -y
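Per the traceback, /usr/bin/ceph is a Python script that imports argparse, so the failure can be reproduced (and the fix verified) without invoking ceph at all; a hedged local sketch, assuming a `python` interpreter is on PATH:

```shell
# Report whether the Python argparse module the ceph CLI needs is importable.
if python -c 'import argparse' 2>/dev/null; then
  echo 'argparse present'
else
  echo 'argparse missing: yum install *argparse* -y'
fi
```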


Step 5. Add the initial monitor nodes and gather the keys (ceph-deploy v1.1.3 and later).

[root@admin-node ceph]# ceph-deploy mon create-initial  

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mon create-initial

[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1

[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node1][DEBUG ] determining if provided host has same hostname in remote

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] deploying mon to node1

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] remote hostname: node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create the mon path if it does not exist

[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done

[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done

[node1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring

[node1][DEBUG ] create the monitor keyring file

[node1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring

[node1][DEBUG ] ceph-mon: mon.noname-a 10.240.240.211:6789/0 is local, renaming to mon.node1

[node1][DEBUG ] ceph-mon: set fsid to 369daf5a-e844-4e09-a9b1-46bb985aec79

[node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1

[node1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring

[node1][DEBUG ] create a done file to avoid re-doing the mon deployment

[node1][DEBUG ] create the init path if it does not exist

[node1][DEBUG ] locating the `service` executable...

[node1][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

[node1][WARNIN] /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy.mon][ERROR ] Failed to execute command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors


How to resolve the error above:

Run the following command manually on node1, node2 and node3

[root@node1 ~]# yum install redhat-lsb  -y
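The init script dies while sourcing /lib/lsb/init-functions, so a simple existence check tells you whether a node still needs redhat-lsb; a sketch to run locally on each node:

```shell
# The ceph init script sources this LSB helper; absence means redhat-lsb is missing.
if [ -e /lib/lsb/init-functions ]; then
  echo '/lib/lsb/init-functions present'
else
  echo 'missing: yum install redhat-lsb -y'
fi
```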


Running the command again now succeeds and the monitor nodes are activated:

[root@admin-node ceph]# ceph-deploy mon create-initial

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mon create-initial

[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2 node3

[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node1][DEBUG ] determining if provided host has same hostname in remote

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] deploying mon to node1

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] remote hostname: node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create the mon path if it does not exist

[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done

[node1][DEBUG ] create a done file to avoid re-doing the mon deployment

[node1][DEBUG ] create the init path if it does not exist

[node1][DEBUG ] locating the `service` executable...

[node1][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

[node1][DEBUG ] === mon.node1 === 

[node1][DEBUG ] Starting Ceph mon.node1 on node1...already running

[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status

[node1][DEBUG ] ********************************************************************************

[node1][DEBUG ] status for monitor: mon.node1

[node1][DEBUG ] {

[node1][DEBUG ]   "election_epoch": 6, 

[node1][DEBUG ]   "extra_probe_peers": [

[node1][DEBUG ]     "10.240.240.212:6789/0", 

[node1][DEBUG ]     "10.240.240.213:6789/0"

[node1][DEBUG ]   ], 

[node1][DEBUG ]   "monmap": {

[node1][DEBUG ]     "created": "0.000000", 

[node1][DEBUG ]     "epoch": 2, 

[node1][DEBUG ]     "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b", 

[node1][DEBUG ]     "modified": "2014-06-07 22:38:29.435203", 

[node1][DEBUG ]     "mons": [

[node1][DEBUG ]       {

[node1][DEBUG ]         "addr": "10.240.240.211:6789/0", 

[node1][DEBUG ]         "name": "node1", 

[node1][DEBUG ]         "rank": 0

[node1][DEBUG ]       }, 

[node1][DEBUG ]       {

[node1][DEBUG ]         "addr": "10.240.240.212:6789/0", 

[node1][DEBUG ]         "name": "node2", 

[node1][DEBUG ]         "rank": 1

[node1][DEBUG ]       }, 

[node1][DEBUG ]       {

[node1][DEBUG ]         "addr": "10.240.240.213:6789/0", 

[node1][DEBUG ]         "name": "node3", 

[node1][DEBUG ]         "rank": 2

[node1][DEBUG ]       }

[node1][DEBUG ]     ]

[node1][DEBUG ]   }, 

[node1][DEBUG ]   "name": "node1", 

[node1][DEBUG ]   "outside_quorum": [], 

[node1][DEBUG ]   "quorum": [

[node1][DEBUG ]     0, 

[node1][DEBUG ]     1, 

[node1][DEBUG ]     2

[node1][DEBUG ]   ], 

[node1][DEBUG ]   "rank": 0, 

[node1][DEBUG ]   "state": "leader", 

[node1][DEBUG ]   "sync_provider": []

[node1][DEBUG ] }

[node1][DEBUG ] ********************************************************************************

[node1][INFO  ] monitor: mon.node1 is running

[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status

[ceph_deploy.mon][DEBUG ] detecting platform for host node2 ...

[node2][DEBUG ] connected to host: node2 

[node2][DEBUG ] detect platform information from remote host

[node2][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node2][DEBUG ] determining if provided host has same hostname in remote

[node2][DEBUG ] get remote short hostname

[node2][DEBUG ] deploying mon to node2

[node2][DEBUG ] get remote short hostname

[node2][DEBUG ] remote hostname: node2

[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node2][DEBUG ] create the mon path if it does not exist

[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done

[node2][DEBUG ] create a done file to avoid re-doing the mon deployment

[node2][DEBUG ] create the init path if it does not exist

[node2][DEBUG ] locating the `service` executable...

[node2][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node2

[node2][DEBUG ] === mon.node2 === 

[node2][DEBUG ] Starting Ceph mon.node2 on node2...already running

[node2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status

[node2][DEBUG ] ********************************************************************************

[node2][DEBUG ] status for monitor: mon.node2

[node2][DEBUG ] {

[node2][DEBUG ]   "election_epoch": 6, 

[node2][DEBUG ]   "extra_probe_peers": [

[node2][DEBUG ]     "10.240.240.211:6789/0", 

[node2][DEBUG ]     "10.240.240.213:6789/0"

[node2][DEBUG ]   ], 

[node2][DEBUG ]   "monmap": {

[node2][DEBUG ]     "created": "0.000000", 

[node2][DEBUG ]     "epoch": 2, 

[node2][DEBUG ]     "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b", 

[node2][DEBUG ]     "modified": "2014-06-07 22:38:29.435203", 

[node2][DEBUG ]     "mons": [

[node2][DEBUG ]       {

[node2][DEBUG ]         "addr": "10.240.240.211:6789/0", 

[node2][DEBUG ]         "name": "node1", 

[node2][DEBUG ]         "rank": 0

[node2][DEBUG ]       }, 

[node2][DEBUG ]       {

[node2][DEBUG ]         "addr": "10.240.240.212:6789/0", 

[node2][DEBUG ]         "name": "node2", 

[node2][DEBUG ]         "rank": 1

[node2][DEBUG ]       }, 

[node2][DEBUG ]       {

[node2][DEBUG ]         "addr": "10.240.240.213:6789/0", 

[node2][DEBUG ]         "name": "node3", 

[node2][DEBUG ]         "rank": 2

[node2][DEBUG ]       }

[node2][DEBUG ]     ]

[node2][DEBUG ]   }, 

[node2][DEBUG ]   "name": "node2", 

[node2][DEBUG ]   "outside_quorum": [], 

[node2][DEBUG ]   "quorum": [

[node2][DEBUG ]     0, 

[node2][DEBUG ]     1, 

[node2][DEBUG ]     2

[node2][DEBUG ]   ], 

[node2][DEBUG ]   "rank": 1, 

[node2][DEBUG ]   "state": "peon", 

[node2][DEBUG ]   "sync_provider": []

[node2][DEBUG ] }

[node2][DEBUG ] ********************************************************************************

[node2][INFO  ] monitor: mon.node2 is running

[node2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status

[ceph_deploy.mon][DEBUG ] detecting platform for host node3 ...

[node3][DEBUG ] connected to host: node3 

[node3][DEBUG ] detect platform information from remote host

[node3][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node3][DEBUG ] determining if provided host has same hostname in remote

[node3][DEBUG ] get remote short hostname

[node3][DEBUG ] deploying mon to node3

[node3][DEBUG ] get remote short hostname

[node3][DEBUG ] remote hostname: node3

[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node3][DEBUG ] create the mon path if it does not exist

[node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node3/done

[node3][DEBUG ] create a done file to avoid re-doing the mon deployment

[node3][DEBUG ] create the init path if it does not exist

[node3][DEBUG ] locating the `service` executable...

[node3][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node3

[node3][DEBUG ] === mon.node3 === 

[node3][DEBUG ] Starting Ceph mon.node3 on node3...already running

[node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status

[node3][DEBUG ] ********************************************************************************

[node3][DEBUG ] status for monitor: mon.node3

[node3][DEBUG ] {

[node3][DEBUG ]   "election_epoch": 6, 

[node3][DEBUG ]   "extra_probe_peers": [

[node3][DEBUG ]     "10.240.240.211:6789/0", 

[node3][DEBUG ]     "10.240.240.212:6789/0"

[node3][DEBUG ]   ], 

[node3][DEBUG ]   "monmap": {

[node3][DEBUG ]     "created": "0.000000", 

[node3][DEBUG ]     "epoch": 2, 

[node3][DEBUG ]     "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b", 

[node3][DEBUG ]     "modified": "2014-06-07 22:38:29.435203", 

[node3][DEBUG ]     "mons": [

[node3][DEBUG ]       {

[node3][DEBUG ]         "addr": "10.240.240.211:6789/0", 

[node3][DEBUG ]         "name": "node1", 

[node3][DEBUG ]         "rank": 0

[node3][DEBUG ]       }, 

[node3][DEBUG ]       {

[node3][DEBUG ]         "addr": "10.240.240.212:6789/0", 

[node3][DEBUG ]         "name": "node2", 

[node3][DEBUG ]         "rank": 1

[node3][DEBUG ]       }, 

[node3][DEBUG ]       {

[node3][DEBUG ]         "addr": "10.240.240.213:6789/0", 

[node3][DEBUG ]         "name": "node3", 

[node3][DEBUG ]         "rank": 2

[node3][DEBUG ]       }

[node3][DEBUG ]     ]

[node3][DEBUG ]   }, 

[node3][DEBUG ]   "name": "node3", 

[node3][DEBUG ]   "outside_quorum": [], 

[node3][DEBUG ]   "quorum": [

[node3][DEBUG ]     0, 

[node3][DEBUG ]     1, 

[node3][DEBUG ]     2

[node3][DEBUG ]   ], 

[node3][DEBUG ]   "rank": 2, 

[node3][DEBUG ]   "state": "peon", 

[node3][DEBUG ]   "sync_provider": []

[node3][DEBUG ] }

[node3][DEBUG ] ********************************************************************************

[node3][INFO  ] monitor: mon.node3 is running

[node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status

[ceph_deploy.mon][INFO  ] processing monitor mon.node1

[node1][DEBUG ] connected to host: node1 

[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status

[ceph_deploy.mon][INFO  ] mon.node1 monitor has reached quorum!

[ceph_deploy.mon][INFO  ] processing monitor mon.node2

[node2][DEBUG ] connected to host: node2 

[node2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status

[ceph_deploy.mon][INFO  ] mon.node2 monitor has reached quorum!

[ceph_deploy.mon][INFO  ] processing monitor mon.node3

[node3][DEBUG ] connected to host: node3 

[node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status

[ceph_deploy.mon][INFO  ] mon.node3 monitor has reached quorum!

[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum

[ceph_deploy.mon][INFO  ] Running gatherkeys...

[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /etc/ceph/ceph.client.admin.keyring

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[node1][DEBUG ] fetch remote file

[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from node1.

[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring

[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-osd/ceph.keyring

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[node1][DEBUG ] fetch remote file

[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from node1.

[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-mds/ceph.keyring

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[node1][DEBUG ] fetch remote file

[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from node1.

The output above shows that all three nodes have become monitor nodes.


The cluster directory now contains these additional files:

ceph.bootstrap-mds.keyring

ceph.bootstrap-osd.keyring

ceph.client.admin.keyring 
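Before moving on to OSDs it is worth confirming that the gathered keyrings actually landed in the cluster directory; a small sketch, to be run inside /ceph:

```shell
# Report which of the expected keyring files are present in the current directory.
for f in ceph.client.admin.keyring ceph.bootstrap-osd.keyring ceph.bootstrap-mds.keyring; do
  if [ -f "$f" ]; then echo "$f: ok"; else echo "$f: MISSING"; fi
done
```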


Step 6. Add the OSD nodes

Add node1 first; log in to node1 and look for an unallocated partition:

[root@admin-node ceph]# ssh node1

[root@node1 ~]# fdisk -l


Disk /dev/sda: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000d6653


   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          39      307200   83  Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2              39        6401    51104768   83  Linux

/dev/sda3            6401        6528     1015808   82  Linux swap / Solaris


Disk /dev/sdb: 21.5 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x843e46d0


   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1        2610    20964793+   5  Extended

/dev/sdb5               1        2610    20964762   83  Linux


[root@node1 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda2              48G  2.5G   44G   6% /

tmpfs                 242M   68K  242M   1% /dev/shm

/dev/sda1             291M   33M  243M  12% /boot


The listing shows that the second disk is unused; use its sdb5 partition as the OSD disk.
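Eyeballing fdisk and df output scales poorly across many nodes; one way to spot unmounted partitions is to compare /proc/partitions against the mount table. The sketch below runs against a captured sample mirroring node1's disks, and `mounted` is a stand-in for the real mount list, so the parsing can be checked anywhere:

```shell
# Parse a captured /proc/partitions and print devices absent from the mount table.
sample='major minor  #blocks  name

   8        0   52428800 sda
   8        2   51104768 sda2
   8       16   20971520 sdb
   8       21   20964762 sdb5'
mounted='/dev/sda2'                      # stand-in for the real mount output
echo "$sample" | awk 'NR>2 && $4 != "" {print $4}' | while read -r dev; do
  case " $mounted " in
    *"/dev/$dev "*) ;;                   # mounted, skip it
    *) echo "/dev/$dev is not mounted" ;;
  esac
done
```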

  

Add the OSD device from the admin-node:

[root@admin-node ceph]# ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][WARNIN] osd keyring does not exist yet, creating one

[node1][DEBUG ] create a keyring file

[ceph_deploy.osd][ERROR ] IOError: [Errno 2] No such file or directory: '/var/lib/ceph/bootstrap-osd/ceph.keyring'

[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs


The fix for the error above (it usually appears when preparing an OSD on a node that is not a monitor):

The message means that /var/lib/ceph/bootstrap-osd/ceph.keyring is missing on the OSD node. The monitor nodes do have this file, so copy it from one of them to node1.

First create the directory on node1: mkdir /var/lib/ceph/bootstrap-osd/

Then copy the keyring over:

[root@admin-node ceph]# scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node1:/var/lib/ceph/bootstrap-osd/


Run the OSD prepare command again:

[root@admin-node ceph]# ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb5 journal None activate False

[node1][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb5

[node1][WARNIN] mkfs.xfs: No such file or directory

[node1][WARNIN] ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/sdb5']' returned non-zero exit status 1

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb5

[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs


The error above means that mkfs.xfs is not available on node1; install the XFS utilities there:

[root@admin-node ceph]# ssh node1

[root@node1 ~]# yum install xfs* -y

Run the prepare command once more; this time the newly added OSD node initializes successfully:

[root@admin-node ceph]# ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb5 journal None activate False

[node1][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb5

[node1][DEBUG ] meta-data=/dev/sdb5              isize=2048   agcount=4, agsize=1310298 blks

[node1][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0

[node1][DEBUG ] data     =                       bsize=4096   blocks=5241190, imaxpct=25

[node1][DEBUG ]          =                       sunit=0      swidth=0 blks

[node1][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0

[node1][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2

[node1][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1

[node1][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0

[node1][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb5

[node1][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors

[node1][WARNIN] last arg is not the whole disk

[node1][WARNIN] call: partx -opts device wholedisk

[node1][INFO  ] checking OSD status...

[node1][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.

Unhandled exception in thread started by 

Error in sys.excepthook:



Activate the OSD device from the admin node:

[root@admin-node ceph]# ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] activating host node1 disk /dev/sdb5

[ceph_deploy.osd][DEBUG ] will use init type: sysvinit

[node1][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5

[node1][WARNIN] got monmap epoch 2

[node1][WARNIN] 2014-06-07 23:36:52.377131 7f1b9a7087a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

[node1][WARNIN] 2014-06-07 23:36:52.436136 7f1b9a7087a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

[node1][WARNIN] 2014-06-07 23:36:52.437990 7f1b9a7087a0 -1 filestore(/var/lib/ceph/tmp/mnt.LvzAgX) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory

[node1][WARNIN] 2014-06-07 23:36:52.470988 7f1b9a7087a0 -1 created object store /var/lib/ceph/tmp/mnt.LvzAgX journal /var/lib/ceph/tmp/mnt.LvzAgX/journal for osd.0 fsid 4dc38af6-f628-4c1f-b708-9178cf4e032b

[node1][WARNIN] 2014-06-07 23:36:52.471176 7f1b9a7087a0 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.LvzAgX/keyring: can't open /var/lib/ceph/tmp/mnt.LvzAgX/keyring: (2) No such file or directory

[node1][WARNIN] 2014-06-07 23:36:52.471528 7f1b9a7087a0 -1 created new key in keyring /var/lib/ceph/tmp/mnt.LvzAgX/keyring

[node1][WARNIN] added key for osd.0

[node1][WARNIN] ERROR:ceph-disk:Failed to activate

[node1][WARNIN] Traceback (most recent call last):

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 2579, in <module>

[node1][WARNIN]     main()

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 2557, in main

[node1][WARNIN]     args.func(args)

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 1910, in main_activate

[node1][WARNIN]     init=args.mark_init,

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 1724, in mount_activate

[node1][WARNIN]     mount_options=mount_options,

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 1544, in move_mount

[node1][WARNIN]     maybe_mkdir(osd_data)

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 220, in maybe_mkdir

[node1][WARNIN]     os.mkdir(*a, **kw)

[node1][WARNIN] OSError: [Errno 2] No such file or directory: '/var/lib/ceph/osd/ceph-0'

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5


The error above means node1 has no /var/lib/ceph/osd/ceph-0 directory; create it on node1:

[root@admin-node ceph]# ssh node1

[root@node1 ~]# mkdir -p /var/lib/ceph/osd/ceph-0


Run the OSD activate command again:

[root@admin-node ceph]# ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] activating host node1 disk /dev/sdb5

[ceph_deploy.osd][DEBUG ] will use init type: sysvinit

[node1][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5

[node1][WARNIN] /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory

[node1][WARNIN] ceph-disk: Error: ceph osd start failed: Command '['/sbin/service', 'ceph', 'start', 'osd.0']' returned non-zero exit status 1

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5


To fix the error above, install the LSB init functions on node1:

[root@admin-node ceph]# ssh node1

[root@node1 ~]# yum install redhat-lsb -y


Running the activate command once more brings the OSD node up normally:

[root@admin-node ceph]# ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] activating host node1 disk /dev/sdb5

[ceph_deploy.osd][DEBUG ] will use init type: sysvinit

[node1][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5

[node1][DEBUG ] === osd.0 === 

[node1][DEBUG ] Starting Ceph osd.0 on node1...

[node1][DEBUG ] starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal

[node1][WARNIN] create-or-move updating item name 'osd.0' weight 0.02 at location {host=node1,root=default} to crush map

[node1][INFO  ] checking OSD status...

[node1][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json

Unhandled exception in thread started by 

Error in sys.excepthook:


Original exception was:


Add node2 and node3 as OSD nodes by the same procedure.


7. Copy the ceph configuration file and keys to the mon and osd nodes

[root@admin-node ceph]# ceph-deploy admin admin-node node1 node2 node3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy admin admin-node node1 node2 node3

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to admin-node

[admin-node][DEBUG ] connected to host: admin-node 

[admin-node][DEBUG ] detect platform information from remote host

[admin-node][DEBUG ] detect machine type

[admin-node][DEBUG ] get remote short hostname

[admin-node][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node1

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node2

[node2][DEBUG ] connected to host: node2 

[node2][DEBUG ] detect platform information from remote host

[node2][DEBUG ] detect machine type

[node2][DEBUG ] get remote short hostname

[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node3

[node3][DEBUG ] connected to host: node3 

[node3][DEBUG ] detect platform information from remote host

[node3][DEBUG ] detect machine type

[node3][DEBUG ] get remote short hostname

[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

Unhandled exception in thread started by 

Error in sys.excepthook:


Original exception was:


8. Make sure you have the correct permissions on ceph.client.admin.keyring

[root@admin-node ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring


9. Check the election status of the three monitor nodes

[root@admin-node ~]# ceph quorum_status --format json-pretty


{ "election_epoch": 30,

  "quorum": [

        0,

        1,

        2],

  "quorum_names": [

        "node1",

        "node2",

        "node3"],

  "quorum_leader_name": "node1",

  "monmap": { "epoch": 2,

      "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b",

      "modified": "2014-06-07 22:38:29.435203",

      "created": "0.000000",

      "mons": [

            { "rank": 0,

              "name": "node1",

              "addr": "10.240.240.211:6789\/0"},

            { "rank": 1,

              "name": "node2",

              "addr": "10.240.240.212:6789\/0"},

            { "rank": 2,

              "name": "node3",

              "addr": "10.240.240.213:6789\/0"}]}}
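The JSON above is easy to post-process in scripts. A minimal sketch using only sed (jq is not in the stock CentOS 6 repos), shown against a trimmed, canned copy of the output; on a live cluster you would pipe `ceph quorum_status --format json-pretty` in instead:

```shell
# Extract the current monitor leader from quorum_status JSON (sketch).
# A canned snippet stands in for the live command output here.
quorum_json=$(cat <<'EOF'
{ "election_epoch": 30,
  "quorum_names": [ "node1", "node2", "node3"],
  "quorum_leader_name": "node1"}
EOF
)
leader=$(printf '%s\n' "$quorum_json" |
    sed -n 's/.*"quorum_leader_name": "\([^"]*\)".*/\1/p')
echo "leader: $leader"
```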


10. Check the cluster health

[root@admin-node ceph]# ceph health

HEALTH_WARN clock skew detected on mon.node2, mon.node3

This warning means the clocks on node1, node2 and node3 are out of sync; they must be synchronized. The fix is to configure admin-node as an NTP server and have all the other nodes sync their time from it.
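The NTP arrangement described above amounts to a one-line change on each node. A sketch, assuming admin-node's IP of 10.240.240.210 from this tutorial:

```
# /etc/ntp.conf on node1, node2 and node3 -- sync from admin-node
server 10.240.240.210 iburst
```

Then restart ntpd on each node (e.g. `service ntpd restart; chkconfig ntpd on`).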

After that, running the check again gives:

[root@admin-node ceph]# ceph health

HEALTH_OK
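For scripts that gate on cluster state, the first token of `ceph health` output is enough. A minimal sketch; `check_health` is a name invented here:

```shell
# Map `ceph health` output to a short status word (sketch).
check_health() {
    case "$1" in
        HEALTH_OK*)   echo ok ;;
        HEALTH_WARN*) echo warn ;;
        *)            echo error ;;
    esac
}
# Example: check_health "$(ceph health)"
```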


12. Add a metadata server

[root@admin-node ceph]# ceph-deploy mds create node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mds create node1

[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node1:node1

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mds][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.mds][DEBUG ] remote host will use sysvinit

[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create path if it doesn't exist

[ceph_deploy.mds][ERROR ] OSError: [Errno 2] No such file or directory: '/var/lib/ceph/mds/ceph-node1'

[ceph_deploy][ERROR ] GenericError: Failed to create 1 MDSs

To fix the error above, create the missing directory:

[root@admin-node ceph]# ssh node1

Last login: Fri Jun  6 06:41:25 2014 from 10.241.10.2

[root@node1 ~]# mkdir -p /var/lib/ceph/mds/ceph-node1


Running mds create again completes the metadata server deployment:

[root@admin-node ceph]# ceph-deploy mds create node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mds create node1

[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node1:node1

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mds][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.mds][DEBUG ] remote host will use sysvinit

[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create path if it doesn't exist

[node1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node1 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node1/keyring

[node1][INFO  ] Running command: service ceph start mds.node1

[node1][DEBUG ] === mds.node1 === 

[node1][DEBUG ] Starting Ceph mds.node1 on node1...

[node1][DEBUG ] starting mds.node1 at :/0


Check the cluster status again:

[root@admin-node ceph]# ceph -w

    cluster 591ef1f4-69f7-442f-ba7b-49cdf6695656

     health HEALTH_OK

     monmap e1: 1 mons at {node1=10.240.240.211:6789/0}, election epoch 2, quorum 0 node1

     mdsmap e4: 1/1/1 up {0=node1=up:active}

     osdmap e9: 2 osds: 2 up, 2 in

      pgmap v22: 192 pgs, 3 pools, 1884 bytes data, 20 objects

            10310 MB used, 30616 MB / 40926 MB avail

                 192 active+clean


2014-06-06 08:12:49.021472 mon.0 [INF] pgmap v22: 192 pgs: 192 active+clean; 1884 bytes data, 10310 MB used, 30616 MB / 40926 MB avail; 10 B/s wr, 0 op/s

2014-06-06 08:14:47.932311 mon.0 [INF] pgmap v23: 192 pgs: 192 active+clean; 1884 bytes data, 10310 MB used, 30615 MB / 40926 MB avail


13. Install the ceph client

Install ceph on the client machine:

[root@admin-node ~]# ceph-deploy install ceph-client

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy install ceph-client

[ceph_deploy.install][DEBUG ] Installing stable version firefly on cluster ceph hosts ceph-client

[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-client ...

[ceph-client][DEBUG ] connected to host: ceph-client 

[ceph-client][DEBUG ] detect platform information from remote host

[ceph-client][DEBUG ] detect machine type

[ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final

[ceph-client][INFO  ] installing ceph on ceph-client

[ceph-client][INFO  ] Running command: yum clean all

[ceph-client][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[ceph-client][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates

[ceph-client][DEBUG ] Cleaning up Everything

[ceph-client][DEBUG ] Cleaning up list of fastest mirrors

[ceph-client][INFO  ] Running command: yum -y install wget

[ceph-client][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[ceph-client][DEBUG ] Determining fastest mirrors

[ceph-client][DEBUG ]  * base: mirrors.btte.net

[ceph-client][DEBUG ]  * epel: mirrors.hust.edu.cn

[ceph-client][DEBUG ]  * extras: mirrors.btte.net

[ceph-client][DEBUG ]  * updates: mirrors.btte.net

[ceph-client][DEBUG ] Setting up Install Process

[ceph-client][DEBUG ] Package wget-1.12-1.11.el6_5.x86_64 already installed and latest version

[ceph-client][DEBUG ] Nothing to do

[ceph-client][INFO  ] adding EPEL repository

[ceph-client][INFO  ] Running command: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[ceph-client][WARNIN] --2014-06-07 06:32:38--  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[ceph-client][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.24, 209.132.181.25, 209.132.181.26, ...

[ceph-client][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.24|:80... connected.

[ceph-client][WARNIN] HTTP request sent, awaiting response... 200 OK

[ceph-client][WARNIN] Length: 14540 (14K) [application/x-rpm]

[ceph-client][WARNIN] Saving to: `epel-release-6-8.noarch.rpm.1'

[ceph-client][WARNIN] 

[ceph-client][WARNIN]      0K .......... ....                                       100%  359K=0.04s

[ceph-client][WARNIN] 

[ceph-client][WARNIN] 2014-06-07 06:32:39 (359 KB/s) - `epel-release-6-8.noarch.rpm.1' saved [14540/14540]

[ceph-client][WARNIN] 

[ceph-client][INFO  ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm

[ceph-client][DEBUG ] Preparing...                ##################################################

[ceph-client][DEBUG ] epel-release                ##################################################

[ceph-client][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-client][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[ceph-client][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[ceph-client][DEBUG ] Preparing...                ##################################################

[ceph-client][DEBUG ] ceph-release                ##################################################

[ceph-client][INFO  ] Running command: yum -y -q install ceph

[ceph-client][DEBUG ] Package ceph-0.80.1-2.el6.x86_64 already installed and latest version

[ceph-client][INFO  ] Running command: ceph --version

[ceph-client][DEBUG ] ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)


Copy the key and configuration file to the client:

[root@admin-node ceph]# ceph-deploy admin ceph-client

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy admin ceph-client

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-client

[ceph-client][DEBUG ] connected to host: ceph-client 

[ceph-client][DEBUG ] detect platform information from remote host

[ceph-client][DEBUG ] detect machine type

[ceph-client][DEBUG ] get remote short hostname

[ceph-client][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf


A stock CentOS 6.4 system does not ship the rbd kernel module, so the following operation fails:

[root@ceph-client ceph]#  rbd map test-1 -p test --name client.admin  -m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring

ERROR: modinfo: could not find module rbd

FATAL: Module rbd not found.

rbd: modprobe rbd failed! (256)

The fix is to upgrade the kernel:

Once you have deployed the almighty CEPH storage, you will want to be able to actually use it (RBD).

Before we begin, some notes:

Current CEPH version: 0.67 (「dumpling」).

OS: Centos 6.4 x86_64 (running some VMs on KVM, basic CentOS qemu packages, nothing custom)

Since CEPH RBD module was first introduced with kernel 2.6.34 (and current RHEL/CentOS kernel is 2.6.32) – that means we need a newer kernel.

So, one of the options for the new kernel is, 3.x from elrepo.org:

rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

yum --enablerepo=elrepo-kernel install kernel-ml                # will install 3.11.latest, stable, mainline

If you want the new kernel to boot by default, edit /etc/grub.conf, change default=1 to default=0, and reboot.
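The grub.conf change can also be scripted. A sketch, shown against a scratch copy of the file; on a real host the target is /etc/grub.conf:

```shell
# Flip the default boot entry from 1 to 0 so the new kernel boots first.
grub=$(mktemp)
cat > "$grub" <<'EOF'
default=1
timeout=5
EOF
sed -i 's/^default=1$/default=0/' "$grub"
grep '^default=' "$grub"   # prints: default=0
```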


14. Use ceph block storage on the client

Create a new ceph pool:

[root@ceph-client ceph]# rados mkpool test

Create an image in the pool:

[root@ceph-client ceph]# rbd create test-1 --size 4096 -p test -m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring   (the "-m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring" part can be omitted)

Map the image to a block device:

[root@ceph-client ceph]# rbd map test-1 -p test --name client.admin -m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring   (the "-m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring" part can be omitted)

Check the rbd mappings:

[root@ceph-client ~]# rbd showmapped

id pool    image       snap device    

0  rbd     foo         -    /dev/rbd0 

1  test    test-1      -    /dev/rbd1 

2  jiayuan jiayuan-img -    /dev/rbd2 

3  jiayuan zhanguo     -    /dev/rbd3 

4  jiayuan zhanguo-5G  -    /dev/rbd4 
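The showmapped table can be parsed when a script needs the device node for a given pool/image pair. A minimal sketch; `find_rbd_dev` is a name invented here, and a canned copy of the table stands in for live `rbd showmapped` output:

```shell
# Look up the /dev/rbdN device for pool + image in `rbd showmapped` output.
find_rbd_dev() {
    # $1 = pool, $2 = image; showmapped table on stdin
    awk -v p="$1" -v i="$2" '$2 == p && $3 == i {print $5}'
}
rbd_out=$(cat <<'EOF'
id pool    image       snap device
0  rbd     foo         -    /dev/rbd0
1  test    test-1      -    /dev/rbd1
EOF
)
printf '%s\n' "$rbd_out" | find_rbd_dev test test-1   # prints /dev/rbd1
```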


Format the new rbd block device:

[root@ceph-client dev]# mkfs.ext4 -m0 /dev/rbd1

Create a mount point:

[root@ceph-client dev]# mkdir /mnt/ceph-rbd-test-1

Mount the block device on the mount point:

[root@ceph-client dev]# mount /dev/rbd1 /mnt/ceph-rbd-test-1/

Check the mount:

[root@ceph-client dev]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda2              19G  2.5G   15G  15% /

tmpfs                 116M   72K  116M   1% /dev/shm

/dev/sda1             283M   52M  213M  20% /boot

/dev/rbd1             3.9G  8.0M  3.8G   1% /mnt/ceph-rbd-test-1


With the steps above complete, you can now store data on the newly created filesystem.


15. Mount a CephFS filesystem on the client

 

[root@ceph-client ~]# mkdir /mnt/mycephfs

[root@ceph-client ~]# mount  -t ceph 10.240.240.211:6789:/ /mnt/mycephfs -v -o name=admin,secret=AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==   

10.240.240.211:6789:/ on /mnt/mycephfs type ceph (rw,name=admin,secret=AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==)  


# The name and secret values in the command above come from the monitor's /etc/ceph/ceph.client.admin.keyring file:

[root@node1 ~]# cat /etc/ceph/ceph.client.admin.keyring 

[client.admin]

        key = AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==
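Rather than pasting the secret into the mount command by hand, it can be pulled out of the keyring. A minimal sketch, shown against a scratch copy of the keyring above; on a monitor the real file is /etc/ceph/ceph.client.admin.keyring:

```shell
# Extract the client.admin key from a ceph keyring file (sketch).
kr=$(mktemp)
cat > "$kr" <<'EOF'
[client.admin]
        key = AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==
EOF
secret=$(awk '/key = /{print $3}' "$kr")
echo "$secret"
# Usage: mount -t ceph 10.240.240.211:6789:/ /mnt/mycephfs -o name=admin,secret="$secret"
```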
