A Ceph deployment mainly consists of the following types of nodes:
- Ceph OSDs: a Ceph OSD daemon stores data and handles data replication, recovery, backfilling, and rebalancing; it also provides monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons. When the Ceph Storage Cluster keeps two copies of the data (the Ceph default), at least two Ceph OSD daemons must reach the active+clean state.
- Monitors: a Ceph Monitor maintains maps of the cluster state, including the monitor map, OSD map, Placement Group (PG) map, and CRUSH map. Ceph also keeps a history (called an "epoch") of each state change of the Ceph Monitors, Ceph OSD daemons, and PGs.
- MDSs: a Ceph Metadata Server (MDS) stores metadata on behalf of the Ceph Filesystem (Ceph Block Devices and Ceph Object Storage do not use an MDS). Ceph Metadata Servers let users run basic POSIX filesystem commands such as ls and find.
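For reference, once a cluster is up these maps can be inspected directly; a minimal sketch, assuming a running cluster and an admin keyring on the node where the commands are issued:
ceph mon dump        # print the monitor map
ceph osd dump        # print the OSD map
ceph pg dump         # print the placement group map
ceph osd crush dump  # print the CRUSH map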
Before creating the cluster, plan it out first, as follows.
The Ceph cluster is deployed on VMware virtual machines:
IP               Hostname      Role
192.168.92.100   node0         admin, osd
192.168.92.101   node1         osd, mon
192.168.92.102   node2         osd, mon
192.168.92.103   node3         osd, mon
192.168.92.109   client-node   client node; used to mount the storage exported by the Ceph cluster for testing
OSD data directories:
node2: /var/local/osd0
node3: /var/local/osd0
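These OSD data directories must exist (and be writable by the ceph user) before ceph-deploy osd prepare is run later on. A minimal sketch for creating them from the admin node, assuming the passwordless SSH and sudo setup described below is already in place:
ssh node2 "sudo mkdir -p /var/local/osd0"
ssh node3 "sudo mkdir -p /var/local/osd0"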
Edit /etc/hosts:
[ceph@admin-node my-cluster]$ sudo cat /etc/hosts
[sudo] password for ceph:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.92.100 node0
192.168.92.101 node1
192.168.92.102 node2
192.168.92.103 node3
[ceph@admin-node my-cluster]$
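The same host entries have to be present on every node. A minimal sketch for pushing just the cluster entries out from the admin node, assuming the ceph user and passwordless SSH/sudo described below are already configured:
for h in node1 node2 node3; do
  grep '^192.168.92' /etc/hosts | ssh $h 'sudo tee -a /etc/hosts'
done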
Create a ceph user on each of the five hosts above (run as root, or with root privileges):
Create the user:
sudo adduser -d /home/ceph -m ceph
Set its password:
sudo passwd ceph
Grant it sudo privileges:
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
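The user-creation steps above are per host. A minimal sketch that runs them on all nodes in one pass, assuming root SSH access to each node; the password 'ceph123' is a placeholder, not a value from this document:
for h in node0 node1 node2 node3 client-node; do
  ssh root@$h "useradd -d /home/ceph -m ceph && \
               echo 'ceph:ceph123' | chpasswd && \
               echo 'ceph ALL = (root) NOPASSWD:ALL' > /etc/sudoers.d/ceph && \
               chmod 0440 /etc/sudoers.d/ceph"
done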
Run visudo and edit the sudoers file:
Change the line "Defaults requiretty" to "Defaults:ceph !requiretty".
Without this change, ceph-deploy will fail when it runs commands over SSH.
If ceph-deploy new <node-hostname> still fails at this stage:
[ceph@admin-node my-cluster]$ ceph-deploy new node3
[ceph_deploy.conf][DEBUG ] foundconfiguration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy newnode3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username :None
[ceph_deploy.cli][INFO ] func : <function new at 0xee0b18>
[ceph_deploy.cli][INFO ] verbose :False
[ceph_deploy.cli][INFO ] overwrite_conf :False
[ceph_deploy.cli][INFO ] quiet :False
[ceph_deploy.cli][INFO ] cd_conf :<ceph_deploy.conf.cephdeploy.Conf instance at 0xef9a28>
[ceph_deploy.cli][INFO ] cluster :ceph
[ceph_deploy.cli][INFO ] ssh_copykey :True
[ceph_deploy.cli][INFO ] mon : ['node3']
[ceph_deploy.cli][INFO ] public_network :None
[ceph_deploy.cli][INFO ] ceph_conf :None
[ceph_deploy.cli][INFO ] cluster_network :None
[ceph_deploy.cli][INFO ] default_release :False
[ceph_deploy.cli][INFO ] fsid :None
[ceph_deploy.new][DEBUG ] Creatingnew cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node3][DEBUG ] connected to host:admin-node
[node3][INFO ] Running command: ssh -CT -o BatchMode=yesnode3
[node3][DEBUG ] connection detectedneed for sudo
[node3][DEBUG ] connected to host:node3
[ceph_deploy][ERROR ] RuntimeError: remote connection got closed, ensure ``requiretty`` is disabled for node3
[ceph@admin-node my-cluster]$
then set up passwordless sudo as follows:
Edit with sudo visudo:
[ceph@node1 ~]$ sudo grep"ceph" /etc/sudoers
Defaults:ceph !requiretty
ceph ALL=(ALL) NOPASSWD: ALL
[ceph@node1 ~]$
In addition:
1. Comment out "Defaults requiretty"
Change "Defaults requiretty" to "#Defaults requiretty", which means no controlling terminal is required.
Otherwise you will get: sudo: sorry, you must have a tty to run sudo
2. Add the line "Defaults visiblepw"
Otherwise you will get: sudo: no tty present and no askpass program specified
Configure the admin-node with passwordless SSH access to the other nodes (with root privileges via sudo).
Step 1: run the following on the admin-node:
ssh-keygen
Note: for simplicity, just press Enter to accept the defaults at each prompt.
Step 2: copy the key from step 1 to the other nodes:
ssh-copy-id ceph@node0
ssh-copy-id ceph@node1
ssh-copy-id ceph@node2
ssh-copy-id ceph@node3
Also edit ~/.ssh/config and add the following:
[ceph@admin-node my-cluster]$ cat ~/.ssh/config
Host node1
Hostname node1
User ceph
Host node2
Hostname node2
User ceph
Host node3
Hostname node3
User ceph
Host client-node
Hostname client-node
User ceph
[ceph@admin-node my-cluster]$
Error message:
Bad owner or permissions on /home/ceph/.ssh/config fatal: The remote end hung up unexpectedly
Solution:
$ sudo chmod 600 ~/.ssh/config
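Before moving on, it is worth verifying that passwordless SSH now works for every node (a quick check, not part of the original transcript):
for h in node0 node1 node2 node3; do ssh ceph@$h hostname; done
Each node should print its hostname without asking for a password.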
Step 1: add a yum repository file
sudo vim /etc/yum.repos.d/ceph.repo
Add the following content:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
Step 2: update the package sources and install ceph-deploy and the time-synchronization packages
sudo yum update && sudo yum install ceph-deploy
sudo yum install ntp ntpdate ntp-doc
Step 3: disable the firewall and SELinux on all nodes (run on every node), plus a few other steps
sudo systemctl stop firewalld.service
sudo setenforce 0
sudo yum install yum-plugin-priorities
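Stopping firewalld and running setenforce 0 only last until the next reboot. A minimal sketch that applies step 3 to every node and makes it persistent; disabling the firewall and SELinux outright is an assumption that suits a test cluster, not a production recommendation:
for h in node0 node1 node2 node3; do
  ssh $h "sudo systemctl stop firewalld && sudo systemctl disable firewalld && \
          sudo setenforce 0 && \
          sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config"
done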
Summary: with the above steps done, the prerequisites are in place; now we can actually deploy Ceph.
As the ceph user created earlier, create a working directory on the admin-node:
mkdir my-cluster
cd my-cluster
First wipe any previous Ceph data. This step is not needed for a fresh install; when redeploying, run the commands below as well:
ceph-deploy purgedata {ceph-node}[{ceph-node}]
ceph-deploy forgetkeys
For example:
[ceph@node0 my-cluster]$ ceph-deploy purgedata admin-node node1 node2 node3
[ceph_deploy.conf][DEBUG ] foundconfiguration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploypurgedata admin-node node1 node2 node3
…
[node3][INFO ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[node3][INFO ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/
[ceph@node0 cluster]$ ceph-deploy forgetkeys
[ceph_deploy.conf][DEBUG ] foundconfiguration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deployforgetkeys
…
[ceph_deploy.cli][INFO ] default_release : False
[ceph@admin-node my-cluster]$
Command:
ceph-deploy purge {ceph-node} [{ceph-node}]
For example:
[root@ceph-deploy ceph]$ ceph-deploy purge admin-node node1 node2 node3
On the admin node, create the cluster with ceph-deploy. 'new' is followed by the hostnames of the monitor nodes; if there are several monitors, separate their hostnames with spaces. Multiple mon nodes back each other up.
[ceph@node0 cluster]$ sudo vim/etc/ssh/sshd_config
[ceph@node0 cluster]$ sudo visudo
[ceph@node0 cluster]$ ceph-deploy new node1 node2 node3
[ceph_deploy.conf][DEBUG ] foundconfiguration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy newnode1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username :None
[ceph_deploy.cli][INFO ] func : <function new at0x29f2b18>
[ceph_deploy.cli][INFO ] verbose :False
[ceph_deploy.cli][INFO ] overwrite_conf :False
[ceph_deploy.cli][INFO ] quiet :False
[ceph_deploy.cli][INFO ] cd_conf :<ceph_deploy.conf.cephdeploy.Conf instance at 0x2a15a70>
[ceph_deploy.cli][INFO ] cluster :ceph
[ceph_deploy.cli][INFO ] ssh_copykey :True
[ceph_deploy.cli][INFO ] mon : ['node1', 'node2','node3']
[ceph_deploy.cli][INFO ] public_network :None
[ceph_deploy.cli][INFO ] ceph_conf :None
[ceph_deploy.cli][INFO ] cluster_network :None
[ceph_deploy.cli][INFO ] default_release :False
[ceph_deploy.cli][INFO ] fsid :None
[ceph_deploy.new][DEBUG ] Creatingnew cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host:node0
[node1][INFO ] Running command: ssh -CT -o BatchMode=yesnode1
[node1][DEBUG ] connection detectedneed for sudo
[node1][DEBUG ] connected to host:node1
[node1][DEBUG ] detect platforminformation from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location ofan executable
[node1][INFO ] Running command: sudo /usr/sbin/ip linkshow
[node1][INFO ] Running command: sudo /usr/sbin/ip addrshow
[node1][DEBUG ] IP addresses found:['192.168.92.101', '192.168.1.102', '192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolvinghost node1
[ceph_deploy.new][DEBUG ] Monitornode1 at 192.168.92.101
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node2][DEBUG ] connected to host:node0
[node2][INFO ] Running command: ssh -CT -o BatchMode=yesnode2
[node2][DEBUG ] connection detectedneed for sudo
[node2][DEBUG ] connected to host:node2
[node2][DEBUG ] detect platforminformation from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location ofan executable
[node2][INFO ] Running command: sudo /usr/sbin/ip linkshow
[node2][INFO ] Running command: sudo /usr/sbin/ip addrshow
[node2][DEBUG ] IP addresses found:['192.168.1.103', '192.168.122.1', '192.168.92.102']
[ceph_deploy.new][DEBUG ] Resolvinghost node2
[ceph_deploy.new][DEBUG ] Monitornode2 at 192.168.92.102
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node3][DEBUG ] connected to host:node0
[node3][INFO ] Running command: ssh -CT -o BatchMode=yesnode3
[node3][DEBUG ] connection detectedneed for sudo
[node3][DEBUG ] connected to host:node3
[node3][DEBUG ] detect platforminformation from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location ofan executable
[node3][INFO ] Running command: sudo /usr/sbin/ip linkshow
[node3][INFO ] Running command: sudo /usr/sbin/ip addrshow
[node3][DEBUG ] IP addresses found:['192.168.122.1', '192.168.1.104', '192.168.92.103']
[ceph_deploy.new][DEBUG ] Resolvinghost node3
[ceph_deploy.new][DEBUG ] Monitornode3 at 192.168.92.103
[ceph_deploy.new][DEBUG ] Monitorinitial members are ['node1', 'node2', 'node3']
[ceph_deploy.new][DEBUG ] Monitoraddrs are ['192.168.92.101', '192.168.92.102', '192.168.92.103']
[ceph_deploy.new][DEBUG ] Creating arandom mon key...
[ceph_deploy.new][DEBUG ] Writingmonitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writinginitial config to ceph.conf...
[ceph@node0 cluster]$
Check the generated files:
[ceph@admin-node my-cluster]$ ls
ceph.conf ceph.log ceph.mon.keyring
Check the Ceph configuration file; the nodes passed to 'new' (node1, node2, node3) are now the monitor nodes:
[ceph@admin-node my-cluster]$ cat ceph.conf
[global]
fsid = 3c9892d0-398b-4808-aa20-4dc622356bd0
mon_initial_members = node1, node2, node3
mon_host = 192.168.92.111,192.168.92.112,192.168.92.113
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
[ceph@admin-node my-cluster]$
Change the default replica count to 2, i.e. set osd_pool_default_size = 2 in ceph.conf; if the line is missing, add it.
[ceph@admin-node my-cluster]$ grep"osd_pool_default_size" ./ceph.conf
osd_pool_default_size = 2
[ceph@admin-node my-cluster]$
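If the line is not there yet, one way to add it is simply to append it to the ceph.conf that ceph-deploy new generated (a sketch; at this point the file contains only the [global] section):
echo "osd_pool_default_size = 2" >> ./ceph.conf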
If a node's IP is not unique to the cluster network, i.e. there are other network interfaces besides the one the Ceph cluster uses,
for example:
eno16777736: 192.168.175.100
eno50332184: 192.168.92.110
virbr0: 192.168.122.1
then the public_network parameter must be added to the [global] section of ceph.conf:
public_network = {ip-address}/{netmask}
For example:
public_network = 192.168.92.0/24
From the admin-node, install Ceph on each node with ceph-deploy:
ceph-deploy install {ceph-node} [{ceph-node} ...]
For example:
[ceph@node0 cluster]$ ceph-deploy install node0 node1 node2 node3
[ceph_deploy.conf][DEBUG ] foundconfiguration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deployinstall node0 node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose :False
[ceph_deploy.cli][INFO ] testing :None
[ceph_deploy.cli][INFO ] cd_conf :<ceph_deploy.conf.cephdeploy.Conf instance at 0x2ae0560>
[ceph_deploy.cli][INFO ] cluster :ceph
[ceph_deploy.cli][INFO ] dev_commit :None
[ceph_deploy.cli][INFO ] install_mds :False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release :False
[ceph_deploy.cli][INFO ] username :None
[ceph_deploy.cli][INFO ] adjust_repos :True
[ceph_deploy.cli][INFO ] func : <function install at 0x2a53668>
[ceph_deploy.cli][INFO ] install_all :False
[ceph_deploy.cli][INFO ] repo :False
[ceph_deploy.cli][INFO ] host :['node0', 'node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] install_rgw :False
[ceph_deploy.cli][INFO ] install_tests :False
[ceph_deploy.cli][INFO ] repo_url :None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd :False
[ceph_deploy.cli][INFO ] version_kind :stable
[ceph_deploy.cli][INFO ] install_common :False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet :False
[ceph_deploy.cli][INFO ] dev :master
[ceph_deploy.cli][INFO ] local_mirror :None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon :False
[ceph_deploy.cli][INFO ] gpg_url :None
[ceph_deploy.install][DEBUG ]Installing stable version jewel on cluster ceph hosts node0 node1 node2 node3
[ceph_deploy.install][DEBUG ]Detecting platform for host node0 ...
[node0][DEBUG ] connection detectedneed for sudo
[node0][DEBUG ] connected to host:node0
[node0][DEBUG ] detect platforminformation from remote host
[node0][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[node0][INFO ] installing Ceph on node0
[node0][INFO ] Running command: sudo yum clean all
[node0][DEBUG ] Loaded plugins:fastestmirror, langpacks, priorities
[node0][DEBUG ] Cleaning repos: CephCeph-noarch base ceph-source epel extras updates
[node0][DEBUG ] Cleaning upeverything
[node0][DEBUG ] Cleaning up list offastest mirrors
[node0][INFO ] Running command: sudo yum -y installepel-release
[node0][DEBUG ] Loaded plugins:fastestmirror, langpacks, priorities
[node0][DEBUG ] Determining fastestmirrors
[node0][DEBUG ] * epel: mirror01.idc.hinet.net
[node0][DEBUG ] 25 packages excludeddue to repository priority protections
[node0][DEBUG ] Packageepel-release-7-7.noarch already installed and latest version
[node0][DEBUG ] Nothing to do
[node0][INFO ] Running command: sudo yum -y installyum-plugin-priorities
[node0][DEBUG ] Loaded plugins:fastestmirror, langpacks, priorities
[node0][DEBUG ] Loading mirrorspeeds from cached hostfile
[node0][DEBUG ] * epel: mirror01.idc.hinet.net
[node0][DEBUG ] 25 packages excludeddue to repository priority protections
[node0][DEBUG ] Packageyum-plugin-priorities-1.1.31-34.el7.noarch already installed and latest version
[node0][DEBUG ] Nothing to do
[node0][DEBUG ] Configure Yumpriorities to include obsoletes
[node0][WARNIN] check_obsoletes hasbeen enabled for Yum priorities plugin
[node0][INFO ] Running command: sudo rpm --importhttps://download.ceph.com/keys/release.asc
[node0][INFO ] Running command: sudo rpm -Uvh--replacepkgshttps://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node0][DEBUG ] Retrievinghttps://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node0][DEBUG ] Preparing... ########################################
[node0][DEBUG ] Updating /installing...
[node0][DEBUG ]ceph-release-1-1.el7 ########################################
[node0][WARNIN] ensuring that/etc/yum.repos.d/ceph.repo contains a high priority
[node0][WARNIN] altered ceph.repopriorities to contain: priority=1
[node0][INFO ] Running command: sudo yum -y install cephceph-radosgw
[node0][DEBUG ] Loaded plugins:fastestmirror, langpacks, priorities
[node0][DEBUG ] Loading mirrorspeeds from cached hostfile
[node0][DEBUG ] * epel: mirror01.idc.hinet.net
[node0][DEBUG ] 25 packages excludeddue to repository priority protections
[node0][DEBUG ] Package1:ceph-10.2.2-0.el7.x86_64 already installed and latest version
[node0][DEBUG ] Package1:ceph-radosgw-10.2.2-0.el7.x86_64 already installed and latest version
[node0][DEBUG ] Nothing to do
[node0][INFO ] Running command: sudo ceph --version
[node0][DEBUG ] ceph version 10.2.2(45107e21c568dd033c2f0a3107dec8f0b0e58374)
….
Problem log:
[ceph@node0 cluster]$ ceph-deployinstall node0 node1 node2 node3
[ceph_deploy.conf][DEBUG ] foundconfiguration file at: /home/ceph/.cephdeploy.conf
…
[node1][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
[ceph@node0 cluster]$
Fix:
yum remove ceph-release
Then run it again:
[ceph@node0 cluster]$ ceph-deploy install node0 node1 node2 node3
Another solution:
The cause is a timeout while installing Ceph on the failing node; install it there separately by running the following on that node:
sudo yum -y install ceph ceph-radosgw
Initialize the monitor nodes and gather the keyrings:
[ceph@node0 cluster]$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] foundconfiguration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy moncreate-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username :None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf :False
[ceph_deploy.cli][INFO ] subcommand :create-initial
[ceph_deploy.cli][INFO ] quiet :False
[ceph_deploy.cli][INFO ] cd_conf :<ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbe46804cb0>
[ceph_deploy.cli][INFO ] cluster :ceph
[ceph_deploy.cli][INFO ] func :<function mon at 0x7fbe467f6aa0>
[ceph_deploy.cli][INFO ] ceph_conf :None
[ceph_deploy.cli][INFO ] default_release :False
[ceph_deploy.cli][INFO ] keyrings :None
[ceph_deploy.mon][DEBUG ] Deployingmon, cluster ceph hosts node1 node2 node3
[ceph_deploy.mon][DEBUG ] detectingplatform for host node1 ...
[node1][DEBUG ] connection detectedneed for sudo
[node1][DEBUG ] connected to host:node1
[node1][DEBUG ] detect platforminformation from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location ofan executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core
[node1][DEBUG ] determining ifprovided host has same hostname in remote
[node1][DEBUG ] get remote shorthostname
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] get remote shorthostname
[node1][DEBUG ] remote hostname:node1
[node1][DEBUG ] write clusterconfiguration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] create the mon pathif it does not exist
[node1][DEBUG ] checking for donepath: /var/lib/ceph/mon/ceph-node1/done
[node1][DEBUG ] done path does notexist: /var/lib/ceph/mon/ceph-node1/done
[node1][INFO ] creating keyring file:/var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create the monitorkeyring file
[node1][INFO ] Running command: sudo ceph-mon --clusterceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring--setuser 1001 --setgroup 1001
[node1][DEBUG ] ceph-mon:mon.noname-a 192.168.92.101:6789/0 is local, renaming to mon.node1
[node1][DEBUG ] ceph-mon: set fsidto 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c
[node1][DEBUG ] ceph-mon: createdmonfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
[node1][INFO ] unlinking keyring file/var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create a done fileto avoid re-doing the mon deployment
[node1][DEBUG ] create the init pathif it does not exist
[node1][INFO ] Running command: sudo systemctl enableceph.target
[node1][INFO ] Running command: sudo systemctl enableceph-mon@node1
[node1][WARNIN] Created symlink from/etc/systemd/system/ceph-mon.target.wants/ceph-mon@node1.service to/usr/lib/systemd/system/ceph-mon@.service.
[node1][INFO ] Running command: sudo systemctl startceph-mon@node1
[node1][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[node1][DEBUG ]********************************************************************************
[node1][DEBUG ] status for monitor:mon.node1
[node1][DEBUG ] {
[node1][DEBUG ] "election_epoch": 0,
[node1][DEBUG ] "extra_probe_peers": [
[node1][DEBUG ] "192.168.92.102:6789/0",
[node1][DEBUG ] "192.168.92.103:6789/0"
[node1][DEBUG ] ],
[node1][DEBUG ] "monmap": {
[node1][DEBUG ] "created": "2016-06-2414:43:29.944474",
[node1][DEBUG ] "epoch": 0,
[node1][DEBUG ] "fsid":"4f8f6c46-9f67-4475-9cb5-52cafecb3e4c",
[node1][DEBUG ] "modified": "2016-06-2414:43:29.944474",
[node1][DEBUG ] "mons": [
[node1][DEBUG ] {
[node1][DEBUG ] "addr":"192.168.92.101:6789/0",
[node1][DEBUG ] "name": "node1",
[node1][DEBUG ] "rank": 0
[node1][DEBUG ] },
[node1][DEBUG ] {
[node1][DEBUG ] "addr":"0.0.0.0:0/1",
[node1][DEBUG ] "name": "node2",
[node1][DEBUG ] "rank": 1
[node1][DEBUG ] },
[node1][DEBUG ] {
[node1][DEBUG ] "addr":"0.0.0.0:0/2",
[node1][DEBUG ] "name": "node3",
[node1][DEBUG ] "rank": 2
[node1][DEBUG ] }
[node1][DEBUG ] ]
[node1][DEBUG ] },
[node1][DEBUG ] "name": "node1",
[node1][DEBUG ] "outside_quorum": [
[node1][DEBUG ] "node1"
[node1][DEBUG ] ],
[node1][DEBUG ] "quorum": [],
[node1][DEBUG ] "rank": 0,
[node1][DEBUG ] "state": "probing",
[node1][DEBUG ] "sync_provider": []
[node1][DEBUG ] }
[node1][DEBUG ]********************************************************************************
[node1][INFO ] monitor: mon.node1 is running
[node1][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][DEBUG ] detectingplatform for host node2 ...
[node2][DEBUG ] connection detectedneed for sudo
[node2][DEBUG ] connected to host:node2
[node2][DEBUG ] detect platforminformation from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location ofan executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core
[node2][DEBUG ] determining ifprovided host has same hostname in remote
[node2][DEBUG ] get remote shorthostname
[node2][DEBUG ] deploying mon tonode2
[node2][DEBUG ] get remote shorthostname
[node2][DEBUG ] remote hostname:node2
[node2][DEBUG ] write clusterconfiguration to /etc/ceph/{cluster}.conf
[node2][DEBUG ] create the mon pathif it does not exist
[node2][DEBUG ] checking for donepath: /var/lib/ceph/mon/ceph-node2/done
[node2][DEBUG ] done path does notexist: /var/lib/ceph/mon/ceph-node2/done
[node2][INFO ] creating keyring file:/var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create the monitorkeyring file
[node2][INFO ] Running command: sudo ceph-mon --clusterceph --mkfs -i node2 --keyring /var/lib/ceph/tmp/ceph-node2.mon.keyring--setuser 1001 --setgroup 1001
[node2][DEBUG ] ceph-mon:mon.noname-b 192.168.92.102:6789/0 is local, renaming to mon.node2
[node2][DEBUG ] ceph-mon: set fsidto 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c
[node2][DEBUG ] ceph-mon: createdmonfs at /var/lib/ceph/mon/ceph-node2 for mon.node2
[node2][INFO ] unlinking keyring file/var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create a done fileto avoid re-doing the mon deployment
[node2][DEBUG ] create the init pathif it does not exist
[node2][INFO ] Running command: sudo systemctl enableceph.target
[node2][INFO ] Running command: sudo systemctl enableceph-mon@node2
[node2][WARNIN] Created symlink from/etc/systemd/system/ceph-mon.target.wants/ceph-mon@node2.service to/usr/lib/systemd/system/ceph-mon@.service.
[node2][INFO ] Running command: sudo systemctl startceph-mon@node2
[node2][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[node2][DEBUG ]********************************************************************************
[node2][DEBUG ] status for monitor:mon.node2
[node2][DEBUG ] {
[node2][DEBUG ] "election_epoch": 1,
[node2][DEBUG ] "extra_probe_peers": [
[node2][DEBUG ] "192.168.92.101:6789/0",
[node2][DEBUG ] "192.168.92.103:6789/0"
[node2][DEBUG ] ],
[node2][DEBUG ] "monmap": {
[node2][DEBUG ] "created": "2016-06-2414:43:34.865908",
[node2][DEBUG ] "epoch": 0,
[node2][DEBUG ] "fsid":"4f8f6c46-9f67-4475-9cb5-52cafecb3e4c",
[node2][DEBUG ] "modified": "2016-06-2414:43:34.865908",
[node2][DEBUG ] "mons": [
[node2][DEBUG ] {
[node2][DEBUG ] "addr":"192.168.92.101:6789/0",
[node2][DEBUG ] "name": "node1",
[node2][DEBUG ] "rank": 0
[node2][DEBUG ] },
[node2][DEBUG ] {
[node2][DEBUG ] "addr":"192.168.92.102:6789/0",
[node2][DEBUG ] "name": "node2",
[node2][DEBUG ] "rank": 1
[node2][DEBUG ] },
[node2][DEBUG ] {
[node2][DEBUG ] "addr":"0.0.0.0:0/2",
[node2][DEBUG ] "name": "node3",
[node2][DEBUG ] "rank": 2
[node2][DEBUG ] }
[node2][DEBUG ] ]
[node2][DEBUG ] },
[node2][DEBUG ] "name": "node2",
[node2][DEBUG ] "outside_quorum": [],
[node2][DEBUG ] "quorum": [],
[node2][DEBUG ] "rank": 1,
[node2][DEBUG ] "state": "electing",
[node2][DEBUG ] "sync_provider": []
[node2][DEBUG ] }
[node2][DEBUG ]********************************************************************************
[node2][INFO ] monitor: mon.node2 is running
[node2][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[ceph_deploy.mon][DEBUG ] detectingplatform for host node3 ...
[node3][DEBUG ] connection detectedneed for sudo
[node3][DEBUG ] connected to host:node3
[node3][DEBUG ] detect platforminformation from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location ofan executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core
[node3][DEBUG ] determining ifprovided host has same hostname in remote
[node3][DEBUG ] get remote shorthostname
[node3][DEBUG ] deploying mon tonode3
[node3][DEBUG ] get remote shorthostname
[node3][DEBUG ] remote hostname:node3
[node3][DEBUG ] write clusterconfiguration to /etc/ceph/{cluster}.conf
[node3][DEBUG ] create the mon pathif it does not exist
[node3][DEBUG ] checking for donepath: /var/lib/ceph/mon/ceph-node3/done
[node3][DEBUG ] done path does notexist: /var/lib/ceph/mon/ceph-node3/done
[node3][INFO ] creating keyring file:/var/lib/ceph/tmp/ceph-node3.mon.keyring
[node3][DEBUG ] create the monitorkeyring file
[node3][INFO ] Running command: sudo ceph-mon --clusterceph --mkfs -i node3 --keyring /var/lib/ceph/tmp/ceph-node3.mon.keyring--setuser 1001 --setgroup 1001
[node3][DEBUG ] ceph-mon:mon.noname-c 192.168.92.103:6789/0 is local, renaming to mon.node3
[node3][DEBUG ] ceph-mon: set fsidto 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c
[node3][DEBUG ] ceph-mon: createdmonfs at /var/lib/ceph/mon/ceph-node3 for mon.node3
[node3][INFO ] unlinking keyring file/var/lib/ceph/tmp/ceph-node3.mon.keyring
[node3][DEBUG ] create a done fileto avoid re-doing the mon deployment
[node3][DEBUG ] create the init pathif it does not exist
[node3][INFO ] Running command: sudo systemctl enableceph.target
[node3][INFO ] Running command: sudo systemctl enableceph-mon@node3
[node3][WARNIN] Created symlink from/etc/systemd/system/ceph-mon.target.wants/ceph-mon@node3.service to/usr/lib/systemd/system/ceph-mon@.service.
[node3][INFO ] Running command: sudo systemctl startceph-mon@node3
[node3][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[node3][DEBUG ]********************************************************************************
[node3][DEBUG ] status for monitor:mon.node3
[node3][DEBUG ] {
[node3][DEBUG ] "election_epoch": 1,
[node3][DEBUG ] "extra_probe_peers": [
[node3][DEBUG ] "192.168.92.101:6789/0",
[node3][DEBUG ] "192.168.92.102:6789/0"
[node3][DEBUG ] ],
[node3][DEBUG ] "monmap": {
[node3][DEBUG ] "created": "2016-06-2414:43:39.800046",
[node3][DEBUG ] "epoch": 0,
[node3][DEBUG ] "fsid":"4f8f6c46-9f67-4475-9cb5-52cafecb3e4c",
[node3][DEBUG ] "modified": "2016-06-2414:43:39.800046",
[node3][DEBUG ] "mons": [
[node3][DEBUG ] {
[node3][DEBUG ] "addr":"192.168.92.101:6789/0",
[node3][DEBUG ] "name": "node1",
[node3][DEBUG ] "rank": 0
[node3][DEBUG ] },
[node3][DEBUG ] {
[node3][DEBUG ] "addr":"192.168.92.102:6789/0",
[node3][DEBUG ] "name": "node2",
[node3][DEBUG ] "rank": 1
[node3][DEBUG ] },
[node3][DEBUG ] {
[node3][DEBUG ] "addr":"192.168.92.103:6789/0",
[node3][DEBUG ] "name": "node3",
[node3][DEBUG ] "rank": 2
[node3][DEBUG ] }
[node3][DEBUG ] ]
[node3][DEBUG ] },
[node3][DEBUG ] "name": "node3",
[node3][DEBUG ] "outside_quorum": [],
[node3][DEBUG ] "quorum": [],
[node3][DEBUG ] "rank": 2,
[node3][DEBUG ] "state": "electing",
[node3][DEBUG ] "sync_provider": []
[node3][DEBUG ] }
[node3][DEBUG ]********************************************************************************
[node3][INFO ] monitor: mon.node3 is running
[node3][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.node1
[node1][DEBUG ] connection detectedneed for sudo
[node1][DEBUG ] connected to host:node1
[node1][DEBUG ] detect platforminformation from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location ofan executable
[node1][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node1monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5seconds before retrying
[node1][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node1monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10seconds before retrying
[node1][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][INFO ] mon.node1 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.node2
[node2][DEBUG ] connection detectedneed for sudo
[node2][DEBUG ] connected to host:node2
[node2][DEBUG ] detect platforminformation from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location ofan executable
[node2][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[ceph_deploy.mon][INFO ] mon.node2 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.node3
[node3][DEBUG ] connection detectedneed for sudo
[node3][DEBUG ] connected to host:node3
[node3][DEBUG ] detect platforminformation from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location ofan executable
[node3][INFO ] Running command: sudo ceph --cluster=ceph--admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[ceph_deploy.mon][INFO ] mon.node3 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and haveformed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory/tmp/tmp5_jcSr
[node1][DEBUG ] connection detectedneed for sudo
[node1][DEBUG ] connected to host:node1
[node1][DEBUG ] detect platforminformation from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] get remote shorthostname
[node1][DEBUG ] fetch remote file
[node1][INFO ] Running command: sudo /usr/bin/ceph--connect-timeout=25 --cluster=ceph--admin-daemon=/var/run/ceph/ceph-mon.node1.asok mon_status
[node1][INFO ] Running command: sudo /usr/bin/ceph--connect-timeout=25 --cluster=ceph --name mon.--keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-create client.adminosd allow * mds allow * mon allow *
[node1][INFO ] Running command: sudo /usr/bin/ceph--connect-timeout=25 --cluster=ceph --name mon.--keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-createclient.bootstrap-mds mon allow profile bootstrap-mds
[node1][INFO ] Running command: sudo /usr/bin/ceph--connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node1/keyringauth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[node1][INFO ] Running command: sudo /usr/bin/ceph--connect-timeout=25 --cluster=ceph --name mon.--keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-createclient.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmp5_jcSr
[ceph@node0 cluster]$
Check the generated files:
[ceph@node0 cluster]$ ls
ceph.bootstrap-mds.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring
ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log
[ceph@node0 cluster]$
Command:
ceph-deploy osd prepare {ceph-node}:/path/to/directory
Example, as shown in 1.2.3:
[ceph@node0 cluster]$ ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd0
[ceph_deploy.conf][DEBUG ] foundconfiguration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy osdprepare node2:/var/local/osd0 node3:/var/local/osd0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username :None
[ceph_deploy.cli][INFO ] disk : [('node2', '/var/local/osd0',None), ('node3', '/var/local/osd0', None)]
[ceph_deploy.cli][INFO ] dmcrypt :False
[ceph_deploy.cli][INFO ] verbose :False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf :False
[ceph_deploy.cli][INFO ] subcommand :prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir :/etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet :False
[ceph_deploy.cli][INFO ] cd_conf :<ceph_deploy.conf.cephdeploy.Conf instance at 0x12dddd0>
[ceph_deploy.cli][INFO ] cluster :ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func :<function osd at 0x12d2398>
[ceph_deploy.cli][INFO ] ceph_conf :None
[ceph_deploy.cli][INFO ] default_release :False
[ceph_deploy.cli][INFO ] zap_disk :False
[ceph_deploy.osd][DEBUG ] Preparingcluster ceph disks node2:/var/local/osd0: node3:/var/local/osd0:
[node2][DEBUG ] connection detectedneed for sudo
[node2][DEBUG ] connected to host:node2
[node2][DEBUG ] detect platforminformation from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location ofan executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deployingosd to node2
[node2][DEBUG ] write clusterconfiguration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparinghost node2 disk /var/local/osd0 journal None activate False
[node2][DEBUG ] find the location ofan executable
[node2][INFO ] Running command: sudo /usr/sbin/ceph-disk-v prepare --cluster ceph --fs-type xfs -- /var/local/osd0
[node2][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node2][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node2][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node2][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node2][WARNIN] populate_data_path:Preparing osd data dir /var/local/osd0
[node2][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/local/osd0/ceph_fsid.3504.tmp
[node2][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/local/osd0/ceph_fsid.3504.tmp
[node2][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/local/osd0/fsid.3504.tmp
[node2][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/local/osd0/fsid.3504.tmp
[node2][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/local/osd0/magic.3504.tmp
[node2][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/local/osd0/magic.3504.tmp
[node2][INFO ] checking OSD status...
[node2][DEBUG ] find the location ofan executable
[node2][INFO ] Running command: sudo /bin/ceph--cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node2is now ready for osd use.
[node3][DEBUG ] connection detectedneed for sudo
[node3][DEBUG ] connected to host:node3
[node3][DEBUG ] detect platforminformation from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location ofan executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deployingosd to node3
[node3][DEBUG ] write clusterconfiguration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparinghost node3 disk /var/local/osd0 journal None activate False
[node3][DEBUG ] find the location ofan executable
[node3][INFO ] Running command: sudo /usr/sbin/ceph-disk-v prepare --cluster ceph --fs-type xfs -- /var/local/osd0
[node3][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node3][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node3][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node3][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node3][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node3][WARNIN] populate_data_path:Preparing osd data dir /var/local/osd0
[node3][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/local/osd0/ceph_fsid.3553.tmp
[node3][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/local/osd0/ceph_fsid.3553.tmp
[node3][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/local/osd0/fsid.3553.tmp
[node3][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/local/osd0/fsid.3553.tmp
[node3][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/local/osd0/magic.3553.tmp
[node3][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/local/osd0/magic.3553.tmp
[node3][INFO ] checking OSD status...
[node3][DEBUG ] find the location ofan executable
[node3][INFO ] Running command: sudo /bin/ceph--cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node3is now ready for osd use.
[ceph@node0 cluster]$
Command:
ceph-deploy osd activate {ceph-node}:/path/to/directory
Example:
[ceph@node0 cluster]$ ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd0
Check the status:
[ceph@node1 ~]$ ceph -s
cluster 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c
health HEALTH_WARN
64 pgs degraded
64 pgs stuck unclean
64 pgs undersized
mon.node2 low disk space
mon.node3 low disk space
monmap e1: 3 mons at{node1=192.168.92.101:6789/0,node2=192.168.92.102:6789/0,node3=192.168.92.103:6789/0}
election epoch 18, quorum 0,1,2node1,node2,node3
osdmap e12: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v173: 64 pgs, 1 pools, 0 bytes data, 0 objects
20254 MB used, 22120 MB / 42374 MBavail
64 active+undersized+degraded
[ceph@node1 ~]$
Error log:
[node2][WARNIN]ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs','--mkkey', '-i', '0', '--monmap', '/var/local/osd0/activate.monmap','--osd-data', '/var/local/osd0', '--osd-journal', '/var/local/osd0/journal','--osd-uuid', '76f06d28-7e0d-4894-8625-4f55d43962bf', '--keyring', '/var/local/osd0/keyring','--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2016-06-24 15:31:39.9318257fd1150c1800 -1 filestore(/var/local/osd0) mkfs: write_version_stamp() failed:(13) Permission denied
[node2][WARNIN] 2016-06-2415:31:39.931861 7fd1150c1800 -1 OSD::mkfs: ObjectStore::mkfs failed with error-13
[node2][WARNIN]2016-06-24 15:31:39.932024 7fd1150c1800 -1 ** ERROR: error creating empty object store in /var/local/osd0: (13)Permission denied
[node2][WARNIN]
[node2][ERROR] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate--mark-init systemd --mount /var/local/osd0
Solution:
The fix is simple: change the owner and group of every disk and directory the Ceph cluster uses to ceph:
chown ceph:ceph /var/local/osd0
[ceph@node0 cluster]$ ssh node2 "sudo chown ceph:ceph /var/local/osd0"
[ceph@node0 cluster]$ ssh node3 "sudo chown ceph:ceph /var/local/osd0"
Follow-up:
Even after this fix, the disk permissions are reset when the system reboots and the osd service fails to start. This permission issue is a real trap, so I wrote a for loop and added it to rc.local to fix the disk permissions automatically at every boot:
for i in a b c d e f g h i j k l; do chown ceph.ceph /dev/sd"$i"*; done
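An alternative to the rc.local loop (a sketch, not from the original text) is a udev rule keyed on the Ceph GPT partition type GUIDs, the same GUIDs that appear in the ceph-disk log below, so the OSD data and journal partitions are chowned to ceph automatically at boot:
# /etc/udev/rules.d/89-ceph-osd-chown.rules (hypothetical file name)
SUBSYSTEM=="block", ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", OWNER="ceph", GROUP="ceph", MODE="0660"
SUBSYSTEM=="block", ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", OWNER="ceph", GROUP="ceph", MODE="0660"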
Preparing an OSD on a raw disk (/dev/sdb on node1):
[ceph@node0 cluster]$ ceph-deploy osd prepare node1:/dev/sdb
[ceph_deploy.conf][DEBUG ] foundconfiguration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy osdprepare node1:/dev/sdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username :None
[ceph_deploy.cli][INFO ] disk :[('node1', '/dev/sdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt :False
[ceph_deploy.cli][INFO ] verbose :False
[ceph_deploy.cli][INFO ] bluestore :None
[ceph_deploy.cli][INFO ] overwrite_conf :False
[ceph_deploy.cli][INFO ] subcommand :prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir :/etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :<ceph_deploy.conf.cephdeploy.Conf instance at 0x1acfdd0>
[ceph_deploy.cli][INFO ] cluster :ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func :<function osd at 0x1ac4398>
[ceph_deploy.cli][INFO ] ceph_conf :None
[ceph_deploy.cli][INFO ] default_release :False
[ceph_deploy.cli][INFO ] zap_disk :False
[ceph_deploy.osd][DEBUG ] Preparingcluster ceph disks node1:/dev/sdb:
[node1][DEBUG ] connection detectedneed for sudo
[node1][DEBUG ] connected to host:node1
[node1][DEBUG ] detect platforminformation from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location ofan executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deployingosd to node1
[node1][DEBUG ] write clusterconfiguration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparinghost node1 disk /dev/sdb journal None activate False
[node1][DEBUG ] find the location ofan executable
[node1][INFO ] Running command: sudo /usr/sbin/ceph-disk-v prepare --cluster ceph --fs-type xfs -- /dev/sdb
[node1][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node1][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node1][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] set_type: Willcolocate journal with data on /dev/sdb
[node1][WARNIN] command: Runningcommand: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] command: Runningcommand: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookuposd_mkfs_options_xfs
[node1][WARNIN] command: Runningcommand: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookuposd_fs_mkfs_options_xfs
[node1][WARNIN] command: Runningcommand: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookuposd_mount_options_xfs
[node1][WARNIN] command: Runningcommand: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookuposd_fs_mount_options_xfs
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] ptype_tobe_for_name:name = journal
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] create_partition:Creating journal partition num 2 size 5120 on /dev/sdb
[node1][WARNIN] command_check_call:Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal--partition-guid=2:ddc560cc-f7b8-40fb-8f19-006ae2ef03a2 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106--mbrtogpt -- /dev/sdb
[node1][DEBUG ] Creating new GPTentries.
[node1][DEBUG ] The operation hascompleted successfully.
[node1][WARNIN] update_partition:Calling partprobe on created device /dev/sdb
[node1][WARNIN] command_check_call:Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Runningcommand: /sbin/partprobe /dev/sdb
[node1][WARNIN] command_check_call:Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[node1][WARNIN] prepare_device:Journal is GPT partition/dev/disk/by-partuuid/ddc560cc-f7b8-40fb-8f19-006ae2ef03a2
[node1][WARNIN] prepare_device:Journal is GPT partition/dev/disk/by-partuuid/ddc560cc-f7b8-40fb-8f19-006ae2ef03a2
[node1][WARNIN] get_dm_uuid: get_dm_uuid/dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] set_data_partition:Creating osd partition on /dev/sdb
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] ptype_tobe_for_name:name = data
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] create_partition:Creating data partition num 1 size 0 on /dev/sdb
[node1][WARNIN] command_check_call:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data--partition-guid=1:805bfdb4-97b8-48e7-a42e-a734a47aa533--typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[node1][DEBUG ] The operation hascompleted successfully.
[node1][WARNIN] update_partition:Calling partprobe on created device /dev/sdb
[node1][WARNIN] command_check_call:Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Runningcommand: /sbin/partprobe /dev/sdb
[node1][WARNIN] command_check_call:Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[node1][WARNIN]populate_data_path_device: Creating xfs fs on /dev/sdb1
[node1][WARNIN] command_check_call:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[node1][DEBUG ]meta-data=/dev/sdb1 isize=2048 agcount=4,agsize=982975 blks
[node1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node1][DEBUG ] = crc=0 finobt=0
[node1][DEBUG ] data = bsize=4096 blocks=3931899, imaxpct=25
[node1][DEBUG ] = sunit=0 swidth=0 blks
[node1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0
[node1][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node1][WARNIN] mount: Mounting/dev/sdb1 on /var/lib/ceph/tmp/mnt.9sdF7v with options noatime,inode64
[node1][WARNIN] command_check_call:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1/var/lib/ceph/tmp/mnt.9sdF7v
[node1][WARNIN] command: Running command:/sbin/restorecon /var/lib/ceph/tmp/mnt.9sdF7v
[node1][WARNIN] populate_data_path:Preparing osd data dir /var/lib/ceph/tmp/mnt.9sdF7v
[node1][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.9sdF7v/ceph_fsid.5102.tmp
[node1][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph/var/lib/ceph/tmp/mnt.9sdF7v/ceph_fsid.5102.tmp
[node1][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.9sdF7v/fsid.5102.tmp
[node1][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.9sdF7v/fsid.5102.tmp
[node1][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.9sdF7v/magic.5102.tmp
[node1][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.9sdF7v/magic.5102.tmp
[node1][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.9sdF7v/journal_uuid.5102.tmp
[node1][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.9sdF7v/journal_uuid.5102.tmp
[node1][WARNIN] adjust_symlink:Creating symlink /var/lib/ceph/tmp/mnt.9sdF7v/journal ->/dev/disk/by-partuuid/ddc560cc-f7b8-40fb-8f19-006ae2ef03a2
[node1][WARNIN] command: Runningcommand: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.9sdF7v
[node1][WARNIN] command: Runningcommand: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.9sdF7v
[node1][WARNIN] unmount: Unmounting/var/lib/ceph/tmp/mnt.9sdF7v
[node1][WARNIN] command_check_call:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.9sdF7v
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] command_check_call:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d-- /dev/sdb
[node1][DEBUG ] Warning: The kernelis still using the old partition table.
[node1][DEBUG ] The new table willbe used at the next reboot.
[node1][DEBUG ] The operation hascompleted successfully.
[node1][WARNIN] update_partition:Calling partprobe on prepared device /dev/sdb
[node1][WARNIN] command_check_call:Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Runningcommand: /sbin/partprobe /dev/sdb
[node1][WARNIN] command_check_call:Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command_check_call:Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[node1][INFO ] checking OSD status...
[node1][DEBUG ] find the location ofan executable
[node1][INFO ] Running command: sudo /bin/ceph--cluster=ceph osd stat --format=json
[node1][WARNIN] there is 1 OSD down
[node1][WARNIN] there is 1 OSD out
[ceph_deploy.osd][DEBUG ] Host node1is now ready for osd use.
[ceph@node0 cluster]$
Activating the disk-based OSD then fails:
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[node1][DEBUG ] find the location ofan executable
[node1][INFO ] Running command: sudo /usr/sbin/ceph-disk-v activate --mark-init systemd --mount /dev/sdb
[node1][WARNIN] main_activate: path= /dev/sdb
[node1][WARNIN] get_dm_uuid:get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] command: Runningcommand: /sbin/blkid -p -s TYPE -o value -- /dev/sdb
[node1][WARNIN] Traceback (mostrecent call last):
[node1][WARNIN] File "/usr/sbin/ceph-disk", line9, in <module>
[node1][WARNIN] load_entry_point('ceph-disk==1.0.0','console_scripts', 'ceph-disk')()
[node1][WARNIN] File"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4994, inrun
[node1][WARNIN] main(sys.argv[1:])
[node1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py",line 4945, in main
[node1][WARNIN] args.func(args)
[node1][WARNIN] File"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3299, inmain_activate
[node1][WARNIN] reactivate=args.reactivate,
[node1][WARNIN] File"/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3009, inmount_activate
[node1][WARNIN] e,
[node1][WARNIN] ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/sdb: Line is truncated:
[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb
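A likely cause here (an assumption, not stated in the log) is that activate was pointed at the whole device /dev/sdb instead of the data partition that prepare created. Activating the partition itself usually succeeds:
ceph-deploy osd activate node1:/dev/sdb1
# or, directly on node1:
sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1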
Removing a dead OSD (osd.5 in this example) from the cluster:
[root@ceph-osd-1 ceph-cluster]# ceph auth del osd.5
updated
[root@ceph-osd-1 ceph-cluster]# ceph osd rm 5
removed osd.5
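For completeness, the usual sequence for removing a dead OSD also takes it out of data placement and the CRUSH map first (a sketch; osd.5 is just the ID used in the example above):
ceph osd out 5                 # stop placing data on the OSD
# stop the daemon on its host, e.g. sudo systemctl stop ceph-osd@5
ceph osd crush remove osd.5    # remove it from the CRUSH map
ceph auth del osd.5            # delete its authentication key
ceph osd rm 5                  # remove it from the OSD map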