IP address    | Role
--------------|--------------------
58.220.31.60  | DeployNode, Client
58.220.31.61  | MdsNode, MonNode
58.220.31.63  | osdNode2
58.220.31.64  | osdNode3
d. SSH setup: make sure deployNode can reach every other node over SSH.
My earlier blog post has a detailed walkthrough of passwordless SSH setup.
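For reference, a minimal passwordless-SSH setup might look like the following; this is a sketch assuming the ceph user and the hostnames used later in this post:

# On deployNode: generate a key pair (accept the defaults, empty passphrase)
ssh-keygen -t rsa
# Push the public key to every other node
ssh-copy-id ceph@mdsnode
ssh-copy-id ceph@osdnode2
ssh-copy-id ceph@osdnode3
# Verify: this should print the hostname without prompting for a password
ssh ceph@mdsnode hostname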
Under /etc/yum.repos.d/, create a ceph.repo file with the following content:
[ceph]
name=Ceph packages for $basearch
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-hammer/el6/$basearch
priority=1
gpgcheck=1
type=rpm-md

[ceph-source]
name=Ceph source packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-hammer/el6/SRPMS
priority=1
gpgcheck=1
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-hammer/el6/noarch
priority=1
gpgcheck=1
type=rpm-md
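With the repo file in place, the packages can be installed on each node; a minimal sketch, assuming CentOS 6 with yum (matching the el6 URLs above):

sudo yum makecache
sudo yum install -y ceph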
1. Configure /etc/ceph/ceph.conf
[global]
fsid = 8587ec10-fe1a-41f5-9795-9d38ef20b493
mon_initial_members = mdsnode
mon_host = 58.220.31.61
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_journal_size = 10000
osd_pool_default_pg_num = 366
osd_pool_default_pgp_num = 366
public_network = 58.220.31.0/24
2. Generate the monitor keyring:
sudo ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
3. Generate the administrator keyring; create the client.admin user and add it to the keyring:
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
4. Import the client.admin key into the monitor keyring:
sudo ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
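To confirm that both keys are now present in the monitor keyring, list its contents:

sudo ceph-authtool -l /etc/ceph/ceph.mon.keyring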
5. Generate a monitor map from the planned hostname, its IP address, and the FSID, and save it as /etc/ceph/monmap:
monmaptool --create --generate -c /etc/ceph/ceph.conf /etc/ceph/monmap
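You can sanity-check the generated map before moving on; monmaptool --print dumps its epoch, fsid, and monitor addresses:

monmaptool --print /etc/ceph/monmap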
6. Assemble the monitor daemon's initial data from the monitor map and the keyring:
sudo ceph-mon --mkfs -i mdsnode --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
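Note that the monitor's data directory must exist before the --mkfs step; per the upstream manual-deployment guide (assuming the default cluster name ceph and monitor id mdsnode), the surrounding commands look like this:

# Create the data directory first (before running ceph-mon --mkfs above)
sudo mkdir -p /var/lib/ceph/mon/ceph-mdsnode
# After --mkfs succeeds, mark the monitor as provisioned for the sysvinit scripts
sudo touch /var/lib/ceph/mon/ceph-mdsnode/done
sudo touch /var/lib/ceph/mon/ceph-mdsnode/sysvinit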
7. Set read/write permissions on the keyring files under /etc/ceph/; otherwise starting the monitor fails with errors like these:
[ceph@mdsnode ceph]$ sudo /etc/init.d/ceph start mon.mdsnode
=== mon.mdsnode ===
Starting Ceph mon.mdsnode on mdsnode...already running
[ceph@mdsnode ceph]$ ceph -s
2015-08-31 11:32:17.378858 7f543b014700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2015-08-31 11:32:17.378864 7f543b014700  0 librados: client.admin initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound
Running sudo chmod 777 /etc/ceph/* fixes it. Then start the monitor:
sudo /etc/init.d/ceph start mon.mdsnode
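Once the monitor is up, the cluster should answer status queries, for example:

ceph -s          # overall cluster status
ceph mon stat    # monitor quorum summary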
1. Directory preparation
On the osdnode2 node, prepare the /var/local/osd2 directory: sudo mkdir /var/local/osd2
On the osdnode3 node, prepare the /var/local/osd3 directory: sudo mkdir /var/local/osd3
2. Keyring file preparation
Copy the configuration and keyring files from /etc/ceph on the monitor node into the corresponding directory on each OSD node.
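A sketch of that copy using scp, assuming passwordless SSH and that the ceph user can write to /etc/ceph on the OSD nodes (hostnames as in the table at the top):

scp /etc/ceph/ceph.conf /etc/ceph/*.keyring ceph@osdnode2:/etc/ceph/
scp /etc/ceph/ceph.conf /etc/ceph/*.keyring ceph@osdnode3:/etc/ceph/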
3. Activate the OSD daemons
ceph-deploy osd prepare osdnode2:/var/local/osd2 osdnode3:/var/local/osd3
ceph-deploy osd activate osdnode2:/var/local/osd2 osdnode3:/var/local/osd3
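After activation, the new OSDs should appear in the CRUSH tree and be reported up and in:

ceph osd tree    # placement and up/down state of each OSD
ceph osd stat    # should report something like "2 osds: 2 up, 2 in"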
Note that no disk partitions were used here; the OSDs sit directly on file-system directories.
The first attempt to mount CephFS with ceph-fuse timed out:

[ceph@deploynode mnt]$ sudo ceph-fuse -m 58.220.31.61:6789 /mnt/mycephfs/
ceph-fuse[30250]: starting ceph client
2015-09-01 14:33:50.695812 7f59778ce760 -1 init, newargv = 0x37fe9e0 newargc=11
ceph-fuse[30250]: ceph mount failed with (110) Connection timed out
ceph-fuse[30248]: mount failed: (110) Connection timed out
A later attempt succeeded, and the CephFS mount shows up in df -h:

[ceph@deploynode mycluster]$ sudo ceph-fuse -m 58.220.31.61:6789 /mnt/mycephfs
ceph-fuse[30854]: starting ceph client
2015-09-01 15:29:03.046430 7fc53c465760 -1 init, newargv = 0x33ada00 newargc=11
ceph-fuse[30854]: starting fuse
[ceph@deploynode mycluster]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       250G   15G  223G   7% /
tmpfs            32G   12K   32G   1% /dev/shm
/dev/sda1       9.8G   62M  9.2G   1% /boot
ceph-fuse       499G   72G  427G  15% /mnt/mycephfs
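A simple streaming write exercises the mount and produces the pgmap write traffic shown below (the file name here is arbitrary):

sudo dd if=/dev/zero of=/mnt/mycephfs/testfile bs=4M count=1024
# In another terminal, watch cluster activity:
ceph -w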
2015-09-01 17:19:02.268177 mon.0 [INF] pgmap v3744: 192 pgs: 192 active+clean; 34189 MB data, 113 GB used, 359 GB / 498 GB avail; 15046 kB/s wr, 117 op/s
2015-09-01 17:19:03.313847 mon.0 [INF] pgmap v3745: 192 pgs: 192 active+clean; 34213 MB data, 114 GB used, 359 GB / 498 GB avail; 12841 kB/s wr, 152 op/s
2015-09-01 17:19:07.269050 mon.0 [INF] pgmap v3746: 192 pgs: 192 active+clean; 34249 MB data, 114 GB used, 359 GB / 498 GB avail; 12375 kB/s wr, 91 op/s
1. Ceph documentation in Chinese: http://mirrors.myccdn.info/ceph/doc/docs_zh/output/html/architecture/
2. A one-shot script to remove the Ceph-related configuration and data directories:
service ceph -a stop
dirs=(/var/lib/ceph/bootstrap-mds/* /var/lib/ceph/bootstrap-osd/* /var/lib/ceph/mds/* \
      /var/lib/ceph/mon/* /var/lib/ceph/tmp/* /var/lib/ceph/osd/* \
      /var/run/ceph/* /var/log/ceph/* /var/lib/ceph/*)
for d in ${dirs[@]}; do
    sudo rm -rf $d
    echo $d
done
3. During testing, when there are too many small files, rm -rf * sometimes cannot delete them; use the following command instead, where /tmp/empty/ is an empty directory created beforehand:
sudo rsync --delete-before -d /tmp/empty/ /mnt/mycephfs/
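This works because rsync makes the target identical to the (empty) source and unlinks entries itself in batches, so it avoids the oversized argument list that typically makes the shell-expanded rm -rf * fail. Putting the two steps together:

mkdir -p /tmp/empty    # the pre-created empty directory
sudo rsync --delete-before -d /tmp/empty/ /mnt/mycephfs/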