Ceph deployment

1. Deployment preparation:

Prepare 5 machines (running CentOS 7.6). You can also get by with as few as 3 machines, letting the deployment node and the client share hosts with the Ceph nodes:
    1 deployment node (one disk, runs ceph-deploy)
    3 Ceph nodes (two disks each: the first is the system disk and runs mon, the second is the OSD data disk)
    1 client (uses the file system, block storage, and object storage provided by Ceph)
 
(1) Configure static name resolution on all Ceph cluster nodes (including the client), i.e. add the following to /etc/hosts:
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.253.135 controller
192.168.253.194 compute
192.168.253.15  storage
192.168.253.10  dlp
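These host entries can be written once on the deployment node and then copied out. A minimal sketch, assuming root SSH access to the other three hosts is still available at this point:

  # Copy the completed /etc/hosts from the deployment node (dlp) to the other nodes.
  for host in controller compute storage; do
      scp /etc/hosts root@${host}:/etc/hosts
  done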

 

 
(2) On all cluster nodes (including the client), create the cent user and set its password by running the following commands:
 
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
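The same three commands have to be repeated on every node (including the client); one way is a small loop driven from any host that can still reach the others as root. A sketch, with the password "123" taken from the command above:

  for host in dlp controller compute storage; do
      ssh root@${host} '
          useradd cent && echo "123" | passwd --stdin cent
          echo -e "Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
          chmod 440 /etc/sudoers.d/ceph
      '
  done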

 

 
(3) On the deployment node, switch to the cent user and set up passwordless SSH login to every node, including the client:
 
  su - cent 
  ssh-keygen
  ssh-copy-id dlp
  ssh-copy-id controller
  ssh-copy-id compute
  ssh-copy-id storage
 
 
 

(4) On the deployment node, as the cent user, create the following file in cent's home directory: vi ~/.ssh/config

 Give it the following content:

 

Host dlp
    Hostname dlp
    User cent
Host controller
    Hostname controller
    User cent
Host compute
    Hostname compute
    User cent
Host storage
    Hostname storage
    User cent

 Then restrict its permissions:

   chmod 600 ~/.ssh/config
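With the keys, the config file, and its permissions in place, every node should now be reachable without a password; a quick check from the deployment node:

  # Each line should print the remote hostname without prompting for a password.
  for host in dlp controller compute storage; do
      ssh ${host} hostname
  done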

2. Configure a domestic Ceph repository on all nodes:

(1) On all nodes, download the Aliyun mirror repository, and delete rdo-release-yunwei.repo or move it to another directory:

  wget https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
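The mirror is normally consumed through a yum repo file; a sketch of what such a file might look like is shown below. The filename and the gpgcheck=0 setting are illustrative; only the baseurl is the mirror above:

  # /etc/yum.repos.d/ceph.repo (example content)
  [ceph-jewel]
  name=Aliyun Ceph Jewel mirror
  baseurl=https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
  enabled=1
  gpgcheck=0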

 (2) Then run yum clean all and yum makecache to rebuild the metadata cache, and download the package bundle:
  
 
  wget  http://download2.yunwei.edu/shell/ceph-j.tar.gz
 
 
(3) Copy the downloaded RPMs to all nodes and install them. Note that ceph-deploy-xxxxx.noarch.rpm is only needed on the deployment node; the other nodes do not need it, while the deployment node also needs the remaining RPM packages.
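A sketch of pushing the bundle to the other nodes and unpacking it there; the target directory /root/ceph-rpms is hypothetical:

  for host in controller compute storage; do
      scp ceph-j.tar.gz root@${host}:/root/
      ssh root@${host} 'mkdir -p /root/ceph-rpms && tar xzf /root/ceph-j.tar.gz -C /root/ceph-rpms'
  done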
 
(4) On the deployment node, install ceph-deploy: as root, change into the directory with the downloaded RPM packages and run:
 yum -y  localinstall ./*

 

Create the Ceph working directory:
  mkdir ceph  && cd ceph
 
(5) On the deployment node (as cent, inside the ceph directory), create the new cluster configuration:
 
  ceph-deploy new controller compute storage
  vim ceph.conf
  Add: osd_pool_default_size = 2
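After ceph-deploy new and this edit, the ceph.conf in the working directory should look roughly like the sketch below; the fsid is the one this particular cluster received and the mon addresses come from the hosts file above, so treat the values as examples rather than something to copy:

  [global]
  fsid = 8e03f0d7-06cb-49c6-b0fa-b9764e85e61a
  mon_initial_members = controller, compute, storage
  mon_host = 192.168.253.135,192.168.253.194,192.168.253.15
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx
  osd_pool_default_size = 2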
 
(6) On the deployment node (inside the ceph directory), install the Ceph software on all nodes:
 
  ceph-deploy install dlp controller compute  storage
 
 
   (7) Initialize the cluster (inside the ceph directory):
 
   ceph-deploy mon create-initial
 
Possible error 1:
While deploying the monitors, ceph-deploy reports "monitor is not yet in quorum".

This happens because the firewall has not been disabled; go to each node and turn off the firewall.
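On CentOS 7 that means stopping and disabling firewalld; a sketch of doing it on all three Ceph nodes from the deployment node:

  for host in controller compute storage; do
      ssh ${host} 'sudo systemctl stop firewalld && sudo systemctl disable firewalld'
  done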

Then run:

  ceph-deploy --overwrite-conf mon create-initial

 
Possible error 2:

[ceph_deploy.mon][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph_deploy][ERROR ] GenericError: Failed to create 3 monitors

Cause: the ceph.conf in the working directory was modified, but the updated file was never pushed to the other nodes, so it has to be pushed out.
Fix:
  ceph-deploy --overwrite-conf config push node1-4
  ceph-deploy --overwrite-conf mon create node1-4

 

 
  List a node's disks:  ceph-deploy disk list node1
  Zap a node's disk:    ceph-deploy disk zap controller:/dev/sdb         # zaps the whole selected disk, not a partition
 
 (8) Partition the data disk on each node:
 
  fdisk /dev/sdb                 # double-check which disk you are partitioning, and remember to write the table with w
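If an interactive fdisk session is inconvenient, the same single data partition can be created non-interactively; a sketch using parted, assuming the data disk really is /dev/sdb (on the storage node it is /dev/sdc in this setup):

  # Destructive: writes a new GPT label and one partition spanning the whole disk.
  sudo parted -s /dev/sdb mklabel gpt
  sudo parted -s /dev/sdb mkpart primary 0% 100%
  lsblk /dev/sdb        # should now show sdb1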
 
   (9) Prepare the OSDs (OSD = Object Storage Daemon):
 
ceph-deploy osd prepare controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdc1

 

 (10) Activate the OSDs:
 
ceph-deploy osd activate controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdc1

 

   (11) On the deployment node, transfer the config files to all nodes:
 
ceph-deploy admin dlp controller compute storage

  sudo chmod 644 /etc/ceph/ceph.client.admin.keyring        # on the other nodes

 

   (12) Check the cluster from any node in the Ceph cluster:
 
[root@controller old]# ceph -s
    cluster 8e03f0d7-06cb-49c6-b0fa-b9764e85e61a
     health HEALTH_OK
     monmap e1: 3 mons at {compute=192.168.253.194:6789/0,controller=192.168.253.135:6789/0,storage=192.168.253.15:6789/0}
            election epoch 6, quorum 0,1,2 storage,controller,compute
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v2230: 64 pgs, 1 pools, 0 bytes data, 0 objects
            24995 MB used, 27186 MB / 52182 MB avail
                  64 active+clean
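Besides ceph -s, a few other read-only commands are useful for checking the cluster from any node that has the admin keyring:

  ceph health detail        # expands on anything other than HEALTH_OK
  ceph osd tree             # which OSD lives on which host, and whether it is up/in
  ceph df                   # raw and per-pool capacity usage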

 

 

 3. RBD block device setup

 
 
  Create an RBD image:  rbd create disk01 --size 10G --image-feature layering        Delete it:  rbd rm disk01
 
List RBD images:  rbd ls -l
 
[cent@dlp ceph]$ rbd create disk01 --size 10G --image-feature layering
[cent@dlp ceph]$ rbd ls -l
NAME     SIZE PARENT FMT PROT LOCK
disk01 10240M          2

 

 
Map the RBD image:  sudo rbd map disk01        Unmap it:  sudo rbd unmap disk01
 
[root@controller ~]# rbd map disk01        # as root this works as-is; other users must prefix it with sudo
/dev/rbd0

rbd0 is now mapped under /dev/, but lsblk does not yet show a mount point for /dev/rbd0 because the device has not been formatted and mounted.

 

Show current mappings:  rbd showmapped
 
 
Format disk01 with an XFS file system:  sudo mkfs.xfs /dev/rbd0
 
[root@controller ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=17, agsize=162816 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

 

Mount the device:  sudo mount /dev/rbd0 /mnt
[root@controller ~]# mount /dev/rbd0 /mnt
[root@controller ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   20G  0 disk
└─sdb1            8:17   0   20G  0 part /var/lib/ceph/osd/ceph-0
sdc               8:32   0   10G  0 disk
└─sdc1            8:33   0   10G  0 part /var/lib/ceph/osd/ceph-3
sr0              11:0    1  4.2G  0 rom  /mnt
rbd0            252:0    0   10G  0 disk /mnt
 
Verify the mount succeeded:  df -hT
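Note that the rbd map and the mount above do not survive a reboot. If that matters, the rbdmap service shipped with ceph-common can remap images at boot. The sketch below assumes the image is in the default rbd pool and uses the client.admin keyring; check the exact interplay between rbdmap and fstab against the rbdmap man page of the installed release:

  # /etc/ceph/rbdmap -- one image per line: pool/image followed by map options
  rbd/disk01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

  # enable mapping at boot
  sudo systemctl enable rbdmap

  # optional fstab entry using the udev-created symlink; noauto keeps it out of
  # the normal boot-time mount pass
  /dev/rbd/rbd/disk01  /mnt  xfs  noauto  0 0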
 
 
 
Stop the ceph-mds service:
systemctl stop ceph-mds@node1
 
ceph mds fail 0
 
List the storage pools:
ceph osd lspools
Output: 0 rbd,
  
Delete a storage pool:
ceph osd pool rm rbd rbd --yes-i-really-really-mean-it        # the pool name must be given twice
 
 
 

4. Tearing down the environment:

 

   Wipe the cluster information:

   ceph-deploy purge dlp node1 node2 node3 controller

 ceph-deploy purgedata dlp node1 node2 node3 controller
   
  Forget the authentication keys:
 ceph-deploy forgetkeys
 
 rm -rf ceph*