Ceph Deployment

1. Deployment preparation:

Prepare 5 virtual machines (running CentOS 7.6):

1 deployment node (one disk, runs ceph-deploy)

3 Ceph nodes (each with two disks: the first is the system disk and runs a mon, the second is the OSD data disk)

1 client (uses the file system, block storage, and object storage provided by Ceph)

(1) Set up static name resolution on all Ceph cluster nodes (including the client), e.g. in /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.24.11 dlp
192.168.24.8 controller
192.168.24.9 compute
192.168.24.10 storage
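
A minimal sketch of adding these entries (run the same on every node, including the client):

# append the name resolution entries above to /etc/hosts (sketch)
cat >> /etc/hosts << 'EOF'
192.168.24.11 dlp
192.168.24.8 controller
192.168.24.9 compute
192.168.24.10 storage
EOF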

(2) On all cluster nodes (including the client), create the cent user, set its password, and then run the following commands:

useradd cent && echo "123" | passwd --stdin cent

echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph

(3) On the deployment node, switch to the cent user and set up passwordless SSH login to every node, including the client node:

su - cent

ceph@dlp15:17:01~# ssh-keygen
ceph@dlp15:17:01~# ssh-copy-id controller
ceph@dlp15:17:01~# ssh-copy-id compute
ceph@dlp15:17:01~# ssh-copy-id storage
ceph@dlp15:17:01~# ssh-copy-id dlp
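
As a quick check (not part of the original steps), passwordless login can be verified from the deployment node:

# each command should print the remote hostname without asking for a password
ssh controller hostname
ssh compute hostname
ssh storage hostname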

(4) On the deployment node, as the cent user, create the following file in the cent home directory: vi ~/.ssh/config (create new; define all nodes and users)

su - cent

cd .ssh
vim config
Host dlp
    Hostname dlp
    User cent
Host controller
    Hostname controller
    User cent
Host compute
    Hostname compute
    User cent
Host storage
    Hostname storage
    User cent

chmod 600 ~/.ssh/config

2. Configure a domestic (China mirror) Ceph repository on all nodes:

(1) On all nodes (including the client), create ceph-yunwei.repo in /etc/yum.repos.d/:

cd /etc/yum.repos.d

vim ceph-yunwei.repo

[ceph-yunwei]
name=ceph-yunwei-install
baseurl=https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/
enabled=1
gpgcheck=0
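
After creating the repo file, it may help to refresh the yum metadata cache so the new repository is picked up (a suggested extra step, not in the original):

# refresh yum metadata on each node
yum clean all && yum makecache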

(2) Download the required RPM packages listed below from the mirror at https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/. Note: the ceph-deploy RPM only needs to be installed on the deployment node; to download it, find the latest ceph-deploy-xxxxx.noarch.rpm at https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/ (a download sketch follows the package list below).

ceph-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm
ceph-deploy-1.5.39-0.noarch.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm
ceph-resource-agents-10.2.11-0.el7.x86_64.rpm
ceph-selinux-10.2.11-0.el7.x86_64.rpm
ceph-test-10.2.11-0.el7.x86_64.rpm
libcephfs1-10.2.11-0.el7.x86_64.rpm
libcephfs1-devel-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm
librados2-10.2.11-0.el7.x86_64.rpm
librados2-devel-10.2.11-0.el7.x86_64.rpm
libradosstriper1-10.2.11-0.el7.x86_64.rpm
libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm
librbd1-10.2.11-0.el7.x86_64.rpm
librbd1-devel-10.2.11-0.el7.x86_64.rpm
librgw2-10.2.11-0.el7.x86_64.rpm
librgw2-devel-10.2.11-0.el7.x86_64.rpm
python-ceph-compat-10.2.11-0.el7.x86_64.rpm
python-cephfs-10.2.11-0.el7.x86_64.rpm
python-rados-10.2.11-0.el7.x86_64.rpm
python-rbd-10.2.11-0.el7.x86_64.rpm
rbd-fuse-10.2.11-0.el7.x86_64.rpm
rbd-mirror-10.2.11-0.el7.x86_64.rpm
rbd-nbd-10.2.11-0.el7.x86_64.rpm
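
A rough download sketch, assuming wget is available and the mirror layout above (the ceph-deploy file name is taken from the list above and may have been superseded):

# fetch the ceph-jewel packages into a local directory (sketch)
mkdir -p ~/cephjrpm && cd ~/cephjrpm
wget -r -np -nd -A '*.rpm' https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/
# ceph-deploy lives in the noarch repo; check for the latest version
wget https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/ceph-deploy-1.5.39-0.noarch.rpm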

(3) Copy the downloaded RPMs to all nodes and install them. Note that ceph-deploy-xxxxx.noarch.rpm is only needed on the deployment node; the other nodes do not need it, while the deployment node also needs the rest of the RPMs. A copy sketch follows.
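
A minimal copy sketch, assuming the RPMs sit in ~/cephjrpm on the deployment node and the host names defined earlier:

# copy the rpm directory to every other node (sketch)
for h in controller compute storage; do
  scp -r ~/cephjrpm $h:~/
done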

(4) On the deployment node: install ceph-deploy. As root, change into the directory containing the downloaded RPM packages and run:

yum localinstall -y ./*

Note: if the install fails with an error (the original article shows a screenshot of the error here):

Workaround:

# install the dependency package
yum install -y python-distribute
# move the rdo-release-yunwei.repo file out of /etc/yum.repos.d, then reinstall ceph-deploy
yum localinstall ceph-deploy-1.5.39-0.noarch.rpm -y
# check the version:
ceph -v

(5) On the deployment node (run as the cent user): create the new cluster configuration

ceph-deploy new controller compute storage

vim ./ceph.conf

# add:

osd_pool_default_size = 2
Optional parameters are as follows:
public_network = 192.168.254.0/24
cluster_network = 172.16.254.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8
osd_crush_chooseleaf_type = 1
  
[mon]
mon_clock_drift_allowed = 0.5
  
[osd]
osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f
filestore_max_sync_interval = 5
filestore_min_sync_interval = 0.1
filestore_fd_cache_size = 655350
filestore_omap_header_cache_size = 655350
filestore_fd_cache_random = true
osd op threads = 8
osd disk threads = 4
filestore op threads = 8
max_open_files = 655350

(6) Install the Ceph software on all nodes.

All nodes should have the following packages:

root@rab116:13:59~/cephjrpm# ls
ceph-10.2.11-0.el7.x86_64.rpm               ceph-resource-agents-10.2.11-0.el7.x86_64.rpm    librbd1-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm          ceph-selinux-10.2.11-0.el7.x86_64.rpm            librbd1-devel-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm        ceph-test-10.2.11-0.el7.x86_64.rpm               librgw2-10.2.11-0.el7.x86_64.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm  libcephfs1-10.2.11-0.el7.x86_64.rpm              librgw2-devel-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm        libcephfs1-devel-10.2.11-0.el7.x86_64.rpm        python-ceph-compat-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm          libcephfs_jni1-10.2.11-0.el7.x86_64.rpm          python-cephfs-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm   libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm    python-rados-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm           librados2-10.2.11-0.el7.x86_64.rpm               python-rbd-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm           librados2-devel-10.2.11-0.el7.x86_64.rpm         rbd-fuse-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm           libradosstriper1-10.2.11-0.el7.x86_64.rpm        rbd-mirror-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm       libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm  rbd-nbd-10.2.11-0.el7.x86_64.rpm

Install the packages above on all nodes (including the client):

yum localinstall ./* -y

(7) Run on the deployment node to install the Ceph software on all nodes:

ceph-deploy install dlp controller compute storage

(8) Initialize the cluster on the deployment node (run as the cent user):

ceph-deploy mon create-initial

(9) Partition the second disk on each node (note: on the storage node the disk is sdc):

fdisk /dev/sdb
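
fdisk is interactive; as a hedged alternative (assuming the whole second disk becomes a single partition), parted can script the same step:

# non-interactive sketch with parted; adjust the device (sdb or sdc) per node
sudo parted -s /dev/sdb mklabel gpt
sudo parted -s /dev/sdb mkpart primary xfs 0% 100%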

List a node's disks:

ceph-deploy disk list controller

Zap (wipe) a node's disk:

ceph-deploy disk zap controller:/dev/sdb1

(10) Prepare the Object Storage Daemons:

ceph-deploy osd prepare controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdb1

(11) Activate the Object Storage Daemons:

ceph-deploy osd activate controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdb1

(12) On the deployment node, transfer the config files:

ceph-deploy admin dlp controller compute storage

(on each node) sudo chmod 644 /etc/ceph/ceph.client.admin.keyring (see the loop sketch below)
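
With passwordless SSH in place, a hedged one-liner from the deployment node can apply this on every node (host names assumed from the earlier SSH config):

# run the chmod on each node over ssh (sketch)
for h in dlp controller compute storage; do
  ssh $h "sudo chmod 644 /etc/ceph/ceph.client.admin.keyring"
done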

(13) Check from any node in the Ceph cluster:

ceph -s

3. Client setup:

(1) The client also needs the cent user:

useradd cent && echo "123" | passwd --stdin cent

echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph

Run on the deployment node to install and configure the Ceph client:

ceph-deploy install controller

ceph-deploy admin controller

(2) Run on the client:

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

(3) On the client, configure an RBD block device:

Create an RBD image: rbd create disk01 --size 10G --image-feature layering   (disk01 is the name you choose; --size sets the image size)
Delete an image: rbd rm disk01   (disk01 is the image name)
List RBD images: rbd ls -l

At this point only a 10G disk has been created; to actually use it, it still has to be mapped:

Map the RBD image: sudo rbd map disk01
Unmap it: sudo rbd unmap disk01
Show current mappings: rbd showmapped

Once mapped, the disk shows up in lsblk, but it still needs to be formatted and mounted before it can be used:

Format disk01 with an xfs file system: sudo mkfs.xfs /dev/rbd0
Mount the disk: sudo mount /dev/rbd0 /mnt
Verify the mount succeeded: df -hT

If you no longer want to use this disk, tear it down as follows:

sudo umount /dev/rbd0

sudo rbd unmap disk01

lsblk

rbd rm disk01

(4) File system (CephFS) configuration:

Run on the deployment node; pick one node to create the MDS on:

ceph-deploy mds create node1

如下操做在node1上執行:

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

On the MDS node node1, create the cephfs_data and cephfs_metadata pools:

ceph osd pool create cephfs_data 128

ceph osd pool create cephfs_metadata 128

Create the file system on these pools:

ceph fs new cephfs cephfs_metadata cephfs_data
Show the Ceph file systems:
ceph fs ls

ceph mds stat

如下操做在客戶端執行,安裝ceph-fuse:

yum -y install ceph-fuse

Fetch the admin key:

ssh cent@node1 "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key

chmod 600 admin.key

Mount CephFS:

mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key

df -h

Stop the ceph-mds service and remove the file system:

systemctl stop ceph-mds@node1

ceph mds fail 0
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd lspools
# output: 0 rbd,1 cephfs_data,2 cephfs_metadata,
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

4. Tearing down the environment:

ceph-deploy purge dlp node1 node2 node3 controller

ceph-deploy purgedata dlp node1 node2 node3 controller
ceph-deploy forgetkeys
rm -rf ceph*