1. Add the epel-release extra repository
yum install --nogpgcheck -y epel-release
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
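A quick sanity check (a minimal sketch; assumes yum can reach the mirror) to confirm EPEL is now enabled:
yum repolist enabled | grep -i epel    # the epel repository should be listed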
2. Add the Ceph repository
vi /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
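With the repo file in place, it can help to rebuild the yum cache and confirm the Ceph packages are visible (a sketch; package names may resolve from either the Ceph or EPEL repos):
yum clean all
yum makecache
yum list ceph-deploy    # should now resolve from the configured repositories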
3. Prepare for the Ceph installation
Update the system packages: yum update -y
Install ceph-deploy: yum install ceph-deploy -y
Install the NTP service, OpenSSH server, and yum priorities plugin (see the sketch after this step for enabling NTP): yum install ntp ntpdate ntp-doc openssh-server yum-plugin-priorities -y
Edit /etc/hosts and add an IP-to-hostname mapping, for example: 192.168.1.111 node1
Create a directory to hold the files generated by the deployment and change into it: mkdir my-cluster ; cd my-cluster
Create a new cluster with ceph-deploy: ceph-deploy new node1 (the last argument is the hostname)
Edit the ceph.conf configuration file and add the following lines (one way to append them is sketched after this step):
osd pool default size = 3    # keep 3 replicas
public_network = 192.168.1.0/24    # public network
cluster_network = 192.168.1.0/24    # cluster network
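A minimal sketch of the follow-up actions implied above: starting the time service installed in step 3 (assumes ntpd, not chronyd, is used on this CentOS 7 host) and appending the replication/network settings to the ceph.conf that ceph-deploy new generated. The 192.168.1.0/24 subnet is this guide's example network; adjust it to your own.
systemctl enable ntpd
systemctl start ntpd
# run the following inside the my-cluster directory created above
cat >> ceph.conf <<'EOF'
osd pool default size = 3
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
EOF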
4. Install the Ceph packages on the node with ceph-deploy: ceph-deploy install node1
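Once the install finishes, a quick check that the jewel packages landed (a sketch; run it on node1):
ceph --version    # should report a 10.2.x (jewel) build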
5. Create three equally sized partitions, each larger than 10 GB, for example with fdisk /dev/sdb
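A non-interactive alternative to fdisk (a sketch; assumes /dev/sdb is an empty disk that can be wiped):
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary xfs 0% 33%
parted -s /dev/sdb mkpart primary xfs 33% 66%
parted -s /dev/sdb mkpart primary xfs 66% 100%
lsblk /dev/sdb    # verify that /dev/sdb1, /dev/sdb2 and /dev/sdb3 exist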
ceph-deploy mon create-initial
ceph-deploy admin node1
chmod +r /etc/ceph/ceph.client.admin.keyring
ceph-disk prepare --cluster node1 --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb1
The remaining two partitions take the same command; only /dev/sdb* changes (a loop sketch follows at the end of this section). The UUID above can be read with ceph -s (it is the string after "cluster" on the first line) and it can also be changed in the configuration file.
ceph-disk activate /dev/sdb1
ceph osd getcrushmap -o a.map
crushtool -d a.map -o a
vi a
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type osd    # defaults to host; change it to osd
    step emit
}
crushtool -c a -o b.map
ceph osd setcrushmap -i b.map
ceph osd tree
ceph -s
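As noted above, the remaining two partitions take the same prepare/activate commands; a minimal loop sketch (assumes the partitions are /dev/sdb2 and /dev/sdb3 and that the UUID matches your own cluster's fsid as shown by ceph -s):
for part in /dev/sdb2 /dev/sdb3; do
    ceph-disk prepare --cluster node1 --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs "$part"
    ceph-disk activate "$part"
done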