Using Salt and Installing Ceph on Ubuntu 14.04

Overview

This article describes how to use SaltStack in an Ubuntu environment.

Environment

The test environment is Ubuntu Server 14.04.

Disabled: SELinux and iptables are disabled on all Ubuntu systems.

Five virtual machines running Ubuntu Server 14.04 x86_64:

192.168.1.119 ceph-node1
192.168.1.111 ceph-node2
192.168.1.112 ceph-node3
192.168.1.113 ceph-node4
192.168.1.114 ceph-node5

The SaltStack roles are assigned as follows:

All nodes act as Minions; ceph-node1 also acts as the Master.

Hostnames

Set the hostname of each machine according to the assignment above: edit /etc/hostname on each machine, and point the 127.0.1.1 entry in /etc/hosts at that name. After configuration, this test environment looks like this:

ouser@ceph-node1:~$ sudo salt '*' cmd.run 'grep 127.0.1.1 /etc/hosts'
ceph-node2:
    127.0.1.1 ceph-node2
ceph-node4:
    127.0.1.1 ceph-node4
ceph-node1:
    127.0.1.1 ceph-node1
ceph-node5:
    127.0.1.1 ceph-node5
ceph-node3:
    127.0.1.1 ceph-node3
ouser@ceph-node1:~$ sudo salt '*' cmd.run 'cat /etc/hostname'
ceph-node1:
    ceph-node1
ceph-node5:
    ceph-node5
ceph-node4:
    ceph-node4
ceph-node3:
    ceph-node3
ceph-node2:
    ceph-node2

Installation

All installation commands are run on the virtual machines holding the corresponding role.

Master role

sudo apt-get install salt-master salt-minion

Minion role

sudo apt-get install salt-minion

Configuration

Only the Minions need to be configured. Edit the /etc/salt/minion file on every Minion machine and set the master option:

master: 192.168.1.119

Then restart the salt-minion service on all Minion servers:

sudo /etc/init.d/salt-minion restart
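To confirm each Minion restarted cleanly and is pointed at the Master, a quick check on any Minion host (a sketch; the paths are the stock Ubuntu 14.04 defaults):

sudo service salt-minion status        # should report that salt-minion is running
grep '^master:' /etc/salt/minion       # should print: master: 192.168.1.119
tail -n 20 /var/log/salt/minion        # look for connection errors to the master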

Testing

Note: unless otherwise stated, all of the following commands are run on the Master server.

Accepting the Minion keys

After all Minions have been configured and the salt-minion service restarted, run sudo salt-key -L on the Master to see the list of keys currently waiting to be accepted:

$ sudo salt-key -L
Accepted Keys:
Unaccepted Keys:
ceph-node1
ceph-node2
ceph-node3
ceph-node4
ceph-node5
Rejected Keys:

Run sudo salt-key -A to accept all of these keys:

$ sudo salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
ceph-node1
ceph-node2
ceph-node3
ceph-node4
ceph-node5
Proceed? [n/Y] Y
Key for minion ceph-node1 accepted.
Key for minion ceph-node2 accepted.
Key for minion ceph-node3 accepted.
Key for minion ceph-node4 accepted.
Key for minion ceph-node5 accepted.
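Keys can also be accepted or removed one at a time by name, for example:

sudo salt-key -a ceph-node2    # accept a single key
sudo salt-key -d ceph-node2    # delete a key (e.g. when reinstalling a node)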

Batch test command

$ sudo salt '*' test.ping
ceph-node2:
    True
ceph-node1:
    True
ceph-node5:
    True
ceph-node4:
    True
ceph-node3:
    True

Batch command execution

$ sudo salt '*' cmd.run 'hostname -s'
ceph-node2:
    ceph-node2
ceph-node5:
    ceph-node5
ceph-node1:
    ceph-node1
ceph-node4:
    ceph-node4
ceph-node3:
    ceph-node3
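The '*' target matches every Minion; shell-style globs can narrow a command down to a subset of nodes, for example:

sudo salt 'ceph-node[1-3]' cmd.run 'uptime'    # only ceph-node1 to ceph-node3
sudo salt 'ceph-node*' test.ping               # every minion whose id starts with ceph-node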

Installing Ceph

Reference

Overview

This part of the article installs Ceph by hand, using SaltStack for batch management.

Make sure the SaltStack environment has been set up as described above.

The machines are assigned as follows:

192.168.1.119 ceph-node1
192.168.1.111 ceph-node2
192.168.1.112 ceph-node3
192.168.1.113 ceph-node4
192.168.1.114 ceph-node5

SaltStack role assignment

All nodes act as Minions; ceph-node1 also acts as the Master.

Ceph role assignment


All nodes act as OSD nodes.

Note: all salt commands are executed on the SaltStack Master server.

Preparation

Fix the locale warning:

sudo salt '*' cmd.run 'locale-gen zh_CN.UTF-8'

Installation

Ceph Storage Cluster

$ sudo salt '*' cmd.run 'apt-get install ceph ceph-mds'
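After the packages are installed, it is worth confirming that every node ended up with the same Ceph release:

sudo salt '*' cmd.run 'ceph --version'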

Deploy a Cluster Manually

Every Ceph cluster requires at least one monitor, and at least as many OSDs as there are copies of an object stored on the cluster.

Monitor Bootstrapping

As a first step, we will set up the monitor service on the ceph-node1 node.

Bootstrapping a monitor requires:

  • Unique Identifier : the fsid
  • Cluster Name : the default name is ceph
  • Monitor Name : defaults to the hostname; hostname -s gives the short name
  • Monitor Map : generated from the monitor hostname(s), IP address(es) and the fsid
  • Monitor Keyring : monitors need a secret key to communicate with each other
  • Administrator Keyring : using the ceph command requires a client.admin user

Procedure

We bootstrap the monitor on ceph-node1.

Log in to ceph-node1.

Make sure the /etc/ceph directory exists.

We use the default cluster name ceph, so create the configuration file /etc/ceph/ceph.conf.

Generate an fsid with the uuidgen command:

$ uuidgen
4e7d2940-7824-4b43-b85e-1078a1b54cb5

Set the fsid in ceph.conf:

fsid = 4e7d2940-7824-4b43-b85e-1078a1b54cb5

Configure the other ceph.conf options:

mon initial members = ceph-node1
mon host = 192.168.1.119

Create a keyring for your cluster and generate a monitor secret key.

$ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

Generate an administrator keyring, generate a client.admin user and add the user to the keyring.

$ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

Add the client.admin key to the ceph.mon.keyring.

$ sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
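At this point /tmp/ceph.mon.keyring should contain both the mon. and the client.admin keys; ceph-authtool can list them:

sudo ceph-authtool --list /tmp/ceph.mon.keyring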

Generate a monitor map using the hostname(s), host IP address(es) and the FSID. Save it as /tmp/monmap:

$ monmaptool --create --add ceph-node1 192.168.1.119 --fsid 4e7d2940-7824-4b43-b85e-1078a1b54cb5 /tmp/monmap
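The generated map can be inspected before it is used:

monmaptool --print /tmp/monmap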

Create a default data directory (or directories) on the monitor host(s).

$ sudo mkdir /var/lib/ceph/mon/ceph-node1

Populate the monitor daemon(s) with the monitor map and keyring.

$ sudo ceph-mon --mkfs -i ceph-node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

The final /etc/ceph/ceph.conf ends up with the following content:

[global]
fsid = 4e7d2940-7824-4b43-b85e-1078a1b54cb5
mon initial members = ceph-node1
mon host = 192.168.1.119
public network = 192.168.1.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

Start the monitor:

sudo start ceph-mon id=ceph-node1

Verify that Ceph created the default pools.

sudo ceph osd lspools

You should see output like the following:

0 data,1 metadata,2 rbd,

Verify that the monitor is running :

ouser@ceph-node1:~$ sudo ceph -s
    cluster 4e7d2940-7824-4b43-b85e-1078a1b54cb5
     health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
     monmap e1: 1 mons at {ceph-node1=192.168.1.119:6789/0}, election epoch 2, quorum 0 ceph-node1
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 192 creating
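The monitor map and quorum can also be checked on their own:

sudo ceph mon stat
sudo ceph quorum_status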

Adding OSDs

Now that one monitor is set up, it is time to add OSDs. The cluster cannot reach the active + clean state until enough OSDs have joined; with osd pool default size = 2, at least two OSD nodes are required.

After bootstrapping the monitor, the cluster has a default CRUSH map, but the CRUSH map does not yet map any Ceph OSD Daemons to a Ceph Node.
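The empty CRUSH hierarchy can be seen with:

sudo ceph osd tree    # at this point only the default root, with no osd entries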

Short Form

Ceph provides a ceph-disk utility that prepares a disk, partition, or directory for use with Ceph. The ceph-disk utility performs the Long Form steps below automatically.

Run the following commands on ceph-node1 and ceph-node2 to create the OSDs:

$ sudo ceph-disk prepare --cluster ceph --cluster-uuid 4e7d2940-7824-4b43-b85e-1078a1b54cb5 --fs-type ext4 /dev/hdd1

Activate the OSD:

$ sudo ceph-disk activate /dev/hdd1
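Once OSDs have been prepared and activated on both nodes, the cluster should report them as up and in, and health should move towards active + clean:

sudo ceph osd stat    # e.g. 2 osds: 2 up, 2 in
sudo ceph -s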

Long Form

Log in to the OSD node, then create the OSD manually and add it to the cluster and the CRUSH map.

Generate a UUID:

$ uuidgen
b373f62e-ddf6-41d5-b8ee-f832318a31e1

Create the OSD. If no UUID is specified, one is assigned automatically when the OSD starts. The following command prints the OSD number, which is needed later:

$ sudo ceph osd create b373f62e-ddf6-41d5-b8ee-f832318a31e1
1

Log in to the new OSD node and run:

$ ssh {new-osd-host}
$ sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}

If the OSD is for a drive other than the OS drive, prepare it for use with Ceph, and mount it to the directory you just created:

$ ssh {new-osd-host}
$ sudo mkfs -t {fstype} /dev/{hdd}
$ sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}
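As a concrete instance of the step above, using the ext4 filesystem and /dev/hdd1 device from the Short Form and OSD number 1 returned by ceph osd create (adjust the device and number to your own output):

$ sudo mkfs -t ext4 /dev/hdd1
$ sudo mount -o user_xattr /dev/hdd1 /var/lib/ceph/osd/ceph-1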

Initialize the OSD data directory.

$ ssh {new-osd-host}
$ sudo ceph-osd -i 1 --mkfs --mkkey --osd-uuid b373f62e-ddf6-41d5-b8ee-f832318a31e1

Register the OSD authentication key. The value of ceph for ceph-{osd-num} in the path is the $cluster-$id. If your cluster name differs from ceph, use your cluster name instead:

$ sudo ceph auth add osd.{osd-num} osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-{osd-num}/keyring
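The upstream manual-deployment guide completes the Long Form by adding the host and the new OSD to the CRUSH map and then starting the daemon; a sketch of those remaining steps ({hostname}, {osd-num} and the weight 1.0 are placeholders to adjust):

$ sudo ceph osd crush add-bucket {hostname} host
$ sudo ceph osd crush move {hostname} root=default
$ sudo ceph osd crush add osd.{osd-num} 1.0 host={hostname}
$ sudo start ceph-osd id={osd-num}      # Upstart on Ubuntu 14.04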


Storage: Ceph

Build a three node ceph storage cluster

It is recommended that you look through the official installation documents for the most up-to-date information: http://ceph.com/docs/master/install/

Currently, it is not possible to build the cluster on the Proxmox host. For a production system you need a minimum of 3 servers. For testing you can get by with less, although you may be unable to properly test all the features of the cluster.

Proxmox Supports CEPH >= 0.56

Prepare nodes

  • Install Ubuntu.

It is recommended to use Ubuntu 12.04 LTS, as this is the distribution used by Inktank for Ceph development (you need a recent filesystem version and glibc).

  • Create SSH key on server1 and distribute it.

Generate an SSH key

ssh-keygen -t rsa

and copy it to the other servers

ssh-copy-id user@server2
ssh-copy-id user@server3
  • Configure ntp on all nodes to keep time updated:
sudo apt-get install ntp

Install Ceph-Deploy

  • Create entries for all other Ceph nodes in /etc/hosts
  • Add Ceph repositories
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
  • Install packages
sudo apt-get update
sudo apt-get install ceph-deploy

Create cluster using Ceph-Deploy

  • Create your cluster
ceph-deploy new server1
  • Install Ceph on all nodes
ceph-deploy install server1 server2 server3

You could also run:

ceph-deploy install server{1..3}
  • Add a Ceph monitor.
ceph-deploy mon create server{1..3}

(You must have an odd number of monitors. If you only have one it will be a single point of failure so consider using at least 3 for high availability.)

  • Gather keys
ceph-deploy gatherkeys server1
  • Prepare OSDs on each server
For each data disk you need one OSD daemon. It is assumed that these disks are empty and contain no data; zap will delete all data on the disks. Verify the names of your data disks!
sudo fdisk -l

For servers that are not identical:

ceph-deploy osd --zap-disk create server1:sdb
ceph-deploy osd --zap-disk create server2:sdb
ceph-deploy osd --zap-disk create server3:sdc

For 3 identical servers, each with 3 data disks (sdb, sdc, sdd)

ceph-deploy osd --zap-disk create server{1..3}:sd{b..d}
By default the journal is placed on the same disk. To change this, specify the path to the journal (see the example after this list):
ceph-deploy osd prepare {node-name}:{disk}[:{path/to/journal}]
  • Check the health of the cluster
sudo ceph -s
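As referenced above, a separate journal device can be given when preparing an OSD. A sketch, reusing server1:sdb from the examples above and a hypothetical SSD partition /dev/sde1 reserved for the journal:

ceph-deploy osd prepare server1:sdb:/dev/sde1

The --zap-disk create form used earlier accepts the same {node}:{disk}[:{journal}] triple.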

Customize Ceph

  • Set your number of placement groups
sudo ceph osd pool set rbd pg_num 512

The following formula is generally used:

Total PGs = (# of OSDs * 100) / Replicas
Take this result and round up to the nearest power of 2. For 9 OSDs you would do: 9 * 100 = 900. The default number of replicas is 2, so 900 / 2 = 450, which rounded up to the next power of 2 gives 512.
  • Create a new pool
sudo ceph osd pool create {name_of_pool} {pg_num}

Example:

sudo ceph osd pool create pve_data 512
  • Change the number of replica groups for a pool
sudo ceph osd pool set {name_of_pool} size {number_of_replicas}

Example:

sudo ceph osd pool set pve_data size 3
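Pool settings can be read back the same way they are set, for example:

sudo ceph osd pool get pve_data size
sudo ceph osd pool get pve_data pg_num
sudo ceph osd dump | grep '^pool'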

Configure Proxmox to use the ceph cluster

GUI

You can use the Proxmox GUI to add the RBD storage.

Manual configuration

Edit your /etc/pve/storage.cfg and add the configuration:

rbd: mycephcluster
       monhost 192.168.0.1:6789;192.168.0.2:6789;192.168.0.3:6789
       pool rbd  (optional, default = rbd)
       username admin (optional, default = admin)
       content images

Note: you must use IP addresses (not DNS FQDNs) for monhost.

Authentication

If you use cephx authentication, you need to copy the keyfile from Ceph to Proxmox VE host.

Create the /etc/pve/priv/ceph directory

mkdir /etc/pve/priv/ceph

Copy the keyring

scp cephserver1:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/StorageID.keyring
  • The keyring must be named to match your Storage ID
  • Copying the keyring generally requires root privileges. If you do not have the root account enabled on Ceph, you can "sudo scp" the keyring from the Ceph server to Proxmox.
  • Note that for early versions of Ceph *Argonaut*, the keyring was named ceph.keyring rather than ceph.client.admin.keyring
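For the storage ID mycephcluster used in the storage.cfg example above, the copy would look like this (assuming root SSH access to cephserver1):

scp cephserver1:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/mycephcluster.keyring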