Configuring a Ceph Cluster on CentOS 7.1

1. Environment Preparation


(1) Node requirements
    ==> Minimum hardware requirements per node
    Role        Resource            Minimum                                     Recommended
    -----------------------------------------------------------------------------------------------------------------
    ceph-osd    RAM                 500 MB per daemon                           1 GB per 1 TB of storage per daemon
                Volume Storage      1x storage drive per daemon                 >1 TB storage drive per daemon
                Journal Storage     5 GB (default)                              SSD, >1 GB per 1 TB of storage per daemon
                Network             2x 1GB Ethernet NICs                        2x 10GB Ethernet NICs
    -----------------------------------------------------------------------------------------------------------------
    ceph-mon    RAM                 1 GB per daemon                             2 GB per daemon
                Disk Space          10 GB per daemon                            >20 GB per daemon
                Network             2x 1GB Ethernet NICs                        2x 10GB Ethernet NICs
    -----------------------------------------------------------------------------------------------------------------
    ceph-mds    RAM                 1 GB minimum per daemon                     >2 GB per daemon
                Disk Space          1 MB per daemon                             >1 MB per daemon
                Network             2x 1GB Ethernet NICs                        2x 10GB Ethernet NICs
           
    ==> OS environment
    CentOS 7
    Kernel: 3.10.0-229.el7.x86_64

    ==> Lab environment
    a) 1 PC (RAM > 6 GB, disk > 100 GB)
    b) VirtualBox
    c) CentOS 7.1 (3.10.0-229.el7.x86_64) ISO image

    ==> Basic layout and node roles
      Hostname      Role           OS                                       Disk
    =====================================================================================================
    a) admnode    deploy-node    CentOS7.1(3.10.0-229.el7.x86_64)
    b) node1      mon,osd        CentOS7.1(3.10.0-229.el7.x86_64)         Disk(/dev/sdb  capacity:10G)
    c) node2      osd            CentOS7.1(3.10.0-229.el7.x86_64)         Disk(/dev/sdb  capacity:10G)
    d) node3      osd            CentOS7.1(3.10.0-229.el7.x86_64)         Disk(/dev/sdb  capacity:10G)
           
(2) Configuring nodes to reach the Internet through a proxy
    a) Edit /etc/yum.conf and add the following:
    proxy=http://<proxyserver's IP>:port/
    proxy_username=<G08's username>
    proxy_password=<G08's password>


    b) Set a global HTTP proxy by adding the following to /etc/environment (for non-root users; the root user instead needs a new XXX.sh file under /etc/profile.d/ containing export https_proxy=XXXXX):
    http_proxy=http://username:password@proxyserver:port/
    https_proxy=http://username:password@proxyserver:port/
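
    For illustration, a filled-in /etc/environment might look like the sketch below; the host proxy.example.com, port 8080, and the credentials are placeholders, not values from this setup.
    # Hypothetical values -- substitute your own proxy host, port, and credentials
    http_proxy=http://g08user:secret@proxy.example.com:8080/
    https_proxy=http://g08user:secret@proxy.example.com:8080/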
   
(3) Configuring an in-cluster yum repository (note: if you perform this step, do not also configure the proxy from step (2))
    3.1 Server-side yum repository configuration
        a) On admnode, install vsftpd from yum or an rpm package
            # yum install vsftpd

            Start the vsftpd service
            # systemctl start vsftpd

            Stop the firewall
            # service iptables stop

            Put SELinux in permissive mode
            # setenforce 0

            Make sure no proxy is configured in /etc/yum.conf; comment out the three proxy lines
            ####proxy=http://<proxyserver's IP>:port/
            ####proxy_username=<G08's username>
            ####proxy_password=<G08's password>

            Make sure no HTTP proxy or other proxy service is set in the shell either (the yum.conf proxy, http_proxy, and so on)
            # unset http_proxy

            Verify that the FTP service works from a browser.
            Enter ftp://ip/pub/ in the address bar on both the server and the client machines; the browser should list the directory.
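
            The same check can be run from the shell; a quick sketch, assuming curl is installed and using the FTP server IP from this setup:
            # curl ftp://10.167.221.108/pub/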
           
        b) Copy the Ceph packages.
            Copy all required rpm packages into the /var/ftp/pub/<self-content> directory.

        c) Create the yum repository.
            Install the createrepo tool
            # yum install createrepo

            Generate the repodata metadata for the repository
            # createrepo /var/ftp/pub


        d) Configure the local yum repository.
            Create a *.repo file (e.g. local_yum.repo) under /etc/yum.repos.d/ with the following content:
            [local_yum]                              # repository id
            name=local_yum                           # description
            baseurl=ftp://10.167.221.108/pub/        # repository URL; use the FTP server's IP
            enabled=1                                # 1 enables this repository, 0 disables it
            gpgcheck=0                               # 1 checks the GPG key, 0 skips the check

            Disable the default repositories (note: usually moving CentOS-Base.repo aside is enough, but if other repositories are active, give them a .bak suffix as well)
            # cd /etc/yum.repos.d/
            # mv CentOS-Base.repo CentOS-Base.repo.bak

            Refresh the server-side repository so that clients pick up changed rpm packages.
            # yum clean all
            # createrepo --update /var/ftp/pub/
            # createrepo /var/ftp/pub/

        e) List the available repositories and install software from the local repository
            # yum repolist all

            # yum install <local-yum-Software-package-name>
       
    3.2 Client-side yum repository configuration
        a) Preparation.
            Stop the firewall
            # service iptables stop

            Put SELinux in permissive mode
            # setenforce 0

            Make sure no proxy is configured in /etc/yum.conf; comment out the three proxy lines
            ####proxy=http://<proxyserver's IP>:port/
            ####proxy_username=<G08's username>
            ####proxy_password=<G08's password>

            Make sure no HTTP proxy or other proxy service is set in the shell
            # unset http_proxy

            Verify that the FTP service works from a browser: enter ftp://ip/pub/ in the address bar; the browser should list the directory.

        b) Configure the cluster yum repository.
            Create a *.repo file (e.g. local_yum.repo) under /etc/yum.repos.d/ with the following content:
            [local_yum]                              # repository id
            name=local_yum                           # description
            baseurl=ftp://10.167.221.108/pub/        # repository URL; use the FTP server's IP
            enabled=1                                # 1 enables this repository, 0 disables it
            gpgcheck=0                               # 1 checks the GPG key, 0 skips the check

            Disable the default repositories (note: usually moving CentOS-Base.repo aside is enough, but if other repositories are active, give them a .bak suffix as well)
            # cd /etc/yum.repos.d/
            # mv CentOS-Base.repo CentOS-Base.repo.bak

        c) List the available repositories and install software from the local repository
            # yum repolist all

            # yum install <local-yum-Software-package-name>

 

2. Ceph Node Installation and Configuration (http://docs.ceph.com/docs/master/start/quick-start-preflight/)


(1) Install ceph-deploy (this only needs to be done on the admin node)
    (a) Run:
        # sudo yum install -y yum-utils && \
          sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && \
          sudo yum install --nogpgcheck -y epel-release && \
          sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && \
          sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

    (b) Create the Ceph repository file
        # sudo vim /etc/yum.repos.d/ceph.repo

        and paste in the following (note: to install the hammer release instead, replace "rpm-infernalis" below with "rpm-hammer")
        [ceph]
        name=Ceph packages for $basearch
        baseurl=http://download.ceph.com/rpm-infernalis/el7/$basearch
        enabled=1
        priority=2
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc

        [ceph-noarch]
        name=Ceph noarch packages
        baseurl=http://download.ceph.com/rpm-infernalis/el7/noarch
        enabled=1
        priority=2
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc

        [ceph-source]
        name=Ceph source packages
        baseurl=http://download.ceph.com/rpm-infernalis/el7/SRPMS
        enabled=0
        priority=2
        gpgcheck=1
        type=rpm-md
        gpgkey=https://download.ceph.com/keys/release.asc


       (c) Install ceph-deploy:
        # sudo yum install ceph-deploy

(2) Install NTP:
    # sudo yum install ntp ntpdate ntp-doc
 
(3) Install an SSH server
    # sudo yum install openssh-server
 
(4) Create a Ceph deploy user; replace {username} below with a user name of your choice.
    # sudo useradd -d /home/{username} -m {username}
    # sudo passwd {username}

    Example:
    # sudo useradd -d /home/cephadmin -m cephadmin
    # sudo passwd cephadmin

(5) Make sure the created {username} has passwordless sudo privileges
    # echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
    # sudo chmod 0440 /etc/sudoers.d/{username}

    Example:
    # echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
    # sudo chmod 0440 /etc/sudoers.d/cephadmin

(6) Enable Password-less SSH
    a) Generate SSH keys as the {username} user created above (do not use root); run the following and simply press Enter at every prompt:
        # ssh-keygen
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/cephadmin/.ssh/id_rsa):
        Created directory '/home/cephadmin/.ssh'.
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/cephadmin/.ssh/id_rsa.
        Your public key has been saved in /home/cephadmin/.ssh/id_rsa.pub.
        The key fingerprint is:
        1c:f1:23:84:3d:60:81:c2:75:a0:e3:6d:93:03:66:92 cephadmin@s3101490.g01.fujitsu.local
        The key's randomart image is:
        +--[ RSA 2048]----+
        | . .oo==o        |
        | .o..o..oo       |
        |E *.    o.o      |
        | = + . . o .     |
        |  . *   S        |
        |   . o           |
        |                 |
        |                 |
        |                 |
        +-----------------+
    b) Copy the generated key to all other nodes:
        # ssh-copy-id {username}@<node1's IP>
        # ssh-copy-id {username}@<node2's IP>
        # ssh-copy-id {username}@<node3's IP>

        Example:
        # ssh-copy-id cephadmin@10.167.225.111
        # ssh-copy-id cephadmin@10.167.225.114
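
        It also helps to add a ~/.ssh/config on the admin node so that ceph-deploy logs in to each node as this user automatically; a sketch using the hostnames and user from this guide:
        Host node1
            Hostname node1
            User cephadmin
        Host node2
            Hostname node2
            User cephadmin
        Host node3
            Hostname node3
            User cephadmin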

(7) Enable Networking On Bootup
    Navigate to /etc/sysconfig/network-scripts and ensure that the ifcfg-{iface} file has ONBOOT set to yes, as in the example below.
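
    For instance, a minimal DHCP-configured ifcfg file might read as follows (the interface name enp0s3 is an assumption; it varies per host):
    DEVICE=enp0s3
    BOOTPROTO=dhcp
    ONBOOT=yes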

(8) Ensure connectivity using ping with short hostnames (hostname -s). Hostnames should resolve to a network IP address, not to the loopback IP address.
    a) Edit the hostname (for example, set it to admnode). (Note: restart networking for the change to take effect.)
        # vim /etc/hostname

    b) Record the hostname-to-IP mapping for all nodes. (Note: restart networking for the change to take effect.)
        # vim /etc/hosts
        Append at the end:
        <admnode's IP>  admnode
        <node1's IP>    node1
        <node2's IP>    node2

        Example:
        10.167.225.111 admnode
        10.167.225.114 node1
        10.167.225.116 node2

    c) Make sure each host can reach the others by pinging the hostname. For example:
        [root@localhost etc]# ping node1
        PING node1 (10.167.225.114) 56(84) bytes of data.
        64 bytes from node1 (10.167.225.114): icmp_seq=1 ttl=64 time=0.442 ms
        64 bytes from node1 (10.167.225.114): icmp_seq=2 ttl=64 time=0.453 ms
        64 bytes from node1 (10.167.225.114): icmp_seq=3 ttl=64 time=0.417 ms
        ...

(9) Open the port Ceph needs in the firewall
    # sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
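
    Reload and verify; a quick check, assuming firewalld is the active firewall:
    # sudo firewall-cmd --reload
    # sudo firewall-cmd --zone=public --list-ports
    6789/tcp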

(10) Make sure ceph-deploy can connect to the other Ceph nodes (otherwise deployment may fail with an error such as: [ceph_deploy.osd][ERROR ] remote connection got closed, ensure ``requiretty`` is disabled for node2)
    Run
    # sudo visudo

    Find the Defaults requiretty setting
    #
    # Disable "ssh hostname sudo <cmd>", because it will show the password in clear.
    #         You have to run "ssh -t hostname sudo <cmd>".
    #
    Defaults    requiretty

    Change "Defaults    requiretty" to "Defaults:ceph !requiretty" (substitute your deploy user name, e.g. cephadmin, for ceph)

(11) Set SELinux from enforcing to permissive
    # sudo setenforce 0

    # vim /etc/selinux/config
    Change "SELINUX=enforcing" to "SELINUX=permissive"
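
    Verify the change took effect:
    # getenforce
    Permissive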

(12) Install yum-plugin-priorities
    # sudo yum install yum-plugin-priorities
    Make sure /etc/yum/pluginconf.d/priorities.conf contains:
        [main] 
        enabled = 1
   
(13) Stop the firewall
    # sudo systemctl stop firewalld
   
(14) Install redhat-lsb (installing this package ensures /lib/lsb/init-functions is present)
    # yum install redhat-lsb

3. Minimal Ceph Storage Cluster Setup (http://docs.ceph.com/docs/master/start/quick-ceph-deploy/)


Basic layout and node roles
      Hostname        Role             Disk
    ================================================================
    a) admnode       deploy-node
    b) node1         mon1             Disk(/dev/sdb  capacity:10G)
    c) node2         osd.0            Disk(/dev/sdb  capacity:10G)
    d) node3         osd.1            Disk(/dev/sdb  capacity:10G)

(1) On the admin node, switch to the custom cephadmin user (avoid invoking ceph-deploy via sudo or as root)

(2) As the cephadmin user, create a ceph-cluster directory to hold the files that the ceph-deploy commands output.
    # mkdir ceph-cluster
    # cd ceph-cluster

(3) Create a Cluster
    a) In the ceph-cluster directory, create the cluster from the initial monitor node(s) with ceph-deploy:
    # ceph-deploy new {initial-monitor-node(s)}
    Example:
    # ceph-deploy new node1

    b) Lower the default number of replicas from 3 to 2, so the cluster can reach active+clean with only two OSDs.
    Edit the ceph.conf file in the ceph-cluster directory and add the following after the [global] section header:
        osd pool default size = 2
        osd pool default min size = 2
        osd pool default pg num = 512
        osd pool default pgp num = 512
        osd crush chooseleaf type = 1

        [osd]
        osd journal size = 1024
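
    After these edits the file should look roughly like the sketch below; the fsid and mon entries are generated by ceph-deploy new and will differ per cluster:
        [global]
        fsid = 62d61946-b429-4802-b7a7-12289121a022
        mon initial members = node1
        mon host = 10.167.225.137
        osd pool default size = 2
        osd pool default min size = 2
        osd pool default pg num = 512
        osd pool default pgp num = 512
        osd crush chooseleaf type = 1

        [osd]
        osd journal size = 1024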

   
(4) Install Ceph on every node (note: with our corporate network restrictions, nodes cannot reach the Internet in bridged mode)
    # ceph-deploy install {ceph-node}[{ceph-node} ...]

    Example:
    # ceph-deploy install admnode node1 node2

    Start all Ceph services (note: recent releases use ceph.target in place of ceph.service)
    # sudo systemctl start ceph.target
   
(5) From the admin node, initialize the monitor(s) on the chosen node(s)

    Initialize the monitor(s) and gather the keys.
    # ceph-deploy mon create-initial
   
(6) Add two OSDs
    a) List the disks on a cluster node and find the target device, e.g. /dev/sdb
        # ceph-deploy disk list <node hostname>

    b) Prepare the OSDs
        # ceph-deploy osd prepare node2:/dev/sdb node3:/dev/sdb

    c) Activate the OSDs (note: during prepare, ceph-deploy partitions and formats the disk into an sdb1 data partition and an sdb2 journal partition; activation takes the data partition /dev/sdb1, not the whole /dev/sdb)
        # ceph-deploy osd activate node2:/dev/sdb1 node3:/dev/sdb1

        Note: if OSD activation fails, or an OSD's state is down, see
        http://docs.ceph.com/docs/master/rados/operations/monitoring-osd-pg/
        http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/#osd-not-running
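
        If prepare fails because /dev/sdb still holds partitions or data from a previous attempt, the disk can be wiped first with ceph-deploy (this destroys everything on the disk):
        # ceph-deploy disk zap node2:/dev/sdb node3:/dev/sdb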
       
    d)  Push the configuration file and admin key to the admin node and all other Ceph nodes, so that ceph CLI commands (such as ceph -s) can be run on any node rather than only on the monitor node.
        # ceph-deploy admin admnode node1 node2 node3
       
    e) Make sure ceph.client.admin.keyring is readable
        # sudo chmod +r /etc/ceph/ceph.client.admin.keyring
   
    f) If the OSDs activated successfully, running ceph -s or ceph -w on the mon node shows active+clean, as below:
            [root@node1 etc]# ceph -w
                cluster 62d61946-b429-4802-b7a7-12289121a022
                 health HEALTH_OK
                 monmap e1: 1 mons at {node1=10.167.225.137:6789/0}
                        election epoch 2, quorum 0 node2
                 osdmap e9: 2 osds: 2 up, 2 in
                  pgmap v15: 64 pgs, 1 pools, 0 bytes data, 0 objects
                        67916 kB used, 18343 MB / 18409 MB avail
                              64 active+clean
           
            2016-03-08 20:12:00.436008 mon.0 [INF] pgmap v15: 64 pgs: 64 active+clean; 0 bytes data, 67916 kB used, 18343 MB / 18409 MB avail

 

4. Full Ceph Storage Cluster Setup (http://docs.ceph.com/docs/master/start/quick-ceph-deploy/)


    Full layout and node roles
          Hostname        Role                  Disk
        ================================================================
        a) admnode      deploy-node      
        b) node1          mon1,osd.2,mds        Disk(/dev/sdb  capacity:10G)
        c) node2          osd.0,mon2            Disk(/dev/sdb  capacity:10G)
        d) node3          osd.1,mon3            Disk(/dev/sdb  capacity:10G)       

(1) Add another OSD on node1
    # ceph-deploy osd prepare node1:/dev/sdb
    # ceph-deploy osd activate node1:/dev/sdb1
   
    After the commands succeed, the cluster state is as follows:
    [root@node1 etc]# ceph -w
        cluster 62d61946-b429-4802-b7a7-12289121a022
         health HEALTH_OK
         monmap e1: 1 mons at {node1=10.167.225.137:6789/0}
                election epoch 2, quorum 0 node2
         osdmap e13: 3 osds: 3 up, 3 in
          pgmap v23: 64 pgs, 1 pools, 0 bytes data, 0 objects
                102032 kB used, 27515 MB / 27614 MB avail
                      64 active+clean

            2016-03-08 21:21:29.930307 mon.0 [INF] pgmap v23: 64 pgs: 64 active+clean; 0 bytes data, 102032 kB used, 27515 MB / 27614 MB avail

(2) Add an MDS on node1 (a metadata server is required to use CephFS)
    # ceph-deploy mds create node1
   
(3) Add an RGW instance (needed for the Ceph Object Gateway)
    # ceph-deploy rgw create node1

(4) Add more monitors. To form a quorum, the cluster needs an odd number of monitors, so add two more MON nodes; the MONs also need their clocks kept in sync.
    4.1 Configure time synchronization among the MON nodes (admnode acts as the NTP server; since it has no Internet access, it serves its local clock to the NTP clients).
        a) Configure a LAN NTP server on admnode (using the local clock).
            a.1) Edit /etc/ntp.conf and comment out the four "server N.centos.pool.ntp.org iburst" lines,
            then add the two lines "server 127.127.1.0" and "fudge 127.127.1.0 stratum 8":
                # Use public servers from the pool.ntp.org project.
                # Please consider joining the pool (http://www.pool.ntp.org/join.html).
                #server 0.centos.pool.ntp.org iburst
                #server 1.centos.pool.ntp.org iburst
                #server 2.centos.pool.ntp.org iburst
                #server 3.centos.pool.ntp.org iburst
                server 127.127.1.0
                fudge 127.127.1.0 stratum 8
           
            a.2) Enable the ntpd service on the admin node
            # sudo systemctl restart ntpd
               
            a.3) Check the ntpd service status
            # ntpstat
                synchronised to local net at stratum 6
                   time correct to within 12 ms
                   polling server every 64 s
            # ntpq -p
                     remote           refid      st t when poll reach   delay   offset  jitter
                ==============================================================================
                *LOCAL(0)        .LOCL.           5 l    3   64  377    0.000    0.000   0.000

        b) On node1, node2, and node3, the three nodes that will run monitors, configure NTP to sync against the NTP server.
            b.1) Make sure ntpd is stopped
            # sudo systemctl stop ntpd

            b.2) Do an initial sync against the NTP server with ntpdate, and confirm the offset is within 1000 s.
            # sudo ntpdate <admnode's IP or hostname>
                 9 Mar 16:59:26 ntpdate[31491]: adjust time server 10.167.225.136 offset -0.000357 sec

            b.3) Edit /etc/ntp.conf and comment out the four "server N.centos.pool.ntp.org iburst" lines.
            Add the NTP server's (admnode's) IP: "server 10.167.225.136"
                # Use public servers from the pool.ntp.org project.
                # Please consider joining the pool (http://www.pool.ntp.org/join.html).
                #server 0.centos.pool.ntp.org iburst
                #server 1.centos.pool.ntp.org iburst
                #server 2.centos.pool.ntp.org iburst
                #server 3.centos.pool.ntp.org iburst
                server 10.167.225.136
               
            b.4) Start the ntpd service
            # sudo systemctl start ntpd
           
            b.5) Check the ntpd service status
            # ntpstat
                synchronised to NTP server (10.167.225.136) at stratum 7
                   time correct to within 7949 ms
                   polling server every 64 s

            # ntpq -p
                     remote           refid      st t when poll reach   delay   offset  jitter
                ==============================================================================
                *admnode           LOCAL(0)         6 u    6   64    1    0.223   -0.301   0.000

    4.2    Add two MONs to the cluster
        a) Add the monitor nodes
        # ceph-deploy mon add node2
        # ceph-deploy mon add node3
   
        b) Once the nodes are up, the cluster state looks like this:
        # ceph -s
            cluster 62d61946-b429-4802-b7a7-12289121a022
             health HEALTH_OK
             monmap e3: 3 mons at {node1=10.167.225.137:6789/0,node2=10.167.225.138:6789/0,node3=10.167.225.141:6789/0}
                    election epoch 8, quorum 0,1,2 node2,node3,node4
             osdmap e21: 3 osds: 3 up, 3 in
              pgmap v46: 64 pgs, 1 pools, 0 bytes data, 0 objects
                    101 MB used, 27513 MB / 27614 MB avail
                          64 active+clean
        c) Check the quorum status
            # ceph quorum_status --format json-pretty
            Output:
                {
                    "election_epoch": 8,
                    "quorum": [
                        0,
                        1,
                        2
                    ],
                    "quorum_names": [
                        "node1",
                        "node2",
                        "node3"
                    ],
                    "quorum_leader_name": "node2",
                    "monmap": {
                        "epoch": 3,
                        "fsid": "62d61946-b429-4802-b7a7-12289121a022",
                        "modified": "2016-03-09 17:50:29.370831",
                        "created": "0.000000",
                        "mons": [
                            {
                                "rank": 0,
                                "name": "node1",
                                "addr": "10.167.225.137:6789\/0"
                            },
                            {
                                "rank": 1,
                                "name": "node2",
                                "addr": "10.167.225.138:6789\/0"
                            },
                            {
                                "rank": 2,
                                "name": "node3",
                                "addr": "10.167.225.141:6789\/0"
                            }
                        ]
                    }
                }

5. Ceph Block Device


(1) Prerequisites:
    a) The cluster has been set up successfully
    b) The cluster state is active+clean.
    c) Node roles; admnode doubles as the client node
          Hostname        Role                  Disk
        ================================================================
        a) admnode      deploy-node,client-node
        b) node1          mon1,osd.2,mds        Disk(/dev/sdb  capacity:10G)
        c) node2          osd.0,mon2            Disk(/dev/sdb  capacity:10G)
        d) node3          osd.1,mon3            Disk(/dev/sdb  capacity:10G)   

(2) Usage (http://docs.ceph.com/docs/master/start/quick-rbd/)
    a) On the client node, create a block device image; the default rbd pool is used here (list pools with ceph osd lspools)
        # ceph osd lspools
        0 rbd,
       
        # rbd create --size 1024 blockDevImg
        # rbd ls rbd
        blockDevImg
       
        # rbd info blockDevImg
            rbd image 'blockDevImg':
                size 1024 MB in 256 objects
                order 22 (4096 kB objects)
                block_name_prefix: rb.0.1041.74b0dc51
                format: 1
               
    b) On the client node, map the image to a block device.
        # sudo rbd map blockDevImg --name client.admin       
        /dev/rbd0
       
    c) Create a filesystem on the block device on the client node.
        # sudo mkfs.ext4 -m0 /dev/rbd/rbd/blockDevImg
        mke2fs 1.42.9 (28-Dec-2013)
        Discarding device blocks: done                           
        Filesystem label=
        OS type: Linux
        Block size=4096 (log=2)
        Fragment size=4096 (log=2)
        Stride=1024 blocks, Stripe width=1024 blocks
        65536 inodes, 262144 blocks
        0 blocks (0.00%) reserved for the super user
        First data block=0
        Maximum filesystem blocks=268435456
        8 block groups
        32768 blocks per group, 32768 fragments per group
        8192 inodes per group
        Superblock backups stored on blocks:
            32768, 98304, 163840, 229376

        Allocating group tables: done                           
        Writing inode tables: done                           
        Creating journal (8192 blocks): done
        Writing superblocks and filesystem accounting information: done

    d) Mount the filesystem
        # sudo mkdir /mnt/ceph-block-device
        # sudo mount /dev/rbd/rbd/blockDevImg /mnt/ceph-block-device
        # cd /mnt/ceph-block-device
        # mount
        ...
        /dev/rbd0 on /mnt/ceph-block-device type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
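
        To remap and remount the image automatically at boot, one option is the rbdmap helper shipped with Ceph; a sketch (assuming the default admin keyring path below) is to list the image in /etc/ceph/rbdmap and add a matching fstab entry:
        # echo "rbd/blockDevImg id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" | sudo tee -a /etc/ceph/rbdmap
        # echo "/dev/rbd/rbd/blockDevImg /mnt/ceph-block-device ext4 defaults,noatime,_netdev 0 0" | sudo tee -a /etc/fstab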

 

6. Using the Ceph Filesystem (CephFS)


(1) Prerequisites:
    a) The cluster has been set up successfully
    b) The cluster state is active+clean.
    c) Node roles; admnode doubles as the client node, and the steps below are run on the client node.
          Hostname        Role                  Disk
        ================================================================
        a) admnode      deploy-node,client-node     
        b) node1          mon1,osd.2,mds        Disk(/dev/sdb  capacity:10G)
        c) node2          osd.0,mon2            Disk(/dev/sdb  capacity:10G)
        d) node3          osd.1,mon3            Disk(/dev/sdb  capacity:10G)
       
(2) Usage (http://docs.ceph.com/docs/master/start/quick-cephfs/#create-a-secret-file)
    a) Create two pools (a metadata pool and a data pool)
    Command: ceph osd pool create <creating_pool_name> <pg_num>
    Parameters: creating_pool_name : name of the pool to create
          pg_num : number of placement groups (a common rule of thumb is OSD count x 100 divided by the replica count, rounded up to a power of two)
       
        # ceph osd pool create cephfs_data 512
        pool 'cephfs_data' created
        # ceph osd pool create cephfs_metadatea 512
        pool 'cephfs_metadatea' created
        # ceph osd lspools
        0 rbd,1 cephfs_data,2 cephfs_metadatea,

    b) Create a filesystem
    Command: ceph fs new <fs_name> <metadata_pool_name> <data_pool_name>
    Parameters: fs_name : filesystem name
          metadata_pool_name : metadata pool's name
          data_pool_name : data pool's name
       
        # ceph fs new cephfs cephfs_metadatea cephfs_data
        new fs with metadata pool 2 and data pool 1

    c) Once the filesystem is created, the MDS(s) enter the active state
        # ceph mds stat
        e5: 1/1/1 up {0=node1=up:active}

    d) Create a secret file on the admin node admnode
        # cat ceph.client.admin.keyring
        [client.admin]
            key = AQDrv95WLfajLhAAmUyN/wCoq6cxS9xOYfy9Zw==

        Create an admin.secret file under /etc/ceph/ and paste in the key value AQDrv95WLfajLhAAmUyN/wCoq6cxS9xOYfy9Zw==
        # vim /etc/ceph/admin.secret

        Create a mycephfs mount point
        # sudo mkdir /mnt/mycephfs
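
        Equivalently, the secret file can be written in one step with ceph auth (assuming the admin keyring is already deployed on this node):
        # sudo sh -c 'ceph auth get-key client.admin > /etc/ceph/admin.secret'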

    e) Mount CephFS with the kernel driver (see http://docs.ceph.com/docs/master/man/8/mount.ceph/ for details)
        sudo mount -t ceph <Monitor's IP or monitor host name>:<Ceph host port,default 6789>:/ <mountpoint> -o name=<RADOS user to authenticate as when using cephx>,secretfile=<path to file containing the secret key to use with cephx>
       
        # sudo mount -t ceph 10.167.225.137:6789:/ /mnt/mycephfs/ -o name=admin,secretfile=/etc/ceph/admin.secret

        Checking with mount shows that a new filesystem of type ceph has been mounted:
        # mount
        ...
        /dev/rbd0 on /mnt/ceph-block-device type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
        10.167.225.137:6789:/ on /mnt/mycephfs type ceph (rw,relatime,name=admin,secret=<hidden>,nodcache)
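
        If the kernel CephFS client is unavailable (for example, on an older kernel), the filesystem can also be mounted in user space with ceph-fuse; a sketch, assuming the ceph-fuse package is available in your repository:
        # sudo yum install ceph-fuse
        # sudo ceph-fuse -m 10.167.225.137:6789 /mnt/mycephfs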

 

7. Ceph Object Gateway


(1) Prerequisites:
    a) The cluster state is active+clean. Node roles; admnode doubles as the client node
                  Hostname        Role                  Disk
        ================================================================
        a) admnode      deploy-node,client-node     
        b) node1          mon1,osd.2,mds        Disk(/dev/sdb  capacity:10G)
        c) node2          osd.0,mon2            Disk(/dev/sdb  capacity:10G)
        d) node3          osd.1,mon3            Disk(/dev/sdb  capacity:10G)
    b) Make sure port 7480 is free and not blocked by the firewall (see http://docs.ceph.com/docs/master/start/quick-start-preflight/ for how to open it)
    c) The client node has the Ceph Object Gateway package installed; if not, install it with:
        # ceph-deploy install --rgw <client-node> [<client-node> ...]

(2) Usage
    a) From the deploy node, create an RGW instance on the client node
        # ceph-deploy rgw create <client-node's-host-name>
        ...
        [node1][WARNIN]    D-Bus, udev, scripted systemctl call, ...).
        [ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host node1 and default port 7480
       
        Query port 7480 with the following command to confirm the Ceph Object Gateway (RGW) service is up
        # sudo netstat -tlunp | grep 7480
            tcp        0      0 0.0.0.0:7480            0.0.0.0:*               LISTEN      10399/radosgw 
   
    b) Verify the RGW service by opening the following URL in a browser on the client node; if it responds, the service is working.
        http://<ip-of-client-node or client node's host name>:7480

        The page returns XML similar to:
        <?xml version="1.0" encoding="UTF-8"?>
        <ListAllMyBucketsResult>
          <Owner>
            <ID>anonymous</ID>
            <DisplayName></DisplayName>
          </Owner>
          <Buckets>
          </Buckets>
        </ListAllMyBucketsResult>
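
        To go beyond this anonymous bucket listing and use the S3 API, the gateway needs a user; a minimal sketch (the uid and display name are arbitrary placeholders):
        # sudo radosgw-admin user create --uid=testuser --display-name="Test User"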
