Setting Up the MFS Distributed File System


 

MFS (MooseFS) is a semi-distributed file system developed in Poland. It provides RAID-like functionality, saves on storage cost, is no worse than a professional storage system, and also supports online expansion.
 
A distributed file system is one in which the physical storage managed by the file system is not necessarily attached directly to the local node, but is reached over a computer network.

The advantages of a distributed file system are centralized access, simplified operation, data disaster recovery, and improved file access performance.

The components of the MFS architecture:

  • Metadata server (Master): manages the file system and maintains its metadata;
  • Metadata logger (Metalogger): backs up the Master's changelog files, named changelog_ml.*.mfs. If the Master's data is lost or corrupted, these files can be fetched from the logger and used for recovery;
  • Chunk servers (Chunk Server): the servers that actually store the data. Files are stored in chunks, which are replicated between chunk servers. The more chunk servers there are, the more usable capacity, the higher the reliability, and the better the performance;
  • Client: mounts the MFS file system just like an NFS mount and is used in the same way.

How MFS reads data:

  1. The client sends a read request to the metadata server;
  2. The metadata server tells the client where the data is stored (the chunk servers' IP addresses and the chunk IDs);
  3. The client requests the data from the chunk servers it was given;
  4. The chunk servers send the data to the client.

How MFS writes data:

  1. The client sends a write request to the metadata server;
  2. The metadata server interacts with the chunk servers: it has new chunks created on selected servers, and once creation succeeds the chunk servers report back to the metadata server;
  3. The metadata server tells the client which chunks on which chunk server it can write its data to;
  4. The client writes the data to the designated chunk server;
  5. That chunk server synchronizes the data with the other chunk servers; once synchronization succeeds, it tells the client the write succeeded;
  6. The client informs the metadata server that the write is complete.
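MooseFS stores file data in chunks of up to 64 MiB, so the number of chunks allocated in step 2 follows directly from the file size. A minimal sketch of that arithmetic, using a hypothetical 200 MiB file:

```shell
# Chunks needed for a file: ceiling of size / 64 MiB (the MooseFS chunk size).
CHUNK=$((64 * 1024 * 1024))            # 64 MiB per chunk
SIZE=$((200 * 1024 * 1024))            # hypothetical 200 MiB file
CHUNKS=$(( (SIZE + CHUNK - 1) / CHUNK ))
echo "chunks needed: $CHUNKS"          # 200 MiB / 64 MiB, rounded up, is 4
```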

System environment

Host            OS                   IP address
Master Server   CentOS 7.3 x86_64    192.168.1.11
Metalogger      CentOS 7.3 x86_64    192.168.1.12
Chunk1          CentOS 7.3 x86_64    192.168.1.13
Chunk2          CentOS 7.3 x86_64    192.168.1.14
Chunk3          CentOS 7.3 x86_64    192.168.1.15
Client          CentOS 7.3 x86_64    192.168.1.22
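The MooseFS packages default to the hostname mfsmaster for the master; mapping that name to the Master's address on every node saves editing each config file. A sketch (TARGET is parameterized here only so the commands can be tried against a scratch file; use /etc/hosts on the real nodes):

```shell
# Append an mfsmaster entry to the hosts file if it is not already there.
TARGET=${TARGET:-$(mktemp)}            # set TARGET=/etc/hosts on the real nodes
grep -q 'mfsmaster' "$TARGET" || echo '192.168.1.11 mfsmaster' >> "$TARGET"
grep 'mfsmaster' "$TARGET"
```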

Deployment

 

Master Server:

 

  1. Add the GPG key
    # curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS

     

  2. Add the repository entry
    # curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo

     

  3. Install the master packages
    yum -y install moosefs-master moosefs-cgi moosefs-cgiserv moosefs-cli

    Confirm that the expected configuration files (mfsexports.cfg, mfsmaster.cfg, etc.) have been generated under /etc/mfs.
    The following files can all be left at their default values: mfsmaster.cfg, mfsexports.cfg, mfstopology.cfg


  4. Start mfsmaster and check that it is running
    mfsmaster start
    ps -ef | grep mfs

     

Metalogger:

 

  1. Add the GPG key
    # curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
  2. Add the repository entry
    # curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo

     

  3. Install the metalogger package
    yum -y install moosefs-metalogger

     

  4. Edit the mfsmetalogger.cfg configuration file
    # vim /etc/mfs/mfsmetalogger.cfg
    ...... (some lines omitted)
    ###############################################
    # RUNTIME OPTIONS                             #
    ###############################################
     
    # user to run daemon as (default is mfs)
    # WORKING_USER = mfs
     
    # group to run daemon as (optional - if empty then default user group will be used)
    # WORKING_GROUP = mfs
     
    # name of process to place in syslog messages (default is mfsmetalogger)
    # SYSLOG_IDENT = mfsmetalogger
     
    # whether to perform mlockall() to avoid swapping out mfsmetalogger process (default is 0, i.e. no)
    # LOCK_MEMORY = 0
     
    # Linux only: limit malloc arenas to given value - prevents server from using huge amount of virtual memory (default is 4)
    # LIMIT_GLIBC_MALLOC_ARENAS = 4
     
    # Linux only: disable out of memory killer (default is 1)
    # DISABLE_OOM_KILLER = 1
     
    # nice level to run daemon with (default is -19; note: process must be started as root to increase priority, if setting of priority fails, process retains the nice level it started with)
    # NICE_LEVEL = -19
     
    # set default umask for group and others (user has always 0, default is 027 - block write for group and block all for others)
    # FILE_UMASK = 027
     
    # where to store daemon lock file (default is /var/lib/mfs)
    # DATA_PATH = /var/lib/mfs
     
    # number of metadata change log files (default is 50)
    # BACK_LOGS = 50
     
    # number of previous metadata files to be kept (default is 3)
    # BACK_META_KEEP_PREVIOUS = 3
     
    # metadata download frequency in hours (default is 24, should be at least BACK_LOGS/2)
    # META_DOWNLOAD_FREQ = 24
     
    ###############################################
    # MASTER CONNECTION OPTIONS                   #
    ###############################################
     
    # delay in seconds before next try to reconnect to master if not connected (default is 5)
    # MASTER_RECONNECTION_DELAY = 5
     
    # local address to use for connecting with master (default is *, i.e. default local address)
    # BIND_HOST = *
     
    # MooseFS master host, IP is allowed only in single-master installations (default is mfsmaster)
     
    # Point this at the Master's IP address
    MASTER_HOST = 192.168.1.11
     
     
    # MooseFS master supervisor port (default is 9419)
    # MASTER_PORT = 9419
     
    # timeout in seconds for master connections (default is 10)
    # MASTER_TIMEOUT = 10

     

  5. Start mfsmetalogger and check that it is running
    mfsmetalogger start
    ps -ef | grep mfs

     

ChunkServers:

 
All three chunk servers are configured identically.

  1. Add the GPG key
    # curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS

     

  2. Add the repository entry
    # curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo

     

  3. Install the chunkserver package
    yum -y install moosefs-chunkserver

     

  4. Edit the main configuration file and point it at the Master's IP address
    # vim /etc/mfs/mfschunkserver.cfg
    ...... (some lines omitted)
    ###############################################
    # MASTER CONNECTION OPTIONS                   #
    ###############################################
     
    # labels string (default is empty - no labels)
    # LABELS =
     
    # local address to use for master connections (default is *, i.e. default local address)
    # BIND_HOST = *
     
    # MooseFS master host, IP is allowed only in single-master installations (default is mfsmaster)
     
    # Point this at the Master's IP address
    MASTER_HOST = 192.168.1.11
     
    # MooseFS master command port (default is 9420)
    # MASTER_PORT = 9420
     
     
    # timeout in seconds for master connections. Value >0 forces given timeout, but when value is 0 then CS asks master for timeout (default is 0 - ask master)
    # MASTER_TIMEOUT = 0
     
    # delay in seconds before next try to reconnect to master if not connected (default is 5)
    # MASTER_RECONNECTION_DELAY = 5
     
    # authentication string (used only when master requires authorization)
    # AUTH_CODE = mfspassword

     

  5. Specify the locations on this chunk server that are handed over to the MFS Master for storage
    # vim /etc/mfs/mfshdd.cfg
    ...... (some lines omitted)
    # This file keeps definitions of mounting points (paths) of hard drives to use with chunk server.
    # A path may begin with extra characters which switches additional options:
    #  - '*' means that this hard drive is 'marked for removal' and all data will be replicated to other hard drives (usually on other chunkservers)
    #  - '<' means that all data from this hard drive should be moved to other hard drives
    #  - '>' means that all data from other hard drives should be moved to this hard drive
    #  - '~' means that significant change of total blocks count will not mark this drive as damaged
    # If there are both '<' and '>' drives then data will be moved only between these drives
    # It is possible to specify optional space limit (after each mounting point), there are two ways of doing that:
    #  - set space to be left unused on a hard drive (this overrides the default setting from mfschunkserver.cfg)
    #  - limit space to be used on a hard drive
    # Space limit definition: [0-9]*(.[0-9]*)?([kMGTPE]|[KMGTPE]i)?B?, add minus in front for the first option.
    #
    # Examples:
    #
    # use hard drive '/mnt/hd1' with default options:
    #/mnt/hd1
    #
    # use hard drive '/mnt/hd2', but replicate all data from it:
    #*/mnt/hd2
    #
    # use hard drive '/mnt/hd3', but try to leave 5GiB on it:
    #/mnt/hd3 -5GiB
    #
    # use hard drive '/mnt/hd4', but use only 1.5TiB on it:
    #/mnt/hd4 1.5TiB
    #
    # use hard drive '/mnt/hd5', but fill it up using data from other drives
    #>/mnt/hd5
    #
    # use hard drive '/mnt/hd6', but move all data to other hard drives
    #</mnt/hd6
    #
    # use hard drive '/mnt/hd7', but ignore significant change of hard drive total size (e.g. compressed file systems)
    #~/mnt/hd7
     
    # partition/directory provided to MFS
    /data

     

  6. Create the data directory, set its owner and group to mfs, start the chunkserver service, and check that it is running
    mkdir /data
    chown -R mfs:mfs /data
    mfschunkserver start
    ps -ef | grep mfs
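The space-limit grammar documented in mfshdd.cfg above can be checked mechanically before restarting the chunkserver. A small sketch that validates candidate limit strings against the pattern quoted in that file (the sample values are hypothetical; digits are made mandatory here for sanity):

```shell
# Validate an mfshdd.cfg space limit against the documented pattern:
# [0-9]*(.[0-9]*)?([kMGTPE]|[KMGTPE]i)?B?  (leading '-' = space to leave unused)
valid_limit() {
    printf '%s\n' "$1" | grep -Eq '^-?[0-9]+(\.[0-9]+)?([kMGTPE]|[KMGTPE]i)?B?$'
}
for limit in '-5GiB' '1.5TiB' '100GB' 'five-gigs'; do
    if valid_limit "$limit"; then echo "$limit: ok"; else echo "$limit: invalid"; fi
done
```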

Client:

 

  1. Add the GPG key
    # curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS

     

  2. Add the repository entry
    # curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo

     

  3. Install the client package
    yum -y install moosefs-client

     

  4. Create the mount point, load the fuse module into the kernel, and mount MFS
    mkdir -p /mfs/data
    modprobe fuse
    mfsmount /mfs/data -H 192.168.1.11
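To have the mount come back after a reboot, mfsmount can also be driven from /etc/fstab. A sketch of such an entry for the setup above (option names vary slightly between MooseFS versions, so check mfsmount(8) on your system):

```
# /etc/fstab
mfsmount    /mfs/data    fuse    defaults,mfsmaster=192.168.1.11,_netdev    0 0
```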

     

Monitoring MFS

 
The yum installation already includes mfscgiserv, a small web server written in Python that listens on port 9425. Start it on the Master Server with the mfscgiserv command, then open it in a browser for a full view of all client mounts, the Chunk Servers, the Master Server, and the operations clients are performing.
 
The sections of the interface mean the following:

  • Info: basic information about the MFS deployment
  • Servers: the current Chunk Servers
  • Disks: each Chunk Server's disk directories and their usage
  • Exports: the shared directories, i.e. those that can be mounted
  • Mounts: the current mounts
  • Operations: the operations in progress
  • Master Charts: the Master Server's activity, including reads, writes, directory creations, directory deletions, and so on

 

MFS經常使用操做

 

The mfsgetgoal and mfssetgoal commands

The goal is the number of copies kept of a file. Once set, the goal can be confirmed with mfsgetgoal and changed with mfssetgoal.
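Since a goal of N keeps N copies of every chunk, the cluster's usable capacity is roughly its raw capacity divided by the goal. A quick sanity check of that arithmetic (the per-server capacity is hypothetical):

```shell
# Usable capacity under replication: raw space divided by the goal.
SERVERS=3                      # three chunk servers, as in this setup
PER_SERVER_GIB=100             # hypothetical capacity per chunk server
GOAL=2                         # two copies of every chunk
RAW=$((SERVERS * PER_SERVER_GIB))
USABLE=$((RAW / GOAL))
echo "raw ${RAW} GiB, usable about ${USABLE} GiB"
```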

The mfscheckfile and mfsfileinfo commands

The actual number of copies of a file can be confirmed with mfscheckfile and mfsfileinfo.

The mfsdirinfo command

A content summary of an entire directory tree can be displayed with mfsdirinfo, an enhanced equivalent of "du -s".
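For comparison, here is the plain "du -s" that mfsdirinfo generalizes (run on a scratch directory; mfsdirinfo adds MFS-specific figures such as chunk counts on top of this kind of summary):

```shell
# du -s gives a single usage figure for a whole tree; mfsdirinfo is the
# MFS-aware analogue of this.
DIR=$(mktemp -d)
dd if=/dev/urandom of="$DIR/sample" bs=1024 count=64 2>/dev/null   # 64 KiB file
USAGE_KIB=$(du -s "$DIR" | awk '{print $1}')
echo "tree uses ${USAGE_KIB} KiB"
rm -rf "$DIR"
```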

 

Maintaining MFS

The most important task is maintaining the metadata server, whose most important directory is /var/lib/mfs/. Every store, modification, and update of MFS data is recorded in a file in this directory, so as long as this directory's data is safe, the whole MFS file system remains safe and reliable. The contents of /var/lib/mfs/ fall into two parts: the metadata server's changelogs, with names like changelog.*.mfs, and the metadata file metadata.mfs, which is renamed metadata.mfs.back while mfsmaster is running. As long as these two sets of files are preserved, even if the metadata server suffers a fatal failure, a new metadata server can be deployed from the backed-up metadata files.
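A simple way to act on that advice is to archive /var/lib/mfs regularly. A minimal sketch, assuming the default DATA_PATH (the script falls back to a scratch directory with a dummy metadata.mfs so the commands can be exercised on a machine that is not the master):

```shell
# Archive the master's metadata directory (changelog.*.mfs, metadata.mfs*).
DATA_PATH=${DATA_PATH:-/var/lib/mfs}
if [ ! -d "$DATA_PATH" ]; then
    DATA_PATH=$(mktemp -d)             # stand-in for trying the commands out
    touch "$DATA_PATH/metadata.mfs"
fi
BACKUP="/tmp/mfs-meta-$(date +%F).tar.gz"
tar czf "$BACKUP" -C "$(dirname "$DATA_PATH")" "$(basename "$DATA_PATH")"
echo "metadata archived to $BACKUP"
```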
