RHEL 6.5-----MFS

Hostname  IP              Services installed
master    192.168.30.130  mfsmaster, mfsmetalogger
node-1    192.168.30.131  chunkserver
node-2    192.168.30.132  mfsclient
node-3    192.168.30.133  chunkserver

Install the master server (mfsmaster) on master

[root@master ~]# yum install -y rpm-build gcc gcc-c++ fuse-devel zlib-devel
Upload the source package
[root@master ~]# rz
Add a service user for mfs
[root@master ~]# useradd -s /sbin/nologin mfs
[root@master ~]# tar -xf mfs-1.6.27-5.tar.gz -C /usr/local/src/
[root@master ~]# cd /usr/local/src/mfs-1.6.27/
[root@master mfs-1.6.27]# ./configure --prefix=/usr/local/mfs \
> --with-default-user=mfs \
> --with-default-group=mfs
[root@master mfs-1.6.27]# make -j 4 && make install 
[root@master mfs-1.6.27]# echo $?
0

Overview of the installed files

[root@master mfs-1.6.27]# cd /usr/local/mfs/
[root@master mfs]# ll
total 20
drwxr-xr-x 2 root root 4096 Jun  4 14:56 bin   #client-side tools
drwxr-xr-x 3 root root 4096 Jun  4 14:56 etc   #server configuration files
drwxr-xr-x 2 root root 4096 Jun  4 14:56 sbin  #server daemons
drwxr-xr-x 4 root root 4096 Jun  4 14:56 share #manual pages
drwxr-xr-x 3 root root 4096 Jun  4 14:56 var   #metadata directory; can be moved elsewhere via the configuration files

Create the configuration files

[root@master mfs]# cd /usr/local/mfs/etc/mfs/
[root@master mfs]# ll
total 28
-rw-r--r-- 1 root root  531 Jun  4 14:56 mfschunkserver.cfg.dist
-rw-r--r-- 1 root root 4060 Jun  4 14:56 mfsexports.cfg.dist
-rw-r--r-- 1 root root   57 Jun  4 14:56 mfshdd.cfg.dist
-rw-r--r-- 1 root root 1020 Jun  4 14:56 mfsmaster.cfg.dist
-rw-r--r-- 1 root root  417 Jun  4 14:56 mfsmetalogger.cfg.dist
-rw-r--r-- 1 root root  404 Jun  4 14:56 mfsmount.cfg.dist
-rw-r--r-- 1 root root 1123 Jun  4 14:56 mfstopology.cfg.dist
[root@master mfs]# cp mfsmaster.cfg.dist mfsmaster.cfg #master configuration file
[root@master mfs]# cp mfsexports.cfg.dist mfsexports.cfg #export (share) configuration file
[root@master mfs]# cp mfsmetalogger.cfg.dist mfsmetalogger.cfg #metadata logger configuration file
[root@master mfs]# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs #a fresh master install generates an empty metadata file named metadata.mfs.empty, but it lives in the DATA_PATH directory (/usr/local/mfs/var/mfs here), not in etc/mfs; the MooseFS master cannot run until it has been copied to metadata.mfs
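mfsmaster refuses to start on a fresh install until metadata.mfs exists in its DATA_PATH. A small pre-start guard can seed the file from the shipped metadata.mfs.empty only when it is missing. This is a hypothetical helper, not part of MFS; the default path matches the --prefix used in this article:

```shell
# ensure_metadata: seed DATA_PATH/metadata.mfs from metadata.mfs.empty on a
# fresh install, so mfsmaster can start. No-op when metadata.mfs already exists.
ensure_metadata() {
    d=${1:-/usr/local/mfs/var/mfs}    # DATA_PATH; override for testing
    if [ ! -f "$d/metadata.mfs" ] && [ -f "$d/metadata.mfs.empty" ]; then
        cp "$d/metadata.mfs.empty" "$d/metadata.mfs"
        echo "created empty metadata.mfs in $d"
    fi
}
```

Run it once before the first `mfsmaster start`; on later boots it does nothing.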

Start the service

[root@master mfs]# chown -R mfs:mfs /usr/local/mfs/
[root@master mfs]# /usr/local/mfs/sbin/mfsmaster start 
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... file not found
if it is not fresh installation then you have to restart all active mounts !!!
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
create new empty filesystem
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
[root@master mfs]# /usr/local/mfs/sbin/mfsmaster start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 1
directory inodes: 1
file inodes: 0
chunks: 0
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

Verify that the ports are listening

[root@master ~]# netstat -antup | grep -E ':(9419|9420|9421)'
tcp        0      0 0.0.0.0:9419                0.0.0.0:*                   LISTEN      2502/mfsmaster      
tcp        0      0 0.0.0.0:9420                0.0.0.0:*                   LISTEN      2502/mfsmaster      
tcp        0      0 0.0.0.0:9421                0.0.0.0:*                   LISTEN      2502/mfsmaster      
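To run this check from a script, the lookup can be wrapped in a small helper. It reads netstat-style output on stdin so it can be exercised without a running master; a sketch, not part of MFS:

```shell
# check_mfs_ports: report whether the three mfsmaster ports are in LISTEN
# state. Feed it netstat output, e.g.:  netstat -ant | check_mfs_ports
check_mfs_ports() {
    input=$(cat)          # buffer stdin so each port can be scanned separately
    for port in 9419 9420 9421; do
        if printf '%s\n' "$input" | grep -Eq ":${port}[[:space:]].*LISTEN"; then
            echo "port ${port}: LISTEN"
        else
            echo "port ${port}: NOT LISTENING"
        fi
    done
}
```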

Enable start at boot, and how to stop the service

[root@master ~]# echo "/usr/local/mfs/sbin/mfsmaster start" >> /etc/rc.local 
[root@master ~]# chmod +x /etc/rc.local 
[root@master ~]# /usr/local/mfs/sbin/mfsmaster stop
sending SIGTERM to lock owner (pid:2502)
waiting for termination ... terminated

Inspect the metadata and log files

[root@master ~]# ll /usr/local/mfs/var/mfs/
total 764
-rw-r----- 1 mfs mfs     95 Jun  4 15:48 metadata.mfs
-rw-r----- 1 mfs mfs     95 Jun  4 15:33 metadata.mfs.back.1
-rw-r--r-- 1 mfs mfs      8 Jun  4 14:56 metadata.mfs.empty
-rw-r----- 1 mfs mfs     10 Jun  4 15:21 sessions.mfs
-rw-r----- 1 mfs mfs 762516 Jun  4 15:48 stats.mfs

Specify the export permissions

[root@master ~]# cd /usr/local/mfs/etc/mfs/
[root@master mfs]# vim mfsexports.cfg

# Allow "meta".
*                       .       rw    #add the following two lines after this line
192.168.30.131          /       rw,alldirs,maproot=0
192.168.30.132          /       rw,alldirs,maproot=0

Start the service

[root@master mfs]# sh /etc/rc.local 
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 1
directory inodes: 1
file inodes: 0
chunks: 0
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

Configure the metadata logger (metalogger) on master

[root@master src]# rm -rf mfs-1.6.27/
[root@master src]# ls /root/
anaconda-ks.cfg  install.log.syslog   Pictures
Desktop          ip_forward~          Public
Documents        ip_forwarz~          Templates
Downloads        mfs-1.6.27-5.tar.gz  Videos
install.log      Music                
[root@master src]# tar -xf /root/mfs-1.6.27-5.tar.gz -C /usr/local/src/
[root@master src]# cd mfs-1.6.27/
[root@master mfs-1.6.27]# ./configure \
> --prefix=/usr/local/mfsmeta \
> --with-default-user=mfs \
> --with-default-group=mfs
[root@master mfs-1.6.27]# make -j 4 && make install && echo $? && cd /usr/local/mfsmeta && ls 
.....
0
bin  etc  sbin  share  var
[root@master mfsmeta]# pwd
/usr/local/mfsmeta
[root@master mfsmeta]# cd etc/mfs/
[root@master mfs]# cp mfsmetalogger.cfg.dist mfsmetalogger.cfg
[root@master mfs]# vim mfsmetalogger.cfg
..........
MASTER_HOST = 192.168.30.130
.......
How to start and stop the service
[root@master mfs]# chown -R mfs:mfs /usr/local/mfsmeta/
[root@master mfs]# /usr/local/mfsmeta/sbin/mfsmetalogger start
working directory: /usr/local/mfsmeta/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
[root@master mfs]# echo "/usr/local/mfsmeta/sbin/mfsmetalogger start " >> /etc/rc.local #enable start at boot
[root@master mfs]# /usr/local/mfsmeta/sbin/mfsmetalogger stop
sending SIGTERM to lock owner (pid:7535)
waiting for termination ... terminated
[root@master mfs]# /usr/local/mfsmeta/sbin/mfsmetalogger start
working directory: /usr/local/mfsmeta/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
#check the port
[root@master mfs]# lsof -i :9419
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mfsmaster 2258  mfs    8u  IPv4  14653      0t0  TCP *:9419 (LISTEN)
mfsmaster 2258  mfs   11u  IPv4  21223      0t0  TCP master:9419->master:44790 (ESTABLISHED)
mfsmetalo 7542  mfs    8u  IPv4  21222      0t0  TCP master:44790->master:9419 (ESTABLISHED)

Install the data server (chunkserver) on node-1. A chunkserver stores data blocks (chunks) as files on an ordinary local file system, so complete files can never be seen directly on a chunkserver.

[root@node-1 ~]# tar -xf mfs-1.6.27-5.tar.gz -C /usr/local/src/
[root@node-1 ~]# cd /usr/local/src/mfs-1.6.27/
[root@node-1 mfs-1.6.27]# useradd -s /sbin/nologin mfs
[root@node-1 mfs-1.6.27]# ./configure --prefix=/usr/local/mfschunk --with-default-user=mfs --with-default-group=mfs
[root@node-1 mfs-1.6.27]# make -j 4 && make install && echo $? && cd /usr/local/mfschunk 
............
0
[root@node-1 mfschunk]# pwd
/usr/local/mfschunk
[root@node-1 mfschunk]# ls
etc  sbin  share  var
[root@node-1 mfschunk]# cd etc/mfs/
Create and edit the configuration files
[root@node-1 mfs]# cp mfschunkserver.cfg.dist mfschunkserver.cfg
[root@node-1 mfs]# cp mfshdd.cfg.dist mfshdd.cfg
[root@node-1 mfs]# vim mfschunkserver.cfg
# WORKING_USER = mfs  #user the daemon runs as
# WORKING_GROUP = mfs #group the daemon runs as
# SYSLOG_IDENT = mfschunkserver  #identifier for this service's entries in syslog
# LOCK_MEMORY = 0 #whether to call mlockall() so the process cannot be swapped out (default 0)
# NICE_LEVEL = -19 #run priority (default -19)

# DATA_PATH = /usr/local/mfschunk/var/mfs  #data path; holds three kinds of files (changelog, sessions, stats)

# MASTER_RECONNECTION_DELAY = 5

# BIND_HOST = *
MASTER_HOST = 192.168.30.130 #name or address of the master server (hostname or IP)
MASTER_PORT = 9420 #default 9420; uncomment to set explicitly

# MASTER_TIMEOUT = 60

# CSSERV_LISTEN_HOST = *
# CSSERV_LISTEN_PORT = 9422 #port for connections from other chunkservers, usually data replication

# HDD_CONF_FILENAME = /usr/local/mfschunk/etc/mfs/mfshdd.cfg #location of the file listing the disk space MFS may use
# HDD_TEST_FREQ = 10

# deprecated, to be removed in MooseFS 1.7
# LOCK_FILE = /var/run/mfs/mfschunkserver.lock
# BACK_LOGS = 50
# CSSERV_TIMEOUT = 5
Set up the MFS storage location. /tmp is used here as an example; in production this is usually a dedicated disk.
[root@node-1 mfs]# vim mfshdd.cfg
# mount points of HDD drives
#
#/mnt/hd1
#/mnt/hd2
#etc.
/tmp
Start the service
[root@node-1 mfs]# chown -R mfs:mfs /usr/local/mfschunk/
[root@node-1 mfs]# /usr/local/mfschunk/sbin/mfschunkserver start
working directory: /usr/local/mfschunk/var/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: path to scan: /tmp/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly
[root@node-1 mfs]# echo "/usr/local/mfschunk/sbin/mfschunkserver start " >> /etc/rc.local 
[root@node-1 mfs]# chmod +x /etc/rc.local 
[root@node-1 mfs]# ls /tmp/ #all chunk storage directories; the contents are not human-readable
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F
10  11  12  13  14  15  16  17  18  19  1A  1B  1C  1D  1E  1F
...
F0  F1  F2  F3  F4  F5  F6  F7  F8  F9  FA  FB  FC  FD  FE  FF
(256 subdirectories named 00 through FF)
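The chunkserver pre-creates 256 subdirectories named after the hex values 00 through FF and spreads chunk files across them. The naming scheme can be reproduced for illustration:

```shell
# hex_dirs: print the 256 directory names a chunkserver creates (00..FF),
# one per line.
hex_dirs() {
    for i in $(seq 0 255); do
        printf '%02X\n' "$i"
    done
}
hex_dirs | head -3   # prints 00, 01, 02
```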
How to stop the service
[root@node-1 mfs]# /usr/local/mfschunk/sbin/mfschunkserver stop
sending SIGTERM to lock owner (pid:7395)
waiting for termination ... terminated

Configure the client on node-2

[root@node-2 ~]# yum install -y rpm-build gcc gcc-c++ fuse-devel zlib-devel lrzsz
[root@node-2 ~]# useradd -s /sbin/nologin mfs
[root@node-2 ~]# rz
[root@node-2 ~]# tar -xf mfs-1.6.27-5.tar.gz -C /usr/local/src/
[root@node-2 ~]# cd /usr/local/src/mfs-1.6.27/
[root@node-2 mfs-1.6.27]# ./configure --prefix=/usr/local/mfsclient \
> --with-default-user=mfs \
> --with-default-group=mfs \
> --enable-mfsmount
[root@node-2 mfs-1.6.27]# make -j 4 && make install && echo $? && cd /usr/local/mfsclient
......
0
[root@node-2 mfsclient]# pwd
/usr/local/mfsclient
[root@node-2 mfsclient]# cd bin/
[root@node-2 bin]# ls
mfsappendchunks  mfsfileinfo    mfsgettrashtime  mfsrgettrashtime  mfssetgoal
mfscheckfile     mfsfilerepair  mfsmakesnapshot  mfsrsetgoal       mfssettrashtime
mfsdeleattr      mfsgeteattr    mfsmount         mfsrsettrashtime  mfssnapshot
mfsdirinfo       mfsgetgoal     mfsrgetgoal      mfsseteattr       mfstools
[root@node-2 bin]# mkdir /mfs
[root@node-2 bin]# lsmod | grep fuse
fuse                   73530  2 
#If the fuse module is not listed here, load it manually:
[root@node-2 ~]# modprobe fuse
Make the mfsmount command available
[root@node-2 bin]# ln -s /usr/local/mfsclient/bin/mfsmount /usr/bin/mfsmount
Mount and test
[root@node-2 ~]# mfsmount /mfs/ -H 192.168.30.130 -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
[root@node-2 ~]# df -h 
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/vg_master-LogVol00   18G  4.2G   13G  25% /
tmpfs                           2.0G   72K  2.0G   1% /dev/shm
/dev/sda1                       194M   35M  150M  19% /boot
/dev/sr0                        3.6G  3.6G     0 100% /media/cdrom
192.168.30.130:9421              13G     0   13G   0% /mfs
[root@node-2 ~]# echo "modprobe fuse">>/etc/rc.local 
[root@node-2 ~]# echo "/usr/local/mfsclient/bin/mfsmount /mfs -H 192.168.30.130 -p">> /etc/rc.local 
[root@node-2 ~]# chmod +x /etc/rc.local 
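Before anything on the client relies on /mfs, it helps to confirm the mount is actually present. A sketch that parses `df -P`-style output fed on stdin (so it can be tested offline); the mountpoint argument is whatever you chose, /mfs in this article:

```shell
# is_mfs_mounted: exit 0 if the given mountpoint appears in df -P output.
# Usage: df -P | is_mfs_mounted /mfs
is_mfs_mounted() {
    # field 6 of df -P is the mount point
    awk -v mp="$1" '$6 == mp { found = 1 } END { exit !found }'
}
```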

Check the chunk store on node-1

[root@node-1 ~]# tree /tmp/
/tmp/
├── 00
├── 01
├── 02
├── 03
├── 04
├── 05
├── 06
├── 07
├── 08
├── 09
├── 0A
..............
265 directories, 63 files
On node-2:
[root@node-2 ~]# cp /etc/passwd /mfs/
Then check on node-1:
[root@node-1 ~]# tree /tmp/
/tmp/
├── 00
├── 01
│   └── chunk_0000000000000001_00000001.mfs
├── 02
├── 03
├── 04
├── 05
├── 06
├── 07
├── 08
├── 09
.........
265 directories, 64 files

The file is visible on node-2

[root@node-2 ~]# ls /mfs/
passwd

Start the web monitor on master. It is mainly used to monitor the status of each MFS node, and can be deployed on any machine.

[root@master mfs]# /usr/local/mfs/sbin/mfscgiserv  start  
lockfile created and locked
starting simple cgi server (host: any , port: 9425 , rootpath: /usr/local/mfs/share/mfscgi)

Check the permissions

Configure a password, add another chunkserver, and set the number of copies

On master

[root@master mfs]# cd /usr/local/mfs/etc/mfs
[root@master mfs]# vim mfsexports.cfg
# Allow everything but "meta".
*                       /       rw,alldirs,maproot=0

# Allow "meta".
*                       .       rw
192.168.30.131          /       rw,alldirs,maproot=0
192.168.30.132          /       rw,alldirs,maproot=0,password=123456
192.168.30.0/24          /       rw,alldirs,maproot=0
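Each non-comment line in mfsexports.cfg has three whitespace-separated fields: client address, directory (`/` for the share root, `.` for the meta tree), and options. A quick field-count sanity check before restarting the master; this is a hypothetical helper, not shipped with MFS:

```shell
# check_exports: flag mfsexports.cfg lines that do not have exactly the
# three fields (address, directory, options). Comments and blanks are skipped.
check_exports() {
    awk '!/^[[:space:]]*(#|$)/ {
        if (NF != 3) printf "line %d: expected 3 fields, got %d\n", NR, NF
    }' "$1"
}
```

Silent output means every line has the expected shape (it does not validate option names).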
Add a chunkserver
[root@node-3 ~]# yum install -y rpm-build gcc gcc-c++ fuse-devel zlib-devel lrzsz
[root@node-3 ~]# useradd -s /sbin/nologin mfs
[root@node-3 mfs-1.6.27]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --enable-mfsmount && echo $? && sleep 3 && make -j 4 && sleep 3 && make install && cd /usr/local/mfs
[root@node-3 mfs]# pwd
/usr/local/mfs
[root@node-3 mfs]# ls
bin  etc  sbin  share  var
[root@node-3 mfs]# cd etc/mfs/
[root@node-3 mfs]# cp mfschunkserver.cfg.dist mfschunkserver.cfg
[root@node-3 mfs]# vim mfschunkserver.cfg
......
MASTER_HOST = 192.168.30.130
MASTER_PORT = 9420
.......
[root@node-3 mfs]# vim mfshdd.cfg

# mount points of HDD drives
#
#/mnt/hd1
#/mnt/hd2
#etc.
/opt  #add this line
[root@node-3 mfs]# chown -R mfs:mfs /usr/local/mfs/
[root@node-3 mfs]# chown -R mfs:mfs /opt/
[root@node-3 mfs]# /usr/local/mfs/sbin/mfschunkserver start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: path to scan: /opt/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly
[root@node-3 mfs]# yum install -y tree 

Restart mfsmaster on master

[root@master ~]# /usr/local/mfs/sbin/mfsmaster stop 
sending SIGTERM to lock owner (pid:2258)
waiting for termination ... terminated
[root@master ~]# /usr/local/mfs/sbin/mfsmaster start 
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 2
directory inodes: 1
file inodes: 1
chunks: 1
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
[root@master ~]# /usr/local/mfsmeta/sbin/mfsmetalogger stop
sending SIGTERM to lock owner (pid:7614)
waiting for termination ... terminated
[root@master ~]# /usr/local/mfsmeta/sbin/mfsmetalogger start
working directory: /usr/local/mfsmeta/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly

Write new data from node-2 to test

Check the chunk store on node-3

[root@node-3 mfs]# tree /opt/
/opt/
├── 00
├── 01
├── 02
├── 03
│   └── chunk_0000000000000003_00000001.mfs
├── 04
├── 05
│   └── chunk_0000000000000005_00000001.mfs
├── 06
├── 07
│   └── chunk_0000000000000007_00000001.mfs
├── 08
├── 09
........

Check on node-1

[root@node-1 ~]# tree /tmp/
/tmp/
├── 00
├── 01
│   └── chunk_0000000000000001_00000001.mfs
├── 02
│   └── chunk_0000000000000002_00000001.mfs
├── 03
├── 04
│   └── chunk_0000000000000004_00000001.mfs
├── 05
├── 06
│   └── chunk_0000000000000006_00000001.mfs
├── 07
├── 08
├── 09
........

Note the difference between the two listings!

Set the number of copies (goal)

[root@node-2 ~]# cd /usr/local/mfsclient/bin/
[root@node-2 bin]# ./mfssetgoal -r 2 /mfs/
/mfs/:
 inodes with goal changed:               8
 inodes with goal not changed:           0
 inodes with permission denied:          0
[root@node-2 bin]# ./mfsfileinfo /mfs/initramfs-2.6.32-431.el6.x86_64.img   #the goal has not taken effect here
/mfs/initramfs-2.6.32-431.el6.x86_64.img:
    chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
        copy 1: 192.168.30.133:9422

Cause:

The client had not been remounted
[root@node-2 ~]# umount /mfs/
[root@node-2 ~]# mfsmount /mfs/ -H 192.168.30.130 -p
MFS Password: #enter 123456
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
[root@node-2 ~]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/vg_master-LogVol00   18G  4.2G   13G  25% /
tmpfs                           2.0G   72K  2.0G   1% /dev/shm
/dev/sda1                       194M   35M  150M  19% /boot
/dev/sr0                        3.6G  3.6G     0 100% /media/cdrom
192.168.30.130:9421              25G   58M   25G   1% /mfs
[root@node-2 ~]# cd /usr/local/mfsclient/bin/
[root@node-2 bin]# ./mfssetgoal -r 2 /mfs/ #nothing here explicitly reports 2 copies
/mfs/:
 inodes with goal changed:               0
 inodes with goal not changed:           8
 inodes with permission denied:          0
[root@node-2 bin]# ./mfsfileinfo /mfs/initramfs-2.6.32-431.el6.x86_64.img  #the result shows there are now two copies of the data
/mfs/initramfs-2.6.32-431.el6.x86_64.img:
    chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
        copy 1: 192.168.30.131:9422
        copy 2: 192.168.30.133:9422
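A goal of 2 is confirmed when mfsfileinfo lists two `copy` lines per chunk. That check can be scripted; the helper below reads the command output on stdin so it is testable without a cluster:

```shell
# count_copies: count "copy N:" lines in mfsfileinfo output, i.e. how many
# replicas of the file's chunks exist.
# Usage: /usr/local/mfsclient/bin/mfsfileinfo /mfs/somefile | count_copies
count_copies() {
    grep -c "copy [0-9][0-9]*:"
}
```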

Manually stop the chunkserver on node-1

[root@node-1 ~]# /usr/local/mfschunk/sbin/mfschunkserver stop
sending SIGTERM to lock owner (pid:7829)
waiting for termination ... terminated
[root@node-2 ~]# cp /etc/group /mfs/   #the client can still write
[root@node-2 ~]# ls /mfs/
config-2.6.32-431.el6.x86_64           passwd
group                                  symvers-2.6.32-431.el6.x86_64.gz
initramfs-2.6.32-431.el6.x86_64.img    System.map-2.6.32-431.el6.x86_64
initrd-2.6.32-431.el6.x86_64kdump.img  vmlinuz-2.6.32-431.el6.x86_64

Extra: set how long deleted files are kept in the trash

[root@node-2 bin]# cd 
[root@node-2 ~]# cd /usr/local/mfsclient/bin/
[root@node-2 bin]# ls
mfsappendchunks  mfsfileinfo    mfsgettrashtime  mfsrgettrashtime  mfssetgoal
mfscheckfile     mfsfilerepair  mfsmakesnapshot  mfsrsetgoal       mfssettrashtime
mfsdeleattr      mfsgeteattr    mfsmount         mfsrsettrashtime  mfssnapshot
mfsdirinfo       mfsgetgoal     mfsrgetgoal      mfsseteattr       mfstools
[root@node-2 bin]# ./mfsrsettrashtime 600 /mfs/
deprecated tool - use "mfssettrashtime -r"  #this tool has been deprecated
/mfs/:
 inodes with trashtime changed:              9
 inodes with trashtime not changed:          0
 inodes with permission denied:              0
[root@node-2 bin]# echo $?
0  #the previous command succeeded
#The trash retention time is given in seconds. If the MFSMETA file system is installed or mounted separately, it contains a /trash
directory (deleted files in it can still be restored) and /trash/undel/ (used to recover files).
Moving a deleted file into /trash/undel restores it.
Besides trash and trash/undel, the MFSMETA tree has a third directory, reserved, which holds files that were deleted while still
held open by other users; once those users close the files, the entries in reserved are removed and the data is deleted immediately. This directory cannot be operated on directly.
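Recovering a deleted file can be scripted along these lines. This is a sketch under the defaults used in this article (master at 192.168.30.130); `mfsmount -m` is the usual way to mount MFSMETA, but verify the option against your MFS version. With DRY_RUN=1 the script only prints the commands, which is how it is exercised here:

```shell
# restore_from_trash: mount MFSMETA and move a trash entry into trash/undel,
# which restores the deleted file. DRY_RUN=1 prints instead of executing.
DRY_RUN=${DRY_RUN:-0}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}
restore_from_trash() {
    meta=$1   # MFSMETA mountpoint, e.g. /mnt/mfsmeta (hypothetical path)
    name=$2   # entry name as it appears under trash/
    run mfsmount "$meta" -H 192.168.30.130 -m
    run mv "$meta/trash/$name" "$meta/trash/undel/"
}
```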

Order for starting and stopping the MFS cluster

Startup order

1. Start the master server
2. Start the chunkservers
3. Start the metalogger
4. Start the clients and mount with mfsmount

Shutdown order

1. Unmount the MFS file system on all clients
2. Stop the chunkservers
3. Stop the metalogger
4. Stop the master server
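The ordering above can be captured in a small wrapper that prints the commands in the safe sequence. Paths match the prefixes used in this article; pipe the output to `sh` to actually execute it on a host that runs all the roles:

```shell
# mfs_cluster: print the cluster start/stop commands in the correct order.
mfs_cluster() {
    case $1 in
        start)
            echo "/usr/local/mfs/sbin/mfsmaster start"            # 1. master first
            echo "/usr/local/mfschunk/sbin/mfschunkserver start"  # 2. chunkservers
            echo "/usr/local/mfsmeta/sbin/mfsmetalogger start"    # 3. metalogger
            echo "mfsmount /mfs -H 192.168.30.130"                # 4. clients
            ;;
        stop)
            echo "umount /mfs"                                    # 1. unmount clients
            echo "/usr/local/mfschunk/sbin/mfschunkserver stop"   # 2. chunkservers
            echo "/usr/local/mfsmeta/sbin/mfsmetalogger stop"     # 3. metalogger
            echo "/usr/local/mfs/sbin/mfsmaster stop"             # 4. master last
            ;;
    esac
}
```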