Seaweed-FS Comprehensive Usage Test

References:

https://github.com/chrislusf/seaweedfs

https://bintray.com/chrislusf/seaweedfs/seaweedfs

https://www.mercurial-scm.org/downloads

http://www.golangtc.com/download

http://ju.outofmemory.cn/entry/202009

http://studygolang.com/articles/2398

Written in Go.

Weed-FS is a simple, highly available clustered file system for dense small-file storage.

Weed-FS is a simple and highly scalable distributed file system. There are two objectives:

According to the official documentation:

to store billions of files!

to serve the files fast!

Instead of supporting full POSIX file system semantics, Weed-FS chooses to implement only a key-to-file mapping. Similar to the word "NoSQL", you can call it "NoFS".

I. Architecture Design

1. Architecture topology

A DataCenter corresponds to a data center.

A Rack corresponds to a server rack.

A DataNode corresponds to a server.

The test topology is as follows (note: because of the 110 replication setting, each data center needs at least 2 racks):

(topology diagram omitted)

2. Simulating the existing environment:

Data center 1: dc1    Rack: dc1rack1    DataNodes: dc1node1, dc1node2, dc1node3, dc1node4

Data center 2: dc2    Rack: dc2rack1    DataNodes: dc2node1, dc2node2, dc2node3

weed master: dc1node1, dc1node2, dc2node1

weed volume: dc1node3, dc1node4, dc2node2, dc2node3

weed filer: dc1node1, dc1node2 (directory access is supported; mount is not)

3. Replication strategy: 110, i.e. replicate once on a different rack and once on a different data center

The other replication strategies are:

000 no replication, just one copy

001 replicate once on the same rack

010 replicate once on a different rack in the same data center

100 replicate once on a different data center

200 replicate twice on two other different data centers

110 replicate once on a different rack, and once on a different data center
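The three digits of a replication setting simply count extra copies at each level (data centers, racks, servers); a minimal sketch of decoding them (the field names below are mine, not from the SeaweedFS code):

```python
# Decode a SeaweedFS replication setting such as "110" into copy counts.
# Digit 1: copies on other data centers; digit 2: copies on other racks
# in the same data center; digit 3: copies on other servers in the same rack.
def decode_replication(setting: str) -> dict:
    dc, rack, server = (int(c) for c in setting)
    return {
        "other_data_centers": dc,
        "other_racks_same_dc": rack,
        "other_servers_same_rack": server,
        "total_copies": 1 + dc + rack + server,  # original + all replicas
    }

print(decode_replication("110"))  # 3 copies total: original + other rack + other DC
```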

4. hosts file:

192.168.88.213    dc1node1        

192.168.88.214    dc1node2

192.168.88.215    dc1node3

192.168.88.216    dc1node4

192.168.188.58    dc2node1

192.168.188.133   dc2node2

192.168.188.172   dc2node3

II. Deployment and Setup

1. Install weed

1) Go (Golang)

wget http://www.golangtc.com/static/go/go1.6/go1.6.linux-amd64.tar.gz

tar zxvf go1.6.linux-amd64.tar.gz -C /usr/local

Set GOROOT in /etc/profile:

export GOROOT=/usr/local/go

export PATH=$PATH:$GOROOT/bin:/usr/local/seaweedfs/sbin

source /etc/profile

2) Mercurial (version control)

wget http://mercurial.selenic.com/release/centos6/RPMS/x86_64/mercurial-3.4.2-0.x86_64.rpm

rpm -ivh mercurial-3.4.2-0.x86_64.rpm

 

3) Seaweed-FS

The prebuilt binary package is recommended; go install requires ×××.

https://bintray.com/chrislusf/seaweedfs/seaweedfs

weed_0.70beta_linux_amd64.tar.gz

tar zxvf weed_0.70beta_linux_amd64.tar.gz -C /usr/local/

cd /usr/local

mv weed_0.70beta_linux_amd64/ seaweedfs

mkdir seaweedfs/sbin

mv seaweedfs/weed  seaweedfs/sbin

2. Deploy weed

1) Start the weed masters

Run on dc1node1, dc1node2, and dc2node1 respectively:

weed master   -mdir="/data/dc1node1" -ip="192.168.88.213" -ip.bind="192.168.88.213" -port=9333 -peers="dc1node1:9333,dc1node2:9333,dc2node1:9333"  -defaultReplication="110" &

weed master   -mdir="/data/dc1node2" -ip="192.168.88.214" -ip.bind="192.168.88.214" -port=9333 -peers="dc1node1:9333,dc1node2:9333,dc2node1:9333"  -defaultReplication="110" &

weed master   -mdir="/data/dc2node1" -ip="192.168.188.58" -ip.bind="192.168.188.58" -port=9333 -peers="dc1node1:9333,dc1node2:9333,dc2node1:9333"  -defaultReplication="110" &

Usage reference:

weed help master

Usage: weed master -port=9333

  start a master server to provide volume=>location mapping service

  and sequence number of file ids

Default Parameters:

  -conf="/etc/weedfs/weedfs.conf": Deprecating! xml configuration file

  -defaultReplication="000": Default replication type if not specified.

  -garbageThreshold="0.3": threshold to vacuum and reclaim spaces

  -idleTimeout=10: connection idle seconds

  -ip="localhost": master <ip>|<server> address

  -ip.bind="0.0.0.0": ip address to bind to

  -maxCpu=0: maximum number of CPUs. 0 means all available CPUs

  -mdir="/tmp": data directory to store meta data

  -peers="": other master nodes in comma separated ip:port list, example: 127.0.0.1:9093,127.0.0.1:9094

  -port=9333: http listen port

  -pulseSeconds=5: number of seconds between heartbeats

  -secure.secret="": secret to encrypt Json Web Token(JWT)

  -volumeSizeLimitMB=30000: Master stops directing writes to oversized volumes.

  -whiteList="": comma separated Ip addresses having write permission. No limit if empty.

Startup logs:

[root@dc1node1 ~]# weed master   -mdir="/data/dc1node1" -ip="192.168.88.213" -ip.bind="192.168.88.213" -port=9333 -peers="dc1node1:9333,dc1node2:9333,dc2node1:9333"  -defaultReplication="110" &

[1] 7186

[root@datanode01 ~]# I0314 14:13:59 07186 file_util.go:20] Folder /data/dc1node1 Permission: -rwxr-xr-x

I0314 14:13:59 07186 topology.go:86] Using default configurations.

I0314 14:13:59 07186 master_server.go:59] Volume Size Limit is 30000 MB

I0314 14:13:59 07186 master.go:69] Start Seaweed Master 0.70 beta at 192.168.88.213:9333

I0314 14:13:59 07186 raft_server.go:74] Joining cluster: dc1node1:9333,dc1node2:9333,dc2node1:9333

I0314 14:13:59 07186 raft_server.go:134] Attempting to connect to: http://dc1node1:9333/cluster/join

I0314 14:13:59 07186 raft_server_handlers.go:16] Processing incoming join. Current Leader  Self 192.168.88.213:9333 Peers map[]

I0314 14:13:59 07186 raft_server_handlers.go:20] Command:{"name":"192.168.88.213:9333","connectionString":"http://192.168.88.213:9333"}

I0314 14:13:59 07186 raft_server_handlers.go:27] join command from Name 192.168.88.213:9333 Connection http://192.168.88.213:9333

I0314 14:13:59 07186 raft_server.go:179] Post returned status:  200

I0314 14:13:59 07186 master_server.go:93] [ 192.168.88.213:9333 ] I am the leader!

I0314 14:14:24 07186 raft_server_handlers.go:16] Processing incoming join. Current Leader 192.168.88.213:9333 Self 192.168.88.213:9333 Peers map[]

I0314 14:14:24 07186 raft_server_handlers.go:20] Command:{"name":"192.168.88.214:9333","connectionString":"http://192.168.88.214:9333"}

I0314 14:14:24 07186 raft_server_handlers.go:27] join command from Name 192.168.88.214:9333 Connection http://192.168.88.214:9333

I0314 14:14:52 07186 raft_server_handlers.go:16] Processing incoming join. Current Leader 192.168.88.213:9333 Self 192.168.88.213:9333 Peers map[192.168.88.214:9333:0xc20822dc00]

I0314 14:14:52 07186 raft_server_handlers.go:20] Command:{"name":"192.168.188.58:9333","connectionString":"http://192.168.188.58:9333"}

I0314 14:14:52 07186 raft_server_handlers.go:27] join command from Name 192.168.188.58:9333 Connection http://192.168.188.58:9333

[root@dc1node2 ~]# weed master   -mdir="/data/dc1node2" -ip="192.168.88.214" -ip.bind="192.168.88.214" -port=9333 -peers="dc1node1:9333,dc1node2:9333,dc2node1:9333"  -defaultReplication="110" &

[1] 47745

[root@datanode02 ~]# I0314 14:14:23 47745 file_util.go:20] Folder /data/dc1node2 Permission: -rwxr-xr-x

I0314 14:14:23 47745 topology.go:86] Using default configurations.

I0314 14:14:23 47745 master_server.go:59] Volume Size Limit is 30000 MB

I0314 14:14:23 47745 master.go:69] Start Seaweed Master 0.70 beta at 192.168.88.214:9333

I0314 14:14:23 47745 raft_server.go:74] Joining cluster: dc1node1:9333,dc1node2:9333,dc2node1:9333

I0314 14:14:23 47745 raft_server.go:134] Attempting to connect to: http://dc1node1:9333/cluster/join

I0314 14:14:23 47745 raft_server.go:179] Post returned status:  200

[root@dc2node1 ~]#weed master   -mdir="/data/dc2node1" -ip="192.168.188.58" -ip.bind="192.168.188.58" -port=9333 -peers="dc1node1:9333,dc1node2:9333,dc2node1:9333"  -defaultReplication="110" &

[1] 4490

[root@localhost ~]# I0314 14:14:50 04490 file_util.go:20] Folder /data/dc2node1 Permission: -rwxr-xr-x

I0314 14:14:50 04490 topology.go:86] Using default configurations.

I0314 14:14:50 04490 master_server.go:59] Volume Size Limit is 30000 MB

I0314 14:14:50 04490 master.go:69] Start Seaweed Master 0.70 beta at 192.168.188.58:9333

I0314 14:14:50 04490 raft_server.go:74] Joining cluster: dc1node1:9333,dc1node2:9333,dc2node1:9333

I0314 14:14:51 04490 raft_server.go:134] Attempting to connect to: http://dc1node1:9333/cluster/join

I0314 14:14:52 04490 raft_server.go:179] Post returned status:  200

2) Start the volume servers

Run on dc1node3, dc1node4, dc2node2, and dc2node3 respectively:

weed volume -dataCenter="m5dc"  -rack="m5rack1"  -ip="192.168.88.215" -ip.bind="192.168.88.215" -max="10" -dir="/data/dc1node3"  -mserver="dc1node1:9333" -port=8081 &

weed volume -dataCenter="m5dc"  -rack="m5rack1"  -ip="192.168.88.216" -ip.bind="192.168.88.216" -max="10" -dir="/data/dc1node4"  -mserver="dc1node1:9333" -port=8081 &

weed volume -dataCenter="shddc"  -rack="shdrack1" -ip="192.168.188.133" -ip.bind="192.168.188.133" -max="10" -dir="/data/dc2node2" -mserver="dc2node1:9333" -port=8081 &

weed volume -dataCenter="shddc"  -rack="shdrack1" -ip="192.168.188.172" -ip.bind="192.168.188.172" -max="10" -dir="/data/dc2node3" -mserver="dc2node1:9333" -port=8081 &

Startup logs:

[root@dc1node3 ~] # weed volume -dataCenter="m5dc"  -rack="m5rack1"  -ip="192.168.88.215" -ip.bind="192.168.88.215" -max="10" -dir="/data/dc1node3"  -mserver="dc1node1:9333" -port=8081 &

[1] 21117

[root@m ~]# I0314 14:12:12 21117 file_util.go:20] Folder /data/dc1node3 Permission: -rwxr-xr-x

I0314 14:12:12 21117 store.go:225] Store started on dir: /data/dc1node3 with 0 volumes max 10

I0314 14:12:12 21117 volume.go:136] Start Seaweed volume server 0.70 beta at 192.168.88.215:8081

I0314 14:12:12 21117 volume_server.go:70] Volume server bootstraps with master dc1node1:9333

[root@dc1node4 ~] # weed volume -dataCenter="m5dc"  -rack="m5rack1"  -ip="192.168.88.216" -ip.bind="192.168.88.216" -max="10" -dir="/data/dc1node4"  -mserver="dc1node1:9333" -port=8081 &

[1] 6284

[root@master ~]# I0314 14:19:50 06284 file_util.go:20] Folder /data/dc1node4 Permission: -rwxr-xr-x

I0314 14:19:50 06284 store.go:225] Store started on dir: /data/dc1node4 with 0 volumes max 10

I0314 14:19:50 06284 volume.go:136] Start Seaweed volume server 0.70 beta at 192.168.88.216:8081

I0314 14:19:50 06284 volume_server.go:70] Volume server bootstraps with master dc1node1:9333

[root@dc2node2] #  weed volume -dataCenter="shddc"  -rack="shdrack1" -ip="192.168.188.133" -ip.bind="192.168.188.133" -max="10" -dir="/data/dc2node2" -mserver="dc2node1:9333" -port=8081 &

[1] 15590

[root@datanode03 ~]# I0314 14:20:05 15590 file_util.go:20] Folder /data/dc2node2 Permission: -rwxr-xr-x

I0314 14:20:05 15590 store.go:225] Store started on dir: /data/dc2node2 with 0 volumes max 10

I0314 14:20:05 15590 volume.go:136] Start Seaweed volume server 0.70 beta at 192.168.188.133:8081

I0314 14:20:05 15590 volume_server.go:70] Volume server bootstraps with master dc2node1:9333

[root@dc2node3 ~] # weed volume -dataCenter="shddc"  -rack="shdrack1" -ip="192.168.188.172" -ip.bind="192.168.188.172" -max="10" -dir="/data/dc2node3" -mserver="dc2node1:9333" -port=8081 &

[1] 33466

[root@datanode04 ~]# I0314 14:20:26 33466 file_util.go:20] Folder /data/dc2node3 Permission: -rwxr-xr-x

I0314 14:20:26 33466 store.go:225] Store started on dir: /data/dc2node3 with 0 volumes max 10

I0314 14:20:26 33466 volume.go:136] Start Seaweed volume server 0.70 beta at 192.168.188.172:8081

I0314 14:20:26 33466 volume_server.go:70] Volume server bootstraps with master dc2node1:9333

[root@dc1node1 ~]#

I0314 14:19:29 07186 node.go:208] topo adds child m5dc

I0314 14:19:29 07186 node.go:208] topo:m5dc adds child m5rack1

I0314 14:19:29 07186 node.go:208] topo:m5dc:m5rack1 adds child 192.168.88.215:8081

I0314 14:19:50 07186 node.go:208] topo:m5dc:m5rack1 adds child 192.168.88.216:8081

I0314 14:20:06 07186 node.go:208] topo adds child shddc

I0314 14:20:06 07186 node.go:208] topo:shddc adds child shdrack1

I0314 14:20:06 07186 node.go:208] topo:shddc:shdrack1 adds child 192.168.188.133:8081

I0314 14:20:27 07186 node.go:208] topo:shddc:shdrack1 adds child 192.168.188.172:8081

Usage reference:

weed help volume

Usage: weed volume -port=8080 -dir=/tmp -max=5 -ip=server_name -mserver=localhost:9333

  start a volume server to provide storage spaces

Default Parameters:

  -dataCenter="": current volume server's data center name

  -dir="/tmp": directories to store data files. dir[,dir]...

  -idleTimeout=10: connection idle seconds

  -images.fix.orientation=true: Adjust jpg orientation when uploading.

  -index="memory": Choose [memory|leveldb|boltdb] mode for memory~performance balance.

  -ip="": ip or server name

  -ip.bind="0.0.0.0": ip address to bind to

  -max="7": maximum numbers of volumes, count[,count]...

  -maxCpu=0: maximum number of CPUs. 0 means all available CPUs

  -mserver="localhost:9333": master server location

  -port=8080: http listen port

  -port.public=0: port opened to public

  -publicUrl="": Publicly accessible address

  -pulseSeconds=5: number of seconds between heartbeats, must be smaller than or equal to the master's setting

  -rack="": current volume server's rack name

  -whiteList="": comma separated Ip addresses having write permission. No limit if empty.

3. Cluster UI after installation

http://dc1node1:9333

III. Basic Operations: Storing, Retrieving, and Deleting Files

1. HTTP REST API

1) Store a file

[root@dc2node3 ~]# curl -F file=@helloseaweedfs http://dc1node1:9333/submit

I0314 15:14:05 33466 store.go:192] In dir /data/dc2node3 adds volume:1 collection: replicaPlacement:110 ttl:

I0314 15:14:05 33466 volume.go:110] loading index file /data/dc2node3/1.idx readonly false

I0314 15:14:05 33466 store.go:192] In dir /data/dc2node3 adds volume:3 collection: replicaPlacement:110 ttl:

I0314 15:14:05 33466 volume.go:110] loading index file /data/dc2node3/3.idx readonly false

{"fid":"3,01255df9c4","fileName":"helloseaweedfs","fileUrl":"192.168.88.213:8081/3,01255df9c4","size":17}

The master returns a line of JSON, where:

fid is a comma-separated string. According to the docs in the repository, it consists of a volume id, a uint64 key, and a cookie code. The 3 before the comma is the volume id, and 01255df9c4 is the key and cookie combined. The fid is this file's unique ID in the cluster; all later reads, lookups, and deletes of the file use it.
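Following that description, a small sketch of taking a fid apart (the exact key/cookie split below is an assumption for illustration, not taken from the source code: the cookie is read as the trailing 8 hex characters, the rest as the key):

```python
# Split a fid such as "3,01255df9c4" into volume id, file key, and cookie.
# Assumption: cookie = last 8 hex chars; everything before it = uint64 key.
def parse_fid(fid: str):
    volume_id, rest = fid.split(",", 1)
    key_hex, cookie_hex = rest[:-8], rest[-8:]
    return int(volume_id), int(key_hex, 16), cookie_hex

vid, key, cookie = parse_fid("3,01255df9c4")
print(vid, key, cookie)  # volume 3, key 1, cookie "255df9c4"
```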

The file is stored on the dc1node1 volume node, at http://192.168.88.213:8081/3,01255df9c4 , with replication policy 110 (once on a different rack, once on a different data center); the data lives on volume id 3.

From the logs below, volume 3's replicas are on dc1node1 in m5rack2, dc1node4 in m5rack1, and dc2node3 in shdrack1.

Logs on the leader master while storing the file:

I0314 15:14:06 07186 volume_growth.go:204] Created Volume 1 on topo:shddc:shdrack2:192.168.188.58:8081

I0314 15:14:06 07186 volume_growth.go:204] Created Volume 1 on topo:shddc:shdrack1:192.168.188.172:8081

I0314 15:14:06 07186 volume_growth.go:204] Created Volume 1 on topo:m5dc:m5rack2:192.168.88.214:8081

I0314 15:14:06 07186 volume_growth.go:204] Created Volume 2 on topo:m5dc:m5rack2:192.168.88.213:8081

I0314 15:14:06 07186 volume_growth.go:204] Created Volume 2 on topo:m5dc:m5rack1:192.168.88.215:8081

I0314 15:14:06 07186 volume_growth.go:204] Created Volume 2 on topo:shddc:shdrack1:192.168.188.133:8081

I0314 15:14:06 07186 volume_growth.go:204] Created Volume 3 on topo:m5dc:m5rack2:192.168.88.213:8081

I0314 15:14:06 07186 volume_growth.go:204] Created Volume 3 on topo:m5dc:m5rack1:192.168.88.216:8081

I0314 15:14:06 07186 volume_growth.go:204] Created Volume 3 on topo:shddc:shdrack1:192.168.188.172:8081
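The /submit endpoint used above is a convenience wrapper around the standard two-step flow: first ask the master for a fid via /dir/assign, then write the file bytes to the returned volume server. A sketch that only builds the upload URL from a sample assign-style response (no network involved; the field names follow the JSON shown above):

```python
# Build the upload URL from a /dir/assign-style response.
# A real client would GET http://master:9333/dir/assign first,
# then POST the file to the URL returned here.
def upload_url(assign_response: dict) -> str:
    return "http://{url}/{fid}".format(
        url=assign_response["url"], fid=assign_response["fid"])

# Sample response shaped like the master's JSON seen earlier:
sample = {"fid": "3,01255df9c4", "url": "192.168.88.213:8081", "count": 1}
print(upload_url(sample))  # http://192.168.88.213:8081/3,01255df9c4
```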

 

2) Retrieve a file

The file submitted above was placed on three nodes, so it can be read from any of those three:

[root@dc2node3 ~]# curl http://dc1node1:8081/3,01255df9c4

hello seaweedfs!

[root@dc2node3 ~]# curl http://dc1node4:8081/3,01255df9c4

hello seaweedfs!

[root@dc2node3 ~]# curl http://dc2node3:8081/3,01255df9c4

hello seaweedfs!

Reading the file on a node that does not hold a replica redirects to a correct fileUrl:

[root@dc2node3 ~]# curl http://dc2node1:8081/3,01255df9c4

<a href="http://192.168.88.213:8081/3,01255df9c4">Moved Permanently</a>.

Accessing the leader master's port instead redirects, round-robin, to the fileUrls of all the replicas:

[root@dc2node3 ~]# curl http://dc1node1:9333/3,01255df9c4

<a href="http://192.168.188.172:8081/3,01255df9c4">Moved Permanently</a>.

[root@dc2node3 ~] # curl http://dc1node1:9333/3,01255df9c4

<a href="http://192.168.88.213:8081/3,01255df9c4">Moved Permanently</a>.

[root@dc2node3 ~] # curl http://dc1node1:9333/3,01255df9c4

<a href="http://192.168.88.216:8081/3,01255df9c4">Moved Permanently</a>.
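The round-robin redirects above can also be reproduced client-side: a /dir/lookup on the master returns every replica location for a volume, and the client can rotate through them itself. A hedged sketch over a sample lookup-shaped response (the locations list mirrors the three replicas seen above):

```python
import itertools

# Sample response shaped like the master's /dir/lookup JSON:
lookup = {"volumeId": "3", "locations": [
    {"url": "192.168.88.213:8081", "publicUrl": "192.168.88.213:8081"},
    {"url": "192.168.88.216:8081", "publicUrl": "192.168.88.216:8081"},
    {"url": "192.168.188.172:8081", "publicUrl": "192.168.188.172:8081"},
]}

# Rotate through the replica locations, one per request.
replicas = itertools.cycle(loc["publicUrl"] for loc in lookup["locations"])

def next_file_url(fid: str) -> str:
    return "http://%s/%s" % (next(replicas), fid)

print(next_file_url("3,01255df9c4"))  # first replica
print(next_file_url("3,01255df9c4"))  # second replica
```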

3) Delete a file

[root@dc2node3 ~]# curl -X DELETE  http://dc1node1:8081/3,01255df9c4

{"size":17}

After deletion the file can no longer be retrieved:

[root@dc2node3 ~] # curl http://dc1node1:8081/3,01255df9c4

[root@dc2node3 ~] # curl http://dc1node4:8081/3,01255df9c4

[root@dc2node3 ~] # curl http://dc2node3:8081/3,01255df9c4

I0314 16:01:22 33466 volume_server_handlers_read.go:53] read error: File Entry Not Found. Needle 67 Memory 0 /3,01255df9c4 (this error message is still being investigated)

However, the leader master's redirect links still exist:

[root@dc2node3 ~]# curl http://dc1node1:9333/3,01255df9c4

<a href="http://192.168.188.172:8081/3,01255df9c4">Moved Permanently</a>.

[root@dc2node3 ~] # curl http://dc1node1:9333/3,01255df9c4

<a href="http://192.168.88.213:8081/3,01255df9c4">Moved Permanently</a>.

[root@dc2node3 ~] # curl http://dc1node1:9333/3,01255df9c4

<a href="http://192.168.88.216:8081/3,01255df9c4">Moved Permanently</a>.

4) Volume vacuum

Only after deleting files and then vacuuming does the volume actually shrink on disk:

[root@dc2node3 ~] # curl "http://dc1node1:9333/vol/vacuum"

I0314 16:07:42 33466 volume.go:110] loading index file /data/dc2node3/3.idx readonly false

{"Topology":{"DataCenters":[{"Free":29,"Id":"m5dc","Max":34,"Racks":[{"DataNodes":[{"Free":9,"Max":10,"PublicUrl":"192.168.88.215:8081","Url":"192.168.88.215:8081","Volumes":1},{"Free":9,"Max":10,"PublicUrl":"192.168.88.216:8081","Url":"192.168.88.216:8081","Volumes":1}],"Free":18,"Id":"m5rack1","Max":20},{"DataNodes":[{"Free":5,"Max":7,"PublicUrl":"192.168.88.213:8081","Url":"192.168.88.213:8081","Volumes":2},{"Free":6,"Max":7,"PublicUrl":"192.168.88.214:8081","Url":"192.168.88.214:8081","Volumes":1}],"Free":11,"Id":"m5rack2","Max":14}]},{"Free":23,"Id":"shddc","Max":27,"Racks":[{"DataNodes":[{"Free":9,"Max":10,"PublicUrl":"192.168.188.133:8081","Url":"192.168.188.133:8081","Volumes":1},{"Free":8,"Max":10,"PublicUrl":"192.168.188.172:8081","Url":"192.168.188.172:8081","Volumes":2}],"Free":17,"Id":"shdrack1","Max":20},{"DataNodes":[{"Free":6,"Max":7,"PublicUrl":"192.168.188.58:8081","Url":"192.168.188.58:8081","Volumes":1}],"Free":6,"Id":"shdrack2","Max":7}]}],"Free":52,"Max":61,"layouts":[{"collection":"benchmark","replication":"110","ttl":"","writables":null},{"collection":"","replication":"110","ttl":"","writables":[1,2,3]}]},"Version":"0.70 beta"}

Before vacuum:

[root@dc1node1 dc1node1v]# ls -lh

total 12K

-rw-r--r-- 1 root root   8 Mar 14 15:14 2.dat

-rw-r--r-- 1 root root   0 Mar 14 15:14 2.idx

-rw-r--r-- 1 root root 120 Mar 14 15:59 3.dat

-rw-r--r-- 1 root root  32 Mar 14 15:59 3.idx

After vacuum:

[root@datanode01 dc1node1v]# ls -lh

total 8.0K

-rw-r--r-- 1 root root 8 Mar 14 15:14 2.dat

-rw-r--r-- 1 root root 0 Mar 14 15:14 2.idx

-rw-r--r-- 1 root root 8 Mar 14 16:07 3.dat

-rw-r--r-- 1 root root 0 Mar 14 16:07 3.idx
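The shrinking above is governed by the master's -garbageThreshold flag (0.3 by default): a volume becomes a vacuum candidate once the ratio of deleted bytes crosses the threshold. A simplified sketch of that decision (the byte counts are illustrative, loosely based on the 3.dat listing above; the real server tracks these counters per volume internally):

```python
# Decide whether a volume is worth vacuuming, in the spirit of the
# master's -garbageThreshold flag (default 0.3).
def should_vacuum(deleted_bytes: int, total_bytes: int,
                  threshold: float = 0.3) -> bool:
    if total_bytes == 0:
        return False  # empty volume: nothing to reclaim
    return deleted_bytes / total_bytes > threshold

# 3.dat went from 120 bytes to 8 after vacuum, so ~112 bytes were garbage:
print(should_vacuum(deleted_bytes=112, total_bytes=120))  # True
```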

2. Directory access

1) Deploy the filer service (multiple filer servers require Redis):

[root@dc1node1 ~] # weed filer -defaultReplicaPlacement="110" -dir="/data/dc1node1filer/" -redis.server="192.168.88.213:6379" -master="dc1node1:9333" -port=8888

I0315 11:53:30 39690 file_util.go:20] Folder /data/dc1node1filer/ Permission: -rwxr-xr-x

I0315 11:53:30 39690 filer.go:88] Start Seaweed Filer 0.70 beta at port 8888

 [root@dc1node2 ~] weed filer -defaultReplicaPlacement="110" -dir="/data/dc1node2filer/" -redis.server="192.168.88.213:6379" -master="dc1node1:9333" -port=8888

I0315 11:53:11 48413 file_util.go:20] Folder /data/dc1node2filer/ Permission: -rwxr-xr-x

I0315 11:53:11 48413 filer.go:88] Start Seaweed Filer 0.70 beta at port 8888

2) Upload an image to a given directory

[root@dc2node3 ~] curl -F "filename=@car.jpg" "http://dc1node1:8888/test/"

{"name":"car.jpg","size":13853}

3) Access from a browser

The URLs on both filer servers return the image correctly:

http://dc1node1:8888/test/car.jpg

http://dc1node2:8888/test/car.jpg

4) Delete the file

[root@dc2node3 ~] # curl -X DELETE http://dc1node1:8888/test/car.jpg

{"error":"Invalid fileId "}

An error is reported, but the file has actually been deleted! This is probably a minor bug. Also, although the filer servers form a cluster, Redis is still a single point of failure; for a fully clustered environment you would need to deploy a Redis cluster as well.

Now both of the following URLs return nothing:

http://dc1node1:8888/test/car.jpg

http://dc1node2:8888/test/car.jpg

3. mount

Linux clients only; requires a running filer and FUSE.

1) Install FUSE

yum -y install fuse

2) Start the server with -filer=true; this test uses a small standalone setup

weed server -filer=true automatically starts ports 9333, 8888, and 8080.

Start weed server on the dc2node2 node:

[root@dc2node2 ~]# weed server -filer=true

I0315 15:35:34 28793 file_util.go:20] Folder /tmp Permission: -rwxrwxrwx

I0315 15:35:34 28793 file_util.go:20] Folder /tmp/filer Permission: -rwx------

I0315 15:35:34 28793 file_util.go:20] Folder /tmp Permission: -rwxrwxrwx

I0315 15:35:34 28793 topology.go:86] Using default configurations.

I0315 15:35:34 28793 master_server.go:59] Volume Size Limit is 30000 MB

I0315 15:35:34 28793 server.go:203] Start Seaweed Master 0.70 beta at localhost:9333

I0315 15:35:34 28793 server.go:176] Start Seaweed Filer 0.70 beta at port 8888

I0315 15:35:34 28793 raft_server.go:103] Old conf,log,snapshot should have been removed.

I0315 15:35:34 28793 store.go:225] Store started on dir: /tmp with 0 volumes max 7

I0315 15:35:34 28793 server.go:257] Start Seaweed volume server 0.70 beta at localhost:8080

I0315 15:35:34 28793 volume_server.go:70] Volume server bootstraps with master localhost:9333

I0315 15:36:04 28793 master_server.go:89] [ localhost:9333 ] localhost:9333 becomes leader.

I0315 15:36:04 28793 node.go:208] topo adds child DefaultDataCenter

I0315 15:36:04 28793 node.go:208] topo:DefaultDataCenter adds child DefaultRack

I0315 15:36:04 28793 node.go:208] topo:DefaultDataCenter:DefaultRack adds child localhost:8080

I0315 15:36:04 28793 volume_server.go:82] Volume Server Connected with master at localhost:9333

3) Mount and use

Run weed mount on the dc2node3 node:

[root@dc2node3 ~] #  weed mount -filer="dc2node2:8888" -dir="/data/seaweedclient/"  -debug=true

This is SeaweedFS version 0.70 beta linux amd64

[root@dc2node3 ~] # ll /data

d????????? ? ?             ?                  ?            ? seaweedclient

In this test weed mount is not yet usable; the author has said it is half-finished.

(screenshot of the author's comment omitted)

4) Usage notes

weed help mount

Usage: weed mount -filer=localhost:8888 -dir=/some/dir

  mount weed filer to userspace.

  Pre-requisites:

  1) have SeaweedFS master and volume servers running

  2) have a "weed filer" running

  These 2 requirements can be achieved with one command "weed server -filer=true"

  This uses bazil.org/fuse, which enables writing FUSE file systems on

  Linux, and OS X.

  On OS X, it requires OSXFUSE (http://osxfuse.github.com/).

Default Parameters:

  -debug=false: verbose debug information

  -dir=".": mount weed filer to this directory

  -filer="localhost:8888": weed filer location

4. Client libraries for various languages

https://github.com/chrislusf/seaweedfs/wiki/Client-Libraries

IV. File Consistency

1. Adding nodes

m5dc gains a new rack, m5rack2, containing the two nodes dc1node1v and dc1node2v:

weed volume -dataCenter="m5dc"  -rack="m5rack2"  -ip="192.168.88.213" -ip.bind="192.168.88.213"  -dir="/data/dc1node1v"  -mserver="dc1node1:9333" -port=8081 &

weed volume -dataCenter="m5dc"  -rack="m5rack2"  -ip="192.168.88.214" -ip.bind="192.168.88.214"  -dir="/data/dc1node2v"  -mserver="dc1node1:9333" -port=8081 &

shddc gains a new rack, shdrack2, containing the dc2node1v node:

weed volume -dataCenter="shddc"  -rack="shdrack2" -ip="192.168.188.58" -ip.bind="192.168.188.58"  -dir="/data/dc2node1v" -mserver="dc2node1:9333" -port=8081 &

Topology logs after expansion:

[root@dc1node1 ~]#

I0314 15:09:34 07186 node.go:208] topo:m5dc adds child m5rack2

I0314 15:09:34 07186 node.go:208] topo:m5dc:m5rack2 adds child 192.168.88.213:8081

I0314 15:10:08 07186 node.go:208] topo:m5dc:m5rack2 adds child 192.168.88.214:8081

I0314 15:11:13 07186 node.go:208] topo:shddc adds child shdrack2

I0314 15:11:13 07186 node.go:208] topo:shddc:shdrack2 adds child 192.168.188.58:8081

2. Removing nodes

Before removing a volume node, make sure its data is backed up. The simple approach is to copy the data stored on its volumes to the corresponding volumes elsewhere, as dictated by the replication strategy.

When a master node is removed, the topology updates automatically and a new leader master is elected. For example, after killing the master service on dc1node1, dc2node1 became the leader master.

V. Load Testing

1. ab load test

Concurrency 100, 1,000,000 requests:

ab -k -c 100 -n 1000000 http://dc1node1:9333/dir/assign

This is ApacheBench, Version 2.3 <$Revision: 655654 $>

Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/

Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking dc1node1 (be patient)

Completed 100000 requests

Completed 200000 requests

Completed 300000 requests

Completed 400000 requests

Completed 500000 requests

Completed 600000 requests

Completed 700000 requests

Completed 800000 requests

Completed 900000 requests

Completed 1000000 requests

Finished 1000000 requests

Server Software:       

Server Hostname:        dc1node1

Server Port:            9333

Document Path:          /dir/assign

Document Length:        94 bytes

Concurrency Level:      100

Time taken for tests:   9.253 seconds

Complete requests:      1000000

Failed requests:        0

Write errors:           0

Keep-Alive requests:    1000000

Total transferred:      226000226 bytes

HTML transferred:       94000094 bytes

Requests per second:    108078.33 [#/sec] (mean)

Time per request:       0.925 [ms] (mean)

Time per request:       0.009 [ms] (mean, across all concurrent requests)

Transfer rate:          23853.25 [Kbytes/sec] received

Connection Times (ms)

              min  mean[+/-sd] median   max

Connect:        0    0   0.0      0       2

Processing:     0    1   0.8      1      41

Waiting:        0    1   0.8      1      41

Total:          0    1   0.8      1      41

Percentage of the requests served within a certain time (ms)

  50%      1

  66%      1

  75%      1

  80%      1

  90%      2

  95%      2

  98%      3

  99%      3

 100%     41 (longest request)
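As a sanity check, the headline numbers above follow from each other (small differences come from ab using an unrounded elapsed time internally):

```python
# Cross-check ab's summary: requests per second and mean time per request.
requests = 1_000_000
seconds = 9.253        # "Time taken for tests" as printed, rounded
concurrency = 100

rps = requests / seconds                    # ~108,073; ab prints 108078.33
mean_ms = concurrency * seconds * 1000 / requests  # per-request mean at c=100

print(round(rps, 2))
print(round(mean_ms, 4))  # ~0.9253 ms, matching ab's 0.925 [ms] (mean)
```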

2. Benchmark

1) First create 5 volumes in a collection named benchmark for testing:

 http://dc1node1:9333/vol/grow?collection=benchmark&count=5

Or send the POST request with curl:

curl -d "collection=benchmark&count=5" "http://192.168.88.213:9333/vol/grow"
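The grow request above just carries collection and count as query parameters; a small sketch rebuilding the same URL with the stdlib (pre-creating volumes this way means the benchmark can write without waiting for on-demand volume growth):

```python
from urllib.parse import urlencode

# Rebuild the /vol/grow URL used above; the endpoint and the
# collection/count parameter names are exactly as in the curl command.
def grow_url(master: str, collection: str, count: int) -> str:
    params = {"collection": collection, "count": count}
    return "http://%s/vol/grow?%s" % (master, urlencode(params))

print(grow_url("dc1node1:9333", "benchmark", 5))
# http://dc1node1:9333/vol/grow?collection=benchmark&count=5
```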

2) Concurrency 100: first write 1,048,576 files of 1 KB each, then randomly read them

[root@dc2node3 ~]#weed benchmark -c=100 -collection="benchmark" -size=1024 -n=1048576 -server="dc1node1:9333"

This is SeaweedFS version 0.70 beta linux amd64

------------ Writing Benchmark ----------

Completed 24778 of 1048576 requests, 2.4% 24776.5/s 24.9MB/s

Completed 52375 of 1048576 requests, 5.0% 27596.9/s 27.8MB/s

Completed 79137 of 1048576 requests, 7.5% 26762.1/s 26.9MB/s

......

Completed 982295 of 1048576 requests, 93.7% 25366.1/s 25.5MB/s

Completed 1007837 of 1048576 requests, 96.1% 25541.8/s 25.7MB/s

Completed 1032815 of 1048576 requests, 98.5% 24977.9/s 25.1MB/s

Concurrency Level:      100

Time taken for tests:   41.583 seconds

Complete requests:      1048576

Failed requests:        0

Total transferred:      1106787642 bytes

Requests per second:    25216.18 [#/sec]

Transfer rate:          25992.24 [Kbytes/sec]

Connection Times (ms)

              min      avg        max      std

Total:        0.8      3.9       322.7      7.4

Percentage of the requests served within a certain time (ms)

   50%      2.7 ms

   66%      3.8 ms

   75%      4.7 ms

   80%      5.2 ms

   90%      7.0 ms

   95%      8.9 ms

   98%     11.7 ms

   99%     13.7 ms

  100%    322.7 ms

------------ Randomly Reading Benchmark ----------

Completed 43852 of 1048576 requests, 4.2% 42917.7/s 43.2MB/s

Completed 120598 of 1048576 requests, 11.5% 78448.1/s 79.0MB/s

Completed 204812 of 1048576 requests, 19.5% 84215.0/s 84.8MB/s

......

Completed 862028 of 1048576 requests, 82.2% 83751.6/s 84.3MB/s

Completed 945524 of 1048576 requests, 90.2% 82230.2/s 82.8MB/s

Completed 1027398 of 1048576 requests, 98.0% 83154.3/s 83.7MB/s

Concurrency Level:      100

Time taken for tests:   13.244 seconds

Complete requests:      1048576

Failed requests:        0

Total transferred:      1106784752 bytes

Requests per second:    79172.39 [#/sec]

Transfer rate:          81608.81 [Kbytes/sec]

Connection Times (ms)

              min      avg        max      std

Total:        0.0      1.2       206.7      3.2

Percentage of the requests served within a certain time (ms)

   50%      1.0 ms

   95%      1.1 ms

   98%      1.5 ms

   99%      7.1 ms

  100%    206.7 ms

Concurrency 100, 100 KB files, 10,000 files: write first, then randomly read

[root@dc2node3 ~]# weed benchmark -c=100 -collection="benchmark" -size=102400 -n=10000 -server="dc1node1:9333"

This is SeaweedFS version 0.70 beta linux amd64

------------ Writing Benchmark ----------

Completed 977 of 10000 requests, 9.8% 976.9/s 95.4MB/s

Completed 2110 of 10000 requests, 21.1% 1133.0/s 110.7MB/s

Completed 3249 of 10000 requests, 32.5% 1139.0/s 111.3MB/s

Completed 4381 of 10000 requests, 43.8% 1132.0/s 110.6MB/s

Completed 5512 of 10000 requests, 55.1% 1131.0/s 110.5MB/s

Completed 6649 of 10000 requests, 66.5% 1137.0/s 111.1MB/s

Completed 7781 of 10000 requests, 77.8% 1132.0/s 110.6MB/s

Completed 8914 of 10000 requests, 89.1% 1133.0/s 110.7MB/s

Concurrency Level:      100

Time taken for tests:   8.969 seconds

Complete requests:      10000

Failed requests:        0

Total transferred:      1024315888 bytes

Requests per second:    1114.97 [#/sec]

Transfer rate:          111531.63 [Kbytes/sec]

Connection Times (ms)

              min      avg        max      std

Total:        5.4      89.2       988.1      59.1

Percentage of the requests served within a certain time (ms)

   50%     75.5 ms

   66%     86.4 ms

   75%     95.2 ms

   80%    101.8 ms

   90%    124.7 ms

   95%    165.0 ms

   98%    283.4 ms

   99%    346.4 ms

  100%    988.1 ms

------------ Randomly Reading Benchmark ----------

Completed 1173 of 10000 requests, 11.7% 1172.9/s 114.6MB/s

Completed 2404 of 10000 requests, 24.0% 1231.0/s 120.3MB/s

Completed 3646 of 10000 requests, 36.5% 1242.0/s 121.3MB/s

Completed 4880 of 10000 requests, 48.8% 1234.0/s 120.5MB/s

Completed 6121 of 10000 requests, 61.2% 1241.0/s 121.2MB/s

Completed 7334 of 10000 requests, 73.3% 1213.0/s 118.5MB/s

Completed 8589 of 10000 requests, 85.9% 1255.0/s 122.6MB/s

Completed 9798 of 10000 requests, 98.0% 1209.0/s 118.1MB/s

Completed 9999 of 10000 requests, 100.0% 201.0/s 19.6MB/s

Completed 9999 of 10000 requests, 100.0% 0.0/s 0.0MB/s

Concurrency Level:      100

Time taken for tests:   10.929 seconds

Complete requests:      10000

Failed requests:        0

Total transferred:      1024312373 bytes

Requests per second:    914.98 [#/sec]

Transfer rate:          91525.86 [Kbytes/sec]

Connection Times (ms)

              min      avg        max      std

Total:        0.3      81.5       4938.9      120.6

Percentage of the requests served within a certain time (ms)

   50%     52.1 ms

   66%     62.4 ms

   75%     70.0 ms

   80%     77.2 ms

   90%    231.5 ms

   95%    285.5 ms

   98%    328.1 ms

   99%    662.2 ms

  100%    4938.9 ms

Concurrency 100, 1 MB files, 10,000 files: write first, then randomly read

[root@dc2node3 ~]# weed benchmark -c=100 -collection="benchmark" -size=1024000 -n=10000 -server="dc1node1:9333"

This is SeaweedFS version 0.70 beta linux amd64

------------ Writing Benchmark ----------

Completed 14 of 10000 requests, 0.1% 13.3/s 13.0MB/s

Completed 68 of 10000 requests, 0.7% 57.1/s 55.7MB/s

Completed 138 of 10000 requests, 1.4% 70.0/s 68.4MB/s

Completed 224 of 10000 requests, 2.2% 86.0/s 84.0MB/s

......

Completed 9683 of 10000 requests, 96.8% 88.0/s 85.9MB/s

Completed 9764 of 10000 requests, 97.6% 81.0/s 79.1MB/s

Completed 9838 of 10000 requests, 98.4% 74.0/s 72.3MB/s

Completed 9925 of 10000 requests, 99.2% 87.0/s 85.0MB/s

Concurrency Level:      100

Time taken for tests:   123.407 seconds

Complete requests:      9999

Failed requests:        1

Total transferred:      10239290379 bytes

Requests per second:    81.02 [#/sec]

Transfer rate:          81027.21 [Kbytes/sec]

Connection Times (ms)

              min      avg        max      std

Total:        43.1      1231.1       5021.1      560.4

Percentage of the requests served within a certain time (ms)

   50%    1133.7 ms

   66%    1336.6 ms

   75%    1498.0 ms

   80%    1605.5 ms

   90%    1936.5 ms

   95%    2332.0 ms

   98%    2717.0 ms

   99%    3042.2 ms

  100%    5021.1 ms

------------ Randomly Reading Benchmark ----------

Completed 97 of 10000 requests, 1.0% 97.0/s 94.7MB/s

Completed 263 of 10000 requests, 2.6% 166.0/s 162.1MB/s

Completed 422 of 10000 requests, 4.2% 159.0/s 155.3MB/s

......

Completed 9647 of 10000 requests, 96.5% 138.0/s 134.8MB/s

Completed 9784 of 10000 requests, 97.8% 137.0/s 133.8MB/s

Completed 9934 of 10000 requests, 99.3% 150.0/s 146.5MB/s

Concurrency Level:      100

Time taken for tests:   69.699 seconds

Complete requests:      10000

Failed requests:        0

Total transferred:      10240314363 bytes

Requests per second:    143.47 [#/sec]

Transfer rate:          143477.86 [Kbytes/sec]

Connection Times (ms)

              min      avg        max      std

Total:        1.4      690.4       7590.4      566.1

Percentage of the requests served within a certain time (ms)

   50%    612.8 ms

   66%    788.6 ms

   75%    898.6 ms

   80%    1018.5 ms

   90%    1329.5 ms

   95%    1646.9 ms

   98%    2190.4 ms

   99%    2601.4 ms

  100%    7590.4 ms

Benchmark results from a client outside the cluster:

# weed benchmark -c=100 -n=100000 -size=102400 -server="192.168.188.58:9333"

This is SeaweedFS version 0.70 beta linux amd64

------------ Writing Benchmark ----------

Completed 1087 of 100000 requests, 1.1% 1085.6/s 106.1MB/s

Completed 2230 of 100000 requests, 2.2% 1143.0/s 111.7MB/s

Completed 3380 of 100000 requests, 3.4% 1150.0/s 112.3MB/s

Completed 4523 of 100000 requests, 4.5% 1143.0/s 111.7MB/s

......

Completed 96237 of 100000 requests, 96.2% 1116.0/s 109.0MB/s

Completed 97359 of 100000 requests, 97.4% 1122.0/s 109.6MB/s

Completed 98521 of 100000 requests, 98.5% 1162.0/s 113.5MB/s

Completed 99662 of 100000 requests, 99.7% 1140.9/s 111.5MB/s

Concurrency Level:      100

Time taken for tests:   94.407 seconds

Complete requests:      100000

Failed requests:        0

Total transferred:      10243144846 bytes

Requests per second:    1059.24 [#/sec]

Transfer rate:          105956.66 [Kbytes/sec]

Connection Times (ms)

              min      avg        max      std

Total:        3.9      94.2       1142.2      75.1

Percentage of the requests served within a certain time (ms)

   50%     75.0 ms

   66%     87.6 ms

   75%     97.1 ms

   80%    105.5 ms

   90%    145.0 ms

   95%    224.9 ms

   98%    345.6 ms

   99%    450.7 ms

  100%    1142.2 ms

------------ Randomly Reading Benchmark ----------

Completed 1034 of 100000 requests, 1.0% 1031.9/s 100.8MB/s

Completed 2155 of 100000 requests, 2.2% 1118.6/s 109.3MB/s

Completed 3300 of 100000 requests, 3.3% 1147.5/s 112.1MB/s

Completed 4434 of 100000 requests, 4.4% 1134.0/s 110.8MB/s

......

Failed to read http://192.168.88.215:8081/30,b738c9ca2a error:Get http://192.168.88.215:8081/30,b738c9ca2a: read tcp 192.168.88.215:8081: connection reset by peer

Completed 34733 of 100000 requests, 34.7% 1106.1/s 108.1MB/s

......

Completed 97819 of 100000 requests, 97.8% 1138.6/s 111.2MB/s

Completed 98942 of 100000 requests, 98.9% 1124.2/s 109.8MB/s

Completed 99988 of 100000 requests, 100.0% 1046.0/s 102.2MB/s

Concurrency Level:      100

Time taken for tests:   89.094 seconds

Complete requests:      99999

Failed requests:        1

Total transferred:      10243038550 bytes

Requests per second:    1122.40 [#/sec]

Transfer rate:          112274.15 [Kbytes/sec]

Connection Times (ms)

              min      avg        max      std

Total:        7.3      88.5       6147.5      109.4

Percentage of the requests served within a certain time (ms)

   50%     58.2 ms

   66%     66.0 ms

   75%     72.7 ms

   80%     79.8 ms

   90%    244.6 ms

   95%    291.8 ms

   98%    337.2 ms

   99%    633.2 ms

  100%    6147.5 ms
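The summary figures above are easy to sanity-check: requests per second is just completed requests divided by wall time, and the transfer rate is total bytes over the same interval. A quick check against the last run (plain Python, numbers copied from the random-read summary above):

```python
# Figures copied from the random-read summary of the last run above.
complete_requests = 99999
seconds = 89.094
total_bytes = 10243038550

rps = complete_requests / seconds      # should match "Requests per second"
kbps = total_bytes / seconds / 1024    # should match "Transfer rate" (KB/s)

print(round(rps, 2), round(kbps, 2))   # ~1122.4 and ~112274
```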

3) Delete the test files after benchmarking

http://dc1node1:9333/col/delete?collection=benchmark

Or submit the same thing as a POST request with curl:

curl -d "collection=benchmark" "http://dc1node1:9333/col/delete"
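The same collection delete can be issued from application code. A minimal sketch (the helper name is ours; the `/col/delete` endpoint is the one used above):

```python
from urllib.parse import urlencode

def collection_delete_url(master, collection):
    # Any weed master (host:port) accepts this; deleting the
    # collection drops all of its volumes at once.
    return "http://%s/col/delete?%s" % (master, urlencode({"collection": collection}))

print(collection_delete_url("dc1node1:9333", "benchmark"))
# http://dc1node1:9333/col/delete?collection=benchmark
```

POSTing to that URL (for example with `urllib.request`) is equivalent to the curl call above.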

6. Other features

1. weed upload

Usage: weed upload -server=localhost:9333 file1 [file2 file3]

 upload -server=localhost:9333 -dir=one_directory -include=*.pdf

  upload one or a list of files, or batch upload one whole folder recursively.
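Under the hood, an upload is a two-step HTTP exchange: ask the master for a file id via `/dir/assign`, then send the file bytes to the volume server the master picked. A minimal sketch of building the second step's URL (the sample fid and host are illustrative):

```python
def upload_url(assign):
    # 'assign' is the JSON the master returns from /dir/assign,
    # e.g. {"fid": "3,01637037d6", "url": "dc1node3:8080", "count": 1}.
    # The file is then POSTed to http://<volume server>/<fid>.
    return "http://%s/%s" % (assign["url"], assign["fid"])

print(upload_url({"fid": "3,01637037d6", "url": "dc1node3:8080"}))
# http://dc1node3:8080/3,01637037d6
```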

2. weed download

Usage: weed download -server=localhost:9333 -dir=one_directory fid1 [fid2 fid3 ...]

  download files by file id.
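A file id such as `3,01637037d6` is self-describing: the part before the comma is the volume id (which tells the master where the file lives), and the part after it is the hex-encoded file key plus cookie. A small parser, for illustration only:

```python
def parse_fid(fid):
    # e.g. "3,01637037d6" -> (3, "01637037d6"):
    # volume id first, then the hex-encoded file key + cookie.
    volume, key_cookie = fid.split(",", 1)
    return int(volume), key_cookie

print(parse_fid("3,01637037d6"))
```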

3. weed backup

Usage: weed backup -dir=. -volumeId=234 -server=localhost:9333

  Incrementally backup volume data.

    It is expected that you use this inside a script, to loop through

    all possible volume ids that needs to be backup to local folder.

    The volume id does not need to exist locally or even remotely.

    This will help to backup future new volumes.

    Usually backing up is just copying the .dat (and .idx) files.

    But it's tricky to incrementally copy the differences.

    The complexity comes when there are multiple addition, deletion and compaction.

    This tool will handle them correctly and efficiently, avoiding unnecessary data transportation.

Default Parameters:

  -collection="": collection name

  -dir=".": directory to store volume data files

  -server="localhost:9333": SeaweedFS master location

  -volumeId=-1: a volume id. The volume .dat and .idx files should already exist in the dir.
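As the help text suggests, backup is meant to be driven from a script that walks every possible volume id, whether it exists yet or not. A sketch that only builds the command list (the directory, master address, and id range are assumptions to adapt):

```python
def backup_commands(max_volume_id, backup_dir="/data/backup", master="localhost:9333"):
    # One "weed backup" invocation per candidate volume id; ids that
    # don't exist yet are harmless and cover volumes created later.
    return [
        ["weed", "backup", "-dir=%s" % backup_dir,
         "-volumeId=%d" % vid, "-server=%s" % master]
        for vid in range(max_volume_id + 1)
    ]

for cmd in backup_commands(2):
    print(" ".join(cmd))
```

Feed each list to `subprocess.run`, or print the lines into a cron-driven shell script.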

4. weed fix

Usage: weed fix -dir=/tmp -volumeId=234

  Fix runs the SeaweedFS fix command to re-create the index .idx file.

Default Parameters:

  -collection="": the volume collection name

  -dir=".": data directory to store files

  -volumeId=-1: a volume id. The volume should already exist in the dir. The volume index file should not exist.

5. weed compact

Usage: weed compact -dir=/tmp -volumeId=234

  Force a compaction to remove deleted files from volume files.

  The compacted .dat file is stored as .cpd file.

  The compacted .idx file is stored as .cpx file.

6. weed export

Usage: weed export -dir=/tmp -volumeId=234 -o=/dir/name.tar -fileNameFormat={{.Name}} -newer='2006-01-02T15:04:05'

  List all files in a volume, or Export all files in a volume to a tar file if the output is specified.

    The format of the file name in the tar file can be customized. Default is {{.Mime}}/{{.Id}}:{{.Name}}. Also available is {{.Key}}.
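The `{{.Field}}` placeholders are Go template fields. The substitution behaviour can be mimicked in a few lines (a simplified stand-in for the real Go template engine; the sample entry is invented):

```python
def render_name(fmt, entry):
    # Replace each {{.Field}} in the format string with the entry's
    # value; plain field substitution only, unlike real Go templates.
    out = fmt
    for key, value in entry.items():
        out = out.replace("{{.%s}}" % key, str(value))
    return out

print(render_name("{{.Mime}}/{{.Id}}:{{.Name}}",
                  {"Mime": "image/png", "Id": "3,01637037d6", "Name": "logo.png"}))
# image/png/3,01637037d6:logo.png
```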

7. Summary

Weed-FS offers a new option for anyone looking for an open-source distributed file system, especially for storing large numbers of small images: Weed-FS itself is based on Haystack, a paper on optimized picture storage. Its biggest problem at the moment seems to be the lack of heavyweight production use cases, and the system itself still has quite a few rough edges. The version tested here is the latest 0.70 beta; later stable releases should be somewhat better.
