A Replica Set is built from n mongod nodes and provides a high-availability solution with automatic failover (auto-failover) and automatic recovery (auto-recovery).
A Replica Set can also be used for read/write splitting: by specifying slaveOk when connecting (or on the secondary connection), the Secondaries take over read traffic while the Primary handles only writes.
By default, the Secondary nodes of a Replica Set are not readable.
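For example (a minimal sketch using this tutorial's addresses and the db.fans demo collection created later in this walkthrough; in a 2.0-era mongo shell, rs.slaveOk() marks the current connection as allowed to read from a secondary):
[root@mongo02 sh]# mongo 192.168.8.31:27021
SECONDARY> db.fans.find()   // fails with "not master and slaveok=false"
SECONDARY> rs.slaveOk()     // mark this connection as OK to read from a secondary
SECONDARY> db.fans.find()   // now serves the read from this secondary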
Architecture diagram:
Two mongod instances run on each of the three servers:
shard11 + shard12 + shard13 ----> one replica set --|
                                                    |-----> sharding cluster
shard21 + shard22 + shard23 ----> one replica set --|
Shard Server: stores the actual data chunks. In production, each shard role should be carried by a replica set of several machines, to avoid a single point of failure.
Config Server: stores the metadata of the whole cluster, including the chunk information.
Route Server (mongos): the front-end router that clients connect to; it makes the whole cluster look like a single database, so front-end applications can use it transparently.
Part 1: Install and configure the MongoDB environment
1. Installation
wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.0.4.tgz
tar zxf mongodb-linux-x86_64-2.0.4.tgz
mv mongodb-linux-x86_64-2.0.4 /opt/mongodb
echo 'export PATH=$PATH:/opt/mongodb/bin' >> /etc/profile
source /etc/profile
2. Create the user and group
useradd -u 600 -s /bin/false mongodb
3. Create the data directories
Create the following directories on each server:
On the .30 server:
mkdir -p /data0/mongodb/{db,logs}
mkdir -p /data0/mongodb/db/{shard11,shard21,config}
On the .31 server:
mkdir -p /data0/mongodb/{db,logs}
mkdir -p /data0/mongodb/db/{shard12,shard22,config}
On the .32 server:
mkdir -p /data0/mongodb/{db,logs}
mkdir -p /data0/mongodb/db/{shard13,shard23,config}
4. Set up hosts resolution on every node
true > /etc/hosts
echo -ne "
192.168.8.30 mong01
192.168.8.31 mong02
192.168.8.32 mong03
" >> /etc/hosts

or:
cat >> /etc/hosts << EOF
192.168.8.30 mong01
192.168.8.31 mong02
192.168.8.32 mong03
EOF
5. Synchronize the clocks
ntpdate ntp.api.bz
Put this into a crontab job (a sample entry is sketched below).
The clocks here must stay in sync; otherwise the shards cannot synchronize.
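A minimal crontab sketch (the hourly schedule and the ntpdate path are assumptions; adjust for your distro):
# /etc/crontab: re-sync against ntp.api.bz once an hour
0 * * * * root /usr/sbin/ntpdate ntp.api.bz > /dev/null 2>&1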
以上配置各節點都進行操做!!
Part 2: Configure the replica sets
1. Start the mongod instances for the two shards
On the .30 server:
/opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard11 -oplogSize 1000 -logpath /data0/mongodb/logs/shard11.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2
/opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard21 -oplogSize 1000 -logpath /data0/mongodb/logs/shard21.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2
echo "all mongo started."

On the .31 server:
/opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard12 -oplogSize 1000 -logpath /data0/mongodb/logs/shard12.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2
numactl --interleave=all /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard22 -oplogSize 1000 -logpath /data0/mongodb/logs/shard22.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2
echo "all mongo started."

On the .32 server:
numactl --interleave=all /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard13 -oplogSize 1000 -logpath /data0/mongodb/logs/shard13.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2
numactl --interleave=all /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard23 -oplogSize 1000 -logpath /data0/mongodb/logs/shard23.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2
echo "all mongo started."
You can put the corresponding commands into a script for convenient startup,
or move the options into a configuration file and start with the -f option.
Rewritten as a configuration file (for the shard11 instance):
shardsvr = true
replSet = shard1
port = 27021
dbpath = /data0/mongodb/db/shard11
oplogSize = 1000
logpath = /data0/mongodb/logs/shard11.log
logappend = true
maxConns = 10000
quiet = true
profile = 1
slowms = 5
rest = true
fork = true
directoryperdb = true
The instance can then be started with mongod -f mongodb.conf.
Here I put all of these commands into scripts.
Startup scripts (these start only the mongod instances; the config server and mongos get their own scripts later):
[root@mon1 sh]# cat start.sh
/opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard11 -oplogSize 1000 -logpath /data0/mongodb/logs/shard11.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2

/opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard21 -oplogSize 1000 -logpath /data0/mongodb/logs/shard21.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2
ps aux |grep mongo
echo "all mongo started."

[root@mon2 sh]# cat start.sh
/opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard12 -oplogSize 1000 -logpath /data0/mongodb/logs/shard12.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2
/opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard22 -oplogSize 1000 -logpath /data0/mongodb/logs/shard22.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
sleep 2
ps aux |grep mongo
echo "all mongo started."

[root@mongo03 sh]# cat start.sh
/opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard13 -oplogSize 1000 -logpath /data0/mongodb/logs/shard13.log -logappend --maxConns 10000 --quiet -fork --directoryperdb --keyFile=/opt/mongodb/sh/keyFile
sleep 2
/opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard23 -oplogSize 1000 -logpath /data0/mongodb/logs/shard23.log -logappend --maxConns 10000 --quiet -fork --directoryperdb --keyFile=/opt/mongodb/sh/keyFile
sleep 2
echo "all mongo started."
PS: To expose an HTTP port with a REST interface, add the --rest option to the mongod startup parameters (as in the config file above).
The replica set status can then be viewed at http://IP:28021/_replSet (the HTTP port is the mongod port plus 1000).
For production, starting from configuration files via scripts is the recommended approach.
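A quick way to check the REST page from the command line (a sketch; it assumes --rest was enabled on the 27021 instance, whose HTTP port is then 28021):
[root@mongo01 sh]# curl -s http://192.168.8.30:28021/_replSet | head
This dumps the beginning of the HTML status page for replica set shard1.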
Part 3: Initialize the replica sets
1. Configure the replica set used by shard1
[root@mongo01 ~]# mongo 192.168.8.30:27021
> config = {_id: 'shard1', members: [
... {_id: 0, host: '192.168.8.30:27021'},
... {_id: 1, host: '192.168.8.31:27021'},
... {_id: 2, host: '192.168.8.32:27021'}]
... }
{
        "_id" : "shard1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.8.30:27021"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.8.31:27021"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.8.32:27021"
                }
        ]
}
The following output indicates success:
> rs.initiate(config)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}

> rs.status()
{
        "set" : "shard1",
        "date" : ISODate("2012-06-07T11:35:22Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.8.30:27021",
                        "health" : 1,            # 1 = healthy
                        "state" : 1,             # 1 = primary
                        "stateStr" : "PRIMARY",  # this server is the primary
                        "optime" : {
                                "t" : 1339068873000,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("2012-06-07T11:34:33Z"),
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.8.31:27021",
                        "health" : 1,              # 1 = healthy
                        "state" : 2,               # 2 = secondary
                        "stateStr" : "SECONDARY",  # this server is a secondary
                        "uptime" : 41,
                        "optime" : {
                                "t" : 1339068873000,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("2012-06-07T11:34:33Z"),
                        "lastHeartbeat" : ISODate("2012-06-07T11:35:21Z"),
                        "pingMs" : 7
                },
                {
                        "_id" : 2,
                        "name" : "192.168.8.32:27021",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 36,
                        "optime" : {
                                "t" : 1341373105000,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("2012-06-07T11:34:00Z"),
                        "lastHeartbeat" : ISODate("2012-06-07T11:35:21Z"),
                        "pingMs" : 3
                }
        ],
        "ok" : 1
}
PRIMARY>
The prompt changes to PRIMARY shortly afterwards, i.e. this node became the primary.
Check the other nodes:
[root@mongo02 sh]# mongo 192.168.8.31:27021
MongoDB shell version: 2.0.5
connecting to: 192.168.8.31:27021/test
SECONDARY>

[root@mongo03 sh]# mongo 192.168.8.32:27021
MongoDB shell version: 2.0.5
connecting to: 192.168.8.32:27021/test
SECONDARY>
The Replica Set configuration can be viewed on any of the nodes:
PRIMARY> rs.conf()
{
        "_id" : "shard1",
        "version" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.8.30:27021"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.8.31:27021"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.8.32:27021",
                        "shardOnly" : true
                }
        ]
}
2. Configure the replica set used by shard2
[root@mongo02 sh]# mongo 192.168.8.30:27022
MongoDB shell version: 2.0.5
connecting to: 192.168.8.30:27022/test
> config = {_id: 'shard2', members: [
... {_id: 0, host: '192.168.8.30:27022'},
... {_id: 1, host: '192.168.8.31:27022'},
... {_id: 2, host: '192.168.8.32:27022'}]
... }
{
        "_id" : "shard2",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.8.30:27022"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.8.31:27022"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.8.32:27022"
                }
        ]
}

> rs.initiate(config)
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
Verify the nodes:
> rs.status()
{
        "set" : "shard2",
        "date" : ISODate("2012-06-07T11:43:47Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.8.30:27022",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "optime" : {
                                "t" : 1341367921000,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("2012-06-07T11:43:40Z"),
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.8.31:27022",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 50,
                        "optime" : {
                                "t" : 1341367921000,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2012-06-07T11:43:46Z"),
                        "pingMs" : 0
                },
                {
                        "_id" : 2,
                        "name" : "192.168.8.32:27022",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 81,
                        "optime" : {
                                "t" : 1341373254000,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("2012-06-07T11:41:00Z"),
                        "lastHeartbeat" : ISODate("2012-06-07T11:43:46Z"),
                        "pingMs" : 0
                }
        ],
        "ok" : 1
}
PRIMARY>
That completes the configuration of the two replica sets!
PS: If no priority is specified at initialization, the member with _id 0 becomes the primary by default.
Key fields in the status output:
When viewing the replica set state with rs.status():
state: 1 means the host can currently accept reads and writes (primary); 2 means it is read-only (secondary).
health: 1 means the host is currently up; 0 means it is down.
Note: a replica set can also be initialized in one step like this:
db.runCommand({"replSetInitiate":{"_id":"shard1","members":[{"_id":0,"host":"192.168.8.30:27021"},{"_id":1,"host":"192.168.8.31:27021"},{"_id":2,"host":"192.168.8.32:27021","shardOnly":true}]}})
which saves the separate rs.initiate(config) call.
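If you would rather pin which member becomes primary instead of relying on the _id 0 default, member priorities can be set in the initiation config. A sketch (the priority values are illustrative, not part of the original setup; priority 0 keeps a member from ever being elected):
> config = {_id: 'shard1', members: [
... {_id: 0, host: '192.168.8.30:27021', priority: 2},
... {_id: 1, host: '192.168.8.31:27021', priority: 1},
... {_id: 2, host: '192.168.8.32:27021', priority: 0}]
... }
> rs.initiate(config)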
Part 4: Configure the three config servers
Run the following on each server (the startup command is identical everywhere):
/opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb
As a script:
[root@mongo01 sh]# cat config.sh
/opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb
[root@mongo01 sh]# pwd
/opt/mongodb/sh
[root@mongo01 sh]# ./config.sh
Then check on each node whether everything came up:
[root@mongo01 sh]# ps aux |grep mong
root 25343 0.9 6.8 737596 20036 ? Sl 19:32 0:12 /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard1 -oplogSize 100 -logpath /data0/mongodb/logs/shard1.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
root 25351 0.9 7.0 737624 20760 ? Sl 19:32 0:11 /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard2 -oplogSize 100 -logpath /data0/mongodb/logs/shard2.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
root 25669 13.0 4.7 118768 13852 ? Sl 19:52 0:07 /opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb
root 25695 0.0 0.2 61220 744 pts/3 R+ 19:53 0:00 grep mong
Part 5: Configure mongos (start the routers)
Run the following on the .30 and .31 servers (it can also be started on every node):
/opt/mongodb/bin/mongos -configdb 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000 -port 30000 -chunkSize 50 -logpath /data0/mongodb/logs/mongos.log -logappend -fork
As a script:
[root@mongo01 sh]# cat mongos.sh
/opt/mongodb/bin/mongos -configdb 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000 -port 30000 -chunkSize 50 -logpath /data0/mongodb/logs/mongos.log -logappend -fork
[root@mongo01 sh]# pwd
/opt/mongodb/sh
[root@mongo01 sh]# ./mongos.sh
[root@mongo01 sh]# ps aux |grep mong
root 25343 0.8 6.8 737596 20040 ? Sl 19:32 0:13 /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard1 -oplogSize 100 -logpath /data0/mongodb/logs/shard1.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
root 25351 0.9 7.0 737624 20768 ? Sl 19:32 0:16 /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard2 -oplogSize 100 -logpath /data0/mongodb/logs/shard2.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
root 25669 2.0 8.0 321852 23744 ? Sl 19:52 0:09 /opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb
root 25863 0.5 0.8 90760 2388 ? Sl 20:00 0:00 /opt/mongodb/bin/mongos -configdb 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000 -port 30000 -chunkSize 50 -logpath /data0/mongodb/logs/mongos.log -logappend -fork
root 25896 0.0 0.2 61220 732 pts/3 D+ 20:00 0:00 grep mong
Notes:
1) The IP:port pairs passed to mongos are those of the config servers: 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000.
2) The config servers must be started first, and be healthy (their processes must exist), before mongos is started.
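Like mongod, mongos can also be started from a configuration file with -f. A sketch mirroring the command line above (option names follow the 2.0-era config file format):
configdb = 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000
port = 30000
chunkSize = 50
logpath = /data0/mongodb/logs/mongos.log
logappend = true
fork = true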
Part 6: Configure the shard cluster
1. Connect to one of the routers
[root@mongo01 sh]# mongo 192.168.8.30:30000/admin
MongoDB shell version: 2.0.5
connecting to: 192.168.8.30:30000/admin
mongos>
2. Add the shards
mongos> db.runCommand({ addshard : "shard1/192.168.8.30:27021,192.168.8.31:27021,192.168.8.32:27021",name:"shard1",maxSize:20480})
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({ addshard : "shard2/192.168.8.30:27022,192.168.8.31:27022,192.168.8.32:27022",name:"shard2",maxSize:20480})
{ "shardAdded" : "shard2", "ok" : 1 }
PS:
Shard operations must be run against the admin database.
Since the router was started only on the .30 and .31 servers, the .32 server's mongos does not need to be brought in here.
Optional parameters:
name: assigns a name to each shard; if omitted, the system generates one automatically.
maxSize: caps the disk space each shard may use, in megabytes (MB).
3. List the added shards
mongos> db.runCommand( { listshards : 1 } );
{
        "shards" : [
                {
                        "_id" : "shard1",
                        "host" : "shard1/192.168.8.30:27021,192.168.8.31:27021,192.168.8.32:27021",
                        "maxSize" : NumberLong(20480)
                },
                {
                        "_id" : "shard2",
                        "host" : "shard2/192.168.8.30:27022,192.168.8.31:27022,192.168.8.32:27022",
                        "maxSize" : NumberLong(20480)
                }
        ],
        "ok" : 1
}
PS: Both of the shards added above (shard1 and shard2) are listed, so the shards are configured successfully!
If the .30 machine goes down, one of the other two replica set members becomes the primary, and mongos connects to the new primary automatically:
mongos> db.runCommand({ismaster:1});
{
        "ismaster" : true,
        "msg" : "isdbgrid",
        "maxBsonObjectSize" : 16777216,
        "ok" : 1
}
mongos> db.runCommand( { listshards : 1 } );
{ "ok" : 0, "errmsg" : "access denied - use admin db" }
mongos> use admin
switched to db admin
mongos> db.runCommand( { listshards : 1 } );
{
        "shards" : [
                {
                        "_id" : "s1",
                        "host" : "shard1/192.168.8.30:27021,192.168.8.31:27021"
                },
                {
                        "_id" : "s2",
                        "host" : "shard2/192.168.8.30:27022,192.168.8.31:27022"
                }
        ],
        "ok" : 1
}
mongos>
Part 7: Shard the data
1. Enable sharding on a database
db.runCommand( { enablesharding : "<dbname>" } );
e.g.: db.runCommand( { enablesharding : "nosql" } );
Insert some test data:
mongos> use nosql
switched to db nosql
mongos> for(var i=0;i<100;i++)db.fans.insert({uid:i,uname:'nosqlfans'+i});
Enable sharding on the database:
mongos> use admin
switched to db admin
mongos> db.runCommand( { enablesharding : "nosql" } );
{ "ok" : 1 }
This command allows the database to span shards; without it, the whole database stays on a single shard. Once sharding is enabled on a database, its different collections can be placed on different shards, but each individual collection still lives entirely on one shard. To spread a single collection across shards, the collection itself needs some extra steps, described below.
2. Add an index
An index on the intended shard key is required; otherwise the collection cannot be sharded.
mongos> use nosql
switched to db nosql
mongos> db.fans.find()
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad081"), "uid" : 0, "uname" : "nosqlfans0" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad082"), "uid" : 1, "uname" : "nosqlfans1" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad083"), "uid" : 2, "uname" : "nosqlfans2" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad084"), "uid" : 3, "uname" : "nosqlfans3" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad085"), "uid" : 4, "uname" : "nosqlfans4" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad086"), "uid" : 5, "uname" : "nosqlfans5" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad087"), "uid" : 6, "uname" : "nosqlfans6" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad088"), "uid" : 7, "uname" : "nosqlfans7" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad089"), "uid" : 8, "uname" : "nosqlfans8" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad08a"), "uid" : 9, "uname" : "nosqlfans9" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad08b"), "uid" : 10, "uname" : "nosqlfans10" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad08c"), "uid" : 11, "uname" : "nosqlfans11" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad08d"), "uid" : 12, "uname" : "nosqlfans12" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad08e"), "uid" : 13, "uname" : "nosqlfans13" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad08f"), "uid" : 14, "uname" : "nosqlfans14" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad090"), "uid" : 15, "uname" : "nosqlfans15" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad091"), "uid" : 16, "uname" : "nosqlfans16" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad092"), "uid" : 17, "uname" : "nosqlfans17" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad093"), "uid" : 18, "uname" : "nosqlfans18" }
{ "_id" : ObjectId("4ff2ae6816df1d1b33bad094"), "uid" : 19, "uname" : "nosqlfans19" }
has more
mongos> db.fans.ensureIndex({"uid":1})
mongos> db.fans.find({uid:10}).explain()
{
        "cursor" : "BtreeCursor uid_1",
        "nscanned" : 1,
        "nscannedObjects" : 1,
        "n" : 1,
        "millis" : 25,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "isMultiKey" : false,
        "indexOnly" : false,
        "indexBounds" : {
                "uid" : [
                        [
                                10,
                                10
                        ]
                ]
        }
}
3. Shard the collection
To spread a single collection across shards, it must be given a shard key, via the following command:
db.runCommand( { shardcollection : "<namespace>", key : <keypattern> });
mongos> use admin
switched to db admin
mongos> db.runCommand({shardcollection : "nosql.fans",key : {uid:1}})
{ "collectionsharded" : "nosql.fans", "ok" : 1 }
PS:
1) This must also be run against the admin database.
2) The collection must already have an index on the shard key.
3) A sharded collection may have only one unique index, and it must be on the shard key; other unique indexes are not allowed (see the sketch below).
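For completeness, a sketch of declaring the shard key unique at sharding time (the unique option is the stock one; whether uid should be unique depends on your data model):
mongos> use admin
mongos> db.runCommand({ shardcollection : "nosql.fans", key : { uid : 1 }, unique : true })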
4. Check the sharding status
mongos> use nosql
mongos> db.fans.stats()
{
        "sharded" : true,
        "flags" : 1,
        "ns" : "nosql.fans",
        "count" : 100,
        "numExtents" : 2,
        "size" : 5968,
        "storageSize" : 20480,
        "totalIndexSize" : 24528,
        "indexSizes" : {
                "_id_" : 8176,
                "uid0_1" : 8176,
                "uid_1" : 8176
        },
        "avgObjSize" : 59.68,
        "nindexes" : 3,
        "nchunks" : 1,
        "shards" : {
                "shard1" : {
                        "ns" : "nosql.fans",
                        "count" : 100,
                        "size" : 5968,
                        "avgObjSize" : 59.68,
                        "storageSize" : 20480,
                        "numExtents" : 2,
                        "nindexes" : 3,
                        "lastExtentSize" : 16384,
                        "paddingFactor" : 1,
                        "flags" : 1,
                        "totalIndexSize" : 24528,
                        "indexSizes" : {
                                "_id_" : 8176,
                                "uid0_1" : 8176,
                                "uid_1" : 8176
                        },
                        "ok" : 1
                }
        },
        "ok" : 1
}
mongos>
At this point no actual splitting has happened yet (nchunks is still 1)!
Now insert a much larger batch of data:
mongos> use nosql
switched to db nosql
mongos> for(var i=200;i<200003;i++)db.fans.save({uid:i,uname:'nosqlfans'+i});
mongos> db.fans.stats()
{
        "sharded" : true,
        "flags" : 1,
        "ns" : "nosql.fans",
        "count" : 200002,
        "numExtents" : 12,
        "size" : 12760184,
        "storageSize" : 22646784,
        "totalIndexSize" : 12116832,
        "indexSizes" : {
                "_id_" : 6508096,
                "uid_1" : 5608736
        },
        "avgObjSize" : 63.80028199718003,
        "nindexes" : 2,
        "nchunks" : 10,
        "shards" : {
                "shard1" : {
                        "ns" : "nosql.fans",
                        "count" : 9554,
                        "size" : 573260,
                        "avgObjSize" : 60.00209336403601,
                        "storageSize" : 1396736,
                        "numExtents" : 5,
                        "nindexes" : 2,
                        "lastExtentSize" : 1048576,
                        "paddingFactor" : 1,
                        "flags" : 1,
                        "totalIndexSize" : 596848,
                        "indexSizes" : {
                                "_id_" : 318864,
                                "uid_1" : 277984
                        },
                        "ok" : 1
                },
                "shard2" : {
                        "ns" : "nosql.fans",
                        "count" : 190448,
                        "size" : 12186924,
                        "avgObjSize" : 63.990821641602956,
                        "storageSize" : 21250048,
                        "numExtents" : 7,
                        "nindexes" : 2,
                        "lastExtentSize" : 10067968,
                        "paddingFactor" : 1,
                        "flags" : 1,
                        "totalIndexSize" : 11519984,
                        "indexSizes" : {
                                "_id_" : 6189232,
                                "uid_1" : 5330752
                        },
                        "ok" : 1
                }
        },
        "ok" : 1
}
mongos>
Once a large amount of data has been inserted, the collection is split into chunks and balanced across the shards automatically. Working as intended!
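To see exactly how the chunks were distributed, the sharding status can be printed from any mongos (output omitted here; it lists each database, the shards, and the chunk ranges of nosql.fans per shard):
mongos> db.printShardingStatus()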
Part 8: A script to stop all services
[root@mon1 sh]# cat /opt/mongodb/sh/stop.sh
#!/bin/sh
# Stop the processes in dependency order: mongos first (its command line
# contains "configdb"), then the config servers ("configsvr"), and finally
# the shard mongods ("shardsvr").
for pattern in configdb configsvr shardsvr
do
    check=`ps aux|grep mongo|grep $pattern|awk '{print $2;}'|wc -l`
    echo $check
    while [ $check -gt 0 ]
    do
        no=`ps aux|grep mongo|grep $pattern|awk '{print $2;}'|sed -n '1p'`
        kill -2 $no    # SIGINT: mongod/mongos trap it and shut down cleanly
        echo "kill $no mongo daemon is ok."
        sleep 2
        check=`ps aux|grep mongo|grep $pattern|awk '{print $2;}'|wc -l`
        echo "stopping mongo,pls waiting..."
    done
done

echo "all mongodb stopped!"
Part 9: Shard management
1. listshards: list all shards
> use admin
> db.runCommand({listshards:1})
2. Remove a shard
> use admin
> db.runCommand( { removeshard : "shard1/192.168.8.30:27021,192.168.8.31:27021" } )
> db.runCommand( { removeshard : "shard2/192.168.8.30:27022,192.168.8.31:27022" } )
After a shard has been removed, trying to add the same shard back will fail. It can be handled as follows:
mongos> use config
switched to db config
mongos> show collections
changelog
chunks
collections
databases
lockpings
locks
mongos
settings
shards
system.indexes
version
mongos> db.shards.find()
{ "_id" : "shard1", "host" : "shard1/192.168.8.30:27021,192.168.8.31:27021,192.168.8.32:27021", "maxSize" : NumberLong(20480) }
{ "_id" : "shard2", "host" : "shard2/192.168.8.30:27022,192.168.8.31:27022,192.168.8.32:27022", "maxSize" : NumberLong(20480) }
All that is needed is to delete the removed shard's entry from the shards collection, then add the shard again.
e.g.: db.shards.remove({"_id":"shard2"})
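Note that removeshard drains chunks asynchronously: the first call starts the draining, and repeating the same command polls the progress until the shard is gone (a sketch of the pattern; the reported counts vary):
mongos> db.runCommand( { removeshard : "shard2" } )   // starts draining chunks off the shard
mongos> db.runCommand( { removeshard : "shard2" } )   // run again to poll; reports remaining chunks until completed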
3. View sharding information
On a mongos:
> printShardingStatus()
On a replica set member (the replSet config is stored in the local database):
PRIMARY> db.system.replset.find()
PRIMARY> rs.isMaster()