前面詳細介紹了mongodb的副本集和分片的原理,這裏就不贅述了。下面記錄Mongodb副本集+分片集羣環境部署過程:前端
MongoDB Sharding Cluster,須要三種角色:linux
Shard Server: mongod 實例,用於存儲實際的數據塊,實際生產環境中一個shard server角色可由幾臺機器組個一個relica set承擔,防止主機單點故障
Config Server: mongod 實例,存儲了整個 Cluster Metadata,其中包括 chunk 信息。
Route Server: mongos 實例,前端路由,客戶端由此接入,且讓整個集羣看上去像單一數據庫,前端應用能夠透明使用。mongodb
機器信息:數據庫
分別在3臺機器運行一個mongod實例(稱爲mongod shard11,mongod shard12,mongod shard13)組織replica set1,做爲cluster的shard1
分別在3臺機器運行一個mongod實例(稱爲mongod shard21,mongod shard22,mongod shard23)組織replica set2,做爲cluster的shard2
每臺機器運行一個mongod實例,做爲3個config server
每臺機器運行一個mongos進程,用於客戶端鏈接bash
1)安裝mongodb (3臺機器都要操做) 下載地址:https://pan.baidu.com/s/1hsoVcpQ 提取密碼:6zp4 [root@slave1 src]# cd [root@slave1 ~]# cd /usr/local/src/ [root@slave1 src]# ll mongodb-linux-x86_64-rhel62-3.0.6.tgz [root@slave1 src]# tar -zvxf mongodb-linux-x86_64-rhel62-3.0.6.tgz [root@slave1 src]# mv mongodb-linux-x86_64-rhel62-3.0.6 mongodb 2)建立sharding數據目錄 根據本例sharding架構圖所示,在各臺sever上建立shard數據文件目錄 slave1 [root@slave1 src]# mkdir /home/services/ [root@slave1 src]# mv mongodb /home/services/ [root@slave1 src]# cd /home/services/mongodb/ [root@slave1 mongodb]# mkdir -p data/shard11 [root@slave1 mongodb]# mkdir -p data/shard21 slave2 [root@slave2 src]# mkdir /home/services/ [root@slave2 src]# mv mongodb /home/services/ [root@slave2 src]# cd /home/services/mongodb/ [root@slave2 mongodb]# mkdir -p data/shard12 [root@slave2 mongodb]# mkdir -p data/shard22 slave3 [root@slave3 src]# mkdir /home/services/ [root@slave3 src]# mv mongodb /home/services/ [root@slave3 src]# cd /home/services/mongodb/ [root@slave3 mongodb]# mkdir -p data/shard13 [root@slave3 mongodb]# mkdir -p data/shard23 3)配置relica sets 3.1)配置shard1所用到的replica sets 1: slave1 [root@slave1 ~]# /home/services/mongodb/bin/mongod --shardsvr --replSet shard1 --port 27018 --dbpath /home/services/mongodb/data/shard11 --oplogSize 100 --logpath /home/services/mongodb/data/shard11.log --logappend --fork slave2 [root@slave2 ~]# /home/services/mongodb/bin/mongod --shardsvr --replSet shard1 --port 27018 --dbpath /home/services/mongodb/data/shard12 --oplogSize 100 --logpath /home/services/mongodb/data/shard12.log --logappend --fork slave3 [root@slave3 ~]# /home/services/mongodb/bin/mongod --shardsvr --replSet shard1 --port 27018 --dbpath /home/services/mongodb/data/shard13 --oplogSize 100 --logpath /home/services/mongodb/data/shard13.log --logappend --fork 檢車各個機器上的mongod進程是否正常起來了(ps -ef|grep mongod),27018端口是否正常起來了 3.2)初始化replica set 1 從3臺機器中任意找一臺,鏈接mongod [root@slave1 ~]# /home/services/mongodb/bin/mongo --port 27018 ...... > config = {"_id" : "shard1","members" : [{"_id" : 0,"host" : "182.48.115.236:27018"},{"_id" : 1,"host" : "182.48.115.237:27018"},{"_id" : 2,"host" : "182.48.115.238:27018"}]} { "_id" : "shard1", "members" : [ { "_id" : 0, "host" : "182.48.115.236:27018" }, { "_id" : 1, "host" : "182.48.115.237:27018" }, { "_id" : 2, "host" : "182.48.115.238:27018" } ] } > rs.initiate(config); { "ok" : 1 } 3.3)配置shard2所用到的replica sets 2: slave1 [root@slave1 ~]# /home/services/mongodb//bin/mongod --shardsvr --replSet shard2 --port 27019 --dbpath /home/services/mongodb/data/shard21 --oplogSize 100 --logpath /home/services/mongodb/data/shard21.log --logappend --fork slave2 [root@slave2 ~]# /home/services/mongodb//bin/mongod --shardsvr --replSet shard2 --port 27019 --dbpath /home/services/mongodb/data/shard22 --oplogSize 100 --logpath /home/services/mongodb/data/shard22.log --logappend --fork slave3 [root@slave3 ~]# /home/services/mongodb//bin/mongod --shardsvr --replSet shard2 --port 27019 --dbpath /home/services/mongodb/data/shard23 --oplogSize 100 --logpath /home/services/mongodb/data/shard23.log --logappend --fork 3.4)初始化replica set 2 從3臺機器中任意找一臺,鏈接mongod [root@slave1 ~]# /home/services/mongodb/bin/mongo --port 27019 ...... > config = {"_id" : "shard2","members" : [{"_id" : 0,"host" : "182.48.115.236:27019"},{"_id" : 1,"host" : "182.48.115.237:27019"},{"_id" : 2,"host" : "182.48.115.238:27019"}]} { "_id" : "shard2", "members" : [ { "_id" : 0, "host" : "182.48.115.236:27019" }, { "_id" : 1, "host" : "182.48.115.237:27019" }, { "_id" : 2, "host" : "182.48.115.238:27019" } ] } > rs.initiate(config); { "ok" : 1 } 4)配置三臺config server slave1 [root@slave1 ~]# mkdir -p /home/services/mongodb/data/config [root@slave1 ~]# /home/services/mongodb//bin/mongod --configsvr --dbpath /home/services/mongodb/data/config --port 20000 --logpath /home/services/mongodb/data/config.log --logappend --fork slave2 [root@slave2 ~]# mkdir -p /home/services/mongodb/data/config [root@slave2 ~]# /home/services/mongodb//bin/mongod --configsvr --dbpath /home/services/mongodb/data/config --port 20000 --logpath /home/services/mongodb/data/config.log --logappend --fork slave3 [root@slave3 ~]# mkdir -p /home/services/mongodb/data/config [root@slave3 ~]# /home/services/mongodb//bin/mongod --configsvr --dbpath /home/services/mongodb/data/config --port 20000 --logpath /home/services/mongodb/data/config.log --logappend --fork 5)配置mongs 在三臺機器上分別執行: slave1 [root@slave1 ~]# /home/services/mongodb/bin/mongos --configdb 182.48.115.236:20000,182.48.115.237:20000,182.48.115.238:20000 --port 27017 --chunkSize 5 --logpath /home/services/mongodb/data/mongos.log --logappend --fork slave2 [root@slave2 ~]# /home/services/mongodb/bin/mongos --configdb 182.48.115.236:20000,182.48.115.237:20000,182.48.115.238:20000 --port 27017 --chunkSize 5 --logpath /home/services/mongodb/data/mongos.log --logappend --fork slave3 [root@slave3 ~]# /home/services/mongodb/bin/mongos --configdb 182.48.115.236:20000,182.48.115.237:20000,182.48.115.238:20000 --port 27017 --chunkSize 5 --logpath /home/services/mongodb/data/mongos.log --logappend --fork 注意:新版版的mongodb的mongos命令裏就不識別--chunkSize參數了 6)配置分片集羣(Configuring the Shard Cluster) 從3臺機器中任意找一臺,鏈接mongod,並切換到admin數據庫作如下配置 6.1)鏈接到mongs,並切換到admin [root@slave1 ~]# /home/services/mongodb/bin/mongo 182.48.115.236:27017/admin ...... mongos> db admin mongos> 6.2)加入shards分區 如裏shard是單臺服務器,用"db.runCommand( { addshard : 「[:]」 } )"這樣的命令加入 若是shard是replica sets,用"replicaSetName/[:port][,serverhostname2[:port],…]"這樣的格式表示,例如本例執行: mongos> db.runCommand( { addshard:"shard1/182.48.115.236:27018,182.48.115.237:27018,182.48.115.238:27018",name:"s1",maxsize:20480}); { "shardAdded" : "s1", "ok" : 1 } mongos> db.runCommand( { addshard:"shard2/182.48.115.236:27019,182.48.115.237:27019,182.48.115.238:27019",name:"s2",maxsize:20480}); { "shardAdded" : "s2", "ok" : 1 } 注意: 可選參數 Name:用於指定每一個shard的名字,不指定的話系統將自動分配 maxSize:指定各個shard可以使用的最大磁盤空間,單位megabytes 6.3)Listing shards mongos> db.runCommand( { listshards : 1 } ) { "shards" : [ { "_id" : "s1", "host" : "shard1/182.48.115.236:27018,182.48.115.237:27018,182.48.115.238:27018" }, { "_id" : "s2", "host" : "shard2/182.48.115.236:27019,182.48.115.237:27019,182.48.115.238:27019" } ], "ok" : 1 } mongos> 上面命令列出了以上二個添加的shards,表示shards已經配置成功 6.4)激活數據庫分片 命令: db.runCommand( { enablesharding : 「」 } ); 經過執行以上命令,能夠讓數據庫跨shard,若是不執行這步,數據庫只會存放在一個shard,一旦激活數據庫分片,數據庫中不一樣的collection將被存放在不一樣的shard上, 但一個collection仍舊存放在同一個shard上,要使單個collection也分片,還需單獨對collection做些操做 Collecton分片 要使單個collection也分片存儲,須要給collection指定一個分片key,經過如下命令操做: db.runCommand( { shardcollection : 「」,key : }); 注意: a)分片的collection系統會自動建立一個索引(也可用戶提早建立好) b)分片的collection只能有一個在分片key上的惟一索引,其它惟一索引不被容許 本案例: mongos> db.runCommand({enablesharding:"test2"}); { "ok" : 1 } mongos> db.runCommand( { shardcollection : "test2.books", key : { id : 1 } } ); { "collectionsharded" : "test2.books", "ok" : 1 } mongos> use test2 switched to db test2 mongos> db.stats(); { "raw" : { "shard1/182.48.115.236:27018,182.48.115.237:27018,182.48.115.238:27018" : { "db" : "test2", "collections" : 3, "objects" : 6, "avgObjSize" : 69.33333333333333, "dataSize" : 416, "storageSize" : 20480, "numExtents" : 3, "indexes" : 2, "indexSize" : 16352, "fileSize" : 67108864, "nsSizeMB" : 16, "extentFreeList" : { "num" : 0, "totalSize" : 0 }, "dataFileVersion" : { "major" : 4, "minor" : 22 }, "ok" : 1, "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("586286596422d63aa9f9f000") } }, "shard2/182.48.115.236:27019,182.48.115.237:27019,182.48.115.238:27019" : { "db" : "test2", "collections" : 0, "objects" : 0, "avgObjSize" : 0, "dataSize" : 0, "storageSize" : 0, "numExtents" : 0, "indexes" : 0, "indexSize" : 0, "fileSize" : 0, "ok" : 1 } }, "objects" : 6, "avgObjSize" : 69, "dataSize" : 416, "storageSize" : 20480, "numExtents" : 3, "indexes" : 2, "indexSize" : 16352, "fileSize" : 67108864, "extentFreeList" : { "num" : 0, "totalSize" : 0 }, "ok" : 1 } mongos> db.books.stats(); { "sharded" : true, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for c ompatibility only.", "userFlags" : 1, "capped" : false, "ns" : "test2.books", "count" : 0, "numExtents" : 1, "size" : 0, "storageSize" : 8192, "totalIndexSize" : 16352, "indexSizes" : { "_id_" : 8176, "id_1" : 8176 }, "avgObjSize" : 0, "nindexes" : 2, "nchunks" : 1, "shards" : { "s1" : { "ns" : "test2.books", "count" : 0, "size" : 0, "numExtents" : 1, "storageSize" : 8192, "lastExtentSize" : 8192, "paddingFactor" : 1, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard co ded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "nindexes" : 2, "totalIndexSize" : 16352, "indexSizes" : { "_id_" : 8176, "id_1" : 8176 }, "ok" : 1, "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("586286596422d63aa9f9f000") } } }, "ok" : 1 } 7)測試 mongos> for (var i = 1; i <= 20000; i++) db.books.save({id:i,name:"12345678",sex:"male",age:27,value:"test"}); WriteResult({ "nInserted" : 1 }) mongos> db.books.stats(); { "sharded" : true, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "ns" : "test2.books", "count" : 20000, "numExtents" : 10, "size" : 2240000, "storageSize" : 5586944, "totalIndexSize" : 1250928, "indexSizes" : { "_id_" : 670432, "id_1" : 580496 }, "avgObjSize" : 112, "nindexes" : 2, "nchunks" : 5, "shards" : { "s1" : { "ns" : "test2.books", "count" : 12300, "size" : 1377600, "avgObjSize" : 112, "numExtents" : 5, "storageSize" : 2793472, "lastExtentSize" : 2097152, "paddingFactor" : 1, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "nindexes" : 2, "totalIndexSize" : 760368, "indexSizes" : { "_id_" : 408800, "id_1" : 351568 }, "ok" : 1, "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("586286596422d63aa9f9f000") } }, "s2" : { "ns" : "test2.books", "count" : 7700, "size" : 862400, "avgObjSize" : 112, "numExtents" : 5, "storageSize" : 2793472, "lastExtentSize" : 2097152, "paddingFactor" : 1, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "nindexes" : 2, "totalIndexSize" : 490560, "indexSizes" : { "_id_" : 261632, "id_1" : 228928 }, "ok" : 1, "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("58628704f916bb05014c5ea7") } } }, "ok" : 1 }