After replication is enabled, the primary node creates a collection named oplog.rs in the local database. This is a capped collection, i.e. its size is fixed. It records every change operation (insert/update/delete) made on the mongod instance over a period of time; when the space is used up, new entries automatically overwrite the oldest ones.
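A quick way to see this in practice (a minimal sketch, assuming you are connected to a member of an already running replica set; all helpers below are standard mongo shell commands):

// Switch to the local database where the oplog lives.
use local
// Confirm that oplog.rs is capped and see its fixed maximum size in bytes.
db.oplog.rs.stats().capped
db.oplog.rs.stats().maxSize
// Inspect the most recent operation recorded in the oplog.
db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()
// Summary of the oplog size and the time window it currently covers.
db.printReplicationInfo()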
A MongoDB replica set consists of a group of instances (processes): one Primary node and several Secondary nodes. All user writes go to the Primary, and the Secondaries synchronize the Primary's data through the oplog. Failures are detected by a heartbeat mechanism: once the Primary goes down, an election (in which an arbiter may take part) promotes one of the Secondaries to be the new Primary.
Primary: the primary node, chosen by election; it handles client writes and produces the oplog.
Secondary: a secondary node; it handles client reads.
Arbiter: an arbiter node; it only votes in elections and never becomes Primary or Secondary. If losing any one node would leave no voting majority (for example in a two-member set), the replica set can no longer serve requests because no Primary can be elected; adding an Arbiter to such a set means a Primary can still be elected even when a node is down.
MongoDB replica-set members come in three types: standard nodes (host), passive nodes (passive) and arbiter nodes (arbiter).
Only a standard node can be elected the active node (primary), and it has voting rights. A passive node holds a complete copy of the data and can vote, but can never become the active node. An arbiter node does not replicate data, can never become the active node, and only votes. In short, only standard nodes can ever be elected primary; even if every standard node in a replica set goes down, the passive and arbiter nodes will not become primary.
The difference between a standard node and a passive node is the priority value: the node with the higher priority is a standard node, the one with the lower priority is a passive node.
The election rule is that the highest vote count wins. priority is a priority value in the range 0-1000 and effectively gives a member extra weight in the election. Election result: the member with the most votes wins; if the votes are tied, the member with the newest data wins.
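As a concrete illustration (a minimal sketch with placeholder hostnames and set name, not part of the deployment described later), this is how priorities and an arbiter are expressed when initiating a replica set:

// Hypothetical three-member set: one standard node, one passive node, one arbiter.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "10.0.0.1:27017", priority: 2 },        // standard node: higher priority
    { _id: 1, host: "10.0.0.2:27017", priority: 0 },        // passive node: priority 0, never elected
    { _id: 2, host: "10.0.0.3:27017", arbiterOnly: true }   // arbiter: votes only, stores no data
  ]
})
// Priorities can also be adjusted on a running set:
cfg = rs.conf()
cfg.members[0].priority = 5
rs.reconfig(cfg)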
The write path on the primary node is roughly:
1. The client's data arrives.
2. The operation is written to the journal buffer.
3. The data is written to the data buffer.
4. The operation in the journal buffer is placed into the oplog.
5. The operation result is returned to the client (asynchronously; see the write-concern sketch after this list).
6. A background thread replicates the oplog to the secondaries. This happens at a very high frequency, even higher than the journal flush; the secondaries keep listening to the primary and copy the oplog as soon as it changes.
7. A background thread flushes the journal buffer to disk very frequently (every 100 ms by default; it can also be configured, e.g. 30-60).
8. A background thread flushes the data buffer to disk, every 60 seconds by default.
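The asynchronous acknowledgment in step 5 can be tightened from the client side with a write concern. A minimal sketch (the collection name is just an example): j:true makes the server wait for the journal flush, and w:"majority" makes it wait until a majority of replica-set members have the write.

// Default: acknowledged by the primary only; journaling and replication happen in the background.
db.orders.insertOne({ item: "a", qty: 1 })

// Wait until the write is journaled and replicated to a majority of members,
// or fail after 5 seconds.
db.orders.insertOne(
  { item: "b", qty: 2 },
  { writeConcern: { w: "majority", j: true, wtimeout: 5000 } }
)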
A replica set is mainly used for automatic failover, i.e. high availability. However, as the business grows over time, the data volume keeps increasing. Today the data may be only a few hundred GB and one DB server can handle all the work, but once it grows to several TB or hundreds of TB, a single server can no longer store it. At that point the data has to be distributed across different servers for storage and querying according to certain rules; that is a sharded cluster. What a sharded cluster does is distributed storage of data.
Storage model: the data set is split into chunks; each chunk contains many documents, and the chunks are stored distributed across the shards of the cluster.
Config server: MongoDB tracks how the chunks are distributed over the shards, i.e. which chunks each shard stores. This is the sharding metadata and it is kept in the config database on the config servers. Three config servers are normally used, and the config database must be exactly the same on all of them (it is recommended to deploy the config servers on different machines for reliability);
Shard server: stores the sharded data, split into chunks; the default chunk size is 64 MB. The shards are where the data actually lives;
Mongos server: the entry point for all requests to the cluster. Every request is coordinated by mongos, which looks up the sharding metadata to find which shard holds each chunk; mongos itself is just a request router. In production there are usually multiple mongos instances as entry points, so that MongoDB requests can still be served if one of them goes down.
Summary: applications send their MongoDB CRUD requests to mongos; the config servers store the cluster metadata and keep it in sync with mongos; the data ultimately lands on the shards. To guard against data loss, each shard is itself a replica set holding extra copies, and arbiter nodes take part in electing the primary that the data is written to (they store no data themselves).
Overview: the shard key is a field of the document or a compound index over several fields, and once chosen it cannot be changed. The shard key is the basis on which data is split; with very large data volumes, the shard key decides where each document is placed during sharding and therefore directly affects cluster performance;
Note: creating a shard key requires an index that supports it;
1. Monotonically increasing shard keys: timestamps, dates, auto-increment primary keys, ObjectId, _id and so on. With such a key all writes concentrate on one shard server, so writes are not spread out and that server carries most of the load; splitting is relatively easy, but the server can become a performance bottleneck;
2. Hashed shard keys (hashed indexes): a hashed index field is used as the shard key. The advantage is that data is distributed fairly evenly across the nodes and writes are dispatched randomly to the shard servers, spreading the write load. Reads are also random and may hit more shards; the drawback is that range queries cannot be targeted;
3. Compound shard keys: when no single field in the database is a suitable shard key, or the intended key has too low a cardinality (few possible values, e.g. a weekday can only take 7), another field can be combined with it as a compound shard key; a redundant field can even be added just for this purpose;
4. Tag-aware shard keys: data can be pinned to specific shard servers by adding tags to shards and assigning the corresponding tag ranges, e.g. routing the 10... range (tag T) to shard0000 and the 11... range (tag Q) to shard0001 or shard0002; with tags the balancer distributes chunks as directed. (A shell sketch of these shard-key options follows this list.)
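The sketch below illustrates the four options in the mongo shell; the database, collection, field, shard and tag names are hypothetical examples and are not part of the cluster built later in this article:

// Sharding must be enabled on the database first.
sh.enableSharding("mydb")

// 1. Increasing (ranged) shard key: easy to split, but writes on a monotonically
//    increasing field all land on the shard holding the top chunk.
sh.shardCollection("mydb.events", { createdAt: 1 })

// 2. Hashed shard key: spreads writes evenly, but range queries hit many shards.
sh.shardCollection("mydb.users", { _id: "hashed" })

// 3. Compound shard key: combine a low-cardinality field with another field.
sh.shardCollection("mydb.logs", { weekday: 1, deviceId: 1 })

// 4. Tag-aware sharding: pin key ranges to tagged shards.
sh.shardCollection("mydb.accounts", { code: 1 })
sh.addShardTag("shard0000", "T")
sh.addShardTag("shard0001", "Q")
sh.addTagRange("mydb.accounts", { code: "10" }, { code: "11" }, "T")
sh.addTagRange("mydb.accounts", { code: "11" }, { code: "12" }, "Q")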
Deployment: a distributed MongoDB cluster with replica sets + sharding
CentOS Linux release 7.9.2009
MongoDB: 4.0.21
IP | mongos (router) port | config server port | shard1 port | shard2 port | shard3 port |
---|---|---|---|---|---|
172.16.245.102 | 27017 | 27018 | 27001 | 27002 | 27003 |
172.16.245.103 | 27017 | 27018 | 27001 | 27002 | 27003 |
172.16.245.104 | 27017 | 27018 | 27001 | 27002 | 27003 |
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-4.0.21.tgz
Perform the same steps on all three servers: download the tarball, extract it, and place the extracted files under /data/mongodb so that the binaries are available in /data/mongodb/bin (the commands below assume that layout). Then create the directory structure:
mkdir -p /data/mongodb/conf
mkdir -p /data/mongodb/data/config
mkdir -p /data/mongodb/data/shard1
mkdir -p /data/mongodb/data/shard2
mkdir -p /data/mongodb/data/shard3
mkdir -p /data/mongodb/log/config.log
mkdir -p /data/mongodb/log/mongos.log
mkdir -p /data/mongodb/log/shard1.log
mkdir -p /data/mongodb/log/shard2.log
mkdir -p /data/mongodb/log/shard3.log
touch /data/mongodb/log/config.log/config.log
touch /data/mongodb/log/mongos.log/mongos.log
touch /data/mongodb/log/shard1.log/shard1.log
touch /data/mongodb/log/shard2.log/shard2.log
touch /data/mongodb/log/shard3.log/shard3.log
Run the same steps on all three servers.
[root@node5 conf]# vim /data/mongodb/conf/config.conf
[root@node5 conf]# cat /data/mongodb/conf/config.conf
dbpath=/data/mongodb/data/config
logpath=/data/mongodb/log/config.log/config.log
port=27018            # port
logappend=true
fork=true
maxConns=5000
replSet=configs       # replica set name
configsvr=true
bind_ip=0.0.0.0
Start the config service on each of the three servers
[root@node5 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/config.conf
Connect with the mongo shell; this only needs to be done on any one of the machines
[root@node5 conf]# /data/mongodb/bin/mongo --host 172.16.245.102 --port 27018
After connecting, switch to the admin database
use admin
Initialize the replica set
rs.initiate({_id:"configs",members:[{_id:0,host:"172.16.245.102:27018"},{_id:1,host:"172.16.245.103:27018"}, {_id:2,host:"172.16.245.104:27018"}]})
The configs in _id:"configs" is the replica set name from the config.conf file above; this joins the config services of the three servers (with their respective IPs) into one replica set.
Check the status
configs:PRIMARY> rs.status()
{
        "set" : "configs",    # replica set name
        "date" : ISODate("2020-12-22T06:39:04.184Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "configsvr" : true,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1608619142, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1608619142, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1608619142, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1608619142, 1), "t" : NumberLong(1) } },
        "lastStableCheckpointTimestamp" : Timestamp(1608619122, 1),
        "electionCandidateMetrics" : { "lastElectionReason" : "electionTimeout", "lastElectionDate" : ISODate("2020-12-22T05:31:42.975Z"), "electionTerm" : NumberLong(1), "lastCommittedOpTimeAtElection" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }, "lastSeenOpTimeAtElection" : { "ts" : Timestamp(1608615092, 1), "t" : NumberLong(-1) }, "numVotesNeeded" : 2, "priorityAtElection" : 1, "electionTimeoutMillis" : NumberLong(10000), "numCatchUpOps" : NumberLong(0), "newTermStartDate" : ISODate("2020-12-22T05:31:42.986Z"), "wMajorityWriteAvailabilityDate" : ISODate("2020-12-22T05:31:44.134Z") },
        "members" : [
                { "_id" : 0, "name" : "172.16.245.102:27018",    # member 1
                  "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 4383, "optime" : { "ts" : Timestamp(1608619142, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2020-12-22T06:39:02Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "electionTime" : Timestamp(1608615102, 1), "electionDate" : ISODate("2020-12-22T05:31:42Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" },
                { "_id" : 1, "name" : "172.16.245.103:27018",    # member 2
                  "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 4052, "optime" : { "ts" : Timestamp(1608619142, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1608619142, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2020-12-22T06:39:02Z"), "optimeDurableDate" : ISODate("2020-12-22T06:39:02Z"), "lastHeartbeat" : ISODate("2020-12-22T06:39:02.935Z"), "lastHeartbeatRecv" : ISODate("2020-12-22T06:39:03.044Z"), "pingMs" : NumberLong(85), "lastHeartbeatMessage" : "", "syncingTo" : "172.16.245.102:27018", "syncSourceHost" : "172.16.245.102:27018", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 },
                { "_id" : 2, "name" : "172.16.245.104:27018",    # member 3
                  "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 4052, "optime" : { "ts" : Timestamp(1608619142, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1608619142, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2020-12-22T06:39:02Z"), "optimeDurableDate" : ISODate("2020-12-22T06:39:02Z"), "lastHeartbeat" : ISODate("2020-12-22T06:39:03.368Z"), "lastHeartbeatRecv" : ISODate("2020-12-22T06:39:03.046Z"), "pingMs" : NumberLong(85), "lastHeartbeatMessage" : "", "syncingTo" : "172.16.245.102:27018", "syncSourceHost" : "172.16.245.102:27018", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1608619142, 1),
        "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("7fffffff0000000000000001") },
        "lastCommittedOpTime" : Timestamp(1608619142, 1),
        "$clusterTime" : { "clusterTime" : Timestamp(1608619142, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }
}
configs:PRIMARY>
Wait a few dozen seconds and run the command above to check the status: the config services of the three machines now form a replica set, with one PRIMARY and two SECONDARYs.
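A quicker check of which member is currently the primary (a small sketch, run from the same mongo shell session):

// Is the current member the primary, and which member is the primary?
db.isMaster().ismaster
db.isMaster().primary
// The replica-set configuration (members, priorities) as stored in the set.
rs.conf()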
Run the same steps on all three servers.
In the /data/mongodb/conf directory create shard1.conf, shard2.conf and shard3.conf with the following contents
[root@node3 conf]# ls
config.conf  mongos.conf  shard1.conf  shard2.conf  shard3.conf
[root@node3 conf]# cat shard1.conf
dbpath=/data/mongodb/data/shard1
logpath=/data/mongodb/log/shard1.log/shard1.log
port=27001
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
shardsvr=true
replSet=shard1
bind_ip=0.0.0.0
[root@node3 conf]# cat shard2.conf
dbpath=/data/mongodb/data/shard2
logpath=/data/mongodb/log/shard2.log/shard2.log
port=27002
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
shardsvr=true
replSet=shard2
bind_ip=0.0.0.0
[root@node3 conf]# cat shard3.conf
dbpath=/data/mongodb/data/shard3
logpath=/data/mongodb/log/shard3.log/shard3.log
port=27003
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
shardsvr=true
replSet=shard3
bind_ip=0.0.0.0
The ports are 27001, 27002 and 27003, corresponding to shard1.conf, shard2.conf and shard3.conf.
The same port across the three machines forms one shard's replica set. Because all three machines need these three files, start the shard services from these nine configuration files.
All three machines must start the shard services (each node starts shard1, shard2 and shard3):
[root@node3 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/shard1.conf
[root@node3 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/shard2.conf
[root@node3 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/shard3.conf
Connect with the mongo shell; this only needs to be done on any one of the machines
mongo --host 172.16.245.103 --port 27001
This takes shard1 as the example; for the other two shards, connect to ports 27002 and 27003 and run the corresponding command.
Switch to the admin database
use admin
Initialize the three shard replica sets
rs.initiate({_id:"shard1",members:[{_id:0,host:"172.16.245.102:27001"},{_id:1,host:"172.16.245.103:27001"},{_id:2,host:"172.16.245.104:27001"}]}) rs.initiate({_id:"shard2",members:[{_id:0,host:"172.16.245.102:27002"},{_id:1,host:"172.16.245.103:27002"},{_id:2,host:"172.16.245.104:27002"}]}) rs.initiate({_id:"shard3",members:[{_id:0,host:"172.16.245.102:27003"},{_id:1,host:"172.16.245.103:27003"},{_id:2,host:"172.16.245.104:27003"}]})
Run the same steps on all three servers.
In the /data/mongodb/conf directory create mongos.conf with the following content
[root@node4 conf]# cat mongos.conf
logpath=/data/mongodb/log/mongos.log/mongos.log
logappend = true
port = 27017
fork = true
configdb = configs/172.16.245.102:27018,172.16.245.103:27018,172.16.245.104:27018
maxConns=20000
bind_ip=0.0.0.0
Start mongos
Start it on each of the three servers:
[root@node4 conf]# /data/mongodb/bin/mongos -f /data/mongodb/conf/mongos.conf
Connect with the mongo shell
mongo --host 172.16.245.102 --port 27017
mongos> use admin
Add the shards; this only needs to be done on one machine.
mongos>sh.addShard("shard1/172.16.245.102:27001,172.16.245.103:27001,172.16.245.104:27001") mongos>sh.addShard("shard2/172.16.245.102:27002,172.16.245.103:27002,172.16.245.104:27002") mongos>sh.addShard("shard3/172.16.245.102:27003,172.16.245.103:27003,172.16.245.104:27003") mongos> sh.status() --- Sharding Status --- sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5fe184bf29ea91799b557a8b") } shards: { "_id" : "shard1", "host" : "shard1/172.16.245.102:27001,172.16.245.103:27001,172.16.245.104:27001", "state" : 1 } { "_id" : "shard2", "host" : "shard2/172.16.245.102:27002,172.16.245.103:27002,172.16.245.104:27002", "state" : 1 } { "_id" : "shard3", "host" : "shard3/172.16.245.102:27003,172.16.245.103:27003,172.16.245.104:27003", "state" : 1 } active mongoses: "4.0.21" : 3 autosplit: Currently enabled: yes balancer: Currently enabled: yes Currently running: no Failed balancer rounds in last 5 attempts: 0 Migration Results for the last 24 hours: No recent migrations databases: { "_id" : "calon", "primary" : "shard1", "partitioned" : true, "version" : { "uuid" : UUID("2a4780da-8f33-4214-88f8-c9b1a3140299"), "lastMod" : 1 } } { "_id" : "config", "primary" : "config", "partitioned" : true } config.system.sessions shard key: { "_id" : 1 } unique: false balancing: true chunks: shard1 1 { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) { "_id" : "test", "primary" : "shard2", "partitioned" : false, "version" : { "uuid" : UUID("d59549a4-3e68-4a7d-baf8-67a4d8372b76"), "lastMod" : 1 } } { "_id" : "ycsb", "primary" : "shard3", "partitioned" : true, "version" : { "uuid" : UUID("6d491868-245e-4c86-a5f5-f8fcd308b45e"), "lastMod" : 1 } } ycsb.usertable shard key: { "_id" : "hashed" } unique: false balancing: true chunks: shard1 2 shard2 2 shard3 2 { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(1, 0) { "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(1, 1) { "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(1, 2) { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(1, 3) { "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard3 Timestamp(1, 4) { "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 5)
Set the chunk size for the shards
mongos> use config
mongos> db.settings.save({"_id":"chunksize","value":1})    # set the chunk size to 1 MB to make the experiment easier; otherwise a huge amount of data would need to be inserted
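To confirm the setting and watch the chunks as they are created (a small sketch, still connected through mongos):

// Read the chunk-size setting back from the config database.
use config
db.settings.find({ _id: "chunksize" })
// The chunks of every sharded collection are tracked here as well.
db.chunks.find().count()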
Enable sharding on a test database and shard a collection on a hashed key:
mongos> use shardbtest;
switched to db shardbtest
mongos> sh.enableSharding("shardbtest");
{ "ok" : 1, "operationTime" : Timestamp(1608620190, 4), "$clusterTime" : { "clusterTime" : Timestamp(1608620190, 4), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
mongos> sh.shardCollection("shardbtest.usertable",{"_id":"hashed"});    # shard the usertable collection in the shardbtest database with a hashed shard key on _id
{ "collectionsharded" : "shardbtest.usertable", "collectionUUID" : UUID("2b5a8bcf-6e31-4dac-831f-5fa414253655"), "ok" : 1, "operationTime" : Timestamp(1608620216, 36), "$clusterTime" : { "clusterTime" : Timestamp(1608620216, 36), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
mongos> for(i=1;i<=3000;i++){db.usertable.insert({"id":i})}    # insert 3000 test documents
WriteResult({ "nInserted" : 1 })
mongos> db.usertable.stats();
{
        "sharded" : true,
        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
        "userFlags" : 1,
        "capped" : false,
        "ns" : "shardbtest.usertable",
        "count" : 3000,    # 3000 documents in total
        "numExtents" : 9,
        "size" : 144096,
        "storageSize" : 516096,
        "totalIndexSize" : 269808,
        "indexSizes" : { "_id_" : 122640, "_id_hashed" : 147168 },
        "avgObjSize" : 48,
        "maxSize" : NumberLong(0),
        "nindexes" : 2,
        "nchunks" : 6,
        "shards" : {
                "shard3" : { "ns" : "shardbtest.usertable", "size" : 48656,
                        "count" : 1013,    # 1013 documents written to shard3
                        "avgObjSize" : 48, "numExtents" : 3, "storageSize" : 172032, "lastExtentSize" : 131072, "paddingFactor" : 1, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "nindexes" : 2, "totalIndexSize" : 89936, "indexSizes" : { "_id_" : 40880, "_id_hashed" : 49056 }, "ok" : 1, "operationTime" : Timestamp(1608620309, 1), "$gleStats" : { "lastOpTime" : { "ts" : Timestamp(1608620272, 38), "t" : NumberLong(1) }, "electionId" : ObjectId("7fffffff0000000000000001") }, "lastCommittedOpTime" : Timestamp(1608620309, 1), "$configServerState" : { "opTime" : { "ts" : Timestamp(1608620307, 1), "t" : NumberLong(1) } }, "$clusterTime" : { "clusterTime" : Timestamp(1608620309, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } },
                "shard2" : { "ns" : "shardbtest.usertable", "size" : 49232,
                        "count" : 1025,    # 1025 documents written to shard2
                        "avgObjSize" : 48, "numExtents" : 3, "storageSize" : 172032, "lastExtentSize" : 131072, "paddingFactor" : 1, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "nindexes" : 2, "totalIndexSize" : 89936, "indexSizes" : { "_id_" : 40880, "_id_hashed" : 49056 }, "ok" : 1, "operationTime" : Timestamp(1608620306, 1), "$gleStats" : { "lastOpTime" : { "ts" : Timestamp(1608620272, 32), "t" : NumberLong(1) }, "electionId" : ObjectId("7fffffff0000000000000001") }, "lastCommittedOpTime" : Timestamp(1608620306, 1), "$configServerState" : { "opTime" : { "ts" : Timestamp(1608620307, 1), "t" : NumberLong(1) } }, "$clusterTime" : { "clusterTime" : Timestamp(1608620309, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } },
                "shard1" : { "ns" : "shardbtest.usertable", "size" : 46208,
                        "count" : 962,    # 962 documents written to shard1
                        "avgObjSize" : 48, "numExtents" : 3, "storageSize" : 172032, "lastExtentSize" : 131072, "paddingFactor" : 1, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "nindexes" : 2, "totalIndexSize" : 89936, "indexSizes" : { "_id_" : 40880, "_id_hashed" : 49056 }, "ok" : 1, "operationTime" : Timestamp(1608620308, 1), "$gleStats" : { "lastOpTime" : { "ts" : Timestamp(1608620292, 10), "t" : NumberLong(1) }, "electionId" : ObjectId("7fffffff0000000000000001") }, "lastCommittedOpTime" : Timestamp(1608620308, 1), "$configServerState" : { "opTime" : { "ts" : Timestamp(1608620307, 1), "t" : NumberLong(1) } }, "$clusterTime" : { "clusterTime" : Timestamp(1608620309, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
        },
        "ok" : 1,
        "operationTime" : Timestamp(1608620309, 1),
        "$clusterTime" : { "clusterTime" : Timestamp(1608620309, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }
}
mongos> show dbs
admin       0.000GB
calon       0.078GB
config      0.235GB
shardbtest  0.234GB
test        0.078GB
ycsb        0.234GB
mongos> use shardbtest
switched to db shardbtest
mongos> db.usertable.stats();
{
        "sharded" : true,
        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
        "userFlags" : 1,
        "capped" : false,
        "ns" : "shardbtest.usertable",
        "count" : 3000,
        "numExtents" : 9,
        "size" : 144096,
        "storageSize" : 516096,
        "totalIndexSize" : 269808,
        "indexSizes" : { "_id_" : 122640, "_id_hashed" : 147168 },
        "avgObjSize" : 48,
        "maxSize" : NumberLong(0),
        "nindexes" : 2,
        "nchunks" : 6,
        "shards" : {
                "shard2" : { "ns" : "shardbtest.usertable", "size" : 49232, "count" : 1025, "avgObjSize" : 48, "numExtents" : 3, "storageSize" : 172032, "lastExtentSize" : 131072, "paddingFactor" : 1, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "nindexes" : 2, "totalIndexSize" : 89936, "indexSizes" : { "_id_" : 40880, "_id_hashed" : 49056 }, "ok" : 1, "operationTime" : Timestamp(1608620886, 6), "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("7fffffff0000000000000001") }, "lastCommittedOpTime" : Timestamp(1608620886, 6), "$configServerState" : { "opTime" : { "ts" : Timestamp(1608620888, 1), "t" : NumberLong(1) } }, "$clusterTime" : { "clusterTime" : Timestamp(1608620888, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } },
                "shard3" : { "ns" : "shardbtest.usertable", "size" : 48656, "count" : 1013, "avgObjSize" : 48, "numExtents" : 3, "storageSize" : 172032, "lastExtentSize" : 131072, "paddingFactor" : 1, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "nindexes" : 2, "totalIndexSize" : 89936, "indexSizes" : { "_id_" : 40880, "_id_hashed" : 49056 }, "ok" : 1, "operationTime" : Timestamp(1608620889, 1), "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("7fffffff0000000000000001") }, "lastCommittedOpTime" : Timestamp(1608620889, 1), "$configServerState" : { "opTime" : { "ts" : Timestamp(1608620888, 1), "t" : NumberLong(1) } }, "$clusterTime" : { "clusterTime" : Timestamp(1608620889, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } },
                "shard1" : { "ns" : "shardbtest.usertable", "size" : 46208, "count" : 962, "avgObjSize" : 48, "numExtents" : 3, "storageSize" : 172032, "lastExtentSize" : 131072, "paddingFactor" : 1, "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.", "userFlags" : 1, "capped" : false, "nindexes" : 2, "totalIndexSize" : 89936, "indexSizes" : { "_id_" : 40880, "_id_hashed" : 49056 }, "ok" : 1, "operationTime" : Timestamp(1608620888, 1), "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("7fffffff0000000000000001") }, "lastCommittedOpTime" : Timestamp(1608620888, 1), "$configServerState" : { "opTime" : { "ts" : Timestamp(1608620888, 1), "t" : NumberLong(1) } }, "$clusterTime" : { "clusterTime" : Timestamp(1608620888, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
        },
        "ok" : 1,
        "operationTime" : Timestamp(1608620889, 1),
        "$clusterTime" : { "clusterTime" : Timestamp(1608620889, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }
}
mongos>
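Another convenient way to see how the documents are spread over the shards (a small sketch using the shell's built-in helper for sharded collections):

// Per-shard document count and data size for the sharded collection.
use shardbtest
db.usertable.getShardDistribution()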
This completes a MongoDB deployment that combines replica-set high availability with sharding.