MongoDB is a NoSQL (non-relational) database. Non-relational databases emerged to address large data volumes, high scalability, high performance, flexible data models, and high availability. MongoDB officially no longer recommends the master-slave mode; the recommended replacement is the replica set. Master-slave is essentially a single-copy deployment with poor scalability and fault tolerance, whereas a replica set keeps multiple copies of the data, so the set survives the loss of a copy, and when the primary node fails the cluster fails over automatically.
How a MongoDB Replica Set Works
Clients connect to the replica set as a whole and do not need to care whether any particular node is down. The primary node handles all writes for the set, and data is replicated to the secondaries continuously. If the primary fails, the secondaries detect it through the heartbeat mechanism and hold an election within the cluster to choose a new primary automatically; none of this requires any involvement from the application servers.
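The failover behavior described above rests on a simple majority rule: a primary can only be elected (or stay primary) while a strict majority of voting members is reachable. A toy sketch of that rule, purely for illustration (canElectPrimary is a hypothetical name, not a MongoDB API):

```java
public class ElectionSketch {

    // A primary can be elected only if strictly more than half
    // of all voting members are reachable.
    public static boolean canElectPrimary(int votingMembers, int reachableMembers) {
        return reachableMembers > votingMembers / 2;
    }

    public static void main(String[] args) {
        // A 3-node set keeps (or re-elects) a primary with one node down...
        System.out.println(canElectPrimary(3, 2)); // true
        // ...but loses write availability with two nodes down.
        System.out.println(canElectPrimary(3, 1)); // false
    }
}
```

This is also why three nodes is the recommended minimum: a 2-node set cannot form a majority once either node fails.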
Replica sets look quite impressive, so the rest of this article walks through deploying one. MongoDB officially recommends at least three nodes per replica set. Here I use three nodes: one primary and two secondaries, without an arbiter for now.
1. Environment Preparation
ip address       hostname           role
172.16.60.205    mongodb-master01   replica set primary
172.16.60.206    mongodb-slave01    replica set secondary
172.16.60.207    mongodb-slave02    replica set secondary

Set the hostname on each of the three nodes and add the following hosts entries:

[root@mongodb-master01 ~]# cat /etc/hosts
............
172.16.60.205 mongodb-master01
172.16.60.206 mongodb-slave01
172.16.60.207 mongodb-slave02

Disable selinux on all three nodes; for convenience during testing, also stop iptables:

[root@mongodb-master01 ~]# setenforce 0
[root@mongodb-master01 ~]# cat /etc/sysconfig/selinux
...........
SELINUX=disabled
[root@mongodb-master01 ~]# iptables -F
[root@mongodb-master01 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter   [  OK  ]
iptables: Flushing firewall rules:                  [  OK  ]
iptables: Unloading modules:                        [  OK  ]
2. MongoDB Installation and Replica Set Configuration
1) On all three nodes, create the directory that will hold the replica set's data:

[root@mongodb-master01 ~]# mkdir -p /data/mongodb/data/replset/

2) Install mongodb on all three nodes.
Download page: https://www.mongodb.org/dl/linux/x86_64-rhel62

[root@mongodb-master01 ~]# wget http://downloads.mongodb.org/linux/mongodb-linux-x86_64-rhel62-v3.6-latest.tgz
[root@mongodb-master01 ~]# tar -zvxf mongodb-linux-x86_64-rhel62-v3.6-latest.tgz

3) Start mongod on each node. Pass --bind_ip explicitly: it defaults to 127.0.0.1 and must be set to the node's own IP, otherwise remote connections will fail.

[root@mongodb-master01 ~]# mv mongodb-linux-x86_64-rhel62-3.6.11-rc0-2-g2151d1d219 /usr/local/mongodb
[root@mongodb-master01 ~]# nohup /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017 &
[root@mongodb-master01 ~]# ps -ef|grep mongodb
root      7729  6977  1 15:10 pts/1    00:00:01 /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset
root      7780  6977  0 15:11 pts/1    00:00:00 grep mongodb
[root@mongodb-master01 ~]# lsof -i:27017
COMMAND  PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
mongod  7729 root  10u  IPv4 6554476      0t0  TCP localhost:27017 (LISTEN)

4) Initialize the replica set.
Run this on any one of the three nodes (here, 172.16.60.205). Log in to mongodb:

[root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
.........
# Switch to the admin database
> use admin
switched to db admin

# Define the replica set config. The _id "repset" here must match the "--replSet repset" startup parameter.
> config = { _id:"repset", members:[{_id:0,host:"172.16.60.205:27017"},{_id:1,host:"172.16.60.206:27017"},{_id:2,host:"172.16.60.207:27017"}]}
{
    "_id" : "repset",
    "members" : [
        { "_id" : 0, "host" : "172.16.60.205:27017" },
        { "_id" : 1, "host" : "172.16.60.206:27017" },
        { "_id" : 2, "host" : "172.16.60.207:27017" }
    ]
}

# Initialize the replica set with this config
> rs.initiate(config);
{
    "ok" : 1,
    "operationTime" : Timestamp(1551166191, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1551166191, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}

# Check the status of the cluster members
repset:SECONDARY> rs.status();
{
    "set" : "repset",
    "date" : ISODate("2019-02-26T07:31:07.766Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1551166263, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1551166263, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1551166263, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1551166263, 1), "t" : NumberLong(1) }
    },
    "members" : [
        {
            "_id" : 0, "name" : "172.16.60.205:27017", "health" : 1,
            "state" : 1, "stateStr" : "PRIMARY", "uptime" : 270,
            "optime" : { "ts" : Timestamp(1551166263, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2019-02-26T07:31:03Z"),
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1551166202, 1),
            "electionDate" : ISODate("2019-02-26T07:30:02Z"),
            "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1, "name" : "172.16.60.206:27017", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 76,
            "optime" : { "ts" : Timestamp(1551166263, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1551166263, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2019-02-26T07:31:03Z"),
            "optimeDurableDate" : ISODate("2019-02-26T07:31:03Z"),
            "lastHeartbeat" : ISODate("2019-02-26T07:31:06.590Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-26T07:31:06.852Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "172.16.60.205:27017",
            "syncSourceHost" : "172.16.60.205:27017", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1
        },
        {
            "_id" : 2, "name" : "172.16.60.207:27017", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 76,
            "optime" : { "ts" : Timestamp(1551166263, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1551166263, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2019-02-26T07:31:03Z"),
            "optimeDurableDate" : ISODate("2019-02-26T07:31:03Z"),
            "lastHeartbeat" : ISODate("2019-02-26T07:31:06.589Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-26T07:31:06.958Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "172.16.60.205:27017",
            "syncSourceHost" : "172.16.60.205:27017", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1551166263, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1551166263, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}

The output above shows the replica set was configured successfully: 172.16.60.205 is the PRIMARY, and 172.16.60.206/207 are SECONDARY nodes.
health: 1 means the member is healthy, 0 means it is down.
state: 1 marks the primary node, 2 marks a secondary node.
3. Testing Data Replication Across the Replica Set
<By default MongoDB reads and writes go to the primary; reads on a secondary are refused unless the secondary is explicitly allowed to serve them.>
1) Connect to the primary, 172.16.60.205:

[root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
................
# Create the test database
repset:PRIMARY> use test;
switched to db test
# Insert test data into the testdb collection
repset:PRIMARY> db.testdb.insert({"test1":"testval1"})
WriteResult({ "nInserted" : 1 })

2) Connect to mongodb on the secondaries 172.16.60.206 and 172.16.60.207 and check whether the data has been replicated. Here we check on 172.16.60.206:

[root@mongodb-slave01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.206:27017
................
repset:SECONDARY> use test;
switched to db test
repset:SECONDARY> show tables;
2019-02-26T15:37:46.446+0800 E QUERY [thread1] Error: listCollections failed: {
    "operationTime" : Timestamp(1551166663, 1),
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1551166663, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype._getCollectionInfosCommand@src/mongo/shell/db.js:941:1
DB.prototype.getCollectionInfos@src/mongo/shell/db.js:953:19
DB.prototype.getCollectionNames@src/mongo/shell/db.js:964:16
shellHelper.show@src/mongo/shell/utils.js:853:9
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1

The command above fails because, by default, MongoDB directs reads and writes to the primary; a secondary refuses reads until it is explicitly allowed to serve them:

repset:SECONDARY> db.getMongo().setSlaveOk();
repset:SECONDARY> db.testdb.find();
{ "_id" : ObjectId("5c74ec9267d8c3d06506449b"), "test1" : "testval1" }
repset:SECONDARY> show tables;
testdb

The test data is now visible on the secondary, i.e. it has been replicated from the primary.
(Run the same steps on the other secondary, 172.16.60.207.)
4. Testing Replica Set Failover
First stop the primary, 172.16.60.205, and then check the replica set status: after a round of voting, 172.16.60.206 is elected the new primary, and 172.16.60.207 starts syncing its data from 172.16.60.206.
1) Kill mongod on the original primary, 172.16.60.205, to simulate a failure:

[root@mongodb-master01 ~]# ps -ef|grep mongodb|grep -v grep|awk '{print $2}'|xargs kill -9
[root@mongodb-master01 ~]# lsof -i:27017
[root@mongodb-master01 ~]#

2) Log in to mongodb on either of the two surviving nodes (172.16.60.206 or 172.16.60.207) and check the replica set status:

[root@mongodb-slave01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.206:27017
.................
repset:PRIMARY> rs.status();
{
    "set" : "repset",
    "date" : ISODate("2019-02-26T08:06:02.996Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1551168359, 1), "t" : NumberLong(2) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1551168359, 1), "t" : NumberLong(2) },
        "appliedOpTime" : { "ts" : Timestamp(1551168359, 1), "t" : NumberLong(2) },
        "durableOpTime" : { "ts" : Timestamp(1551168359, 1), "t" : NumberLong(2) }
    },
    "members" : [
        {
            "_id" : 0, "name" : "172.16.60.205:27017", "health" : 0,
            "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0,
            "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
            "optimeDurable" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2019-02-26T08:06:02.917Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-26T08:03:37.492Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "Connection refused",
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "", "configVersion" : -1
        },
        {
            "_id" : 1, "name" : "172.16.60.206:27017", "health" : 1,
            "state" : 1, "stateStr" : "PRIMARY", "uptime" : 2246,
            "optime" : { "ts" : Timestamp(1551168359, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2019-02-26T08:05:59Z"),
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1551168228, 1),
            "electionDate" : ISODate("2019-02-26T08:03:48Z"),
            "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 2, "name" : "172.16.60.207:27017", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 2169,
            "optime" : { "ts" : Timestamp(1551168359, 1), "t" : NumberLong(2) },
            "optimeDurable" : { "ts" : Timestamp(1551168359, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2019-02-26T08:05:59Z"),
            "optimeDurableDate" : ISODate("2019-02-26T08:05:59Z"),
            "lastHeartbeat" : ISODate("2019-02-26T08:06:02.861Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-26T08:06:02.991Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "172.16.60.206:27017",
            "syncSourceHost" : "172.16.60.206:27017", "syncSourceId" : 1,
            "infoMessage" : "", "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1551168359, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1551168359, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}

After the original primary 172.16.60.205 went down, an election promoted the former secondary 172.16.60.206 to the new primary.

3) Create test data on the new primary, 172.16.60.206:

repset:PRIMARY> use kevin;
switched to db kevin
repset:PRIMARY> db.kevin.insert({"shibo":"hahaha"})
WriteResult({ "nInserted" : 1 })

4) Log in to mongodb on the other secondary, 172.16.60.207, and check:

[root@mongodb-slave02 ~]# /usr/local/mongodb/bin/mongo 172.16.60.207:27017
................
repset:SECONDARY> use kevin;
switched to db kevin
repset:SECONDARY> db.getMongo().setSlaveOk();
repset:SECONDARY> show tables;
kevin
repset:SECONDARY> db.kevin.find();
{ "_id" : ObjectId("5c74f42bb0b339ed6eb68e9c"), "shibo" : "hahaha" }

The secondary 172.16.60.207 has replicated the data written to the new primary, 172.16.60.206.

5) Restart mongod on the original primary, 172.16.60.205:

[root@mongodb-master01 ~]# nohup /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017 &
mongod  9162 root 49u IPv4 6561201 0t0 TCP mongodb-master01:55236->mongodb-slave01:27017 (ESTABLISHED)
[root@mongodb-master01 ~]# ps -ef|grep mongodb
root      9162  6977  4 16:14 pts/1    00:00:01 /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017
root      9244  6977  0 16:14 pts/1    00:00:00 grep mongodb

Log back in to mongodb on any of the three nodes and check the replica set status again:

[root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
....................
repset:SECONDARY> rs.status();
{
    "set" : "repset",
    "date" : ISODate("2019-02-26T08:16:11.741Z"),
    "myState" : 2,
    "term" : NumberLong(2),
    "syncingTo" : "172.16.60.206:27017",
    "syncSourceHost" : "172.16.60.206:27017", "syncSourceId" : 1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1551168969, 1), "t" : NumberLong(2) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1551168969, 1), "t" : NumberLong(2) },
        "appliedOpTime" : { "ts" : Timestamp(1551168969, 1), "t" : NumberLong(2) },
        "durableOpTime" : { "ts" : Timestamp(1551168969, 1), "t" : NumberLong(2) }
    },
    "members" : [
        {
            "_id" : 0, "name" : "172.16.60.205:27017", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 129,
            "optime" : { "ts" : Timestamp(1551168969, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2019-02-26T08:16:09Z"),
            "syncingTo" : "172.16.60.206:27017",
            "syncSourceHost" : "172.16.60.206:27017", "syncSourceId" : 1,
            "infoMessage" : "", "configVersion" : 1, "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1, "name" : "172.16.60.206:27017", "health" : 1,
            "state" : 1, "stateStr" : "PRIMARY", "uptime" : 127,
            "optime" : { "ts" : Timestamp(1551168969, 1), "t" : NumberLong(2) },
            "optimeDurable" : { "ts" : Timestamp(1551168969, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2019-02-26T08:16:09Z"),
            "optimeDurableDate" : ISODate("2019-02-26T08:16:09Z"),
            "lastHeartbeat" : ISODate("2019-02-26T08:16:10.990Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-26T08:16:11.518Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1551168228, 1),
            "electionDate" : ISODate("2019-02-26T08:03:48Z"),
            "configVersion" : 1
        },
        {
            "_id" : 2, "name" : "172.16.60.207:27017", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 127,
            "optime" : { "ts" : Timestamp(1551168969, 1), "t" : NumberLong(2) },
            "optimeDurable" : { "ts" : Timestamp(1551168969, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2019-02-26T08:16:09Z"),
            "optimeDurableDate" : ISODate("2019-02-26T08:16:09Z"),
            "lastHeartbeat" : ISODate("2019-02-26T08:16:10.990Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-26T08:16:11.655Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "172.16.60.206:27017",
            "syncSourceHost" : "172.16.60.206:27017", "syncSourceId" : 1,
            "infoMessage" : "", "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1551168969, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1551168969, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}

After recovering from the failure, the original primary 172.16.60.205 rejoined the set as a secondary of the new primary, 172.16.60.206.
5. MongoDB Read/Write Splitting
So far, the replica set handles failover well. But what if read/write load on the primary grows too heavy? The common solution is read/write splitting.
In most workloads reads far outnumber writes, so in a replica set we let the primary handle writes while the two secondaries serve reads.
1) To enable this, first run setSlaveOk on each SECONDARY node.
2) Then have the application direct reads to the secondaries, as in the following code:
import java.util.ArrayList;
import java.util.List;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.ReadPreference;
import com.mongodb.ServerAddress;

public class TestMongoDBReplSetReadSplit {

    public static void main(String[] args) {
        try {
            // List every replica set member; the driver discovers the topology.
            List<ServerAddress> addresses = new ArrayList<ServerAddress>();
            addresses.add(new ServerAddress("172.16.60.205", 27017));
            addresses.add(new ServerAddress("172.16.60.206", 27017));
            addresses.add(new ServerAddress("172.16.60.207", 27017));

            MongoClient client = new MongoClient(addresses);
            DB db = client.getDB("test");
            DBCollection coll = db.getCollection("testdb");

            BasicDBObject query = new BasicDBObject();
            query.append("test2", "testval2");

            // Route this read to a secondary node
            ReadPreference preference = ReadPreference.secondary();
            DBObject dbObject = coll.findOne(query, null, preference);

            System.out.println(dbObject);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
There are five read preference modes in total: primary, primaryPreferred, secondary, secondaryPreferred, and nearest.
primary: the default; reads go only to the primary node.
primaryPreferred: reads go to the primary, falling back to a secondary only when the primary is unavailable.
secondary: reads go only to secondaries; the trade-off is that a secondary's data can be slightly staler than the primary's.
secondaryPreferred: reads prefer a secondary, falling back to the primary when no secondary is available.
nearest: reads go to whichever node, primary or secondary, has the lowest network latency.
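The routing rules above can be summarized in a toy model. This is purely illustrative (Node and pick are hypothetical names, not driver APIs); the real driver also applies server selection windows and staleness checks:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class ReadPrefSketch {

    public static class Node {
        public final String host;
        public final boolean primary;
        public final long pingMs;

        public Node(String host, boolean primary, long pingMs) {
            this.host = host;
            this.primary = primary;
            this.pingMs = pingMs;
        }
    }

    // Pick a target node for a read, given the reachable members.
    public static Optional<Node> pick(String mode, List<Node> reachable) {
        Optional<Node> primary = reachable.stream().filter(n -> n.primary).findFirst();
        Optional<Node> secondary = reachable.stream().filter(n -> !n.primary).findFirst();
        switch (mode) {
            case "primary":            return primary;                                   // primary or fail
            case "primaryPreferred":   return primary.isPresent() ? primary : secondary; // fall back to secondary
            case "secondary":          return secondary;                                 // secondary or fail
            case "secondaryPreferred": return secondary.isPresent() ? secondary : primary;
            case "nearest":            // lowest network latency, role ignored
                return reachable.stream().min(Comparator.comparingLong(n -> n.pingMs));
            default:                   return Optional.empty();
        }
    }

    public static void main(String[] args) {
        List<Node> up = Arrays.asList(
                new Node("172.16.60.205", true, 5),
                new Node("172.16.60.206", false, 1));
        System.out.println(pick("secondary", up).get().host);
    }
}
```

Note how secondaryPreferred degrades gracefully: if every secondary is down, reads still succeed against the primary, which is why it is often chosen over plain secondary for read scaling.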
With read/write splitting in place, traffic can be spread out and the load on the primary reduced, which answers the question above. But as the number of secondaries grows, the primary's replication load grows with it. Is there a way around that? MongoDB's answer is the arbiter node:
In a replica set, an arbiter stores no data; it only takes part in the voting during failover, so it adds no replication load. And the member types go beyond primary, secondary, and arbiter: there are also Secondary-Only, Hidden, Delayed, and Non-Voting members, where:
Secondary-Only: can never become primary and only serves as a secondary; useful for keeping under-powered nodes from being elected primary.
Hidden: invisible to clients (cannot be referenced by client connections) and cannot become primary, but still votes; typically used to hold backups.
Delayed: replicates from the primary with a configured time lag; mainly used for backups, so that an accidental delete is not instantly propagated to every copy beyond recovery.
Non-Voting: a secondary with no vote in elections; purely a backup data node.
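These member types are all expressed as fields on the replica set configuration document. As a hedged sketch (a config fragment, to be run in the mongo shell against a live set's primary; the field names match the 3.x config document, where "hidden" and "slaveDelay" both require priority 0), turning member _id 2 into a hidden, delayed, non-voting backup node might look like:

```javascript
// Illustrative reconfig sketch, not from the original article.
cfg = rs.conf()
cfg.members[2].priority = 0       // Secondary-Only: can never become primary
cfg.members[2].hidden = true      // Hidden: invisible to clients
cfg.members[2].slaveDelay = 3600  // Delayed: apply operations one hour behind
cfg.members[2].votes = 0          // Non-Voting: no say in elections
rs.reconfig(cfg)
```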