(1) MongoDB Replication (Replica Set)
MongoDB replication is the process of synchronizing data across multiple servers.
Replication provides redundant backups of your data: copies are stored on multiple servers, which improves availability and keeps the data safe.
Replication lets you recover from hardware failures and service interruptions, so you are prepared at any time for data loss or machine damage.
Replication also improves read capacity: reads and writes can be served by different servers, spreading the load across the whole system.
1. Features of replication: data safety; 24/7 high availability; disaster recovery; maintenance without downtime; distributed reads.
2. How replication works: a MongoDB replica set is made up of a group of mongod instances (processes). Replication needs at least two nodes: a Primary that handles client requests, and one or more Secondaries that copy the primary's data. The primary records its write operations in a log called the oplog; the secondaries regularly poll this log and replay the operations against their own databases to stay consistent (a quick way to inspect the oplog is sketched after the node-role list below).
(Diagram: clients read from and write to the primary; the primary and the secondaries exchange data to keep the copies consistent.)
Clients read data from the primary; when a client writes to the primary, the primary and the secondaries exchange data to keep them consistent. Since MongoDB 3.2, the official recommendation is no longer master-slave mode; a replica set (Replica Set) is used instead for primary/standby handling.
3. Replica Set: a replica set has exactly one primary and can contain one or more secondaries; the primary and secondaries use heartbeats to decide whether each node is healthy and alive. All reads and writes go to the primary; read/write splitting requires extra handling, which is touched on at the end. Secondaries replicate the primary's data from the oplog (the operation log).
Principle: in a MongoDB replica set, any node in the cluster can become the Master node. Once the Master fails, a new Master is elected from the remaining nodes, and the remaining nodes are directed to connect to the new Master. This process is transparent to the application.
In production, a replica set should contain at least three nodes: one primary, one secondary, and one arbiter. Each node is a mongod instance, and the nodes check each other's state through heartbeats.
primary node: handles the database's read and write operations.
secondary node: keeps a backup of the primary's data; there can be more than one.
arbiter node: when the primary fails, it takes part in electing a new primary from the remaining members.
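Once the replica set described below is running, the oplog and its replication window can be inspected directly from the mongo shell. A minimal sketch, run on the primary (standard mongo shell helpers):
// In the mongo shell on the primary
rs.printReplicationInfo()                                    // oplog size and the time range it covers
use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()  // most recent operation recorded in the oplog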
(2) MongoDB environment configuration
1. Host configuration
192.168.4.203 db1 ## primary node
192.168.4.97 db2 ## secondary node
192.168.4.200 db3 ## arbiter node
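Optionally, the same mappings can be added to /etc/hosts on each server so the members resolve one another by name; this is a convenience only, since the replica set configuration later in this article uses IP addresses directly:
# Run on each of the three servers (hostnames as in the table above)
cat >> /etc/hosts <<'EOF'
192.168.4.203 db1
192.168.4.97  db2
192.168.4.200 db3
EOF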
2. Installation (omitted); see https://blog.51cto.com/liqingbiao/2401942
3. Operations on the primary node
3.1 Primary node configuration
[root@otrs ~]# vim /usr/local/mongodb/conf/master.conf
bind_ip = 0.0.0.0
port = 27017
dbpath = /data/mongodb/db
logpath = /data/mongodb/mongodb.log
logappend = true
fork = true
journal=true
maxConns=1000
auth = true
keyFile = /usr/local/mongodb/mongodb-keyfile    ##### full path to the cluster key file; only meaningful for a replica set (not needed when noauth = true)
storageEngine=wiredTiger
oplogSize=2048
replSet=RS
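For reference, the same settings can be passed to mongod on the command line instead of a config file. A rough equivalent of the file above (a sketch only; verify the flags against mongod --help for your version):
/usr/local/mongodb/bin/mongod --bind_ip 0.0.0.0 --port 27017 \
  --dbpath /data/mongodb/db --logpath /data/mongodb/mongodb.log --logappend \
  --fork --journal --maxConns 1000 \
  --auth --keyFile /usr/local/mongodb/mongodb-keyfile \
  --storageEngine wiredTiger --oplogSize 2048 --replSet RS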
3.2 Generate the key file on this server, set its permissions to 600, and scp it to the other servers.
[root@otrs ~]# openssl rand -base64 745 > /usr/local/mongodb/mongodb-keyfile
[root@otrs ~]# chmod 600 /usr/local/mongodb/mongodb-keyfile
[root@otrs ~]# cat /usr/local/mongodb/mongodb-keyfile
sJy/0RquRGAu7Qk1xT5P7VqDVjHKGdFIu0EQSRa98+pxAEfD43Ix+hrKVhmfk6ag
X8SAwl/2wkgeMFQBznKQNzE/EBFKos6VJgzi47RkUcXI3XV6igXbJNLzsjYsktkZ
ipKDLtfpQrvse4nZy9PRQusg9HpYLlr3tVKYA9TNmAJtUXA36NDOGBAEbHzfEvvc
sh4vmfxFAB+qtMwEer01MC11mKzXGN1nmL9D3hbmqCgC2F8d8RFeqTY5A73b81jT
j16wqQw2PuAPHncy6MaQX0ytNO5uWiYDcOxUwOA/LVbTaP8jOHwcEfpl6FY8NT66
P2GXINkfKMjaTMIrhXJVgMGkJz0O4aJv8RYZaKCpLmiMpNsyxbMLyngvx5AmDWgP
qAHkuQf8O6HcA676hzhBSdDoB8Rr6Yx4NvzQorKq5g/hjmk+9IpDixuI+qjZAwWV
uvPceiONigJqwZnryIkvGm3pwl2SmfieKdTRJ5lbpaEz3N5JVgBlM2L6jxj3egnL
Hn0V+1GH81Iwkw9AXpbn+I9KLrfivI6iuVT6xKu0Zu0ERtUZ442lgIpPIGiiY2HR
M3MgyOLU0SWBcI0/t3+N4L2Kxkm0806Nl3/LdtxaPkGTqcSdJl39i96c8qmZThsn
UPMQrIA7QHtBhal5e2rRQ7N5gbC+aFXCnEfNqbfPN13ljZfvMj+pzRDwfLutXpMF
KSHaAkpF29wYL5nlbnN0CKxKBZDD1gJncR0aYWt2s4z3IP5TOgYER+zVFfhUlS6Y
5JsSgM57wrUDkF3VGvkwGQMs+8g5/3WxgEOzwcJV32QO98HLQR5QE0md108KWpy3
8LZYUgGzADcYepEeqGj/BPspnuQy7n4GzKyWZWK7Q4Sl9TLdVQR8XDUAl8lOtnDk
ar/qYfEHb/Bt7tZb/ANZQyvpyTvEIHZvyPZ5xzAtoDduV+cQRyx+G1X/smHagm1o
yo0HNr25CIaTjk2atQq4USnN2daq5f/OEw==
[root@otrs ~]# scp /usr/local/mongodb/mongodb-keyfile root@192.168.4.97:/usr/local/mongodb/
mongodb-keyfile 100% 1012 1.0KB/s 00:00
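The key file has to carry the same restrictive permissions on every member and must be readable by the user that runs mongod (root in this article). A minimal sketch to run on the other two servers after the scp:
# On 192.168.4.97 and 192.168.4.200, after copying the key file
chmod 600 /usr/local/mongodb/mongodb-keyfile
ls -l /usr/local/mongodb/mongodb-keyfile    # expect -rw------- owned by the user running mongod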
3.3 Create the authentication user. Because the config file enables authentication, the user first has to be created with the server started in non-auth mode.
####### Start the database without authentication first
[root@otrs ~]# /usr/local/mongodb/bin/mongod --fork --dbpath=/data/mongodb/db --logpath=/data/mongodb/mongodb.log
about to fork child process, waiting until server is ready for connections.
forked process: 28389
child process started successfully, parent exiting
[root@otrs ~]# netstat -lntp|grep mongod
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 28389/mongod
############ Create the root user and grant it administrator privileges
[root@otrs ~]# mongo
MongoDB shell version v4.0.9
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("4df82588-c03c-49e5-8839-92dafd7ff8ce") }
MongoDB server version: 4.0.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten]
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten] WARNING: Access control is not enabled for the database.
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten] Read and write access to data and configuration is unrestricted.
2019-06-14T16:02:56.903+0800 I CONTROL [initandlisten]
show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
use admin
switched to db admin
db.createUser({ user: "root", pwd: "root", roles: [ { role: "root", db: "admin" } ] });
Successfully added user: {
"user" : "root",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
3.4 Start mongod with the authentication-enabled config file
[root@otrs ~]# mongod --config /usr/local/mongodb/conf/master.conf
about to fork child process, waiting until server is ready for connections.
forked process: 28698
child process started successfully, parent exiting
[root@otrs ~]# ps -ef|grep mongod
root 28698 1 19 16:19 ? 00:00:01 mongod --config /usr/local/mongodb/conf/master.conf
root 28731 28040 0 16:19 pts/0 00:00:00 grep --color=auto mongod
4. Configure the secondary node
4.1 Create the data directory and edit the config file
[root@otrs004097 opt]# mkdir /data/mongodb/standard
[root@otrs004097 opt]# cat /usr/local/mongodb/conf/standard.conf
bind_ip = 0.0.0.0
port = 27017
dbpath = /data/mongodb/standard
logpath = /data/mongodb/mongodb.log
logappend = true
fork = true
journal=true
maxConns=1000
auth = true
keyFile = /usr/local/mongodb/mongodb-keyfile
storageEngine=wiredTiger
oplogSize=2048
replSet=RS
4.2 Start the service
[root@otrs004097 opt]# /usr/local/mongodb/bin/mongod --fork --config /usr/local/mongodb/conf/standard.conf
[root@otrs004097 opt]# netstat -lntp|grep mongod
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 1045/mongod
5. Configure the arbiter node
[root@NginxServer01 mongodb]# cat /usr/local/mongodb/conf/arbiter.conf
bind_ip = 0.0.0.0
port = 27017
dbpath = /data/mongodb/arbiter
logpath = /data/mongodb/mongodb.log
logappend = true
fork = true
journal=true
maxConns=1000
auth = true
keyFile = /usr/local/mongodb/mongodb-keyfile
storageEngine=wiredTiger
oplogSize=2048
replSet=RS
[root@NginxServer01 mongodb]# /usr/local/mongodb/bin/mongod --config /usr/local/mongodb/conf/arbiter.conf
about to fork child process, waiting until server is ready for connections.
forked process: 26321
child process started successfully, parent exiting
[root@NginxServer01 mongodb]# netstat -lntp|grep mongod
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 26321/mongod
6. On the primary server, initialize the replica set and add the secondary and arbiter nodes.
6.1 Initialize the replica set from the primary
[root@otrs ~]# mongo -uroot -p
MongoDB shell version v4.0.9
Enter password:
######### Check the replica set status; it has not been initialized yet, so initialize it next.
rs.status()
{
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
}
######### Build the replica set config document
config={"_id":"RS","members":[ {"_id":0,"host":"192.168.4.203:27017"},{"_id":1,"host":"192.168.4.97:27017"}]}
{
"_id" : "RS",
"members" : [
{
"_id" : 0,
"host" : "192.168.4.203:27017"
},
{
"_id" : 1,
"host" : "192.168.4.97:27017"
}
]
}
rs.initiate(config);     ####### initialize the replica set
{ "ok" : 1 }
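If you want a specific member to be preferred as primary, member priorities can be adjusted after initialization. A minimal sketch in the mongo shell, run on the current primary (the priority values are illustrative):
// Fetch, modify and re-apply the replica set configuration
cfg = rs.conf()
cfg.members[0].priority = 2    // prefer 192.168.4.203 in elections
cfg.members[1].priority = 1
rs.reconfig(cfg)               // apply the new configuration
rs.conf()                      // verify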
6.2 Check the replica set status
RS:SECONDARY> rs.status()
{
"set" : "RS",
"date" : ISODate("2019-06-14T09:01:19.722Z"),
"myState" : 2,
"term" : NumberLong(0),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"appliedOpTime" : {
"ts" : Timestamp(1560502874, 1),
"t" : NumberLong(-1)
},
"durableOpTime" : {
"ts" : Timestamp(1560502874, 1),
"t" : NumberLong(-1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(0, 0),
"members" : [
{
"_id" : 0,
"name" : "192.168.4.203:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2525,
"optime" : {
"ts" : Timestamp(1560502874, 1),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("2019-06-14T09:01:14Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "192.168.4.97:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 5,
"optime" : {
"ts" : Timestamp(1560502874, 1),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(1560502874, 1),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("2019-06-14T09:01:14Z"),
"optimeDurableDate" : ISODate("2019-06-14T09:01:14Z"),
"lastHeartbeat" : ISODate("2019-06-14T09:01:19.645Z"),
"lastHeartbeatRecv" : ISODate("2019-06-14T09:01:19.489Z"),
"pingMs" : NumberLong(1),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1
}
6.3 Right after initialization both members show as SECONDARY, because the initial sync has not finished; once it completes and an election is held, one of them becomes PRIMARY. Check again after a couple of minutes:
RS:PRIMARY> rs.status()
{
"set" : "RS",
"date" : ISODate("2019-06-14T09:11:17.382Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1560503477, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1560503477, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1560503477, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1560503477, 1),
"t" : NumberLong(1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1560503427, 1),
"members" : [
{
"_id" : 0,
"name" : "192.168.4.203:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3123,
"optime" : {
"ts" : Timestamp(1560503477, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-06-14T09:11:17Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1560502885, 1),
"electionDate" : ISODate("2019-06-14T09:01:25Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "192.168.4.97:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 603,
"optime" : {
"ts" : Timestamp(1560503467, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1560503467, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-06-14T09:11:07Z"),
"optimeDurableDate" : ISODate("2019-06-14T09:11:07Z"),
"lastHeartbeat" : ISODate("2019-06-14T09:11:15.995Z"),
"lastHeartbeatRecv" : ISODate("2019-06-14T09:11:16.418Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "192.168.4.203:27017",
"syncSourceHost" : "192.168.4.203:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1560503477, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1560503477, 1),
"signature" : {
"hash" : BinData(0,"iR63R/X7QanbrWvuDJNkpdPgcVY="),
"keyId" : NumberLong("6702308864978583553")
}
}
}
6.4 Check the sync log on the secondary server
[root@otrs004097 opt]# tail -f /data/mongodb/mongodb.log
2019-06-14T17:01:15.870+0800 I REPL [replexec-0] This node is 192.168.4.97:27017 in the config
2019-06-14T17:01:15.870+0800 I REPL [replexec-0] transition to STARTUP2 from STARTUP
2019-06-14T17:01:15.871+0800 I REPL [replexec-0] Starting replication storage threads
2019-06-14T17:01:15.872+0800 I REPL [replexec-2] Member 192.168.4.203:27017 is now in state SECONDARY
2019-06-14T17:01:15.872+0800 I STORAGE [replexec-0] createCollection: local.temp_oplog_buffer with generated UUID: 2e5c6683-a67b-4a16-bd9b-8672ee4db900
2019-06-14T17:01:15.880+0800 I REPL [replication-0] Starting initial sync (attempt 1 of 10)
2019-06-14T17:01:15.881+0800 I STORAGE [replication-0] Finishing collection drop for local.temp_oplog_buffer (2e5c6683-a67b-4a16-bd9b-8672ee4db900).
2019-06-14T17:01:15.882+0800 I STORAGE [replication-0] createCollection: local.temp_oplog_buffer with generated UUID: c25fa3cf-cae9-430b-b514-f13c3ab1e247
2019-06-14T17:01:15.889+0800 I REPL [replication-0] sync source candidate: 192.168.4.203:27017
2019-06-14T17:01:15.889+0800 I REPL [replication-0] Initial syncer oplog truncation finished in: 0ms
2019-06-14T17:01:15.889+0800 I REPL [replication-0] ******
2019-06-14T17:01:15.889+0800 I REPL [replication-0] creating replication oplog of size: 2048MB...
2019-06-14T17:01:15.889+0800 I STORAGE [replication-0] createCollection: local.oplog.rs with generated UUID: f35caf0a-f2da-4cf7-b16b-34a94f1e25a7
2019-06-14T17:01:15.892+0800 I STORAGE [replication-0] Starting OplogTruncaterThread local.oplog.rs
2019-06-14T17:01:15.892+0800 I STORAGE [replication-0] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2019-06-14T17:01:15.892+0800 I STORAGE [replication-0] Scanning the oplog to determine where to place markers for truncation
2019-06-14T17:01:15.905+0800 I REPL [replication-0] ******
2019-06-14T17:01:15.905+0800 I STORAGE [replication-0] dropAllDatabasesExceptLocal 1
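Replication progress can also be checked from the mongo shell instead of the log. A minimal sketch, run on the primary (on MongoDB 4.0 the lag helper is still named rs.printSlaveReplicationInfo(); newer releases rename it to rs.printSecondaryReplicationInfo()):
// In the mongo shell on the primary
rs.printReplicationInfo()         // oplog window on this member
rs.printSlaveReplicationInfo()    // how far each secondary lags behind the primary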
6.5 Add the arbiter node
RS:PRIMARY> rs.addArb("192.168.4.45:27017")
{
"ok" : 1,
"operationTime" : Timestamp(1560504144, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1560504144, 1),
"signature" : {
"hash" : BinData(0,"um2WmD60Gh9q/43qUff8yN2abIw="),
"keyId" : NumberLong("6702308864978583553")
}
}
}
########## Check the replica set status
RS:PRIMARY> rs.status()
{
"set" : "RS",
"date" : ISODate("2019-06-14T09:40:04.334Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1560505167, 1),
"members" : [
{
"_id" : 0,
"name" : "192.168.4.203:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4850,
"optime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-06-14T09:39:57Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1560502885, 1),
"electionDate" : ISODate("2019-06-14T09:01:25Z"),
"configVersion" : 2,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "192.168.4.97:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2330,
"optime" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1560505197, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2019-06-14T09:39:57Z"),
"optimeDurableDate" : ISODate("2019-06-14T09:39:57Z"),
"lastHeartbeat" : ISODate("2019-06-14T09:40:02.717Z"),
"lastHeartbeatRecv" : ISODate("2019-06-14T09:40:02.767Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "192.168.4.203:27017",
"syncSourceHost" : "192.168.4.203:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 2
},
{
"_id" : 2,
"name" : "192.168.4.45:27017",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 11,
"lastHeartbeat" : ISODate("2019-06-14T09:40:02.594Z"),
"lastHeartbeatRecv" : ISODate("2019-06-14T09:40:04.186Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 2
}
],
"ok" : 1,
"operationTime" : Timestamp(1560505197, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1560505197, 1),
"signature" : {
"hash" : BinData(0,"Ff00RyXUvxDPc5nzFQYXGZIlnBc="),
"keyId" : NumberLong("6702308864978583553")
}
}
}
Notes:
"health" : 1 indicates the member is healthy.
"stateStr" : "PRIMARY" marks the primary.
"stateStr" : "SECONDARY" marks a secondary.
"stateStr" : "ARBITER" marks the arbiter.
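The full rs.status() document is verbose; a small mongo shell sketch that prints only the fields called out in the note above:
// Run in the mongo shell on any member you can log in to
rs.status().members.forEach(function (m) {
    print(m.name + "  " + m.stateStr + "  health=" + m.health);
});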
7. Check how the data changes on the primary, secondary, and arbiter nodes
7.1 On the primary node, create a collection and insert data.
RS:PRIMARY> use lqb
switched to db lqb
RS:PRIMARY> db.object.insert([{"language":"C"},{"language":"C++"}])
BulkWriteResult({
"writeErrors" : [ ],
"writeConcernErrors" : [ ],
"nInserted" : 2,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
RS:PRIMARY> db.object.find()
{ "_id" : ObjectId("5d0370424e7535b767bb7098"), "language" : "C" }
{ "_id" : ObjectId("5d0370424e7535b767bb7099"), "language" : "C++" }
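The insert above is acknowledged by the primary alone. If a write should only return once a majority of the replica set has it, a write concern can be passed with the operation; a minimal sketch (the document is illustrative):
RS:PRIMARY> db.object.insert({ "language" : "Go" }, { writeConcern: { w: "majority", wtimeout: 5000 } })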
7.二、從節點操做。查看有數據同步到
RS:SECONDARY> use lqb;
switched to db lqb
RS:SECONDARY> rs.slaveOk();
RS:SECONDARY> db.object.find()
{ "_id" : ObjectId("5d0370424e7535b767bb7099"), "language" : "C++" }
{ "_id" : ObjectId("5d0370424e7535b767bb7098"), "language" : "C" }
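Instead of calling rs.slaveOk() interactively, an application (or the shell) can request secondary reads through the connection string; a sketch using the credentials and addresses configured above:
# Connect to the replica set and prefer secondaries for reads
mongo "mongodb://root:root@192.168.4.203:27017,192.168.4.97:27017/admin?replicaSet=RS&readPreference=secondaryPreferred"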
7.3 On the arbiter node: the arbiter does not store data.
RS:ARBITER> use lqb;
switched to db lqb
RS:ARBITER> show tables
Warning: unable to run listCollections, attempting to approximate collection names by parsing connectionStatus
RS:ARBITER> db.object.find()
Error: error: {
"ok" : 0,
"errmsg" : "not authorized on lqb to execute command { find: \"object\", filter: {}, lsid: { id: UUID(\"d2d7e624-8f30-468a-a3b0-79728b0cabbd\") }, $readPreference: { mode: \"secondaryPreferred\" }, $db: \"lqb\" }",
"code" : 13,
8. Conclusions:
Conclusion 1: when the primary goes down, a secondary is promoted to primary; when the original primary comes back up, it rejoins as a secondary while the current primary keeps its role.
Conclusion 2: the primary accepts writes; secondaries do not.
Conclusion 3: when the primary and secondary roles switch, reads and writes follow the new roles.
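Conclusion 1 can be checked without pulling the plug on the primary: rs.stepDown() forces the current primary to step down and triggers an election. A minimal sketch (the 60 is how many seconds the member stays ineligible to become primary again; the shell connection may be reset when the primary steps down, which is expected):
// On the current primary: step down and let a secondary take over
RS:PRIMARY> rs.stepDown(60)
// On another member: confirm that a new primary was elected
RS:SECONDARY> rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr); })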