MongoDB Replica Sets: Principles, Setup and Usage

Concepts:

      Once you are familiar with the installation article linked below, you can follow the explanation and tests here. A MongoDB replica set (Replica Set) is a master-slave cluster with automatic failover, consisting of one Primary node and one or more Secondary nodes — similar to MySQL's MMM architecture. For more on replica sets, see the official documentation, or search Google/Baidu.

      How data syncs within a replica set: the Primary node accepts the writes; a Secondary obtains replication information by reading the Primary's oplog, replicates the data, and writes the replication information into its own oplog. If an operation fails, the secondary stops replicating from its current sync source. If a secondary goes down for some reason, then after it restarts it automatically resumes syncing from the last operation recorded in its oplog, writing each completed operation into its own oplog as it goes. Because replication copies the data first and writes the oplog afterwards, the same operation may occasionally be synced twice; MongoDB's design accounts for this — applying the same oplog operation several times has exactly the same effect as applying it once (oplog operations are idempotent). In short:

After the Primary completes a data operation, the Secondary performs a series of steps to keep the data in sync:
1: Check the oplog.rs collection in its own local database and find the most recent timestamp.
2: Query the oplog.rs collection in the Primary's local database for entries newer than that timestamp.
3: Insert the entries it finds into its own oplog.rs collection and apply those operations.
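The three steps above can be sketched in Python — a simplified, self-contained model in which plain lists of timestamped entries stand in for the real oplog.rs capped collections, and the network round-trip is elided (all names here are illustrative, not MongoDB internals):

```python
def sync_from_primary(local_oplog, primary_oplog, apply_op):
    """One replication pass, modeled on the three steps above.

    local_oplog / primary_oplog: lists of {"ts": ..., "op": ...} entries,
    stand-ins for the oplog.rs capped collections.
    apply_op: callback that replays one operation on the local data set.
    """
    # Step 1: find the most recent timestamp in our own oplog.rs.
    last_ts = max((e["ts"] for e in local_oplog), default=0)
    # Step 2: fetch every primary oplog entry newer than that timestamp.
    newer = sorted((e for e in primary_oplog if e["ts"] > last_ts),
                   key=lambda e: e["ts"])
    # Step 3: apply each operation, then record it in our own oplog.rs.
    for entry in newer:
        apply_op(entry)          # replaying twice is safe: ops are idempotent
        local_oplog.append(entry)
    return len(newer)
```

Running the same pass twice replays nothing new the second time, which mirrors the idempotence point made above.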

       Synchronization in a replica set, like master-slave synchronization, is asynchronous; the difference is that a replica set adds automatic failover. The mechanism: the slave side fetches the log from the primary, then replays, strictly in order, every operation the log records (queries are not logged). That log is the oplog.rs collection in the local database. On 64-bit machines it is fairly large by default — about 5% of free disk space — and its size can be set with the startup parameter --oplogSize 1000 (in MB).

      Note: in a replica-set environment, if all the Secondaries go down and only the Primary is left, the Primary steps down to Secondary and can no longer serve writes.

I: Environment setup

1: Prepare the servers

192.168.200.25

192.168.200.245

192.168.200.252

2: Installation

http://www.cnblogs.com/zhoujinyi/archive/2013/06/02/3113868.html

3: Modify the configuration; only the replSet parameter needs to be enabled. The format is:

192.168.200.252: --replSet = mmm/192.168.200.245:27017  # mmm is the replica set name; the host:port after the slash is the address of another member (a seed).

192.168.200.245: --replSet = mmm/192.168.200.252:27017

192.168.200.25: --replSet = mmm/192.168.200.252:27017,192.168.200.245:27017 

4: Startup

After startup, the log will show:

replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done

This means the set needs to be initialized; the initialization can only be run once.

5: Initialize the replica set

Log in to MongoDB on any one of the machines and run the command below. Since this is a brand-new replica set, the initialization can be run on any member; if exactly one member already holds data, it must be run on that member; if several members hold data, the set cannot be initialized.

zhoujy@zhoujy:~$ mongo --host=192.168.200.252
MongoDB shell version: 2.4.6
connecting to: 192.168.200.252:27017/test
> rs.initiate({"_id":"mmm","members":[
... {"_id":1,
... "host":"192.168.200.252:27017",
... "priority":1
... },
... {"_id":2,
... "host":"192.168.200.245:27017",
... "priority":1
... }
... ]})
{
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
}
######
"_id": the name of the replica set
"members": the list of servers in the set
"_id": the server's unique ID
"host": the server's host and port
"priority": the priority, 1 by default. A priority of 0 marks a passive member that can never become the active (primary) node; among members with non-zero priority, the active node is chosen from highest priority downwards.
"arbiterOnly": an arbiter. It only takes part in voting, holds no data, and can never become the active node.
> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T04:03:53Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 76,
            "optime" : Timestamp(1392696191, 1),
            "optimeDate" : ISODate("2014-02-18T04:03:11Z"),
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 35,
            "optime" : Timestamp(1392696191, 1),
            "optimeDate" : ISODate("2014-02-18T04:03:11Z"),
            "lastHeartbeat" : ISODate("2014-02-18T04:03:52Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T04:03:53Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.252:27017"
        }
    ],
    "ok" : 1
}

6: Logs

Check the log on 252:

Tue Feb 18 12:03:29.334 [rsMgr] replSet PRIMARY
…………
…………
Tue Feb 18 12:03:40.341 [rsHealthPoll] replSet member 192.168.200.245:27017 is now in state SECONDARY

At this point, the whole replica set has been set up successfully.

The replica set above has only two servers — how is the third one added? Besides adding members at initialization time, what other ways are there to add and remove members later?

II: Maintenance operations

1: Adding and removing members.

Add the 25 server to the replica set:

rs.add("192.168.200.25:27017")

mmm:PRIMARY> rs.add("192.168.200.25:27017")
{ "ok" : 1 }
mmm:PRIMARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T04:53:00Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 3023,
            "optime" : Timestamp(1392699177, 1),
            "optimeDate" : ISODate("2014-02-18T04:52:57Z"),
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2982,
            "optime" : Timestamp(1392699177, 1),
            "optimeDate" : ISODate("2014-02-18T04:52:57Z"),
            "lastHeartbeat" : ISODate("2014-02-18T04:52:59Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T04:53:00Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.252:27017"
        },
        {
            "_id" : 3,
            "name" : "192.168.200.25:27017",
            "health" : 1,
            "state" : 6,
            "stateStr" : "UNKNOWN",             # becomes SECONDARY after a moment
            "uptime" : 3,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2014-02-18T04:52:59Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "still initializing"
        }
    ],
    "ok" : 1
}

Remove the 25 server from the replica set:

rs.remove("192.168.200.25:27017")

mmm:PRIMARY> rs.remove("192.168.200.25:27017")
Tue Feb 18 13:01:09.298 DBClientCursor::init call() failed
Tue Feb 18 13:01:09.299 Error: error doing query: failed at src/mongo/shell/query.js:78
Tue Feb 18 13:01:09.300 trying reconnect to 192.168.200.252:27017
Tue Feb 18 13:01:09.301 reconnect 192.168.200.252:27017 ok
mmm:PRIMARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T05:01:19Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 3522,
            "optime" : Timestamp(1392699669, 1),
            "optimeDate" : ISODate("2014-02-18T05:01:09Z"),
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 10,
            "optime" : Timestamp(1392699669, 1),
            "optimeDate" : ISODate("2014-02-18T05:01:09Z"),
            "lastHeartbeat" : ISODate("2014-02-18T05:01:19Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T05:01:18Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "syncing to: 192.168.200.252:27017",
            "syncingTo" : "192.168.200.252:27017"
        }
    ],
    "ok" : 1
}

The 192.168.200.25 node has been removed.

2: Checking replication progress

 db.printSlaveReplicationInfo()

mmm:PRIMARY> db.printSlaveReplicationInfo()
source:   192.168.200.245:27017
     syncedTo: Tue Feb 18 2014 13:02:35 GMT+0800 (CST)
         = 145 secs ago (0.04hrs)
source:   192.168.200.25:27017
     syncedTo: Tue Feb 18 2014 13:02:35 GMT+0800 (CST)
         = 145 secs ago (0.04hrs)

source: the secondary's IP and port.

syncedTo: the current sync position, i.e. the time of the last sync.

The output shows that while the database content is unchanged no syncing happens; as soon as the database changes, it is synced immediately.

3: Checking the replica set's status

rs.status()

mmm:PRIMARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T05:12:28Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 4191,
            "optime" : Timestamp(1392699755, 1),
            "optimeDate" : ISODate("2014-02-18T05:02:35Z"),
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 679,
            "optime" : Timestamp(1392699755, 1),
            "optimeDate" : ISODate("2014-02-18T05:02:35Z"),
            "lastHeartbeat" : ISODate("2014-02-18T05:12:27Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T05:12:27Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.252:27017"
        },
        {
            "_id" : 3,
            "name" : "192.168.200.25:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 593,
            "optime" : Timestamp(1392699755, 1),
            "optimeDate" : ISODate("2014-02-18T05:02:35Z"),
            "lastHeartbeat" : ISODate("2014-02-18T05:12:28Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T05:12:28Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.252:27017"
        }
    ],
    "ok" : 1
}

4: The replica set's configuration

rs.conf()/rs.config()

mmm:PRIMARY> rs.conf()
{
    "_id" : "mmm",
    "version" : 4,
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.200.252:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.200.245:27017"
        },
        {
            "_id" : 3,
            "host" : "192.168.200.25:27017"
        }
    ]
}

5: Working with a Secondary

By default a Secondary serves no requests, i.e. it can neither be read from nor written to. Reads fail with:
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }

If reads from it are needed in special cases, run:
rs.slaveOk(), which applies only to the current connection.

mmm:SECONDARY> db.test.find()
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
mmm:SECONDARY> rs.slaveOk()
mmm:SECONDARY> db.test.find()
{ "_id" : ObjectId("5302edfa8c9151a5013b978e"), "a" : 1 }

6: To be continued

 

III: Testing

1: Testing replica-set data replication

Insert data on the Primary (192.168.200.252:27017):

mmm:PRIMARY> for(var i=0;i<10000;i++){db.test.insert({"name":"test"+i,"age":123})}
mmm:PRIMARY> db.test.count()
10001

On a Secondary, check whether the data has been synced:

mmm:SECONDARY> rs.slaveOk()
mmm:SECONDARY> db.test.count()
10001

The data has been synced.

2: Testing replica-set failover

Shut down the Primary node and watch the other two nodes:

mmm:PRIMARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T05:38:54Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 5777,
            "optime" : Timestamp(1392701576, 2678),
            "optimeDate" : ISODate("2014-02-18T05:32:56Z"),
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2265,
            "optime" : Timestamp(1392701576, 2678),
            "optimeDate" : ISODate("2014-02-18T05:32:56Z"),
            "lastHeartbeat" : ISODate("2014-02-18T05:38:54Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T05:38:53Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.252:27017"
        },
        {
            "_id" : 3,
            "name" : "192.168.200.25:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 2179,
            "optime" : Timestamp(1392701576, 2678),
            "optimeDate" : ISODate("2014-02-18T05:32:56Z"),
            "lastHeartbeat" : ISODate("2014-02-18T05:38:54Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T05:38:53Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.252:27017"
        }
    ],
    "ok" : 1
}

# shut down the Primary
mmm:PRIMARY> use admin
switched to db admin
mmm:PRIMARY> db.shutdownServer()

# connect to any other member:
mmm:SECONDARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T05:47:41Z"),
    "myState" : 2,
    "syncingTo" : "192.168.200.25:27017",
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(1392701576, 2678),
            "optimeDate" : ISODate("2014-02-18T05:32:56Z"),
            "lastHeartbeat" : ISODate("2014-02-18T05:47:40Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T05:45:57Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 5888,
            "optime" : Timestamp(1392701576, 2678),
            "optimeDate" : ISODate("2014-02-18T05:32:56Z"),
            "errmsg" : "syncing to: 192.168.200.25:27017",
            "self" : true
        },
        {
            "_id" : 3,
            "name" : "192.168.200.25:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 2292,
            "optime" : Timestamp(1392701576, 2678),
            "optimeDate" : ISODate("2014-02-18T05:32:56Z"),
            "lastHeartbeat" : ISODate("2014-02-18T05:47:40Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T05:47:39Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.252:27017"
        }
    ],
    "ok" : 1
}

192.168.200.25:27017 has changed from SECONDARY to PRIMARY; the details can be found in the log files. Continuing:

Insert data on the new primary:

mmm:PRIMARY> for(var i=0;i<10000;i++){db.test.insert({"name":"test"+i,"age":123})}
mmm:PRIMARY> db.test.count()
20001

Restart the previously stopped 192.168.200.252:27017:

mmm:SECONDARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T05:45:14Z"),
    "myState" : 2,
    "syncingTo" : "192.168.200.245:27017",
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 12,
            "optime" : Timestamp(1392702168, 8187),
            "optimeDate" : ISODate("2014-02-18T05:42:48Z"),
            "errmsg" : "syncing to: 192.168.200.245:27017",
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 11,
            "optime" : Timestamp(1392702168, 8187),
            "optimeDate" : ISODate("2014-02-18T05:42:48Z"),
            "lastHeartbeat" : ISODate("2014-02-18T05:45:13Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T05:45:12Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.25:27017"
        },
        {
            "_id" : 3,
            "name" : "192.168.200.25:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 9,
            "optime" : Timestamp(1392702168, 8187),
            "optimeDate" : ISODate("2014-02-18T05:42:48Z"),
            "lastHeartbeat" : ISODate("2014-02-18T05:45:13Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T05:45:13Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}

The old primary comes back as a SECONDARY. Has the data inserted on the new primary been synced to it?

mmm:SECONDARY> db.test.count()
Tue Feb 18 13:47:03.634 count failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } at src/mongo/shell/query.js:180
mmm:SECONDARY> rs.slaveOk()
mmm:SECONDARY> db.test.count()
20001

It has been synced.

Note

If all the Secondaries are down, or the replica set is left with only one node, that node can only be a Secondary — the whole cluster can serve reads but not writes. When the other members recover, the previous primary becomes the primary again.

After a downed node restarts, there can be a period (its length depends on the data volume and how long the node was down) during which every node in the cluster is a secondary and no writes are possible (and if the application has not set an appropriate Read Preference, reads may fail as well).

The officially recommended minimum replica set is one primary plus two secondaries; a two-node replica set has no real failover capability.

IV: Usage

1: Manually switching the Primary to a node of your choice
Priorities were covered above: since every member defaults to priority 1, you only need to give the chosen server the highest priority. To make 245 the primary:

mmm:PRIMARY> rs.conf() # view the configuration
{
    "_id" : "mmm",
    "version" : 6,  # every change to the cluster's configuration increments the replica set's version by 1.
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.200.252:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.200.245:27017"
        },
        {
            "_id" : 3,
            "host" : "192.168.200.25:27017"
        }
    ]
}
mmm:PRIMARY> rs.status() # view the status
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T07:25:51Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 47,
            "optime" : Timestamp(1392708304, 1),
            "optimeDate" : ISODate("2014-02-18T07:25:04Z"),
            "lastHeartbeat" : ISODate("2014-02-18T07:25:50Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T07:25:50Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "syncing to: 192.168.200.25:27017",
            "syncingTo" : "192.168.200.25:27017"
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 47,
            "optime" : Timestamp(1392708304, 1),
            "optimeDate" : ISODate("2014-02-18T07:25:04Z"),
            "lastHeartbeat" : ISODate("2014-02-18T07:25:50Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T07:25:51Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "syncing to: 192.168.200.25:27017",
            "syncingTo" : "192.168.200.25:27017"
        },
        {
            "_id" : 3,
            "name" : "192.168.200.25:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 13019,
            "optime" : Timestamp(1392708304, 1),
            "optimeDate" : ISODate("2014-02-18T07:25:04Z"),
            "self" : true
        }
    ],
    "ok" : 1
}
mmm:PRIMARY> cfg=rs.conf()
{
    "_id" : "mmm",
    "version" : 4,
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.200.252:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.200.245:27017"
        },
        {
            "_id" : 3,
            "host" : "192.168.200.25:27017"
        }
    ]
}
mmm:PRIMARY> cfg.members[1].priority=2  # set the priority to 2
mmm:PRIMARY> rs.reconfig(cfg)  # reload the configuration; this forces an election, and the highest-priority member becomes Primary. During the election, every node in the cluster is a secondary.
mmm:SECONDARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T07:27:38Z"),
    "myState" : 2,
    "syncingTo" : "192.168.200.245:27017",
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 71,
            "optime" : Timestamp(1392708387, 1),
            "optimeDate" : ISODate("2014-02-18T07:26:27Z"),
            "lastHeartbeat" : ISODate("2014-02-18T07:27:37Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T07:27:38Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "syncing to: 192.168.200.245:27017",
            "syncingTo" : "192.168.200.245:27017"
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 71,
            "optime" : Timestamp(1392708387, 1),
            "optimeDate" : ISODate("2014-02-18T07:26:27Z"),
            "lastHeartbeat" : ISODate("2014-02-18T07:27:37Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T07:27:38Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.25:27017"
        },
        {
            "_id" : 3,
            "name" : "192.168.200.25:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 13126,
            "optime" : Timestamp(1392708387, 1),
            "optimeDate" : ISODate("2014-02-18T07:26:27Z"),
            "errmsg" : "syncing to: 192.168.200.245:27017",
            "self" : true
        }
    ],
    "ok" : 1
}

With that, the chosen server, 245, has become the primary node.

2: Adding an arbiter node

Remove the 25 node and restart it, then add it back as an arbiter:

rs.addArb("192.168.200.25:27017")
mmm:PRIMARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-18T08:14:36Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 795,
            "optime" : Timestamp(1392711068, 100),
            "optimeDate" : ISODate("2014-02-18T08:11:08Z"),
            "lastHeartbeat" : ISODate("2014-02-18T08:14:35Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T08:14:35Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.245:27017"
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 14703,
            "optime" : Timestamp(1392711068, 100),
            "optimeDate" : ISODate("2014-02-18T08:11:08Z"),
            "self" : true
        },
        {
            "_id" : 3,
            "name" : "192.168.200.25:27017",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 26,
            "lastHeartbeat" : ISODate("2014-02-18T08:14:34Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-18T08:14:34Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.252:27017"
        }
    ],
    "ok" : 1
}
mmm:PRIMARY> rs.conf()
{
    "_id" : "mmm",
    "version" : 9,
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.200.252:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.200.245:27017",
            "priority" : 2
        },
        {
            "_id" : 3,
            "host" : "192.168.200.25:27017",
            "arbiterOnly" : true
        }
    ]
}

This shows that the 25 server is now an arbiter. A replica set requires an odd number of voting members. When the real environment is limited, for machine-count or similar reasons, to two (or an even number of) nodes, another member type — the arbiter — is introduced to achieve Automatic Failover. An arbiter only votes: it holds no data and provides no service, so it places very light demands on physical resources.

Practical testing shows that once 50% of a replica set's nodes (arbiters included) are unreachable, the remaining nodes can only be secondaries and the whole cluster becomes read-only. For example, with 1 primary, 2 secondaries and 1 arbiter: if both secondaries die, the former primary can only step down to secondary. With 1 primary, 1 secondary and 1 arbiter, even if the primary dies, the remaining secondary automatically becomes primary. Because an arbiter replicates no data, it delivers two-node hot standby at the smallest possible hardware cost.
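The 50% rule above is plain majority arithmetic: a primary can exist only while a strict majority of the voting members (arbiters included) is reachable. A minimal sketch (the function name is ours, not a MongoDB API):

```python
def can_elect_primary(voting_members, reachable_members):
    """A replica set can have a primary only if the reachable voting
    members form a strict majority of all voting members."""
    return reachable_members > voting_members // 2

# 1 primary + 2 secondaries + 1 arbiter = 4 voters:
# losing the 2 secondaries leaves 2 of 4 -> no majority, read-only.
assert can_elect_primary(4, 2) is False
# 1 primary + 1 secondary + 1 arbiter = 3 voters:
# losing the primary leaves 2 of 3 -> the secondary gets elected.
assert can_elect_primary(3, 2) is True
```

This is also why an odd number of voters is recommended: adding a fourth voter to a three-voter set raises the majority threshold without raising fault tolerance.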

3: Adding a backup node

hidden (a member type for dedicated tasks): configured this way, the machine is invisible to both reads and writes and will never be elected Primary, but it can vote. It is generally used to back up data.

Remove the 25 node and restart it, then add it back as a hidden member:

mmm:PRIMARY> rs.add({"_id":3,"host":"192.168.200.25:27017","priority":0,"hidden":true})
{ "down" : [ "192.168.200.25:27017" ], "ok" : 1 }
mmm:PRIMARY> rs.conf()
{
    "_id" : "mmm",
    "version" : 17,
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.200.252:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.200.245:27017"
        },
        {
            "_id" : 3,
            "host" : "192.168.200.25:27017",
            "priority" : 0, "hidden" : true
        }
    ]
}

Test whether it can vote: shut down the current Primary and check whether the Primary role fails over automatically.

Shut down the Primary (252):
mmm:PRIMARY> use admin
switched to db admin
mmm:PRIMARY> db.shutdownServer()

Connect to another member and check:
mmm:PRIMARY> rs.status()
{
    "set" : "mmm",
    "date" : ISODate("2014-02-19T09:11:45Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 1,
            "name" : "192.168.200.252:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(1392801006, 1),
            "optimeDate" : ISODate("2014-02-19T09:10:06Z"),
            "lastHeartbeat" : ISODate("2014-02-19T09:11:44Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-19T09:11:43Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "192.168.200.245:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 401,
            "optime" : Timestamp(1392801006, 1),
            "optimeDate" : ISODate("2014-02-19T09:10:06Z"),
            "self" : true
        },
        {
            "_id" : 3,
            "name" : "192.168.200.25:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 99,
            "optime" : Timestamp(1392801006, 1),
            "optimeDate" : ISODate("2014-02-19T09:10:06Z"),
            "lastHeartbeat" : ISODate("2014-02-19T09:11:44Z"),
            "lastHeartbeatRecv" : ISODate("2014-02-19T09:11:43Z"),
            "pingMs" : 0,
            "syncingTo" : "192.168.200.252:27017"
        }
    ],
    "ok" : 1
}
The Primary role has moved, so a hidden member does have the right to vote. Next, check that it still replicates data.
#####
mmm:PRIMARY> db.test.count()
20210
mmm:PRIMARY> for(var i=0;i<90;i++){db.test.insert({"name":"test"+i,"age":123})}
mmm:PRIMARY> db.test.count()
20300

On the Secondary:
mmm:SECONDARY> db.test.count()
Wed Feb 19 17:18:19.469 count failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } at src/mongo/shell/query.js:180
mmm:SECONDARY> rs.slaveOk()
mmm:SECONDARY> db.test.count()
20300
This shows that a hidden member replicates data.

It can now be used for backups; the next article will cover backup, restore, and the operations needed for day-to-day maintenance.

4: Adding a delayed node

Delayed (a member type for dedicated tasks): it syncs data from the primary with a configurable time delay. It is mainly used to deal with accidental deletes, which would otherwise be synced to the secondaries immediately, leaving no clean copy.

Remove the 25 node and restart it, then add it back as a Delayed member:

mmm:PRIMARY> rs.add({"_id":3,"host":"192.168.200.25:27017","priority":0,"hidden":true,"slaveDelay":60})  # syntax
{ "down" : [ "192.168.200.25:27017" ], "ok" : 1 }

mmm:PRIMARY> rs.conf()
{
    "_id" : "mmm",
    "version" : 19,
    "members" : [
        {
            "_id" : 1,
            "host" : "192.168.200.252:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.200.245:27017"
        },
        {
            "_id" : 3,
            "host" : "192.168.200.25:27017",
            "priority" : 0,  "slaveDelay" : 60, "hidden" : true
        }
    ]
}

Test: write to the Primary and check whether the data reaches the delayed node 60 seconds later.

mmm:PRIMARY> db.test.count()
20300
mmm:PRIMARY> for(var i=0;i<200;i++){db.test.insert({"name":"test"+i,"age":123})}
mmm:PRIMARY> db.test.count()
20500

Delayed:
mmm:SECONDARY> db.test.count()
20300  # then, 60 seconds later:
mmm:SECONDARY> db.test.count()
20500

This shows the delayed member successfully applies the synced operations 60 seconds late. Besides the member types above, there are also:

Secondary-Only: can never become the primary; it is always a secondary. It keeps lower-powered machines from being elected primary.

Non-Voting: a secondary without voting rights; a pure data-backup node.

The member types are summarized below:

 

Member type      Can become primary  Visible to clients  Votes  Delayed sync  Replicates data
Default          yes                 yes                 yes    no            yes
Secondary-Only   no                  yes                 yes    no            yes
Hidden           no                  no                  yes    no            yes
Delayed          no                  possibly            yes    yes           yes
Arbiters         no                  no                  yes    no            no
Non-Voting       no                  yes                 no     no            yes

5: Read–write splitting

MongoDB replica sets support read–write splitting through the Read Preferences feature, which is very flexible (and fairly involved).

The application's driver sets how reads are routed via a read preference. By default, the client driver sends every read straight to the primary node, which guarantees strict data consistency.

Five read preference modes are supported (see the official documentation):

primary
The default mode: all reads go to the primary; if the primary is unavailable, an error is raised.
primaryPreferred
Prefer the primary: reads usually go to the primary; if it is unavailable (e.g. during a failover), reads go to a secondary.
secondary
Reads go only to secondaries; if no secondary is available, an error is raised.
secondaryPreferred
Prefer a secondary: reads usually go to a secondary; in special cases (such as a set with only a primary left) reads go to the primary.
nearest
Reads go to the member with the lowest network latency, which may be the primary or a secondary; see the official documentation for how "nearest" is determined.

Note: before version 2.2, MongoDB's support for Read Preference was incomplete; with primaryPreferred, client drivers could in fact route all reads to a secondary node.

Because read–write splitting is configured in the application's driver, it is not covered further here; see the article referenced above, or search Google for details.
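To make the routing rules concrete, here is a toy selector — not the driver's actual algorithm, just an illustration of the five modes described above (the function and its arguments are our own invention; in a real application you simply set the mode on the driver):

```python
def pick_target(mode, primary_up, secondaries_up):
    """Illustrative routing for the five read preference modes.

    primary_up: whether the primary is reachable (bool).
    secondaries_up: names of reachable secondaries (list of str).
    Returns "primary" or a secondary name, or raises if nothing qualifies.
    """
    if mode == "primary":
        if primary_up:
            return "primary"
        raise RuntimeError("primary unavailable")
    if mode == "primaryPreferred":
        # fall back to a secondary only when the primary is down
        return "primary" if primary_up else pick_target("secondary", primary_up, secondaries_up)
    if mode == "secondary":
        if secondaries_up:
            return secondaries_up[0]   # real drivers also weigh latency
        raise RuntimeError("no secondary available")
    if mode == "secondaryPreferred":
        # fall back to the primary only when no secondary is reachable
        return secondaries_up[0] if secondaries_up else pick_target("primary", primary_up, secondaries_up)
    if mode == "nearest":
        # any reachable member qualifies; in practice latency decides
        candidates = (["primary"] if primary_up else []) + secondaries_up
        if candidates:
            return candidates[0]
        raise RuntimeError("no member available")
    raise ValueError("unknown mode: " + mode)
```

For instance, `pick_target("primaryPreferred", False, ["s1"])` returns "s1", mirroring the failover behavior described for that mode.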

Verification (Python)

The following uses Python to verify the behavior of a MongoDB replica set.

1: Does disconnecting the primary node affect writes?

Script:

#coding:utf-8
import time
from pymongo import ReplicaSetConnection
conn = ReplicaSetConnection("192.168.200.201:27017,192.168.200.202:27017,192.168.200.204:27017", replicaSet="drug", read_preference=2, safe=True)
# print the Primary server
#print conn.primary
# print all servers
#print conn.seeds
# print the Secondary servers
#print conn.secondaries

#print conn.read_preference
#print conn.server_info()
for i in xrange(1000):
    try:
        conn.test.tt.insert({"name": "test" + str(i)})
        time.sleep(1)
        print conn.primary
        print conn.secondaries
    except:
        pass

Output of the script:

zhoujy@zhoujy:~$ python test.py 
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])

(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])

(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])

(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])

(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])

('192.168.200.202', 27017)                                          ## the Primary went down; an election chose a new Primary
set([(u'192.168.200.204', 27017)])

('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017)])

('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017)])

('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017)])

('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017)])

('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])     ## the previously downed Primary was started again and became a Secondary

('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])

('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])

The sequence was as follows:

While the script ran, the Primary was killed and then brought back up. The primary moved from 201 to 202, and 201 became a Secondary. Inspecting the inserted data shows that a stretch of it is missing.

{ "name" : "GOODODOO15" }
{ "name" : "GOODODOO592" }
{ "name" : "GOODODOO593" }

That data was lost during the election. If losing data is not acceptable, queue the writes issued during the election and apply them once a new Primary has been found.
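A minimal sketch of that queue-and-write-later idea — `write_with_queue`, `insert` and `pending` are hypothetical names, and in real PyMongo code the exception to catch during an election would be pymongo.errors.AutoReconnect:

```python
import collections

def write_with_queue(insert, pending, doc, failover_error=Exception):
    """Try to insert doc plus anything queued earlier; on failure
    (e.g. mid-election, when there is no primary) keep doc queued
    instead of losing it.

    insert: the real write call; pending: a collections.deque.
    Returns the number of documents still waiting (0 = all flushed).
    """
    pending.append(doc)
    try:
        while pending:
            insert(pending[0])   # oldest first, preserving write order
            pending.popleft()    # drop it only after a successful write
    except failover_error:
        pass                     # still no primary: retry on the next call
    return len(pending)
```

Each later call retries the backlog first, so documents produced during the ~10 s election window are applied, in order, once a new Primary is elected instead of being dropped.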

The script above may exit partway through, depending on the count given to xrange(), so here is a looping version (clearer):

#coding:utf-8
import time
from pymongo import ReplicaSetConnection
conn = ReplicaSetConnection("192.168.200.201:27017,192.168.200.202:27017,192.168.200.204:27017", replicaSet="drug",read_preference=2, safe=True)

# print the Primary server
#print conn.primary
# print all servers
#print conn.seeds
# print the Secondary servers
#print conn.secondaries

#print conn.read_preference
#print conn.server_info()

while True:
    try:
        for i in xrange(100):
            conn.test.tt.insert({"name":"test" + str(i)})
            print "test" + str(i)
            time.sleep(2)
            print conn.primary
            print conn.secondaries
            print '\n'
    except:
        pass

The experiment above shows that while the Primary is down the script can keep writing, with no human intervention; there is only a window of about 10 seconds (the election) during which the set is unavailable. It further confirms that writes happen on the Primary.

2: Does disconnecting the primary node affect reads?

Script:

#coding:utf-8
import time
from pymongo import ReplicaSetConnection
conn = ReplicaSetConnection("192.168.200.201:27017,192.168.200.202:27017,192.168.200.204:27017", replicaSet="drug",read_preference=2, safe=True)

# print the Primary server
#print conn.primary
# print all servers
#print conn.seeds
# print the Secondary servers
#print conn.secondaries

#print conn.read_preference
#print conn.server_info()

for i in xrange(1000):
    
    time.sleep(1)
    obj=conn.test.tt.find({},{"_id":0,"name":1}).skip(i).limit(1)
    for item in obj:
        print item.values()
    print conn.primary
    print conn.secondaries

Output of the script:

zhoujy@zhoujy:~$ python tt.py 
[u'GOODODOO0']
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
[u'GOODODOO1']
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
[u'GOODODOO2']
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
…………
…………
[u'GOODODOO604']
(u'192.168.200.201', 27017)
set([('192.168.200.202', 27017), (u'192.168.200.204', 27017)])
[u'GOODODOO605']                                                       ## the primary (201) went down and was restarted; no impact, the next document was read
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
[u'GOODODOO606']
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
[u'GOODODOO607']
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
…………
…………
[u'test8']
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])
[u'test9']
('192.168.200.202', 27017)
set([(u'192.168.200.204', 27017), (u'192.168.200.201', 27017)])    
[u'test10']                                                            ## the primary went down again and stayed down; no impact, reads continue
(u'192.168.200.204', 27017)
set([(u'192.168.200.201', 27017)])
[u'test11']
(u'192.168.200.204', 27017)
set([(u'192.168.200.201', 27017)])
[u'test12']
(u'192.168.200.204', 27017)
set([(u'192.168.200.201', 27017)])

The sequence was as follows:

While the script ran, the Primary was killed and then brought back up: the primary moved from 201 to 202, 201 became a Secondary, and reads were never interrupted. The Primary was then killed again and left down; reads were still unaffected.

The experiment above shows that reads keep working while the Primary is down, with no human intervention. It further shows that these reads are served by the Secondaries.


Summary:

Having only just started with MongoDB, this is all that comes to mind for now; the article will be updated as new points come up.


More information:

http://www.cnblogs.com/magialmoon/p/3251330.html

http://www.cnblogs.com/magialmoon/p/3261849.html

http://www.cnblogs.com/magialmoon/p/3268963.html

http://www.lanceyan.com/tech/mongodb/mongodb_cluster_1.html

http://www.lanceyan.com/tech/mongodb/mongodb_repset1.html

http://m.blog.csdn.net/blog/lance_yan/19332981

http://www.cnblogs.com/geekma/archive/2013/05/09/3068988.html

On read–write splitting:

http://blog.chinaunix.net/uid-15795819-id-3075952.html

http://blog.csdn.net/kyfxbl/article/details/12219483
