There is a classic hard problem in distributed systems: a client sends a series of data-update messages to the server side of a distributed cluster. Because the server nodes in the cluster synchronize data with one another, every node should hold the same data once this series of client messages has been executed. Owing to the network or other causes, however, the individual server nodes may receive the messages in different orders, and the data on the nodes ends up inconsistent. Keeping the data consistent requires the support of an election algorithm, which brings us to today's topic: the principles and implementation of election algorithms, where election covers both electing a machine and electing (ordering) messages.
If you need to build a distributed cluster system, you generally have to implement an election algorithm that elects a Master node, with the remaining nodes acting as Slave nodes. To remove the single point of failure on the Master, a Master-HA (standby) node is usually elected as well.
Such election algorithms can be implemented with the Paxos algorithm introduced later in this article, or with the ZooKeeper component for distributed coordination; many applications also roll their own simple election algorithms. A simple algorithm of this kind can use various hardware attributes as election factors, for example the IP address, the number of CPU cores, the memory size, or a custom serial number. Take the custom serial number as an example: suppose every server uses multicast to obtain the custom serial numbers of all cluster-related servers on the local network and uses the serial number as its priority. If a received serial number is larger than the local one, the server withdraws from the race; in the end the server with the largest serial number becomes the Leader and the others remain ordinary servers. This simple algorithm does not consider failures during the election, and the result is never challenged afterwards, so a machine with a smaller serial number may end up elected as Master (for example when a machine with a larger number is temporarily disconnected from the cluster). The pseudocode is shown in Listing 1.
state:=candidate;
send(my_id):receive(nid);
while nid!=my_id
do if nid>my_id
then state:=no_leader;
send(nid):receive(nid);
od;
if state=candidate then state:=leader;
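Listing 1 is only pseudocode. The following is a minimal Java sketch of the same serial-number election, assuming the serial numbers of the peers have already been collected over multicast; the class and method names here are invented for illustration and are not part of any library.

// Minimal sketch of the serial-number election in Listing 1 (illustrative only).
// Assumes each node already knows the serial numbers announced by its peers.
import java.util.Collection;

public class SimpleElection {
    enum State { CANDIDATE, LEADER, NO_LEADER }

    private final long myId;          // this node's custom serial number
    private State state = State.CANDIDATE;

    public SimpleElection(long myId) {
        this.myId = myId;
    }

    /** Decide the local role from the serial numbers received via multicast. */
    public State decide(Collection<Long> receivedIds) {
        for (long nid : receivedIds) {
            if (nid > myId) {          // a bigger serial number wins, so withdraw
                state = State.NO_LEADER;
                return state;
            }
        }
        state = State.LEADER;          // no bigger number seen: this node leads
        return state;
    }
}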
The original problem dates back to the Eastern Roman (Byzantine) Empire. The empire's territory was vast, and for defensive reasons its armies were stationed far apart, so the generals could communicate only by messenger. In wartime, all of the commanders and generals of the Byzantine army had to reach a common decision on whether there was a good enough chance of victory before attacking the enemy camp. However, there could be traitors and enemy spies within the army, swaying the generals' decisions and disrupting the order of the army as a whole, so the result of a vote did not necessarily represent the view of the majority. The Byzantine Generals problem is this: given that some members are known to be traitors, how can the remaining loyal generals reach agreement without being misled by the traitors?
The Byzantine Generals problem is in essence an agreement-protocol problem. A trustworthy computer system must tolerate the failure of one or more components, and a failed component may send contradictory information to other parts of the system. This is exactly the situation network security faces today, for example in banking transaction safety and deposit safety. After the September 11 attacks in the United States, people widely recognized the importance of off-site backups for banks. A bank in New York can keep backups in Tokyo, Paris, and Zurich, so that when some sites are attacked or even destroyed, the accounts remain correct and can be recovered. Technically this is a hard problem, because an attacked system may not merely stop working, it may actively do damage. National security is an even clearer case. The problem of coping with this class of faults is abstracted as the Byzantine Generals problem.
An algorithm that solves the Byzantine Generals problem must guarantee:
A. all loyal generals base their decisions on the same plan of action;
B. a small number of traitors cannot cause the loyal generals to adopt a wrong plan.
Solvability of the Byzantine problem
(1) If the traitors number one third or more, the Byzantine problem is unsolvable.
Suppose there are three generals and one is a traitor. When the commander orders an attack, General 3 may tell General 2 that the order he received was "retreat". General 2 now holds one "attack" order and one "retreat" order and cannot decide what to do.
If the commander is the traitor, he tells General 2 to "attack" and General 3 to "retreat". When General 3 tells General 2 that he received a "retreat" order, General 2, having received an "attack" order from the commander, cannot stay consistent with General 3.
For these reasons, in a triple-modular-redundancy (TMR) system, allowing one machine to fail in a Byzantine way means the traitors make up one third, and the Byzantine problem is unsolvable; in other words, TMR cannot cope with Byzantine faults. TMR can only tolerate fail-freeze (fail-stop) faults, where a failed component simply freezes in some state and does nothing further; against that class of faults TMR is quite effective.
(2) With oral messages, if fewer than one third of the generals are traitors, the Byzantine problem is solvable.
The solution here is built on four-fold redundancy: with one traitor among four, the traitors are fewer than one third.
Saying the Byzantine problem is solvable means that all loyal generals follow the same order, and that if the commander is loyal, every loyal general follows his order. A polynomial-time algorithm solves this case. Its central idea is simple: the commander sends the order to every general, each general relays the order he received to the other generals, this continues recursively, and finally each general takes a majority vote. For example, the commander sends an order v to all generals. If General 3 is a traitor, the order may become x when he relays it to General 2, but General 2 receives {v, v, x}, and after the majority vote the result is still v, so the loyal generals reach agreement. If the commander is the traitor, the orders he sends to the generals may all differ, say x, y, z. When the lieutenants relay what the commander sent them, they all end up holding {x, y, z}, and therefore also reach agreement.
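The majority vote at the heart of the oral-message algorithm is easy to express in code. The Java sketch below (class and method names are my own, not from any standard library) tallies the orders a lieutenant has collected and returns the majority value, falling back to RETREAT when there is no clear majority, in the spirit of the default-order rule in the original treatment of the problem.

// Majority vote used by the oral-message algorithm: each lieutenant decides on
// the value reported by the majority of the orders it has collected.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OralMessageVote {
    public enum Order { ATTACK, RETREAT }

    /** Returns the majority order; falls back to RETREAT without a clear majority. */
    public static Order majority(List<Order> received) {
        Map<Order, Integer> counts = new HashMap<>();
        for (Order o : received) {
            counts.merge(o, 1, Integer::sum);
        }
        int attack = counts.getOrDefault(Order.ATTACK, 0);
        int retreat = counts.getOrDefault(Order.RETREAT, 0);
        return attack > retreat ? Order.ATTACK : Order.RETREAT;
    }

    public static void main(String[] args) {
        // The example from the text: orders {v, v, x} -> the majority value v wins.
        System.out.println(majority(List.of(Order.ATTACK, Order.ATTACK, Order.RETREAT))); // ATTACK
    }
}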
(3) With written messages, if at least two thirds of the generals are loyal, the Byzantine problem is solvable.
Written messages are signed, and therefore authenticated, messages. They add two conditions on top of oral messages:
① a loyal commander's signature cannot be forged, and any modification of the content can be detected;
② anyone can verify the commander's signature, while a traitor may forge the signature of another traitor.
One known algorithm has each recipient add his own signature to the message before forwarding it to the others. Thanks to the authentication that signatures provide, it can be proven that with written messages the Byzantine problem is solvable as long as at least two thirds of the generals are loyal.
For example, if the commander is a traitor, he sends an "attack" order to General 1 with his signature 0, and a "retreat" order to General 2, also with signature 0. The generals attach their own signatures when forwarding. General 1 then holds {"attack":0, "retreat":0,2}, which shows that the order the commander sent him was "attack" while the order sent to General 2 was "retreat", so the commander issued different orders to them. The same reasoning applies to General 2.
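The two conditions on written messages map directly onto ordinary digital signatures. The following Java sketch uses the standard java.security API to show that a signed order can be verified by anyone holding the commander's public key and that any tampering is detected; it is only an illustration of the idea, not part of any Byzantine-agreement library.

// Signed ("written") messages: a loyal commander's signature cannot be forged,
// and any modification of the order is detected when the signature is verified.
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignedOrderDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair commander = kpg.generateKeyPair();

        byte[] order = "ATTACK".getBytes(StandardCharsets.UTF_8);

        // The commander signs the order with his private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(commander.getPrivate());
        signer.update(order);
        byte[] signature = signer.sign();

        // Any general can verify the order with the commander's public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(commander.getPublic());
        verifier.update(order);
        System.out.println("original order valid: " + verifier.verify(signature)); // true

        // A traitor who alters the order to RETREAT is caught by the same check.
        byte[] forged = "RETREAT".getBytes(StandardCharsets.UTF_8);
        verifier.initVerify(commander.getPublic());
        verifier.update(forged);
        System.out.println("forged order valid: " + verifier.verify(signature));   // false
    }
}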
Origin of the Algorithm
The Paxos algorithm is a message-passing consistency algorithm with a high degree of fault tolerance, proposed by Leslie Lamport in 1990, and it is widely regarded as one of the most effective algorithms for solving the distributed consistency problem.
In a typical distributed system, machine crashes and network failures are bound to happen. The problem Paxos solves is how, in a distributed system where such failures may occur, the cluster can quickly and correctly agree on the value of a piece of data, while guaranteeing that none of these failures breaks the consistency of the whole system.
To make the concept clearer: when client1, client2, and client3 issue the message commands A, B, and C, Server1 through Server4 may, because of the network, receive the messages in different orders, and the differing message sequences may leave the data on Server1 through Server4 inconsistent. In a distributed environment this problem is much harder to handle than synchronization on a single machine, and Paxos is one way to deal with exactly this kind of data inconsistency.
Paxos holds an election over a set of messages so that the receivers, or executors, of the messages agree on a single order in which to execute them. In the simplest view, to make everyone execute the same sequence of commands you could serialize everything, for example by putting a FIFO queue in front of the distributed environment to receive all commands and having every server node execute them in queue order. That certainly solves the consistency problem, but it is not distributed in spirit: what if the queue itself fails? The cleverness of Paxos is that it lets the clients send commands to the servers independently of one another, with everyone reaching agreement by election. This approach is genuinely distributed and tolerates faults much better.
Paxos defines four roles (Proposer, Acceptor, Learner, and Client) and two phases (Promise and Accept).
Implementation Principle
The main interaction in Paxos takes place between the Proposer and the Acceptor, and it involves four kinds of messages.
These four message types correspond to the four steps of the algorithm's two phases:
Phase 1 (Prepare/Promise): the Proposer picks a proposal number n and sends a Prepare(n) request to the Acceptors; an Acceptor that has not yet promised a number higher than n replies with a Promise carrying the highest-numbered proposal it has already accepted, and undertakes to ignore proposals numbered lower than n.
Phase 2 (Accept/Accepted): once the Proposer has collected Promises from a majority of Acceptors, it sends an Accept(n, v) request, where v is the value of the highest-numbered proposal reported in the Promises, or its own value if none was reported; an Acceptor accepts the proposal unless it has meanwhile promised a higher number, and reports the result to the Learners as Accepted. A minimal sketch of the Acceptor side follows below.
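As an illustration of the two phases, here is a minimal, single-threaded Java sketch of the Acceptor's bookkeeping. The class and method names are my own; a production Paxos implementation also needs durable storage, networking, and Learner notification.

// Minimal sketch of a Paxos Acceptor's state for the two phases.
// Only the promise/accept bookkeeping is modeled; persistence and messaging are omitted.
public class PaxosAcceptor {
    private long promisedN = -1;   // highest proposal number promised so far
    private long acceptedN = -1;   // number of the last accepted proposal
    private String acceptedValue;  // value of the last accepted proposal

    /** Phase 1: Prepare(n) -> Promise, or a rejection if n is too small. */
    public synchronized Promise prepare(long n) {
        if (n > promisedN) {
            promisedN = n;
            return new Promise(true, acceptedN, acceptedValue);
        }
        return new Promise(false, acceptedN, acceptedValue);
    }

    /** Phase 2: Accept(n, v) -> accepted only if no higher number was promised meanwhile. */
    public synchronized boolean accept(long n, String value) {
        if (n >= promisedN) {
            promisedN = n;
            acceptedN = n;
            acceptedValue = value;
            return true;           // would be reported to Learners as "Accepted"
        }
        return false;
    }

    /** Reply to a Prepare request. */
    public static class Promise {
        public final boolean ok;
        public final long acceptedN;
        public final String acceptedValue;
        public Promise(boolean ok, long acceptedN, String acceptedValue) {
            this.ok = ok;
            this.acceptedN = acceptedN;
            this.acceptedValue = acceptedValue;
        }
    }
}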
The greatest strength of Paxos is that it imposes few restrictions: any role may fail or repeat its work in any phase, which is exactly what happens in distributed environments. As long as everyone plays by the rules, the algorithm itself guarantees a consistent outcome even when failures occur.
Basic Concepts
ZooKeeper does not adopt Paxos wholesale; instead it uses a protocol called ZooKeeper Atomic Broadcast (ZAB) as the core algorithm for its data consistency.
ZAB is an atomic broadcast protocol with crash-recovery support, designed specifically for the distributed coordination service ZooKeeper. ZAB was never required to be particularly general or extensible; it was originally designed for Yahoo's internal scenarios that needed high-throughput, low-latency, robust, and simple distributed systems. ZooKeeper's official documentation also points out that ZAB is not a general-purpose distributed consistency algorithm like Paxos; it is a crash-recoverable atomic message broadcast algorithm designed specifically for ZooKeeper.
ZooKeeper uses a single main process to receive and handle all transaction requests from clients, and it uses ZAB's atomic broadcast to propagate changes to the server data state to all replica processes in the form of transaction Proposals. This primary-backup model guarantees that at any moment only one main process in the cluster broadcasts state changes, which lets ZooKeeper handle a large volume of concurrent client requests. On the other hand, in a distributed environment some state changes executed in order depend on earlier state changes; for example, change C may depend on changes A and B. Such dependencies impose a requirement on ZAB: if a state change has been processed, then all the state changes it depends on must already have been processed. Finally, since the main process may crash, exit, or restart at any time, ZAB must also keep working when the current main process fails in such a way.
Listing 4 shows the log printed during the election when the ZooKeeper cluster starts up. You can see that the node starts in the LOOKING state and is elected Leader within a very short time.
zookeeper.out:
2016-06-14 16:28:57,336 [myid:3] - INFO [main:QuorumPeerMain@127] - Starting quorum peer
2016-06-14 16:28:57,592 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:QuorumPeer@774] - LOOKING
2016-06-14 16:28:57,593 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@818] - New election. My id = 3, proposed zxid=0xc00000002
2016-06-14 16:28:57,599 [myid:3] - INFO [WorkerSender[myid=3]:QuorumPeer$QuorumServer@149] - Resolved hostname: 10.17.138.225 to address: /10.17.138.225
2016-06-14 16:28:57,599 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0xc00000002 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0xc (n.peerEpoch) LOOKING (my state)
2016-06-14 16:28:57,602 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0xc00000002 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0xc (n.peerEpoch) LOOKING (my state)
2016-06-14 16:28:57,605 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0xc00000002 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0xc (n.peerEpoch) LOOKING (my state)
2016-06-14 16:28:57,806 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:QuorumPeer@856] - LEADING
2016-06-14 16:28:57,808 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Leader@59] - TCP NoDelay set to: true
ZAB Protocol Implementation Principle
The core of ZAB is the way it handles transaction requests that change the data state of the ZooKeeper servers: all transaction requests must be coordinated by a single, globally unique server, called the Leader server, while the remaining servers are Follower servers. (ZooKeeper later introduced Observer servers, mainly to address the long voting times caused by large numbers of Followers in big clusters; we will not discuss them further here.) The Leader server converts a client transaction request into a transaction Proposal and distributes the Proposal to all Follower servers in the cluster. The Leader then waits for feedback from the Followers, and once more than half of the Followers have responded correctly, the Leader broadcasts a Commit message to all Followers, instructing them to commit the preceding Proposal.
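The "wait for a majority of ACKs, then broadcast Commit" rule in this paragraph boils down to simple quorum arithmetic. The sketch below is illustrative only and is not the actual Leader/LearnerHandler code from the ZooKeeper source; it just shows the per-Proposal counting a Leader applies.

// Illustrative quorum tracking for one ZAB proposal: once more than half of the
// voting members have ACKed, the Leader may broadcast Commit for that proposal.
import java.util.HashSet;
import java.util.Set;

public class ProposalQuorum {
    private final int clusterSize;          // number of voting servers, e.g. 3 or 5
    private final Set<Long> ackedServerIds = new HashSet<>();

    public ProposalQuorum(int clusterSize) {
        this.clusterSize = clusterSize;
    }

    /** Record an ACK from a server; returns true once a strict majority is reached. */
    public synchronized boolean ack(long serverId) {
        ackedServerIds.add(serverId);
        return ackedServerIds.size() > clusterSize / 2;
    }
}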
Supported Modes
The ZAB protocol has two basic modes: crash recovery and message broadcast.
Whenever the whole service framework is starting up, or the Leader server suffers a network interruption, crashes and exits, or is restarted, the ZAB protocol enters recovery mode and elects a new Leader server. Once a new Leader has been elected and more than half of the machines in the cluster have completed state synchronization with it, ZAB exits recovery mode. State synchronization here means data synchronization, and it guarantees that more than half of the machines in the cluster hold the same data state as the Leader server. Building on the election shown in Listing 4, we manually kill the Leader process (kill -9 pid); the cluster immediately enters crash recovery, and the log output of the renewed Leader election is shown in Listing 5.
2016-06-14 17:33:27,723 [myid:2] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@810] - Connection broken for id 3, my id = 2, error =
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-06-14 17:33:27,723 [myid:2] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@810] - Connection broken for id 3, my id = 2, error =
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
2016-06-14 17:33:27,728 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:850)
2016-06-14 17:33:27,728 [myid:2] - WARN [RecvWorker:3:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
2016-06-14 17:33:27,729 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FollowerZooKeeperServer@140] - Shutting down
2016-06-14 17:33:27,730 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@467] - shutting down
2016-06-14 17:33:27,730 [myid:2] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:389)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
2016-06-14 17:33:27,730 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FollowerRequestProcessor@107] - Shutting down
2016-06-14 17:33:27,731 [myid:2] - WARN [SendWorker:3:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
2016-06-14 17:33:27,732 [myid:2] - INFO [FollowerRequestProcessor:2:FollowerRequestProcessor@97] - FollowerRequestProcessor exited loop!
2016-06-14 17:33:27,732 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:CommitProcessor@184] - Shutting down
2016-06-14 17:33:27,733 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FinalRequestProcessor@417] - shutdown of request processor complete
2016-06-14 17:33:27,733 [myid:2] - INFO [CommitProcessor:2:CommitProcessor@153] - CommitProcessor exited loop!
2016-06-14 17:33:27,733 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:SyncRequestProcessor@209] - Shutting down
2016-06-14 17:33:27,734 [myid:2] - INFO [SyncThread:2:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2016-06-14 17:33:27,734 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@774] - LOOKING
2016-06-14 17:33:27,739 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FileSnap@83] - Reading snapshot /home/hemeng/zookeeper-3.4.7/data/zookeeper/version-2/snapshot.c00000002
[QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer@856] - LEADING
2016-06-14 17:33:27,957 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Leader@361] - LEADING - LEADER ELECTION TOOK - 222
Once more than half of the Followers in the cluster have completed state synchronization with the Leader server, the whole service framework can enter message broadcast mode. When a server that also follows the ZAB protocol starts up and joins the cluster while a Leader is already broadcasting messages, the newcomer automatically enters data recovery mode: it locates the Leader server, synchronizes data with it, and then joins the message broadcast flow. ZooKeeper is designed so that only the single Leader server handles transaction requests. When the Leader receives a client transaction request, it generates the corresponding transaction proposal and launches a round of the broadcast protocol; if any other machine in the cluster receives a client transaction request, that non-Leader server first forwards the request to the Leader server.
Three Phases
The ZAB protocol as a whole consists of the message broadcast and crash recovery processes, which can be further divided into three phases: discovery, synchronization, and broadcast. Every distributed process that takes part in ZAB runs through these three phases in a loop, and we call one such loop a main process cycle.
Phase one is essentially the Leader election process, which chooses the main process from among the distributed processes. The workflows of the prospective Leader and the Followers are as follows.
1. Each Follower sends the epoch value of the last transaction Proposal it accepted to the prospective Leader.
2. After receiving these messages from more than half of the Followers, the prospective Leader sends a message carrying the new epoch value e' back to that quorum of Followers. To compute e', the prospective Leader takes the largest epoch value among all the CEPOCH messages it received and adds 1 (see the small sketch after this list).
3. When a Follower receives the NEWEPOCH message from the prospective Leader, it checks whether its current CEPOCH value is smaller than e'; if so, it sets CEPOCH to e' and replies to the prospective Leader with an ACK message. This reply contains the Follower's current epoch CEPOCH(F p) and the Follower's history of transaction Proposals, hf.
After the prospective Leader has received ACK messages from more than half of the Followers, it selects one Follower from that quorum and uses that Follower's history as the initial transaction set Ie'.
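To make step 2 concrete, the new epoch e' is simply one more than the largest epoch reported by the quorum of Followers. The helper below is a hypothetical illustration, not code from the ZooKeeper source.

// Hypothetical helper: derive the new epoch e' from the CEPOCH values reported
// by a quorum of followers, as described in step 2 above.
import java.util.Collection;

public class EpochHelper {
    /** Assumes the quorum is non-empty. */
    public static long newEpoch(Collection<Long> receivedCepochs) {
        long max = Long.MIN_VALUE;
        for (long e : receivedCepochs) {
            max = Math.max(max, e);
        }
        return max + 1;   // e' = max(received CEPOCH values) + 1
    }
}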
Figure 4 shows the execution flow of the ZooKeeper election algorithm.
After the discovery flow comes the synchronization phase. In this phase the Leader and the Followers proceed as follows:
1. The Leader sends e' and Ie' to all Followers in the Quorum in the form of a NEWLEADER(e', Ie') message.
2. When a Follower receives the NEWLEADER(e', Ie') message from the Leader, it checks CEPOCH(F p): if it is not equal to e', the Follower simply moves on to the next cycle, because it finds itself still in an earlier round and cannot take part in this round of synchronization.
If it is equal to e', the Follower applies the transactions.
Finally, the Follower reports back to the Leader that it has accepted and processed all transaction Proposals in Ie'.
3. When the Leader has received this feedback for NEWLEADER(e', Ie') from more than half of the Followers, it sends a Commit message to all Followers. At this point the Leader has completed phase two.
4. When a Follower receives the Commit message from the Leader, it processes and commits, in order, all the transactions in Ie' that it has not yet processed. At this point the Follower has completed phase two.
A newly added node synchronizes the latest snapshot from the Leader node; the log output is shown in Listing 8.
2016-06-14 19:45:41,173 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Follower@63] - FOLLOWING - LEADER ELECTION TOOK - 16
2016-06-14 19:45:41,175 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: 10.17.138.225 to address: /10.17.138.225
2016-06-14 19:45:41,179 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:Learner@329] - Getting a snapshot from leader
2016-06-14 19:45:41,182 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0xe00000007 to /home/hemeng/zookeeper-3.4.7/data/zookeeper/version-2/snapshot.e00000007
2016-06-14 19:45:44,344 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:56072
2016-06-14 19:45:44,346 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing srvr command from /127.0.0.1:56072
2016-06-14 19:45:44,347 [myid:3] - INFO [Thread-1:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:56072 (no session established for client)
2016-06-14 19:44:58,835 [myid:2] - INFO [/10.17.138.225:3888:QuorumCnxManager$Listener@541] - Received connection request /10.17.138.227:48764
2016-06-14 19:44:58,837 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0xe00000000 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0xe (n.peerEpoch) LEADING (my state)
2016-06-14 19:44:58,850 [myid:2] - INFO [LearnerHandler-/10.17.138.227:36025:LearnerHandler@329] - Follower sid: 3 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer@1de18380
2016-06-14 19:44:58,851 [myid:2] - INFO [LearnerHandler-/10.17.138.227:36025:LearnerHandler@384] - Synchronizing with Follower sid: 3 maxCommittedLog=0xe00000007 minCommittedLog=0xe00000001 peerLastZxid=0xe00000000
2016-06-14 19:44:58,852 [myid:2] - WARN [LearnerHandler-/10.17.138.227:36025:LearnerHandler@451] - Unhandled proposal scenario
2016-06-14 19:44:58,852 [myid:2] - INFO [LearnerHandler-/10.17.138.227:36025:LearnerHandler@458] - Sending SNAP
2016-06-14 19:44:58,852 [myid:2] - INFO [LearnerHandler-/10.17.138.227:36025:LearnerHandler@482] - Sending snapshot last zxid of peer is 0xe00000000 zxid of leader is 0xe00000007sent zxid of db as 0xe00000007
2016-06-14 19:44:58,894 [myid:2] - INFO [LearnerHandler-/10.17.138.227:36025:LearnerHandler@518] - Received NEWLEADER-ACK message from 3
After the synchronization phase is complete, ZAB can formally start accepting new transaction requests from clients and enter the message broadcast flow.
1. When the Leader receives a new client transaction request, it generates the corresponding transaction Proposal and sends the proposal <e', <v, z>> to all Followers in ZXID order, where epoch(z) = e' (see the zxid sketch after this list).
2. Each Follower processes these Proposals from the Leader in the order in which it receives them, appends them to hf, and then sends an acknowledgement back to the Leader.
3. When the Leader has received ACK messages for the Proposal <e', <v, z>> from more than half of the Followers, it sends a Commit<e', <v, z>> message to all Followers, asking them to commit the transaction.
4. When a Follower receives the Commit<e', <v, z>> message from the Leader, it starts committing the Proposal <e', <v, z>>. Note that at this point the Follower must already have committed every earlier Proposal <v, z'>.
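The per-proposal ordering in steps 1 through 4 relies on the zxid, whose high 32 bits hold the epoch and whose low 32 bits hold a counter the Leader increments for every proposal. The sketch below only illustrates that encoding; the class here is my own simplification, not ZooKeeper's ZxidUtils.

// The zxid that orders ZAB proposals: the high 32 bits carry the epoch e',
// the low 32 bits a per-epoch counter incremented for each new proposal.
public class ZxidSketch {
    public static long makeZxid(long epoch, long counter) {
        return (epoch << 32L) | (counter & 0xffffffffL);
    }

    public static long epochOf(long zxid) {
        return zxid >> 32L;
    }

    public static long counterOf(long zxid) {
        return zxid & 0xffffffffL;
    }

    public static void main(String[] args) {
        long zxid = makeZxid(0xe, 7);
        System.out.println(Long.toHexString(zxid));                // e00000007, as seen in the logs above
        System.out.println(epochOf(zxid) + " " + counterOf(zxid)); // 14 7
    }
}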
Listing 6 shows the log output when a new node is added.
2016-06-14 19:40:43,882 [myid:2] - INFO [LearnerHandler-/10.17.138.234:38144:ZooKeeperServer@678] - submitRequest()
2016-06-14 19:40:43,882 [myid:2] - INFO [ProcessThread(sid:2 cport:-1)::PrepRequestProcessor@533] - pRequest()
2016-06-14 19:40:43,933 [myid:2] - INFO [CommitProcessor:2:FinalRequestProcessor@87] - processRequest()
2016-06-14 19:40:43,933 [myid:2] - INFO [CommitProcessor:2:ZooKeeperServer@1022] - processTxn()
2016-06-14 19:40:43,934 [myid:2] - INFO [CommitProcessor:2:DataTree@457] - createNode()
2016-06-14 19:40:43,934 [myid:2] - INFO [CommitProcessor:2:WatchManager@96] - triggerWatch()
2016-06-14 19:40:43,934 [myid:2] - INFO [CommitProcessor:2:WatchManager@96] - triggerWatch()
2016-06-14 18:46:04,488 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@678] - submitRequest()
2016-06-14 18:46:04,489 [myid:1] - INFO [CommitProcessor:1:FinalRequestProcessor@87] - processRequest()
2016-06-14 18:46:04,489 [myid:1] - INFO [CommitProcessor:1:FinalRequestProcessor@163] - request.type=11
2016-06-14 18:46:14,154 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1554e0e2c160001, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
at java.lang.Thread.run(Thread.java:745)
2016-06-14 18:46:14,156 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1008] - Closed socket connection for client /0:0:0:0:0:0:0:1:42678 which had sessionid 0x1554e0e2c160001
Implementation Code
For a detailed analysis of the Leader election code, see my other article, "Apache ZooKeeper服務啓動源碼解釋" (an explanation of the ZooKeeper service startup source code).
As Listing 5 shows, when the Leader node is killed manually, the existing Follower nodes detect the Leader failure in the thread run by QuorumPeer and start a new election; the thread code is shown in Listing 10.
case FOLLOWING:
    try {
        LOG.info("FOLLOWING");
        setFollower(makeFollower(logFactory));
        follower.followLeader();
    } catch (Exception e) {
        LOG.warn("Unexpected exception", e);
    } finally {
        follower.shutdown();
        setFollower(null);
        setPeerState(ServerState.LOOKING);
    }
    break;
Every Follower node enters the Follower class to check on the Leader, as shown in Listing 11.
void followLeader() throws InterruptedException {
    try {
        connectToLeader(addr);
        long newEpochZxid = registerWithLeader(Leader.FOLLOWERINFO);
        // check to see if the leader zxid is lower than ours
        // this should never happen but is just a safety check
        long newEpoch = ZxidUtils.getEpochFromZxid(newEpochZxid);
        if (newEpoch < self.getAcceptedEpoch()) {
            LOG.error("Proposed leader epoch " + ZxidUtils.zxidToString(newEpochZxid)
                    + " is less than our accepted epoch " + ZxidUtils.zxidToString(self.getAcceptedEpoch()));
            throw new IOException("Error: Epoch of leader is lower");
        }
        syncWithLeader(newEpochZxid);
        QuorumPacket qp = new QuorumPacket();
        while (self.isRunning()) {
            readPacket(qp);
            processPacket(qp);
        }
    } catch (Exception e) {
        LOG.warn("Exception when following the leader", e);
        try {
            sock.close();
        } catch (IOException e1) {
            e1.printStackTrace();
        }
        // clear pending revalidations
        pendingRevalidations.clear();
    }
}
The RecvWorker thread keeps reporting that it cannot connect to the Leader.
After a series of shutdown steps, the threads that served the previously healthy cluster exit, a new election starts, and the node enters the LOOKING state again. It first loads the latest snapshot through the loadDataBase method of the QuorumPeer class, then, inside the FastLeaderElection class, passes in its own ZXID and MYID and elects the Leader node by comparing ZXIDs and MYIDs according to the election rules; this process is the same as the initial election.
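The comparison mentioned here is essentially FastLeaderElection's total-order rule: a vote wins if it carries a larger epoch, or the same epoch and a larger zxid, or the same zxid and a larger myid. The method below restates that rule in isolation; it is a simplified paraphrase rather than the actual ZooKeeper source.

// Simplified restatement of the FastLeaderElection comparison: prefer the
// larger epoch, then the larger zxid, then the larger server id (myid).
public class VoteComparator {
    public static boolean newVoteWins(long newEpoch, long newZxid, long newId,
                                      long curEpoch, long curZxid, long curId) {
        if (newEpoch != curEpoch) {
            return newEpoch > curEpoch;
        }
        if (newZxid != curZxid) {
            return newZxid > curZxid;
        }
        return newId > curId;
    }
}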
Once the cluster is stable, ZooKeeper does not hold another Leader election when it receives a request from a newly added node; it simply assigns the new node the FOLLOWER role. The new node then finds the Leader's IP address through the code in Listing 11, obtains the latest EpochZxid, that is, the latest transaction ID, and calls syncWithLeader to look up the most recent committed snapshot (Snap), as shown in Listing 12. The snapshot file itself is written out by the save method shown in the following listing.
protected void syncWithLeader(long newLeaderZxid) throws IOException, InterruptedException {
    // ... (earlier part of the method omitted in this excerpt)
    case Leader.UPTODATE:
        if (!snapshotTaken) { // true for the pre v1.0 case
            zk.takeSnapshot();
            self.setCurrentEpoch(newEpoch);
        }
        self.cnxnFactory.setZooKeeperServer(zk);
        break outerLoop;
public void save(DataTree dataTree, ConcurrentHashMap<Long, Integer> sessionsWithTimeouts) throws IOException {
    long lastZxid = dataTree.lastProcessedZxid;
    File snapshotFile = new File(snapDir, Util.makeSnapshotName(lastZxid));
    LOG.info("Snapshotting: 0x{} to {}", Long.toHexString(lastZxid), snapshotFile);
    snapLog.serialize(dataTree, sessionsWithTimeouts, snapshotFile);
}
When a new transaction is committed, for example the new ZNODE created in Listing 6, the node is created along the path ZooKeeperServer -> PrepRequestProcessor -> FinalRequestProcessor -> ZooKeeperServer -> DataTree, and in the end the submitRequest method of the ZooKeeperServer class submits the Proposal and completes the operation. During this Proposal submission the FOLLOWER nodes also have to vote, as shown in Listing 14.
protected void processPacket(QuorumPacket qp) throws IOException {
    switch (qp.getType()) {
    case Leader.PING:
        ping(qp);
        break;
    case Leader.PROPOSAL:
        TxnHeader hdr = new TxnHeader();
        Record txn = SerializeUtils.deserializeTxn(qp.getData(), hdr);
        if (hdr.getZxid() != lastQueued + 1) {
            LOG.warn("Got zxid 0x" + Long.toHexString(hdr.getZxid())
                    + " expected 0x" + Long.toHexString(lastQueued + 1));
        }
        lastQueued = hdr.getZxid();
        fzk.logRequest(hdr, txn);
        break;
ZAB is not a typical implementation of Paxos. Before explaining the differences between ZAB and Paxos, let us first look at what they have in common.
In Paxos, a newly elected main process works in two phases. The first is the read phase, in which the new main process communicates with all other processes to collect the proposals put forward by the previous main process and commits them. The second is the write phase, in which the current main process starts putting forward its own proposals. On top of this design, ZAB adds an extra synchronization phase. Before synchronization, ZAB also runs a discovery phase very similar to the read phase of Paxos. During synchronization, the new Leader makes sure that more than half of the Followers have committed all transaction Proposals from the previous Leader cycle. This synchronization phase guarantees that, before the Leader proposes new transaction Proposals in the new cycle, all processes have finished committing all previous transaction Proposals. Once synchronization is complete, ZAB executes a write phase similar to that of Paxos.
In short, the essential difference between Paxos and ZAB is that their design goals differ: ZAB is mainly used to build a highly available distributed primary-backup data system such as ZooKeeper, while Paxos is used to build a distributed, consistent state-machine system.
In a distributed system, each machine in a cluster typically plays its own role, and the most typical cluster model is the Master/Slave (primary-backup) model. In that model the machine that can handle all write operations is called the Master, and the machines that obtain the latest data through asynchronous replication and serve reads are called Slaves. Paxos introduces the roles of Proposer, Acceptor, and Learner. ZooKeeper changes these concepts again: instead of the traditional Master/Slave terms it introduces the roles of Leader, Follower, and Observer. By walking through Paxos and ZooKeeper's ZAB protocol, this article has aimed to give readers a basic understanding of distributed election algorithms.