Setting up a ZooKeeper environment

ZooKeeper is a strongly consistent distributed data store. Multiple nodes together form a cluster, and as long as a majority of them are alive the service keeps working, so any single node can fail without taking the database down.

Standalone mode

Download the ZooKeeper release tarball and unpack it:

➜  ~ tar -xvzf apache-zookeeper-3.5.6-bin.tar.gz

Enter the unpacked directory and rename the sample configuration file under conf:

➜  apache-zookeeper-3.5.6-bin mv conf/zoo_sample.cfg conf/zoo.cfg
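
For reference, these are the key settings shipped in zoo_sample.cfg (3.5.x distribution; verify against your own copy):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181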

Start ZooKeeper with start-foreground so it runs in the foreground, which makes it easy to watch the server's output:

➜  apache-zookeeper-3.5.6-bin bin/zkServer.sh start-foreground

Quorum mode

Starting from zoo.cfg, create three configuration files: zoo_1.cfg, zoo_2.cfg, and zoo_3.cfg.

Each file needs the following extra lines. In each entry, the second and third colon-separated fields are TCP ports, used for quorum communication and leader election respectively.

server.1=127.0.0.1:2222:2223
server.2=127.0.0.1:3333:3334
server.3=127.0.0.1:4444:4445
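
Each replica also needs its own clientPort and dataDir. Here is a sketch of what a complete zoo_1.cfg might look like, assuming client ports 2181 through 2183 and data directories under /tmp/zookeeper (which matches the log output below):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper/zoo_1/data
clientPort=2181
server.1=127.0.0.1:2222:2223
server.2=127.0.0.1:3333:3334
server.3=127.0.0.1:4444:4445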

When a server starts, it needs to know which server it is. ZooKeeper reads its server ID from a file named myid under dataDir.
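
The data directories have to exist before the myid files can be written; a minimal setup, assuming the cluster lives under a /tmp/zookeeper working directory as above:

➜  zookeeper mkdir -p zoo_1/data zoo_2/data zoo_3/data

Then write each server's ID: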

➜  zookeeper echo 1 > zoo_1/data/myid
➜  zookeeper echo 2 > zoo_2/data/myid
➜  zookeeper echo 3 > zoo_3/data/myid

Start the servers, beginning with zoo_1:

➜  zoo_1 ~/apache-zookeeper-3.5.6-bin/bin/zkServer.sh start-foreground ./zoo_1.cfg

Because only one of the three servers is up, the ensemble cannot reach a quorum yet, and the server keeps trying to contact the others:

2020-01-01 12:08:37,016 [myid:1] - INFO  [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):QuorumPeer@1193] - LOOKING
2020-01-01 12:08:37,016 [myid:1] - INFO  [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FastLeaderElection@885] - New election. My id =  1, proposed zxid=0x0
2020-01-01 12:08:37,021 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@679] - Cannot open channel to 2 at election address /127.0.0.1:3334
java.net.ConnectException: Connection refused (Connection refused)
...
2020-01-01 12:08:37,031 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@679] - Cannot open channel to 3 at election address /127.0.0.1:4445
java.net.ConnectException: Connection refused (Connection refused)
...

Start the second server; with two of three servers up, the ensemble has a quorum:

➜  zoo_2 ~/apache-zookeeper-3.5.6-bin/bin/zkServer.sh start-foreground ./zoo_2.cfg

Server 2 is elected leader:

2020-01-01 12:10:40,802 [myid:2] - INFO  [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2182)(secure=disabled):Leader@464] - LEADING - LEADER ELECTION TOOK - 54 MS
2020-01-01 12:10:40,804 [myid:2] - INFO  [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2182)(secure=disabled):FileTxnSnapLog@384] - Snapshotting: 0x0 to /tmp/zookeeper/zoo_2/data/version-2/snapshot.0
2020-01-01 12:10:40,812 [myid:2] - INFO  [LearnerHandler-/127.0.0.1:62308:LearnerHandler@406] - Follower sid: 1 : info : 127.0.0.1:2222:2223:participant
2020-01-01 12:10:40,816 [myid:2] - INFO  [LearnerHandler-/127.0.0.1:62308:ZKDatabase@295] - On disk txn sync enabled with snapshotSizeFactor 0.33
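
You can also confirm each server's role from another terminal with the status command (output abbreviated; exact lines vary by version):

➜  zoo_2 ~/apache-zookeeper-3.5.6-bin/bin/zkServer.sh status ./zoo_2.cfg
...
Mode: leader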

Connecting to the cluster

➜  bin ./zkCli.sh -server 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
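
The client connects to one server from the list and fails over to another if that server goes away. Once connected, a quick sanity check; on a fresh ensemble you should see only the built-in /zookeeper node:

[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 0] ls /
[zookeeper]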

A publish/subscribe example

Start a first client, zk_0, and create an ephemeral znode:

[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 9] create -e /master "this is master"
Created /master

Start another client, zk_1, and set a watch on the znode:

[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 3] ls /master true
'ls path [watch]' has been deprecated. Please use 'ls [-w] path' instead.
[]

Start yet another client, zk_2, and set a watch on the znode as well:

[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 1] ls /master true
'ls path [watch]' has been deprecated. Please use 'ls [-w] path' instead.
[]
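
As the deprecation warning suggests, the trailing true flag has been replaced by a -w flag; the equivalent current syntax is:

[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 2] ls -w /master
[]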

Delete /master from zk_0; zk_1 and zk_2 both receive the deletion notification. (Note that watches are one-shot: once fired, a watch must be re-registered to catch further events.)

zk_0:
[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 10] delete /master

zk_1/zk_2:
[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 2]
WATCHER::

WatchedEvent state:SyncConnected type:NodeDeleted path:/master