Analyzing the ZooKeeper Observer Role in a Docker Environment

Problem Statement

The observer is a newer ZooKeeper role that does not participate in voting. By adding observer nodes, you can raise the ensemble's read throughput without affecting its write throughput.

This raises a few questions:

  1. If more than half of a ZooKeeper ensemble's nodes go down, the ensemble can no longer provide service. Does "nodes" here include observer nodes?

  2. Can an observer node handle write operations, or does it act only as a read-only "data view"?

  3. In a cross-datacenter deployment, how can the observer role best be used?

To answer these questions, we set up a ZooKeeper ensemble in Docker (the docker-compose file zk.yml is attached at the end of this article). The ensemble consists of 1 leader, 2 followers, and 2 observers; as shown below, zk4 and zk5 are the observer nodes.

➜  docker COMPOSE_PROJECT_NAME=zktest docker-compose -f zk.yml up
➜  docker COMPOSE_PROJECT_NAME=zktest docker-compose -f zk.yml ps
Name              Command               State                     Ports                   
------------------------------------------------------------------------------------------
zk1    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2881->2181/tcp, 2888/tcp, 3888/tcp
zk2    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2882->2181/tcp, 2888/tcp, 3888/tcp
zk3    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2883->2181/tcp, 2888/tcp, 3888/tcp
zk4    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2884->2181/tcp, 2888/tcp, 3888/tcp
zk5    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2885->2181/tcp, 2888/tcp, 3888/tcp

1. If more than half of a ZooKeeper ensemble's nodes go down, the ensemble can no longer provide service. Does "nodes" here include observer nodes?

Test 1: remove both observer nodes plus one follower or the leader

➜  docker docker rm -f zk3 zk4 zk5
zk3
zk4
zk5
➜  docker COMPOSE_PROJECT_NAME=zktest docker-compose -f zk.yml ps
Name              Command               State                     Ports                   
------------------------------------------------------------------------------------------
zk1    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2881->2181/tcp, 2888/tcp, 3888/tcp
zk2    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2882->2181/tcp, 2888/tcp, 3888/tcp

Check whether the ZooKeeper ensemble still provides service:

➜  docker echo stat | nc localhost 2881
Zookeeper version: 3.4.12-e5259e437540f349646870ea94dc2658c4e44b3b, built on 03/27/2018 03:55 GMT
Clients:

Latency min/avg/max: 0/0/0
Received: 2
Sent: 1
Connections: 1
Outstanding: 0
Zxid: 0x300000000
Mode: leader
Node count: 6

As shown above, ZooKeeper still provides service. At this point the ensemble consists of 1 leader and 1 follower.

Suppose the rule "the ensemble cannot provide service once more than half of its nodes are down" counted observers. Then a 5-node ensemble with 3 nodes down should, in theory, be unable to provide service, which contradicts the result above. So that assumption does not hold.

Therefore, when we say a ZooKeeper ensemble stops providing service once more than half of its nodes are down, "nodes" does not include observers.
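The arithmetic behind this result can be sketched as follows. This is a minimal illustration of the quorum rule, not ZooKeeper's actual code: with 5 nodes of which 2 are observers, the voting ensemble has 3 members, so a majority is 2.

```python
def can_serve(voting_total: int, voting_alive: int) -> bool:
    """A quorum exists when strictly more than half of the voting members are alive.

    Observers are never counted in voting_total or voting_alive.
    """
    return voting_alive > voting_total // 2

# Our ensemble: zk1-zk3 are voters (1 leader + 2 followers); zk4/zk5 are observers.
voting_total = 3

# Test 1: removed zk3, zk4, zk5 -> surviving voters: zk1, zk2.
print(can_serve(voting_total, 2))  # True: still serving

# After also removing zk2 -> only zk1 survives.
print(can_serve(voting_total, 1))  # False: not serving
```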

Next, remove zk2 as well; the ensemble can no longer provide service.

➜  docker docker rm -f zk2
zk2
➜  docker echo stat | nc localhost 2881
This ZooKeeper instance is not currently serving requests

Test 2: remove 2 of the leader/follower nodes

Restart the ZooKeeper ensemble:

➜  docker COMPOSE_PROJECT_NAME=zktest docker-compose -f zk.yml up
➜  docker COMPOSE_PROJECT_NAME=zktest docker-compose -f zk.yml ps
Name              Command               State                     Ports                   
------------------------------------------------------------------------------------------
zk1    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2881->2181/tcp, 2888/tcp, 3888/tcp
zk2    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2882->2181/tcp, 2888/tcp, 3888/tcp
zk3    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2883->2181/tcp, 2888/tcp, 3888/tcp
zk4    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2884->2181/tcp, 2888/tcp, 3888/tcp
zk5    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2885->2181/tcp, 2888/tcp, 3888/tcp

Remove 2 of the leader/follower nodes:

➜  docker docker rm -f zk1 zk2
zk1
zk2
➜  docker COMPOSE_PROJECT_NAME=zktest docker-compose -f zk.yml ps
Name              Command               State                     Ports                   
------------------------------------------------------------------------------------------
zk3    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2883->2181/tcp, 2888/tcp, 3888/tcp
zk4    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2884->2181/tcp, 2888/tcp, 3888/tcp
zk5    /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:2885->2181/tcp, 2888/tcp, 3888/tcp

Check whether ZooKeeper still provides service:

➜  docker echo stat | nc localhost 2883
This ZooKeeper instance is not currently serving requests

As the output shows, the ensemble can no longer provide service, further confirming the conclusion above.

Conclusion

When more than half of a ZooKeeper ensemble's nodes go down it can no longer provide service, and "nodes" here excludes observers. More precisely: the ensemble stops providing service once more than half of its leader/follower nodes are down.

  • During heartbeat checks between nodes, the leader checks whether the followers that responded successfully form a majority; if not, it considers the ensemble down.
  • When a client writes to ZooKeeper, the leader forwards the proposal to each follower and checks whether a majority of them acknowledge it. If so, the data is committed; observers simply receive the committed result.
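The commit rule in the second bullet can be sketched as a toy simulation. The function name and counting scheme here are a simplified illustration, not ZooKeeper's real implementation: the leader counts acks only from voting members, while observers are excluded from the quorum entirely.

```python
# Toy model of the ZooKeeper commit rule (simplified, hypothetical names):
# the leader sends a proposal to followers only, and commits once a
# majority of voters (leader + followers) have acked it.

def leader_commit(follower_acks: int, num_followers: int, num_observers: int) -> bool:
    voters = 1 + num_followers  # the leader is itself a voter
    acks = 1 + follower_acks    # leader's implicit ack plus follower acks
    # num_observers is deliberately ignored: observers never count
    # toward the quorum, which is why adding them costs no write throughput.
    return acks > voters // 2

# 1 leader, 2 followers, 2 observers: one follower ack suffices (2 of 3 voters).
print(leader_commit(1, 2, 2))  # True
# With no follower acks, only 1 of 3 voters -> no commit.
print(leader_commit(0, 2, 2))  # False
```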

2. Can an observer node handle write operations, or does it act only as a read-only "data view"?

Connect to an observer node and create a znode:

➜  docker zkCli -server localhost:2885
Connecting to localhost:2885
Welcome to ZooKeeper!
JLine support is enabled
[zk: localhost:2885(CONNECTING) 0] ls
WATCHER::

WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2885(CONNECTED) 1] ls /
[zookeeper]
[zk: localhost:2885(CONNECTED) 2] create /test test
Created /test
[zk: localhost:2885(CONNECTED) 3] get /test
test
cZxid = 0x500000002
ctime = Fri Jun 15 12:04:43 CST 2018
mZxid = 0x500000002
mtime = Fri Jun 15 12:04:43 CST 2018
pZxid = 0x500000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
[zk: localhost:2885(CONNECTED) 4]

Conclusion

As shown above, an observer supports write operations just like a follower: when a client connected to an observer issues a write request, the observer forwards it to the leader, exactly as a follower would. The difference is that when the leader distributes the write proposal for voting, it does not send it to observers; this is what keeps additional observers from affecting write throughput.

3. In a cross-datacenter deployment, how can the observer role best be used?

Suppose there are two datacenters, one in Qingdao, China and one in New York, USA. Since every follower participates in voting, the deployment can place all leader/follower nodes together in one datacenter (Qingdao or New York) to avoid the network overhead of cross-datacenter voting. The other datacenter can then add observer nodes as needed to increase the system's read throughput.
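For example, an observer in the remote datacenter could be declared in the same style as zk4/zk5 in the zk.yml below. The zk6 name, id, and port here are hypothetical additions for illustration only:

```yaml
    # Hypothetical sixth node, running in the remote datacenter as an observer.
    # server.6=zk6:2888:3888:observer would also need to be appended to the
    # ZOO_SERVERS list of every node in the ensemble.
    zk6:
        image: zookeeper
        restart: always
        container_name: zk6
        ports:
            - "2886:2181"
        environment:
            ZOO_MY_ID: 6
            PEER_TYPE: observer
            ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888 server.4=zk4:2888:3888:observer server.5=zk5:2888:3888:observer server.6=zk6:2888:3888:observer
```

Because observers do not vote, adding zk6 this way leaves the quorum size unchanged.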

The ZooKeeper ensemble's docker-compose file, zk.yml:

version: '2'
services:
    zk1:
        image: zookeeper
        restart: always
        container_name: zk1
        ports:
            - "2881:2181"
        environment:
            ZOO_MY_ID: 1
            ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888 server.4=zk4:2888:3888:observer server.5=zk5:2888:3888:observer

    zk2:
        image: zookeeper
        restart: always
        container_name: zk2
        ports:
            - "2882:2181"
        environment:
            ZOO_MY_ID: 2
            ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888 server.4=zk4:2888:3888:observer server.5=zk5:2888:3888:observer

    zk3:
        image: zookeeper
        restart: always
        container_name: zk3
        ports:
            - "2883:2181"
        environment:
            ZOO_MY_ID: 3
            ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888 server.4=zk4:2888:3888:observer server.5=zk5:2888:3888:observer
    zk4:
        image: zookeeper
        restart: always
        container_name: zk4
        ports:
            - "2884:2181"
        environment:
            ZOO_MY_ID: 4
            PEER_TYPE: observer
            ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888 server.4=zk4:2888:3888:observer server.5=zk5:2888:3888:observer
    zk5:
        image: zookeeper
        restart: always
        container_name: zk5
        ports:
            - "2885:2181"
        environment:
            ZOO_MY_ID: 5
            PEER_TYPE: observer
            ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888 server.4=zk4:2888:3888:observer server.5=zk5:2888:3888:observer