Setting up a ZooKeeper cluster on Linux


Preparation

  1. Download the ZooKeeper release
     from http://zookeeper.apache.org/releases.html; the version used here is zookeeper-3.4.10.

  2. Prepare three servers
    with the IP addresses:

    172.16.18.198
    172.16.18.199
    172.16.18.200
  3. Check the JDK version and install a JDK if necessary; JDK 1.7 or later is required.
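Step 3 can be scripted. The sketch below extracts the major version from `java -version` output and compares it against the 1.7 requirement; `parse_java_major` is a hypothetical helper of my own, not a standard tool, and it has to handle both the old `1.x` scheme and the newer single-number scheme.

```shell
# Check that the installed JDK is 1.7 or newer.
# parse_java_major maps version strings such as 'java version "1.7.0_80"'
# to 7 and 'openjdk version "11.0.2"' to 11.
parse_java_major() {
  local v
  v=$(echo "$1" | sed -E 's/[^"]*"([0-9]+)(\.([0-9]+))?.*/\1.\3/')
  case "$v" in
    1.*) echo "${v#1.}" ;;   # old 1.x scheme: 1.7 -> 7, 1.8 -> 8
    *)   echo "${v%%.*}" ;;  # new scheme: 11.0.2 -> 11
  esac
}

major=$(parse_java_major "$(java -version 2>&1 | head -n 1)")
if [ "${major:-0}" -ge 7 ] 2>/dev/null; then
  echo "JDK $major: ok"
else
  echo "JDK 1.7+ required"
fi
```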

Installing ZooKeeper

1. Upload the ZooKeeper archive to the /opt/ directory on each of the three servers, then unpack it with tar zxvf zookeeper-3.4.10.tar.gz

2. Copy zoo_sample.cfg to zoo.cfg, then edit /opt/zookeeper-3.4.10/conf/zoo.cfg and add the lines below. In server.N=host:2888:3888, port 2888 carries follower-to-leader traffic and port 3888 is used for leader election:

server.1=172.16.18.198:2888:3888
server.2=172.16.18.199:2888:3888
server.3=172.16.18.200:2888:3888

3. Set the directory where ZooKeeper stores its data files

dataDir=/data/zookeeper
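ZooKeeper does not create dataDir for you, so make the directory before the first start on every server. A minimal sketch; /data/zookeeper is the path used in this article, and `ensure_dir` is just a local helper name:

```shell
# Create the directory that dataDir in zoo.cfg points at (on every server).
# Run as root, or pick a path your user can write to.
ensure_dir() {
  mkdir -p "$1" && echo "created $1"
}
ensure_dir /data/zookeeper || echo "could not create /data/zookeeper (not root?)"
```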

The zoo.cfg file now reads:

# The number of milliseconds of each tick
tickTime=2000 ## base time unit (one tick) is 2000 ms
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10  ## timeout for followers initially connecting to the leader, in ticks: 10 x 2000 ms = 20 s
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5   ## timeout for a follower to sync with the leader, in ticks: 5 x 2000 ms = 10 s
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/zookeeper
# the port at which the clients will connect
clientPort=2181  ## client connection port
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60 ## maximum number of client connections
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=172.16.18.198:2888:3888  
server.2=172.16.18.199:2888:3888
server.3=172.16.18.200:2888:3888
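As a quick sanity check of the finished zoo.cfg, you can count the server.N lines: a healthy ensemble has an odd number of members and no duplicate ids. `check_quorum` below is a hypothetical helper, not part of ZooKeeper:

```shell
# Verify that zoo.cfg defines an odd-sized quorum with unique server ids.
check_quorum() {
  # $1 = path to zoo.cfg
  local n dup
  n=$(grep -cE '^server\.[0-9]+=' "$1" 2>/dev/null)
  n=${n:-0}
  dup=$(grep -E '^server\.[0-9]+=' "$1" 2>/dev/null | cut -d= -f1 | sort | uniq -d)
  if [ "$n" -gt 0 ] && [ $((n % 2)) -eq 1 ] && [ -z "$dup" ]; then
    echo "quorum of $n servers looks ok"
  else
    echo "suspicious quorum definition"
  fi
}

# On a server configured as above this should report a quorum of 3 servers.
check_quorum /opt/zookeeper-3.4.10/conf/zoo.cfg
```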

4. Create the myid file

In the data directory on each of the three servers, create a myid file containing the number from that server's server.num line.
For example, on 172.16.18.198 write the 1 from server.1 into myid:

echo 1 >/data/zookeeper/myid
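Instead of writing each myid by hand, you can derive it from zoo.cfg. `myid_for_ip` below is a hypothetical helper that looks the host's IP up in the server.N lines; the paths in the usage comment are the ones used in this article:

```shell
# Print the id that matches a given IP's server.N entry in zoo.cfg.
myid_for_ip() {
  # $1 = path to zoo.cfg, $2 = this server's IP
  grep -E "^server\.[0-9]+=$2:" "$1" | sed -E 's/^server\.([0-9]+)=.*/\1/'
}

# Example usage on 172.16.18.199 (assumed paths from this article):
# myid_for_ip /opt/zookeeper-3.4.10/conf/zoo.cfg 172.16.18.199 > /data/zookeeper/myid
```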

5. Add environment variables so the scripts can be run conveniently

vi /etc/profile and append the following two lines:

export ZOOKEEPER_HOME=/opt/zookeeper-3.4.10
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf

Save, then reload the file:

source /etc/profile

6. Change the log directory (optional)

vi /opt/zookeeper-3.4.10/bin/zkEnv.sh and locate ZOO_LOG_DIR and ZOO_LOG4J_PROP:

if [ "x${ZOO_LOG_DIR}" = "x" ] 
then 
    # directory where ZooKeeper log output is stored 
    ZOO_LOG_DIR="/var/applog/zookeeper" 
fi 

if [ "x${ZOO_LOG4J_PROP}" = "x" ] 
then 
    # log output levels; several levels are enabled together here 
    ZOO_LOG4J_PROP="INFO,CONSOLE,ROLLINGFILE,TRACEFILE" 
fi

Edit log4j.properties in the conf directory:

# Define some default values that can be overridden by system properties 
zookeeper.root.logger=INFO, CONSOLE, ROLLINGFILE, TRACEFILE 
zookeeper.console.threshold=INFO 
zookeeper.log.dir=. 
zookeeper.log.file=zookeeper.log 
zookeeper.log.threshold=ERROR 
zookeeper.tracelog.dir=. 
zookeeper.tracelog.file=zookeeper_trace.log 
log4j.rootLogger=${zookeeper.root.logger}

This completes the change of log directory.

7. Start the ZooKeeper service

zkServer.sh start  (start)

zkServer.sh restart  (restart)

zkServer.sh status  (check status)

zkServer.sh stop  (stop)

zkServer.sh start-foreground  (start in the foreground, printing logs)
Run on each of the three servers:

zkServer.sh start

Then check the state with status. Mode: leader or Mode: follower means the cluster is up; otherwise run in the foreground and look at the log.

$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower

If you see something like:

2019-04-29 14:04:05,992 [myid:3] - INFO  [ListenerThread:QuorumCnxManager$Listener@739] - My election bind port: /172.16.18.200:3888
2019-04-29 14:04:06,019 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:QuorumPeer@865] - LOOKING
2019-04-29 14:04:06,025 [myid:3] - INFO  [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@818] - New election. My id =  3, proposed zxid=0x0
2019-04-29 14:04:06,056 [myid:3] - WARN  [WorkerSender[myid=3]:QuorumCnxManager@588] - Cannot open channel to 1 at election address /172.16.18.198:3888
java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)

This exception usually has one of three causes:

1) a mistake in the server.x=host:2888:3888 lines in zoo.cfg;

2) the myid contents do not match server.x, or myid is not in the data directory;

3) the system firewall is running.

After checking all three, I found the firewall was running.

On CentOS 7, check the firewall state with:

firewall-cmd --state

Stop the firewall with:

systemctl stop firewalld.service
systemctl disable firewalld.service   (do not start at boot; permanently disables the firewall)
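Disabling the firewall entirely works, but opening only ZooKeeper's three ports is a safer alternative. The sketch below just prints the firewall-cmd invocations (client 2181, quorum 2888, leader election 3888) so they can be reviewed and then run as root; `zk_firewall_cmds` is a local helper name:

```shell
# Print firewalld commands that open only the ports ZooKeeper needs,
# instead of disabling the firewall outright.
zk_firewall_cmds() {
  for p in 2181 2888 3888; do
    echo "firewall-cmd --permanent --add-port=${p}/tcp"
  done
  echo "firewall-cmd --reload"
}
zk_firewall_cmds
```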

After stopping the firewall, restart ZooKeeper.
8. Verify the setup
Run zkCli.sh -server 172.16.18.198:2181 to connect to one of the ZooKeeper servers. (The IP addresses in this article vary because it was edited from different locations; the principle is what matters.) The servers synchronize with each other automatically, so a client only needs to stay connected to one of them. The following output indicates a successful connection:

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 172.16.18.198:2181(CONNECTED) 0]
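To confirm that writes replicate across the ensemble, create a znode on one member and read it back from another. A sample session; the znode name /demo is arbitrary:

```
[zk: 172.16.18.198:2181(CONNECTED) 0] create /demo hello
Created /demo
```

Then connect to a different member with zkCli.sh -server 172.16.18.199:2181 and run get /demo; it should print hello followed by the znode's stat information.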