Environment:

| IP | Hostname |
|---|---|
| 192.168.1.5 | zk1 |
| 192.168.1.6 | zk2 |
| 192.168.1.7 | zk3 |
I won't go over the concepts again and will jump straight to the steps. At the end of the article there is a list of commonly used ZooKeeper tuning parameters.
[root@kuting1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.5 zk1
192.168.1.6 zk2
192.168.1.7 zk3
[root@kuting1 ~]# ssh-keygen -t rsa
[root@kuting1 ~]# for i in `tail -3 /etc/hosts | awk '{print $2}'`; do ssh-copy-id $i; done
[root@kuting1 ~]# for i in `tail -3 /etc/hosts | awk '{print $2}'`; do scp /etc/hosts $i:/etc/hosts; done
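To confirm that passwordless login works before going further, a quick check like the following can be used (a sketch; it assumes the last three lines of /etc/hosts are still the three cluster entries):

```bash
# Print the hostname of every cluster node over ssh; if any host still
# prompts for a password, ssh-copy-id did not succeed for that host.
for i in $(tail -3 /etc/hosts | awk '{print $2}'); do
    echo "== $i =="
    ssh "$i" hostname
done
```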
[root@kuting1 ~]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz
This walkthrough uses ZooKeeper 3.4.11.
[root@kuting1 ~]# tar zxf zookeeper-3.4.11.tar.gz
[root@kuting1 ~]# mkdir /data/server -p                  # program directory
[root@kuting1 ~]# mkdir /data/data/zookeeper/0{0..2} -p  # data directories
[root@kuting1 ~]# mkdir /data/logs/zookeeper/0{0..2} -p  # log directories
Environment (single-machine pseudo-distributed cluster):

| IP | Hostname |
|---|---|
| 192.168.1.5 | zk1 |
[root@kuting1 ~]# mv zookeeper-3.4.11 /data/server/zookeeper00
[root@kuting1 ~]# cd /data/server/zookeeper00/conf/
[root@kuting1 conf]# cp zoo_sample.cfg zoo.cfg
[root@kuting1 conf]# vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/data/zookeeper/00
dataLogDir=/data/logs/zookeeper/00
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.5:2889:3889
server.3=192.168.1.5:2890:3890
[root@kuting1 ~]# echo 1 > /data/data/zookeeper/00/myid
[root@kuting1 ~]# cp -rf /data/server/zookeeper00 /data/server/zookeeper01
[root@kuting1 ~]# vim /data/server/zookeeper01/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/data/zookeeper/01        # directory name
dataLogDir=/data/logs/zookeeper/01
# the port at which the clients will connect
clientPort=2182                        # the client port must be changed
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.5:2889:3889
server.3=192.168.1.5:2890:3890
[root@kuting1 ~]# echo 2 > /data/data/zookeeper/01/myid
[root@kuting1 ~]# cp -rf /data/server/zookeeper00 /data/server/zookeeper02
[root@kuting1 ~]# vim /data/server/zookeeper02/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/data/zookeeper/02
dataLogDir=/data/logs/zookeeper/02
# the port at which the clients will connect
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.5:2889:3889
server.3=192.168.1.5:2890:3890
[root@kuting1 ~]# echo 3 > /data/data/zookeeper/02/myid
Start the first node:
[root@kuting1 ~]# cd /data/server/zookeeper00/bin
[root@kuting1 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper00/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@kuting1 bin]# ss -anpt | grep java
LISTEN  0  50  :::2181                  :::*  users:(("java",pid=13285,fd=25))
LISTEN  0  50  :::46475                 :::*  users:(("java",pid=13285,fd=19))
LISTEN  0  50  ::ffff:192.168.1.5:3888  :::*  users:(("java",pid=13285,fd=26))
Start the second node:
[root@kuting1 bin]# ../../zookeeper01/bin/zkServer.sh start
Start the third node:
[root@kuting1 bin]# ../../zookeeper02/bin/zkServer.sh start
[root@kuting1 bin]# ss -anpt | grep java
LISTEN  0  50  :::37985                  :::*  users:(("java",pid=14163,fd=19))
LISTEN  0  50  :::2181                   :::*  users:(("java",pid=13285,fd=25))
LISTEN  0  50  :::2182                   :::*  users:(("java",pid=14163,fd=25))
LISTEN  0  50  :::2183                   :::*  users:(("java",pid=14207,fd=25))
LISTEN  0  50  ::ffff:192.168.1.5:2889   :::*  users:(("java",pid=14163,fd=28))
LISTEN  0  50  :::46475                  :::*  users:(("java",pid=13285,fd=19))
LISTEN  0  50  ::ffff:192.168.1.5:3888   :::*  users:(("java",pid=13285,fd=26))
LISTEN  0  50  ::ffff:192.168.1.5:3889   :::*  users:(("java",pid=14163,fd=26))
LISTEN  0  50  ::ffff:192.168.1.5:3890   :::*  users:(("java",pid=14207,fd=26))
LISTEN  0  50  :::42517                  :::*  users:(("java",pid=14207,fd=19))
ESTAB   0  0   ::ffff:192.168.1.5:41592  ::ffff:192.168.1.5:3888   users:(("java",pid=14207,fd=27))
ESTAB   0  0   ::ffff:192.168.1.5:38194  ::ffff:192.168.1.5:2889   users:(("java",pid=14207,fd=29))
ESTAB   0  0   ::ffff:192.168.1.5:41080  ::ffff:192.168.1.5:3889   users:(("java",pid=14207,fd=28))
ESTAB   0  0   ::ffff:192.168.1.5:41584  ::ffff:192.168.1.5:3888   users:(("java",pid=14163,fd=27))
ESTAB   0  0   ::ffff:192.168.1.5:3889   ::ffff:192.168.1.5:41080  users:(("java",pid=14163,fd=30))
ESTAB   0  0   ::ffff:192.168.1.5:2889   ::ffff:192.168.1.5:38194  users:(("java",pid=14163,fd=31))
ESTAB   0  0   ::ffff:192.168.1.5:38188  ::ffff:192.168.1.5:2889   users:(("java",pid=13285,fd=28))
ESTAB   0  0   ::ffff:192.168.1.5:2889   ::ffff:192.168.1.5:38188  users:(("java",pid=14163,fd=29))
ESTAB   0  0   ::ffff:192.168.1.5:3888   ::ffff:192.168.1.5:41584  users:(("java",pid=13285,fd=27))
ESTAB   0  0   ::ffff:192.168.1.5:3888   ::ffff:192.168.1.5:41592  users:(("java",pid=13285,fd=29))
Log in to the first node (client port 2181) and create a znode:
[root@kuting1 00]# ./zkCli.sh -server 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 127.0.0.1:2181(CONNECTED) 1] create /data test-data
Created /data
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /
[zookeeper, data]
[zk: 127.0.0.1:2181(CONNECTED) 3] quit
Log in to the second node (client port 2182) and check whether the znode has been synchronized:
[root@kuting1 bin]# ./zkCli.sh -server 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] ls /
[zookeeper, data]
[zk: 127.0.0.1:2182(CONNECTED) 1] get /data
test-data          # the data is consistent
cZxid = 0x100000002
ctime = Sat Aug 04 18:31:39 CST 2018
mZxid = 0x100000002
mtime = Sat Aug 04 18:31:39 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 127.0.0.1:2182(CONNECTED) 2] quit
Log in to the third node (client port 2183) and check whether the znode has been synchronized:
[root@kuting1 bin]# ./zkCli.sh -server 127.0.0.1:2183
[zk: 127.0.0.1:2183(CONNECTED) 0] ls /
[zookeeper, data]
[zk: 127.0.0.1:2183(CONNECTED) 1] get /data
test-data
cZxid = 0x100000002
ctime = Sat Aug 04 18:31:39 CST 2018
mZxid = 0x100000002
mtime = Sat Aug 04 18:31:39 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 127.0.0.1:2183(CONNECTED) 2] quit
To check the cluster state and leader/follower roles you use the ./zkServer.sh status command, but with several nodes it is tedious to check them one by one, so here is a simple shell script that runs the command against all of them:
[root@kuting1 ~]# cat checkzk.sh
#!/bin/bash
n=(0 1 2)
for i in ${n[@]}; do
    echo $i
    /data/server/zookeeper0$i/bin/zkServer.sh status
done
[root@kuting1 ~]# chmod +x checkzk.sh
[root@kuting1 ~]# ./checkzk.sh
0
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper00/bin/../conf/zoo.cfg
Mode: follower
1
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper01/bin/../conf/zoo.cfg
Mode: leader
2
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper02/bin/../conf/zoo.cfg
Mode: follower
We can see that one node has been elected leader and the other two are followers; the data is in sync, and the single-machine pseudo-distributed cluster is complete.
The hosts file and SSH keys were already distributed before building the single-machine version; turn off the firewall, and all three machines must have a Java environment installed.
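A quick way to verify the Java prerequisite on all three machines is a loop like the one below (a sketch; it assumes the JDK lives at /data/server/java, as in the JAVA_HOME export used later, and that passwordless ssh to zk1-zk3 was set up earlier):

```bash
# Print the JDK version on every node; an error here means Java is
# missing or installed somewhere other than /data/server/java.
for h in zk1 zk2 zk3; do
    echo "== $h =="
    ssh "$h" '/data/server/java/bin/java -version' 2>&1 | head -1
done
```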
Remove the pseudo-cluster copies, keep one node, and copy it to the other machines:
[root@kuting1 ~]# rm -rf /data/server/zookeeper0{1..2}
[root@kuting1 ~]# mv /data/server/zookeeper00/ /data/server/zookeeper
On the other nodes, create the /data/server program directory, the /data/data/zookeeper data directory and the /data/logs/zookeeper log directory.
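These directories can be created remotely from zk1 in one loop (a sketch using the zk2/zk3 hostnames set up earlier):

```bash
# Create the program, data and log directories on the other two nodes.
for h in zk2 zk3; do
    ssh "$h" 'mkdir -p /data/server /data/data/zookeeper /data/logs/zookeeper'
done
```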
[root@kuting1 ~]# rsync -az /data/server/zookeeper/ zk2:/data/server/zookeeper/
[root@kuting1 ~]# rsync -az /data/server/zookeeper/ zk3:/data/server/zookeeper/
On each node, append the following variables to the end of /etc/profile and reload it with source:
export ZOOKEEPER_HOME=/data/server/zookeeper
export JAVA_HOME=/data/server/java
export PATH=$PATH:/data/server/java/bin:/data/server/zookeeper/bin
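If /etc/profile is managed identically on every machine, the same three lines can be appended once on zk1 and pushed out from there (a sketch; skip this and edit each node by hand if the profiles differ between hosts):

```bash
# Append the variables on zk1, copy the whole file to the other nodes,
# then reload it in the current shell.
cat >> /etc/profile <<'EOF'
export ZOOKEEPER_HOME=/data/server/zookeeper
export JAVA_HOME=/data/server/java
export PATH=$PATH:/data/server/java/bin:/data/server/zookeeper/bin
EOF
for h in zk2 zk3; do scp /etc/profile "$h":/etc/profile; done
source /etc/profile
```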
[root@kuting1 ~]# vim /data/server/zookeeper/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/data/zookeeper/
dataLogDir=/data/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.6:2888:3888
server.3=192.168.1.7:2888:3888
The value written to each node's myid file must match that node's server.N entry in zoo.cfg (server.1 = 192.168.1.5 here); the leader is elected among the nodes at startup rather than being fixed to server.1.
[root@kuting1 ~]# echo 1 > /data/data/zookeeper/myid
[root@kuting2 ~]# vim /data/server/zookeeper/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/data/zookeeper/
dataLogDir=/data/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.6:2888:3888
server.3=192.168.1.7:2888:3888
[root@kuting2 ~]# echo 2 > /data/data/zookeeper/myid
[root@kuting3 ~]# vim /data/server/zookeeper/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/data/zookeeper/
dataLogDir=/data/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.6:2888:3888
server.3=192.168.1.7:2888:3888
[root@kuting3 ~]# echo 3 > /data/data/zookeeper/myid
[root@kuting1 conf]# zkServer.sh start
[root@kuting2 conf]# zkServer.sh start
[root@kuting3 conf]# zkServer.sh start
Check the cluster status on all three nodes (leader election takes place right after startup; make sure the firewall is turned off).
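On CentOS 7 the firewall can be stopped, and kept from starting on boot, with a loop like the one below; the distribution is an assumption, so adapt it for iptables/ufw if your nodes differ:

```bash
# Stop firewalld now and disable it on boot, on every node.
for h in zk1 zk2 zk3; do
    ssh "$h" 'systemctl stop firewalld && systemctl disable firewalld'
done
```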
[root@kuting1 zookeeper]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@kuting2 conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@kuting3 conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower
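The per-node checks above can also be collapsed into one loop run from any node, in the spirit of the earlier checkzk.sh (a sketch; /etc/profile is sourced inside the ssh command because non-interactive sessions do not load it):

```bash
# Ask every node for its role in one pass.
for h in zk1 zk2 zk3; do
    echo "== $h =="
    ssh "$h" 'source /etc/profile; /data/server/zookeeper/bin/zkServer.sh status' | grep Mode
done
```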
[root@kuting3 zookeeper]# zkCli.sh -server 192.168.1.7:2181
[zk: 192.168.1.7:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 192.168.1.7:2181(CONNECTED) 1] create /real-culster real-data
Created /real-culster
[zk: 192.168.1.7:2181(CONNECTED) 2] ls /
[zookeeper, real-culster]
[zk: 192.168.1.7:2181(CONNECTED) 3] get /real-culster
real-data
cZxid = 0x100000002
ctime = Sat Sep 29 11:09:40 CST 2018
mZxid = 0x100000002
mtime = Sat Sep 29 11:09:40 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.1.7:2181(CONNECTED) 4] quit
[root@kuting3 zookeeper]# zkCli.sh -server 192.168.1.5:2181
[zk: 192.168.1.5:2181(CONNECTED) 0] ls /
[zookeeper, real-culster]
[zk: 192.168.1.5:2181(CONNECTED) 1] get /real-culster
real-data
cZxid = 0x100000002
ctime = Sat Sep 29 11:09:40 CST 2018
mZxid = 0x100000002
mtime = Sat Sep 29 11:09:40 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.1.5:2181(CONNECTED) 2] quit
[root@kuting3 zookeeper]# zkCli.sh -server 192.168.1.6:2181
[zk: 192.168.1.6:2181(CONNECTED) 0] ls /
[zookeeper, real-culster]
[zk: 192.168.1.6:2181(CONNECTED) 1] get /real-culster
real-data
cZxid = 0x100000002
ctime = Sat Sep 29 11:09:40 CST 2018
mZxid = 0x100000002
mtime = Sat Sep 29 11:09:40 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.1.6:2181(CONNECTED) 2] quit
Both follower nodes have synchronized the znode.
Stop the leader node:
[root@kuting1 zookeeper]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@kuting1 zookeeper]# zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
Now check the follower nodes to see whether one of them has been elected leader:
[root@kuting2 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@kuting2 ~]# ip a | grep inet | grep ens33$
inet 192.168.1.6/24 brd 192.168.1.255 scope global noprefixroute ens33
[root@kuting1 conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@kuting1 conf]# ip a | grep inet | grep ens33$
inet 192.168.1.5/24 brd 192.168.1.255 scope global noprefixroute ens33
A follower node has been elected as the new leader. Next, start the ZooKeeper instance that was stopped:
[root@kuting3 zookeeper]# zkServer.sh start
[root@kuting3 zookeeper]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower
It does not take over as leader again; it rejoins the cluster as a follower.
[root@kuting2 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader
The leader role has not changed, so nodes only switch roles when an election takes place.
[root@kuting1 ~]# docker pull zookeeper
[root@kuting1 ~]# mkdir -p /data/docker/docker-compose/zookeeper-cluster
[root@kuting1 ~]# cd $_
[root@kuting1 zookeeper-cluster]# vim docker-compose.yml
version: '2'
services:
  zoo1:
    image: zookeeper
    restart: always
    container_name: zk1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888

  zk2:
    image: zookeeper
    restart: always
    container_name: zk2
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888

  zk3:
    image: zookeeper
    restart: always
    container_name: zk3
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
[root@kuting1 zookeeper-cluster]# docker-compose up -d
[root@kuting1 zookeeper-cluster]# docker-compose ps
Name              Command                State    Ports
------------------------------------------------------------------------------------------
zk1    /docker-entrypoint.sh zkSe ...    Up       0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
zk2    /docker-entrypoint.sh zkSe ...    Up       0.0.0.0:2182->2181/tcp, 2888/tcp, 3888/tcp
zk3    /docker-entrypoint.sh zkSe ...    Up       0.0.0.0:2183->2181/tcp, 2888/tcp, 3888/tcp
[root@kuting1 zookeeper-cluster]# echo stat | nc 127.0.0.1 2182
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
 /172.18.0.1:40116[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x100000002
Mode: follower
Node count: 4
[root@kuting1 zookeeper-cluster]# echo stat | nc 127.0.0.1 2181
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
 /172.18.0.1:55510[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/33/66
Received: 3
Sent: 2
Connections: 1
Outstanding: 0
Zxid: 0x100000002
Mode: follower
Node count: 4
[root@kuting1 zookeeper-cluster]# echo stat | nc 127.0.0.1 2183
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
 /172.18.0.1:34678[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x100000002
Mode: leader
Node count: 4
Proposal sizes last/min/max: 32/32/36
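The same role check can be done for all three containers in one pass with the srvr four-letter command (a sketch over the host ports mapped in docker-compose.yml):

```bash
# Query each mapped client port and print only the Mode line.
for p in 2181 2182 2183; do
    echo -n "$p: "
    echo srvr | nc 127.0.0.1 "$p" | grep Mode
done
```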
[root@kuting1 ~]# zkCli.sh -server 127.0.0.1:2181    # port 2181 is zk1; the other ports follow from the mappings in the yml file
[zk: 127.0.0.1:2181(CONNECTED) 3] create /data test-data
[root@kuting1 ~]# zkCli.sh -server 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] ls /
[zookeeper, data]
[root@kuting1 ~]# zkCli.sh -server 127.0.0.1:2183
[zk: 127.0.0.1:2183(CONNECTED) 0] ls /
[zookeeper, data]
The cluster state and the data are both as expected; the Docker-based cluster is complete.
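As with the bare-metal cluster, failover can be exercised here too by stopping the current leader container and re-checking the roles (a sketch; in the run above the leader happened to be the container behind port 2183, i.e. zk3):

```bash
# Stop the leader container, give the remaining two nodes a moment
# to elect a new leader, then check who leads now.
docker stop zk3
sleep 5
for p in 2181 2182; do
    echo -n "$p: "
    echo srvr | nc 127.0.0.1 "$p" | grep Mode
done
# Bring the stopped container back; it should rejoin as a follower.
docker start zk3
```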
① tickTime: the heartbeat interval, in milliseconds, between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime. As in web development, ZooKeeper has a session concept between client and server, and the minimum session expiration time is twice the tickTime.
② initLimit: the maximum number of ticks (multiples of tickTime) that a follower is allowed to take to connect to and synchronize with the leader when it first joins the ensemble; if this many ticks pass, the connection attempt fails. With tickTime=2000 and initLimit=10 as above, that is 20 seconds (see the awk sketch after this list).
③ syncLimit: the maximum number of ticks allowed between a request and its acknowledgement when a follower talks to the leader. If a follower cannot communicate with the leader within this time, it is dropped from the ensemble. With tickTime=2000 and syncLimit=5 that is 10 seconds.
④ 4lw.commands.whitelist: the whitelist of four-letter-word commands; any command not listed is disabled. Example: 4lw.commands.whitelist=stat, ruok, conf, isro
⑤ Server names and addresses: the cluster membership (server id, server address, leader-follower communication port, election port).
⑥ These entries follow a special format: server.N=YYY:A:B, for example:
server.1=itcast05:2888:3888
server.2=itcast06:2888:3888
server.3=itcast07:2888:3888
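To see what these tick-based settings mean in wall-clock time for a given zoo.cfg, a small awk sketch like the following can help (it assumes the config path used above and the default min/max session timeouts of 2x and 20x tickTime):

```bash
# Convert the tick-based limits in zoo.cfg into milliseconds.
awk -F'=' '
    /^tickTime/  { tick = $2 }
    /^initLimit/ { init = $2 }
    /^syncLimit/ { sync = $2 }
    END {
        printf "initLimit           = %d ms\n", tick * init
        printf "syncLimit           = %d ms\n", tick * sync
        printf "min session timeout = %d ms\n", tick * 2
        printf "max session timeout = %d ms\n", tick * 20
    }' /data/server/zookeeper/conf/zoo.cfg
```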