1. Create a zookeeper directory in the root directory (do this on service1, service2, and service3):
[root@localhost /]# mkdir zookeeper
Upload zookeeper-3.4.6.tar.gz to the /software directory on service1 via Xshell.
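If you prefer a plain command-line transfer over Xshell's file-transfer tool, scp from your local machine works as well (a minimal sketch, assuming the tarball is in your current local directory and service1 is 192.168.2.211, as in the zoo.cfg below):
scp zookeeper-3.4.6.tar.gz root@192.168.2.211:/software/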
2. Remotely copy /software/zookeeper-3.4.6.tar.gz from service1 to service2 and service3:
[root@localhost software]# scp -r /software/zookeeper-3.4.6.tar.gz root@192.168.2.212:/software/
[root@localhost software]# scp -r /software/zookeeper-3.4.6.tar.gz root@192.168.2.213:/software/
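Optionally, verify that the copies arrived intact by comparing checksums; the output should be identical on all three servers:
[root@localhost software]# md5sum /software/zookeeper-3.4.6.tar.gz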
3. Copy /software/zookeeper-3.4.6.tar.gz into the /zookeeper/ directory (run on service1, service2, and service3):
[root@localhost software]# cp /software/zookeeper-3.4.6.tar.gz /zookeeper/
4. Extract zookeeper-3.4.6.tar.gz (run on service1, service2, and service3):
[root@localhost /]# cd /zookeeper/
[root@localhost zookeeper]# tar -zxvf zookeeper-3.4.6.tar.gz
5. Create two directories under /zookeeper: zkdata and zkdatalog (on service1, service2, and service3):
[root@localhost zookeeper]# mkdir zkdata
[root@localhost zookeeper]# mkdir zkdatalog
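The two mkdir calls can also be combined into a single command:
[root@localhost zookeeper]# mkdir -p /zookeeper/zkdata /zookeeper/zkdatalog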
6. Enter the /zookeeper/zookeeper-3.4.6/conf/ directory:
[root@localhost zookeeper]# cd /zookeeper/zookeeper-3.4.6/conf/
[root@localhost conf]# ls
configuration.xsl log4j.properties zoo.cfg zoo_sample.cfg
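Note: a stock zookeeper-3.4.6 tarball ships only zoo_sample.cfg; if zoo.cfg is not present yet, create it from the sample first:
[root@localhost conf]# cp zoo_sample.cfg zoo.cfg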
7. Edit the zoo.cfg file:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zookeeper/zkdata
dataLogDir=/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
#
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.2.211:12888:13888
server.2=192.168.2.212:12888:13888
server.3=192.168.2.213:12888:13888
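In the server.N entries, N must match the value written to each server's myid file (steps 9 and 10). The first port (12888 here) is used by followers to connect to the leader, and the second (13888) for leader election. The general form is:
server.<myid>=<host>:<quorumPort>:<electionPort>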
8. Apply the same zoo.cfg changes on service2 and service3 (for example with scp, as sketched below).
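A minimal sketch, assuming the same install path on every server:
[root@localhost conf]# scp zoo.cfg root@192.168.2.212:/zookeeper/zookeeper-3.4.6/conf/
[root@localhost conf]# scp zoo.cfg root@192.168.2.213:/zookeeper/zookeeper-3.4.6/conf/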
9. Write the myid file on service1 (in the /zookeeper/zkdata directory):
[root@localhost /]# cd /zookeeper/zkdata
[root@localhost /]# echo 1 > myid
10. Write the myid files on service2 and service3 (again inside /zookeeper/zkdata on each server):
echo 2 > myid    # on service2
echo 3 > myid    # on service3
11. List the ZooKeeper scripts:
[root@localhost ~]# cd /zookeeper/zookeeper-3.4.6/bin/
[root@localhost bin]# ls
README.txt zkCleanup.sh zkCli.cmd zkCli.sh zkEnv.cmd zkEnv.sh zkServer.cmd zkServer.sh zookeeper.out
12. Run zkServer.sh to see its usage:
[root@localhost bin]# ./zkServer.sh
JMX enabled by default
Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
13. Start the ZooKeeper service on service1, service2, and service3:
[root@localhost bin]# ./zkServer.sh start
14. Check the ZooKeeper process with jps:
[root@localhost bin]# jps
31483 QuorumPeerMain
31664 Jps
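If jps is not available (it ships with the JDK, not the JRE), the process can also be found with ps:
[root@localhost bin]# ps -ef | grep zookeeper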
15. Check the ZooKeeper status on service1, service2, and service3 (you can see which node is the leader and which are followers):
[root@localhost bin]# ./zkServer.sh status
JMX enabled by default
Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[root@localhost bin]# ./zkServer.sh status
JMX enabled by default
Using config: /zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
16. Seeing the leader and follower modes confirms the cluster was installed successfully.
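As a final smoke test, connect to any node with the bundled CLI and list the root znode (a sketch using the service1 address and clientPort from zoo.cfg); a fresh cluster should show the default /zookeeper node:
[root@localhost bin]# ./zkCli.sh -server 192.168.2.211:2181
ls /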
If you are interested in distributed-system solutions, you are welcome to discuss them with our team. The complete project source is available for anyone who wants to study the related technologies together.