※ Run the following on each of the three servers
tar -xf ~/install/zookeeper-3.4.9.tar.gz -C /opt/cloud/packages
ln -s /opt/cloud/packages/zookeeper-3.4.9 /opt/cloud/bin/zookeeper
ln -s /opt/cloud/packages/zookeeper-3.4.9/conf /opt/cloud/etc/zookeeper
mkdir -p /opt/cloud/data/zookeeper/dat
mkdir -p /opt/cloud/data/zookeeper/logdat
mkdir -p /opt/cloud/logs/zookeeper
mv /opt/cloud/etc/zookeeper/zoo_sample.cfg /opt/cloud/etc/zookeeper/zoo.cfg
vi /opt/cloud/etc/zookeeper/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/cloud/data/zookeeper/dat
dataLogDir=/opt/cloud/data/zookeeper/logdat
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=100
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=5
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=6
# server.A=B:C:D
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
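The tick-based settings above translate into absolute timeouts. A quick sketch of the arithmetic, with the values copied from the zoo.cfg above:

```shell
# Re-compute the timeouts implied by zoo.cfg (values from the file above).
tickTime=2000    # ms per tick
initLimit=10     # ticks a follower may take to sync with the leader at startup
syncLimit=5      # ticks allowed between a request and its acknowledgement
echo "initial sync window: $((tickTime * initLimit)) ms"   # 20000 ms
echo "request/ack window:  $((tickTime * syncLimit)) ms"   # 10000 ms
```

So a follower that cannot finish its initial sync within 20 seconds, or falls more than 10 seconds behind, is dropped from the ensemble.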
vi /opt/cloud/etc/zookeeper/log4j.properties
Modify these settings:
zookeeper.root.logger=INFO, DRFA
zookeeper.log.dir=/opt/cloud/logs/zookeeper
Add the DRFA log appender definition:
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.Append=true
log4j.appender.DRFA.DatePattern='.'yyyy-MM-dd
log4j.appender.DRFA.File=${zookeeper.log.dir}/${zookeeper.log.file}
log4j.appender.DRFA.Threshold=${zookeeper.log.threshold}
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.DRFA.Encoding=UTF-8
#log4j.appender.DRFA.MaxFileSize=20MB
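With the DatePattern above, DailyRollingFileAppender renames the active log once per day. A sketch of the rolled file name it produces, assuming the log4j default of zookeeper.log.file=zookeeper.log (the date is whatever today happens to be):

```shell
# The active log stays at ${zookeeper.log.dir}/${zookeeper.log.file};
# on rollover, yesterday's file gets the '.'yyyy-MM-dd suffix appended.
log_dir=/opt/cloud/logs/zookeeper
log_file=zookeeper.log
rolled="${log_dir}/${log_file}.$(date +%Y-%m-%d)"
echo "$rolled"
```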
scp /opt/cloud/etc/zookeeper/zoo.cfg hadoop2:/opt/cloud/etc/zookeeper
scp /opt/cloud/etc/zookeeper/log4j.properties hadoop2:/opt/cloud/etc/zookeeper
scp /opt/cloud/etc/zookeeper/zoo.cfg hadoop3:/opt/cloud/etc/zookeeper
scp /opt/cloud/etc/zookeeper/log4j.properties hadoop3:/opt/cloud/etc/zookeeper
Create a myid file under the dataDir directory on each server. In each file, write the value A from the matching server.A entry in zoo.cfg, so the content differs per machine.
ssh hadoop1 'echo 1 >/opt/cloud/data/zookeeper/dat/myid'
ssh hadoop2 'echo 2 >/opt/cloud/data/zookeeper/dat/myid'
ssh hadoop3 'echo 3 >/opt/cloud/data/zookeeper/dat/myid'
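The id assignment can also be sketched as a loop, deriving each myid from the host's position in the server.N list; the actual ssh call is left commented out since it needs passwordless ssh to the cluster:

```shell
# Derive each myid from the host's position in the server.N list above.
id=0
for h in hadoop1 hadoop2 hadoop3; do
  id=$((id + 1))
  echo "$h -> myid=$id"
  # Real run (requires passwordless ssh):
  # ssh "$h" "echo $id > /opt/cloud/data/zookeeper/dat/myid"
done
```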
vi ~/.bashrc
Add:
export ZOO_HOME=/opt/cloud/bin/zookeeper
export ZOOCFGDIR=${ZOO_HOME}/conf
export ZOO_LOG_DIR=/opt/cloud/logs/zookeeper
export PATH=$ZOO_HOME/bin:$PATH
Apply it immediately:
source ~/.bashrc
Copy to the other two servers:
scp ~/.bashrc hadoop2:/home/hadoop
scp ~/.bashrc hadoop3:/home/hadoop
1. Start
zkServer.sh start
2. Run the jps command to check the processes
QuorumPeerMain
Jps
Here, QuorumPeerMain is the zookeeper process, so startup succeeded.
3. Stop the zookeeper process
zkServer.sh stop
4. Start the zookeeper cluster
[hadoop@hadoop1 ~]$ cexec 'zkServer.sh start'
************************* cloud *************************
--------- hadoop1---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
--------- hadoop2---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
--------- hadoop3---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
5. Check the zookeeper cluster status
[hadoop@hadoop1 ~]$ cexec 'zkServer.sh status'
************************* cloud *************************
--------- hadoop1---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Mode: follower
--------- hadoop2---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Mode: follower
--------- hadoop3---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Mode: leader
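The Mode line can also be read over the client port with the 'srvr' four-letter command (echo srvr | nc host 2181), which is handy when cexec is not available. A parsing sketch; the sample output below is illustrative, not captured from this cluster:

```shell
# Parse the Mode: line out of 'srvr' output (sample text is illustrative).
sample='Zookeeper version: 3.4.9
Latency min/avg/max: 0/0/0
Mode: leader
Node count: 4'
mode=$(printf '%s\n' "$sample" | awk '/^Mode:/ {print $2}')
echo "$mode"
# Live check against a node: echo srvr | nc hadoop1 2181 | awk '/^Mode:/ {print $2}'
```

A healthy three-node ensemble reports exactly one leader and two followers.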
6. Start the client shell
zkCli.sh
ls /zookeeper
ls /zookeeper/quota
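Beyond listing the built-in /zookeeper subtree, a short create/read/delete round trip is a quick smoke test of the cluster. A session sketch, typed inside the zkCli shell (the /demo znode and its value are hypothetical):

```
zkCli.sh -server hadoop1:2181
ls /
create /demo "hello"
get /demo
delete /demo
quit
```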
vi /opt/cloud/bin/zookeeper/bin/zkServer.sh
Find:
nohup "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
and replace it with:
nohup "$JAVA" "-Dlog4j.configuration=file:${ZOOCFGDIR}/log4j.properties" \
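The same edit can be applied non-interactively with sed. The sketch below runs the substitution against a stand-in copy of the line rather than the real file, so the pattern can be checked before touching zkServer.sh:

```shell
# Stand-in for the original nohup line in zkServer.sh:
line='nohup "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \'
# Swap the two -D flags for the single log4j.configuration flag:
patched=$(printf '%s\n' "$line" | sed 's|"-Dzookeeper\.log\.dir=.*ZOO_LOG4J_PROP}"|"-Dlog4j.configuration=file:${ZOOCFGDIR}/log4j.properties"|')
echo "$patched"
# Once verified, the same sed expression can be run with -i on
# /opt/cloud/bin/zookeeper/bin/zkServer.sh (back the file up first).
```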
Copy to the other two servers:
scp /opt/cloud/bin/zookeeper/bin/zkEnv.sh hadoop2:/opt/cloud/bin/zookeeper/bin/
scp /opt/cloud/bin/zookeeper/bin/zkServer.sh hadoop2:/opt/cloud/bin/zookeeper/bin/
scp /opt/cloud/bin/zookeeper/bin/zkEnv.sh hadoop3:/opt/cloud/bin/zookeeper/bin/
scp /opt/cloud/bin/zookeeper/bin/zkServer.sh hadoop3:/opt/cloud/bin/zookeeper/bin/
vi /etc/systemd/system/zookeeper.service
[Unit]
Description=Zookeeper service
After=network.target

[Service]
User=hadoop
Group=hadoop
Type=forking
Environment=ZOO_HOME=/opt/cloud/bin/zookeeper
Environment=ZOOCFGDIR=/opt/cloud/bin/zookeeper/conf
Environment=ZOO_LOG_DIR=/opt/cloud/logs/zookeeper
ExecStart=/usr/bin/sh -c '/opt/cloud/bin/zookeeper/bin/zkServer.sh start'
ExecStop=/usr/bin/sh -c '/opt/cloud/bin/zookeeper/bin/zkServer.sh stop'

[Install]
WantedBy=multi-user.target
Copy to the other two servers:
scp /etc/systemd/system/zookeeper.service hadoop2:/etc/systemd/system/
scp /etc/systemd/system/zookeeper.service hadoop3:/etc/systemd/system/
Reload the unit files: systemctl daemon-reload
Start zookeeper: systemctl start zookeeper
Stop zookeeper: systemctl stop zookeeper
Check process status and logs (important): systemctl status zookeeper
Enable start at boot: systemctl enable zookeeper
Disable start at boot: systemctl disable zookeeper
Start the service and set it to start automatically at boot:
systemctl daemon-reload
systemctl start zookeeper
systemctl status zookeeper
systemctl enable zookeeper
root用戶操做
systemctl stop zookeeper
systemctl disable zookeeper
rm -f /etc/systemd/system/zookeeper.service
vi ~/.bashrc
Remove the zookeeper-related lines.
rm -rf /opt/cloud/bin/zookeeper/
rm -rf /opt/cloud/data/zookeeper/
rm -rf /opt/cloud/logs/zookeeper/
rm -rf /opt/cloud/packages/zookeeper-3.4.9/
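The same cleanup has to happen on hadoop2 and hadoop3 as well. A replay sketch over ssh; the leading echo keeps it a dry run that only prints the commands until you are sure:

```shell
# Paths removed by the uninstall above, replayed on the other servers.
paths="/opt/cloud/bin/zookeeper /opt/cloud/data/zookeeper /opt/cloud/logs/zookeeper /opt/cloud/packages/zookeeper-3.4.9"
for h in hadoop2 hadoop3; do
  echo ssh "$h" "rm -rf $paths"   # drop the leading echo to actually run
done
```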