Looking at the Hadoop 2.7.3 documentation, we can see that Hadoop 2.7.3 uses ZooKeeper 3.4.2 for its high-availability setup, so we follow the official Hadoop guidance and install ZooKeeper 3.4.2 as well. Go to the official site and download ZooKeeper 3.4.2.

Official site: https://zookeeper.apache.org/

Click Download.
```shell
#1. Extract the ZooKeeper archive to /opt/bigdata/
[root@node1 ~]# tar -xzvf zookeeper-3.4.2.tar.gz -C /opt/bigdata/
#2. Switch to the bigdata directory
[root@node1 ~]# cd /opt/bigdata/
#3. As with the Hadoop installation, change the owner and group of the
#   ZooKeeper installation directory to hadoop:hadoop
[root@node1 bigdata]# chown -R hadoop:hadoop zookeeper-3.4.2/
#4. Set read/write permissions on the installation directory
[root@node1 bigdata]# chmod -R 755 zookeeper-3.4.2/
```

```shell
#1. Switch to the hadoop user and go to its home directory
[root@node1 bigdata]# su - hadoop
Last login: Thu Jul 18 16:07:39 CST 2019 on pts/0
#2. Edit the hadoop user's environment file
[hadoop@node1 ~]$ vi .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
JAVA_HOME=/usr/java/jdk1.8.0_211-amd64
HADOOP_HOME=/opt/bigdata/hadoop-2.7.3
SPARK_HOME=/opt/spark-2.4.3-bin-hadoop2.7
M2_HOME=/opt/apache-maven-3.0.5
#3. Add the ZOOKEEPER_HOME environment variable
ZOOKEEPER_HOME=/opt/bigdata/zookeeper-3.4.2/
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$M2_HOME/bin
#4. Append ZooKeeper's bin directory to PATH
PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin
export JAVA_HOME
export HADOOP_HOME
export M2_HOME
export SPARK_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
#5. Export ZOOKEEPER_HOME
export ZOOKEEPER_HOME
#6. Save the changes
:wq!
#7. Reload the environment file
[hadoop@node1 ~]$ source .bash_profile
#8. Type zk and press the Tab key
[hadoop@node1 ~]$ zk
# Completion output like the following means the ZooKeeper configuration works:
zkCleanup.sh   zkCli.cmd      zkCli.sh       zkEnv.cmd      zkEnv.sh       zkServer.cmd   zkServer.sh
[hadoop@node1 ~]$ zk
```
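Tab completion is one way to confirm the profile took effect; the same check can also be done non-interactively. A minimal sketch, re-creating the two relevant assignments with the paths used in this guide and confirming ZooKeeper's bin directory ends up on PATH:

```shell
# Re-create the relevant assignments from .bash_profile (same values as above)
ZOOKEEPER_HOME=/opt/bigdata/zookeeper-3.4.2/
PATH=$PATH:$ZOOKEEPER_HOME/bin
export ZOOKEEPER_HOME PATH
# Confirm the bin directory is now on PATH
case ":$PATH:" in
  *":$ZOOKEEPER_HOME/bin:"*) echo "PATH ok" ;;           # prints: PATH ok
  *) echo "PATH missing zookeeper bin" ;;
esac
```

On a live node you would simply run `echo $ZOOKEEPER_HOME` and `which zkServer.sh` after sourcing the profile.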
Change to the conf directory under the ZooKeeper installation directory and copy zoo_sample.cfg to zoo.cfg.

```shell
[hadoop@node1 ~]$ cd /opt/bigdata/zookeeper-3.4.2/conf/
[hadoop@node1 conf]$ ll
total 12
-rwxr-xr-x 1 hadoop hadoop  535 Dec 22  2011 configuration.xsl
-rwxr-xr-x 1 hadoop hadoop 2161 Dec 22  2011 log4j.properties
-rwxr-xr-x 1 hadoop hadoop  808 Dec 22  2011 zoo_sample.cfg
#1. Copy the zoo_sample.cfg template to the real configuration file zoo.cfg
[hadoop@node1 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@node1 conf]$ ll
total 16
-rwxr-xr-x 1 hadoop hadoop  535 Dec 22  2011 configuration.xsl
-rwxr-xr-x 1 hadoop hadoop 2161 Dec 22  2011 log4j.properties
-rwxr-xr-x 1 hadoop hadoop  808 Jul 19 11:20 zoo.cfg
-rwxr-xr-x 1 hadoop hadoop  808 Dec 22  2011 zoo_sample.cfg
```
Edit zoo.cfg: set dataDir=/var/lib/zookeeper, and append the following at the end of the file:
```
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
```
Remember to save the file after editing. The full zoo.cfg then looks like this:
```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
# Remember to save after editing
```
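In each server.N entry, 2888 is the port followers use to connect to the leader and 3888 is the leader-election port, while clients connect on clientPort 2181. The ensemble can only serve requests while a majority of its servers are up. A small sketch of that arithmetic, run against a scratch copy of the quorum settings (written to /tmp purely for illustration):

```shell
# Write a scratch copy of the quorum-related settings from this guide
cat > /tmp/zoo-demo.cfg <<'EOF'
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
EOF
# Count the participants and compute the majority needed for a quorum
servers=$(grep -c '^server\.' /tmp/zoo-demo.cfg)
echo "servers=$servers quorum=$((servers / 2 + 1))"   # prints: servers=3 quorum=2
```

With three servers the cluster tolerates the loss of one node; losing two stops the ensemble even if one process is still running.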
On nodes node1, node2, and node3, create a myid file under /var/lib/zookeeper (the directory configured as dataDir), containing 1, 2, and 3 respectively. Switch to the root user to create the zookeeper directory under /var/lib: the hadoop user has no write permission on /var/lib, so creating the directory requires root (which has full privileges).

```shell
#1. Switch to the root user
[hadoop@node1 conf]$ su - root
Password:
Last login: Fri Jul 19 10:53:59 CST 2019 from 192.168.200.1 on pts/0
#2. Create the zookeeper directory
[root@node1 ~]# mkdir -p /var/lib/zookeeper
#3. Enter the /var/lib/zookeeper/ directory
[root@node1 ~]# cd /var/lib/zookeeper/
#4. Create the myid file
[root@node1 zookeeper]# touch myid
#5. Edit myid and enter 1. We are editing node1's myid here; node2's myid
#   must contain 2 and node3's must contain 3.
[root@node1 zookeeper]# vi myid
#6. Verify that myid contains 1
[root@node1 zookeeper]# cat myid
1
```
```shell
#1. After creating the files, fix the group and permissions of the zookeeper directory
[root@node1 zookeeper]# cd ..
#2. Change the owner and group of the zookeeper directory
[root@node1 lib]# chown -R hadoop:hadoop zookeeper/
#3. Set its permissions to 755
[root@node1 lib]# chmod -R 755 zookeeper/
```
```shell
#1. Copy the /var/lib/zookeeper directory to /var/lib on node2 and node3
[root@node1 lib]# scp -r zookeeper node2:$PWD
[root@node1 lib]# scp -r zookeeper node3:$PWD
#2. Copy the ZooKeeper installation directory to /opt/bigdata on node2 and node3
[root@node1 lib]# scp -r /opt/bigdata/zookeeper-3.4.2/ node2:/opt/bigdata/
[root@node1 lib]# scp -r /opt/bigdata/zookeeper-3.4.2/ node3:/opt/bigdata/
```
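Each myid value must line up with the matching server.N entry in zoo.cfg: node1 holds 1 because it is server.1, and so on. The mapping can be sketched locally (a /tmp stand-in is used here instead of /var/lib/zookeeper so no root access is needed):

```shell
# One directory per node stands in for each node's /var/lib/zookeeper
for i in 1 2 3; do
  mkdir -p "/tmp/zk-myid-demo/node$i"
  echo "$i" > "/tmp/zk-myid-demo/node$i/myid"
done
cat /tmp/zk-myid-demo/node2/myid   # prints: 2
```

On the real cluster the same effect comes from editing /var/lib/zookeeper/myid on node2 and node3, which the next steps do by hand.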
Fix the ZooKeeper directory permissions on node2:

```shell
#1. Fix the owner, group, and permissions of the myid configuration directory
[root@node2 lib]# cd ~
[root@node2 ~]# chown -R hadoop:hadoop /var/lib/zookeeper
[root@node2 ~]# chmod -R 755 /var/lib/zookeeper
#2. Fix the owner, group, and permissions of the installation directory
[root@node2 ~]# chown -R hadoop:hadoop /opt/bigdata/zookeeper-3.4.2/
[root@node2 ~]# chmod -R 755 /opt/bigdata/zookeeper-3.4.2/
```
Fix the ZooKeeper directory permissions on node3:

```shell
#1. Fix the owner, group, and permissions of the myid configuration directory
[root@node3 bigdata]# cd ~
[root@node3 ~]# chown -R hadoop:hadoop /var/lib/zookeeper
[root@node3 ~]# chmod -R 755 /var/lib/zookeeper
#2. Fix the owner, group, and permissions of the installation directory
[root@node3 ~]# chown -R hadoop:hadoop /opt/bigdata/zookeeper-3.4.2/
[root@node3 ~]# chmod -R 755 /opt/bigdata/zookeeper-3.4.2/
```
Change node2's myid content to 2:

```shell
[root@node2 ~]# vi /var/lib/zookeeper/myid
[root@node2 ~]# cat /var/lib/zookeeper/myid
2
```
Change node3's myid content to 3:

```shell
[root@node3 ~]# vi /var/lib/zookeeper/myid
[root@node3 ~]# cat /var/lib/zookeeper/myid
3
```
From node1, copy the hadoop user's environment file directly to the hadoop user's home directory on node2 and node3.

```shell
#1. If you are currently logged in as root, switch to the hadoop user first;
#   if you are already the hadoop user, change to the hadoop home directory
#   before copying the environment file.
[root@node1 lib]# su - hadoop
Last login: Fri Jul 19 11:08:44 CST 2019 on pts/0
[hadoop@node1 ~]$ scp .bash_profile node2:$PWD
.bash_profile                                 100%  681    64.8KB/s   00:00
[hadoop@node1 ~]$ scp .bash_profile node3:$PWD
.bash_profile                                 100%  681   156.8KB/s   00:00
```
Activate the environment variables for the hadoop user on node2:

```shell
# Note: run this as the hadoop user
#1. Reload the environment file
[hadoop@node2 ~]$ source .bash_profile
#2. Type zk and press the Tab key
[hadoop@node2 ~]$ zk
#3. Completion output like the following means the ZooKeeper environment
#   variables are configured correctly.
zkCleanup.sh  zkCli.sh      zkEnv.sh      zkServer.sh
zkCli.cmd     zkEnv.cmd     zkServer.cmd
[hadoop@node2 ~]$ zk
```
Activate the environment variables for the hadoop user on node3:

```shell
# Note: switch to the hadoop user first
[root@node3 bigdata]# su - hadoop
Last login: Thu Jul 18 15:37:50 CST 2019 on :0
#1. Reload the environment file
[hadoop@node3 ~]$ source .bash_profile
#2. Type zk and press the Tab key
[hadoop@node3 ~]$ zk
#3. Completion output like the following means the ZooKeeper environment
#   variables are configured correctly.
zkCleanup.sh  zkCli.sh      zkEnv.sh      zkServer.sh
zkCli.cmd     zkEnv.cmd     zkServer.cmd
[hadoop@node3 ~]$ zk
```
The ZooKeeper cluster has to be started manually on each of the three machines in turn; before starting, switch to the hadoop user on all three.
Start ZooKeeper on node1:

```shell
[hadoop@node1 ~]$ zkServer.sh start
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
```
Start ZooKeeper on node2:

```shell
[hadoop@node2 ~]$ zkServer.sh start
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
```
Start ZooKeeper on node3:

```shell
[hadoop@node3 ~]$ zkServer.sh start
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
```
Run zkServer.sh status on each of the three nodes to check their state.
Check on node1:

```shell
[hadoop@node1 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Mode: follower
```
Check on node2:

```shell
[hadoop@node2 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Mode: follower
```
Check on node3:

```shell
[hadoop@node3 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Mode: leader
```
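A healthy ensemble reports exactly one leader, with the remaining nodes as followers (which node wins the election can differ from run to run, so node3 being leader here is not significant). A sketch of that invariant, checked against the three status outputs captured above:

```shell
# Mode lines as reported by the three nodes above
cat > /tmp/zk-status-demo.txt <<'EOF'
node1: Mode: follower
node2: Mode: follower
node3: Mode: leader
EOF
grep -c 'Mode: leader' /tmp/zk-status-demo.txt   # prints: 1
```

If the count is ever 0 across all nodes, the ensemble has no quorum (or the status probe itself is broken, as described in the troubleshooting section below).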
This completes the installation of the ZooKeeper cluster.
The Hadoop 2.7.3 documentation calls for ZooKeeper 3.4.2, but that release is quite old. After starting ZooKeeper, jps or ps -ef | grep zookeeper may show the ZooKeeper process running normally, while zkServer.sh status still reports an error, and restarting any number of times gives the same result:

```shell
[hadoop@node1 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
[hadoop@node2 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
[hadoop@node3 bin]$ zkServer.sh status
JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.4.2/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
```
This is usually caused by one of two things:

1. The nc tool is not installed on CentOS 7.
2. The nc invocation in the ZooKeeper startup script uses a flag that is invalid on some Linux distributions, so the status query fails or returns an empty result.
Solution:

1. Install nc on all three nodes with yum:

```shell
yum install nc -y
```

2. Edit the zkServer.sh script in the bin directory under the ZooKeeper installation directory.

After the script is fixed, running zkServer.sh status shows the ZooKeeper state correctly.
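The exact edit depends on your copy of the script, but the widely reported culprit in this version of zkServer.sh is a status probe of the form `echo stat | nc -q 1 localhost <clientPort>`: the nmap-ncat `nc` shipped with CentOS 7 does not support the `-q` flag, so the probe fails. Removing the flag lets the probe return the Mode line. A sketch of the substitution on a scratch copy (the quoted line is an assumption; check the actual line in your script before editing it):

```shell
# Scratch file standing in for zkServer.sh (the real script lives under
# /opt/bigdata/zookeeper-3.4.2/bin/); the line below is the assumed shape.
printf 'STAT=`echo stat | nc -q 1 localhost $clientPort 2> /dev/null | grep Mode`\n' \
  > /tmp/zkServer-demo.sh
# Drop the unsupported -q flag
sed -i 's/nc -q 1/nc/' /tmp/zkServer-demo.sh
cat /tmp/zkServer-demo.sh
```

On the real cluster, apply the same substitution to zkServer.sh on all three nodes, then rerun zkServer.sh status.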