Hadoop Ecosystem - Fully Distributed ZooKeeper Deployment
Author: Yin Zhengjie
Copyright notice: this is an original work; reproduction without permission is prohibited and will be pursued legally.
This deployment builds on the Hadoop high-availability setup. For the Hadoop HA deployment itself, see: https://www.cnblogs.com/yinzhengjie/p/9070017.html. This post combines that Hadoop HA configuration with a fully distributed ZooKeeper cluster.
一. Distributed coordination frameworks
1>. Benefits of a distributed framework
a>. Reliability: the crash of one or a few nodes does not bring down the whole cluster.
b>. Scalability: cluster performance can be improved by dynamically adding hosts and changing only a small amount of configuration.
c>. Transparency: the complexity of the system is hidden, so that it appears to users as a single application.
2>. Drawbacks of a distributed framework
a>. Race conditions: one or more hosts try to run an application that only one host should run.
b>. Deadlock: two processes each wait for the other to finish.
c>. Inconsistency: partial loss of data.
二. ZooKeeper overview
1>. What ZooKeeper is for
Answer: ZooKeeper emerged to address these drawbacks of distributed coordination; it is currently one of the most popular distributed coordination frameworks in big data.
2>.Zookeeper的特點
a>. Naming service: identifies all nodes in the cluster; a node can register with it and obtain a unique identifier.
b>. Configuration management: stores configuration data so it can be shared.
c>. Leader election: elects a leader and followers according to its own algorithm.
d>. Locking and synchronization service: when a file is being modified it is locked, preventing multiple users from writing to it at the same time.
e>. Highly available data registry: a znode cannot hold more than 1 MB of data, and the rule is applied recursively; in other words, each node together with its children cannot exceed 1 MB of storage.
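The naming and configuration-management roles above boil down to reading and writing znodes. Below is a minimal zkCli.sh sketch, not part of the original deployment steps: it assumes the ensemble from section 三 is already running and that the ZooKeeper bin directory is on the PATH (via /etc/profile, as distributed later); the /myapp znode is just an example name.
[yinzhengjie@s101 ~]$ zkCli.sh -server s102:2181
# inside the zkCli shell:
create /myapp "cluster-wide-config"     # register a named znode holding shared configuration
get /myapp                              # every client connected to any ensemble member sees the same data
ls /                                    # list the top-level znodes
quit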
3>. Architecture diagram of the ZooKeeper deployment in this post
4>. How leader election works
When electing the leader, the servers first compare their transaction ids (zxid) and then their myid; once more than half of the machines in the cluster agree on a candidate, that candidate becomes the leader of the whole cluster.
To make this easier to understand, I drew two diagrams for the sentence above. When the transaction ids are the same, the leader is elected as follows:
When the transaction ids differ, the leader is elected as follows:
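The zxid and role that the diagrams compare can also be observed on a running ensemble (section 三 below) with ZooKeeper's four-letter-word commands sent to the client port. A hedged sketch, assuming nc is installed:
[yinzhengjie@s101 ~]$ echo srvr | nc s103 2181
# the reply includes a "Zxid: 0x..." line and a "Mode: leader" (or "Mode: follower") line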
三. Fully distributed ZooKeeper deployment
1>. Download the ZooKeeper installation package
[yinzhengjie@s101 data]$ sudo yum -y install wget [sudo] password for yinzhengjie: Loaded plugins: fastestmirror Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast base | 3.6 kB 00:00:00 extras | 3.4 kB 00:00:00 updates | 3.4 kB 00:00:00 (1/2): extras/7/x86_64/primary_db | 149 kB 00:00:00 (2/2): updates/7/x86_64/primary_db | 2.7 MB 00:00:07 Determining fastest mirrors * base: mirrors.huaweicloud.com * extras: mirrors.huaweicloud.com * updates: mirrors.huaweicloud.com Resolving Dependencies --> Running transaction check ---> Package wget.x86_64 0:1.14-15.el7_4.1 will be installed --> Finished Dependency Resolution Dependencies Resolved =============================================================================================================================== Package Arch Version Repository Size =============================================================================================================================== Installing: wget x86_64 1.14-15.el7_4.1 base 547 k Transaction Summary =============================================================================================================================== Install 1 Package Total download size: 547 k Installed size: 2.0 M Downloading packages: wget-1.14-15.el7_4.1.x86_64.rpm | 547 kB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : wget-1.14-15.el7_4.1.x86_64 1/1 Verifying : wget-1.14-15.el7_4.1.x86_64 1/1 Installed: wget.x86_64 0:1.14-15.el7_4.1 Complete! [yinzhengjie@s101 data]$
[yinzhengjie@s101 data]$ wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz --2018-06-20 07:01:38-- http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz Resolving mirrors.hust.edu.cn (mirrors.hust.edu.cn)... 202.114.18.160 Connecting to mirrors.hust.edu.cn (mirrors.hust.edu.cn)|202.114.18.160|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 36667596 (35M) [application/octet-stream] Saving to: ‘zookeeper-3.4.12.tar.gz’ 100%[=====================================================================================>] 36,667,596 132KB/s in 4m 4s 2018-06-20 07:05:43 (146 KB/s) - ‘zookeeper-3.4.12.tar.gz’ saved [36667596/36667596] [yinzhengjie@s101 data]$ ll total 35848 -rw-r--r--. 1 yinzhengjie yinzhengjie 35358 Jun 1 08:48 seq -rw-rw-r--. 1 yinzhengjie yinzhengjie 36667596 Apr 25 08:47 zookeeper-3.4.12.tar.gz [yinzhengjie@s101 data]$
2>. Edit the configuration file
[yinzhengjie@s101 ~]$ cat /soft/zk/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/yinzhengjie/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.102=s102:2888:3888
server.103=s103:2888:3888
server.104=s104:2888:3888
[yinzhengjie@s101 ~]$
3>. Distribute the configuration files
[yinzhengjie@s101 ~]$ more `which xrsync.sh`
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed any arguments
if [ $# -lt 1 ];then
        echo "Please pass an argument";
        exit
fi

#Get the file path
file=$@

#Get the base name
filename=`basename $file`

#Get the parent directory
dirpath=`dirname $file`

#Get the absolute path
cd $dirpath
fullpath=`pwd -P`

#Sync the file to the DataNodes
for (( i=102;i<=105;i++ ))
do
        #Turn the terminal output green
        tput setaf 2
        echo =========== s$i %file ===========
        #Restore the terminal colour (light grey)
        tput setaf 7
        #Run the command remotely
        rsync -lr $filename `whoami`@s$i:$fullpath
        #Check whether the command succeeded
        if [ $? == 0 ];then
                echo "Command executed successfully"
        fi
done
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ xrsync.sh /soft/zk
=========== s102 %file ===========
Command executed successfully
=========== s103 %file ===========
Command executed successfully
=========== s104 %file ===========
Command executed successfully
=========== s105 %file ===========
Command executed successfully
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ xrsync.sh /soft/zookeeper-3.4.12/
=========== s102 %file ===========
Command executed successfully
=========== s103 %file ===========
Command executed successfully
=========== s104 %file ===========
Command executed successfully
=========== s105 %file ===========
Command executed successfully
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ su root
Password:
[root@s101 yinzhengjie]# xrsync.sh /etc/profile
=========== s102 %file ===========
Command executed successfully
=========== s103 %file ===========
Command executed successfully
=========== s104 %file ===========
Command executed successfully
=========== s105 %file ===========
Command executed successfully
[root@s101 yinzhengjie]# exit
exit
[yinzhengjie@s101 ~]$
4>. Register the ID information on each ZK node
[yinzhengjie@s101 ~]$ more `which xcall.sh`
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed any arguments
if [ $# -lt 1 ];then
        echo "Please pass an argument"
        exit
fi

#Get the command the user passed in
cmd=$@

for (( i=101;i<=105;i++ ))
do
        #Turn the terminal output green
        tput setaf 2
        echo ============= s$i $cmd ============
        #Restore the terminal colour (light grey)
        tput setaf 7
        #Run the command remotely
        ssh s$i $cmd
        #Check whether the command succeeded
        if [ $? == 0 ];then
                echo "Command executed successfully"
        fi
done
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ xcall.sh mkdir ~/zookeeper
============= s101 mkdir /home/yinzhengjie/zookeeper ============
Command executed successfully
============= s102 mkdir /home/yinzhengjie/zookeeper ============
Command executed successfully
============= s103 mkdir /home/yinzhengjie/zookeeper ============
Command executed successfully
============= s104 mkdir /home/yinzhengjie/zookeeper ============
Command executed successfully
============= s105 mkdir /home/yinzhengjie/zookeeper ============
Command executed successfully
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ xcall.sh touch ~/zookeeper/myid
============= s101 touch /home/yinzhengjie/zookeeper/myid ============
Command executed successfully
============= s102 touch /home/yinzhengjie/zookeeper/myid ============
Command executed successfully
============= s103 touch /home/yinzhengjie/zookeeper/myid ============
Command executed successfully
============= s104 touch /home/yinzhengjie/zookeeper/myid ============
Command executed successfully
============= s105 touch /home/yinzhengjie/zookeeper/myid ============
Command executed successfully
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ for (( i=102;i<=104;i++ )) do ssh s$i "echo -n $i > ~/zookeeper/myid" ;done
[yinzhengjie@s101 ~]$

[yinzhengjie@s102 ~]$ more /home/yinzhengjie/zookeeper/myid
102
[yinzhengjie@s102 ~]$

[yinzhengjie@s103 ~]$ more /home/yinzhengjie/zookeeper/myid
103
[yinzhengjie@s103 ~]$

[yinzhengjie@s104 ~]$ more /home/yinzhengjie/zookeeper/myid
104
[yinzhengjie@s104 ~]$
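Note that each myid value must match the index used in the server.X lines of zoo.cfg (server.102/103/104 above). Instead of logging in to every host, the same check can be run from s101 in one pass; a small sketch:
[yinzhengjie@s101 ~]$ for (( i=102;i<=104;i++ )); do echo -n "s$i: "; ssh s$i "cat ~/zookeeper/myid"; echo; done
# expected: s102: 102, s103: 103, s104: 104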
5>. Write the ZooKeeper startup script ("/usr/local/bin/xzk.sh"; do not forget to make it executable)
[yinzhengjie@s101 ~]$ more /usr/local/bin/xzk.sh
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed exactly one argument
if [ $# -ne 1 ];then
    echo "Invalid argument. Usage: $0 {start|stop|restart|status}"
    exit
fi

#Get the command the user passed in
cmd=$1

#Dispatch function
function zookeeperManager(){
    case $cmd in
        start)
            echo "Starting the service"
            remoteExecution start
            ;;
        stop)
            echo "Stopping the service"
            remoteExecution stop
            ;;
        restart)
            echo "Restarting the service"
            remoteExecution restart
            ;;
        status)
            echo "Checking status"
            remoteExecution status
            ;;
        *)
            echo "Invalid argument. Usage: $0 {start|stop|restart|status}"
            ;;
    esac
}

#Run the requested zkServer.sh action on every ZooKeeper node
function remoteExecution(){
    for (( i=102 ; i<=104 ; i++ )) ; do
            tput setaf 2
            echo ========== s$i zkServer.sh $1 ================
            tput setaf 9
            ssh s$i "source /etc/profile ; zkServer.sh $1"
    done
}

#Call the dispatch function
zookeeperManager
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ sudo chmod +x /usr/local/bin/xzk.sh
[yinzhengjie@s101 ~]$
6>. Start ZooKeeper
[yinzhengjie@s101 ~]$ xzk.sh start
Starting the service
========== s102 zkServer.sh start ================
ZooKeeper JMX enabled by default
Using config: /soft/zk/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
========== s103 zkServer.sh start ================
ZooKeeper JMX enabled by default
Using config: /soft/zk/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
========== s104 zkServer.sh start ================
ZooKeeper JMX enabled by default
Using config: /soft/zk/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ xzk.sh status
Checking status
========== s102 zkServer.sh status ================
ZooKeeper JMX enabled by default
Using config: /soft/zk/bin/../conf/zoo.cfg
Mode: follower
========== s103 zkServer.sh status ================
ZooKeeper JMX enabled by default
Using config: /soft/zk/bin/../conf/zoo.cfg
Mode: leader
========== s104 zkServer.sh status ================
ZooKeeper JMX enabled by default
Using config: /soft/zk/bin/../conf/zoo.cfg
Mode: follower
[yinzhengjie@s101 ~]$
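Besides zkServer.sh status, each server can be probed through its client port with ZooKeeper's four-letter-word commands. A quick health check with ruok, assuming nc is available on s101 (not part of the original steps):
[yinzhengjie@s101 ~]$ for i in 102 103 104; do echo -n "s$i: "; echo ruok | nc s$i 2181; echo; done
# every healthy server answers "imok"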
四. HDFS high-availability configuration
1>. Start the ZooKeeper cluster ([yinzhengjie@s101 ~]$ xzk.sh start)
For how to start it, see section "三. Fully distributed ZooKeeper deployment" above. Below we check whether the ZooKeeper processes on the cluster started successfully:
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
3217 Jps
Command executed successfully
============= s102 jps ============
2712 Jps
2558 QuorumPeerMain
Command executed successfully
============= s103 jps ============
2584 QuorumPeerMain
2751 Jps
Command executed successfully
============= s104 jps ============
2737 Jps
2575 QuorumPeerMain
Command executed successfully
============= s105 jps ============
2561 Jps
Command executed successfully
[yinzhengjie@s101 ~]$
2>. Modify Hadoop's core configuration files
[yinzhengjie@s101 ~]$ more /soft/hadoop/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/yinzhengjie/ha/dfs/name1,/home/yinzhengjie/ha/dfs/name2</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/yinzhengjie/ha/dfs/data1,/home/yinzhengjie/ha/dfs/data2</value>
        </property>

        <!-- High-availability settings -->
        <property>
                <name>dfs.nameservices</name>
                <value>mycluster</value>
        </property>
        <property>
                <name>dfs.ha.namenodes.mycluster</name>
                <value>nn1,nn2</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.mycluster.nn1</name>
                <value>s101:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.mycluster.nn2</name>
                <value>s105:8020</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.mycluster.nn1</name>
                <value>s101:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.mycluster.nn2</name>
                <value>s105:50070</value>
        </property>
        <property>
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://s102:8485;s103:8485;s104:8485/mycluster</value>
        </property>
        <property>
                <name>dfs.client.failover.proxy.provider.mycluster</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>

        <!-- Fence the active namenode when a failover occurs -->
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>
                        sshfence
                        shell(/bin/true)
                </value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/home/yinzhengjie/.ssh/id_rsa</value>
        </property>
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
</configuration>

<!--
What hdfs-site.xml is for:
    #HDFS-related settings, such as the number of file replicas, the block size, and whether to enforce permissions. Parameters defined here override the defaults in hdfs-default.xml.

dfs.replication:
    #For availability and redundancy, HDFS stores multiple replicas of every block on multiple nodes; the default is 3. In a single-node pseudo-distributed environment one replica is enough, which is what this property controls. It is software-level redundancy.

dfs.namenode.name.dir:
    #Local directories where the NameNode stores the fsimage. Can be a comma-separated list; the fsimage is written to every directory for redundancy. Ideally the directories sit on different disks; if one disk fails the system keeps running and skips the bad disk. Since HA is in use, a single directory is usually enough; if you are particularly cautious you can configure two.

dfs.datanode.data.dir:
    #Local directories where the DataNode stores HDFS blocks. Can be a comma-separated list (typically one directory per disk). The directories are used in round-robin fashion: one block goes to one directory, the next block to the next, and so on. A block is stored only once on a given machine. Directories that do not exist are ignored, so they must be created beforehand.

dfs.nameservices:
    #Comma-separated list of nameservices.

dfs.ha.namenodes.mycluster:
    #dfs.ha.namenodes.[nameservice ID]: the unique identifiers of all NameNodes in the namespace, comma-separated. These names let the DataNodes know every NameNode in the cluster. Currently at most two NameNodes per nameservice are supported.

dfs.namenode.rpc-address.mycluster.nn1:
    #dfs.namenode.rpc-address.[nameservice ID].[name node ID]: the RPC address each NameNode listens on.

dfs.namenode.http-address.mycluster.nn1:
    #dfs.namenode.http-address.[nameservice ID].[name node ID]: the HTTP address each NameNode listens on.

dfs.namenode.shared.edits.dir:
    #The URI through which the NameNodes read and write the edit log on the JournalNode group. Format: "qjournal://host1:port1;host2:port2;host3:port3/journalId", where host1/host2/host3 are the JournalNode addresses; there must be an odd number of them, at least 3. journalId is a unique identifier for the cluster; multiple federated namespaces share the same journalId.

dfs.client.failover.proxy.provider.mycluster:
    #The Java class HDFS clients use to find and connect to the active NameNode.

dfs.ha.fencing.methods:
    #The method used to fence the failed active NameNode during a failover; its process usually needs to be shut down. The method can be sshfence or shell.

dfs.ha.fencing.ssh.private-key-files:
    #The SSH private key file used by sshfence. Required when sshfence is used.

dfs.ha.automatic-failover.enabled:
    #Enables automatic failover; it must be set to true!
-->
[yinzhengjie@s101 ~]$
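One point worth checking before relying on the configuration above: sshfence only works if the ZKFC on each NameNode host can reach the other NameNode over SSH with the configured private key and no password prompt. A hedged sanity check (run from s101; repeat the mirror image from s105):
[yinzhengjie@s101 ~]$ ssh -i /home/yinzhengjie/.ssh/id_rsa yinzhengjie@s105 hostname
# should print "s105" without asking for a password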
[yinzhengjie@s101 ~]$ more /soft/hadoop/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://mycluster</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/yinzhengjie/ha</value>
        </property>
        <property>
                <name>hadoop.http.staticuser.user</name>
                <value>yinzhengjie</value>
        </property>
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>s102:2181,s103:2181,s104:2181</value>
        </property>
</configuration>

<!--
What core-site.xml is for:
    Defines system-level parameters such as the HDFS URL, Hadoop's temporary directory, and settings used in rack-aware clusters. Parameters defined here override the defaults in core-default.xml.

fs.defaultFS:
    #The default path prefix clients use when connecting to HDFS. Since the nameservice ID mycluster was configured above, it can be used here as the authority part.

hadoop.tmp.dir:
    #The Hadoop working directory.

hadoop.http.staticuser.user:
    #The user name used when accessing data through the web UI.

ha.zookeeper.quorum:
    #The ZooKeeper nodes.
-->
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ more /soft/hadoop/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<configuration>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>s101</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>

<!--
What yarn-site.xml is for:
    #Mainly used for scheduler-level parameters.

yarn.resourcemanager.hostname:
    #The hostname of the ResourceManager.

yarn.nodemanager.aux-services:
    #Tells the NodeManager to run the mapreduce shuffle auxiliary service.
-->
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ more /soft/hadoop/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>

<!--
What mapred-site.xml is for:
    #MapReduce-related settings, such as the default number of reduce tasks and the default memory limits for tasks. Parameters defined here override the defaults in mapred-default.xml.

mapreduce.framework.name:
    #Selects the MapReduce execution framework. There are three choices: local, the first-generation MapReduce execution framework, and yarn (the second-generation framework). Here we use yarn, the current framework.
-->
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ more /soft/hadoop/etc/hadoop/slaves
#The DataNode (slave) hosts
s102
s103
s104
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ more /soft/hadoop/etc/hadoop/masters
#The NameNode (master) hosts
s101
s105
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ tail -7 /soft/hadoop/sbin/start-yarn.sh
# start resourceManager
#"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR start resourcemanager
"$bin"/yarn-daemons.sh --config $YARN_CONF_DIR --hosts masters start resourcemanager
# start nodeManager
"$bin"/yarn-daemons.sh --config $YARN_CONF_DIR start nodemanager
# start proxyserver
#"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR start proxyserver
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ tail -7 /soft/hadoop/sbin/stop-yarn.sh
# stop resourceManager
#"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR stop resourcemanager
"$bin"/yarn-daemons.sh --config $YARN_CONF_DIR --hosts masters stop resourcemanager
# stop nodeManager
"$bin"/yarn-daemons.sh --config $YARN_CONF_DIR stop nodemanager
# stop proxy server
"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR stop proxyserver
[yinzhengjie@s101 ~]$
3>. Sync the configuration files
[yinzhengjie@s101 ~]$ more `which xrsync.sh`
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed any arguments
if [ $# -lt 1 ];then
        echo "Please pass an argument";
        exit
fi

#Get the file path
file=$@

#Get the base name
filename=`basename $file`

#Get the parent directory
dirpath=`dirname $file`

#Get the absolute path
cd $dirpath
fullpath=`pwd -P`

#Sync the file to the DataNodes
for (( i=102;i<=105;i++ ))
do
        #Turn the terminal output green
        tput setaf 2
        echo =========== s$i %file ===========
        #Restore the terminal colour (light grey)
        tput setaf 7
        #Run the command remotely
        rsync -lr $filename `whoami`@s$i:$fullpath
        #Check whether the command succeeded
        if [ $? == 0 ];then
                echo "Command executed successfully"
        fi
done
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ xrsync.sh /soft/hadoop/etc/
=========== s102 %file ===========
Command executed successfully
=========== s103 %file ===========
Command executed successfully
=========== s104 %file ===========
Command executed successfully
=========== s105 %file ===========
Command executed successfully
[yinzhengjie@s101 ~]$
4>. Format the ZK data
[yinzhengjie@s101 ~]$ hdfs zkfc -formatZK 18/06/20 08:35:17 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at s101/172.16.30.101:8020 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:host.name=s101 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_131 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:java.home=/soft/jdk1.8.0_131/jre 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/soft/hadoop-2.7.3/etc/hadoop:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/had
oop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/soft/had
oop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapred
uce/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/contrib/capacity-scheduler/*.jar 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/soft/hadoop-2.7.3/lib/native 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA> 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-327.el7.x86_64 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:user.name=yinzhengjie 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/yinzhengjie 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/yinzhengjie 18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=s102:2181,s103:2181,s104:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@74fe5c40 18/06/20 08:35:18 INFO zookeeper.ClientCnxn: Opening socket connection to server s104/172.16.30.104:2181. Will not attempt to authenticate using SASL (unknown error) 18/06/20 08:35:18 INFO zookeeper.ClientCnxn: Socket connection established to s104/172.16.30.104:2181, initiating session 18/06/20 08:35:18 INFO zookeeper.ClientCnxn: Session establishment complete on server s104/172.16.30.104:2181, sessionid = 0x680000ca40a10000, negotiated timeout = 5000 18/06/20 08:35:18 INFO ha.ActiveStandbyElector: Session connected. 18/06/20 08:35:18 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK. 
18/06/20 08:35:18 INFO zookeeper.ZooKeeper: Session: 0x680000ca40a10000 closed 18/06/20 08:35:18 INFO zookeeper.ClientCnxn: EventThread shut down [yinzhengjie@s101 ~]$ [yinzhengjie@s101 ~]$ echo $? 0 [yinzhengjie@s101 ~]$
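The "Successfully created /hadoop-ha/mycluster in ZK" line can be double-checked from the ZooKeeper side. A hedged sketch with zkCli.sh against one ensemble member (if your zkCli.sh does not accept a command on the command line, run "ls /hadoop-ha" inside its interactive shell instead):
[yinzhengjie@s101 ~]$ zkCli.sh -server s102:2181 ls /hadoop-ha
# expected to list the nameservice, i.e. [mycluster]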
5>. Start HDFS
[yinzhengjie@s101 ~]$ start-dfs.sh
Starting namenodes on [s101 s105]
s101: starting namenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-namenode-s101.out
s105: starting namenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-namenode-s105.out
s104: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s104.out
s102: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s102.out
s103: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s103.out
Starting journal nodes [s102 s103 s104]
s104: starting journalnode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-journalnode-s104.out
s103: starting journalnode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-journalnode-s103.out
s102: starting journalnode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-journalnode-s102.out
Starting ZK Failover Controllers on NN hosts [s101 s105]
s101: starting zkfc, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-zkfc-s101.out
s105: starting zkfc, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-zkfc-s105.out
[yinzhengjie@s101 ~]$
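Before moving on it is handy to ask HDFS which NameNode actually became active; a short sketch with hdfs haadmin (nn1/nn2 are the IDs defined in hdfs-site.xml above):
[yinzhengjie@s101 ~]$ hdfs haadmin -getServiceState nn1
[yinzhengjie@s101 ~]$ hdfs haadmin -getServiceState nn2
# one of the two should report "active" and the other "standby"; which is which depends on the election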
6>. Verify Hadoop high availability
[yinzhengjie@s101 ~]$ more `which xcall.sh`
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed any arguments
if [ $# -lt 1 ];then
        echo "Please pass an argument"
        exit
fi

#Get the command the user passed in
cmd=$@

for (( i=101;i<=105;i++ ))
do
        #Turn the terminal output green
        tput setaf 2
        echo ============= s$i $cmd ============
        #Restore the terminal colour (light grey)
        tput setaf 7
        #Run the command remotely
        ssh s$i $cmd
        #Check whether the command succeeded
        if [ $? == 0 ];then
                echo "Command executed successfully"
        fi
done
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18710 NameNode
19080 Jps
19022 DFSZKFailoverController
Command executed successfully
============= s102 jps ============
2265 QuorumPeerMain
7739 Jps
7661 JournalNode
Command executed successfully
============= s103 jps ============
7704 Jps
2266 QuorumPeerMain
7627 JournalNode
Command executed successfully
============= s104 jps ============
2274 QuorumPeerMain
7715 Jps
7637 JournalNode
Command executed successfully
============= s105 jps ============
7777 DFSZKFailoverController
7668 NameNode
7849 Jps
Command executed successfully
[yinzhengjie@s101 ~]$
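With every process in place, the automatic failover itself can be exercised on a test cluster like this one: kill the active NameNode and watch the ZKFC promote the standby. A hedged sketch (18710 is the NameNode pid from the jps output above; adjust to your own, and assume nn1 on s101 is currently active):
[yinzhengjie@s101 ~]$ hdfs haadmin -getServiceState nn1      # suppose it reports "active"
[yinzhengjie@s101 ~]$ kill -9 18710                          # kill the active NameNode on s101
[yinzhengjie@s101 ~]$ sleep 10 && hdfs haadmin -getServiceState nn2   # should now report "active"
# afterwards bring the killed NameNode back with: hadoop-daemon.sh start namenode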
五. The most brute-force fix for beginners who run into problems (this heavy-handed approach is only suitable for a test cluster like mine; if your architecture is the same as mine, it will work for you too)
Friendly reminder: if you are going to reformat the cluster, rename the previous data directories first. If you do not delete or rename them, the version information after formatting will still be that of the previous, un-reformatted cluster! That can easily stop the DataNode or NameNode processes from starting, or make them die shortly after they start. A sketch of backing the directories up follows below.
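A minimal sketch of the "rename first" advice, reusing xcall.sh (the .bak suffix is just an example; stop HDFS before doing this). ~/ha is hadoop.tmp.dir from core-site.xml, and the dfs name/data directories configured in hdfs-site.xml live under it:
[yinzhengjie@s101 ~]$ stop-dfs.sh
[yinzhengjie@s101 ~]$ xcall.sh "mv ~/ha ~/ha.bak"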
[yinzhengjie@s101 ~]$ hadoop-daemons.sh start journalnode
s103: starting journalnode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-journalnode-s103.out
s102: starting journalnode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-journalnode-s102.out
s104: starting journalnode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-journalnode-s104.out
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs namenode -format 18/06/21 06:46:12 INFO namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = s101/172.16.30.101 STARTUP_MSG: args = [-format] STARTUP_MSG: version = 2.7.3 STARTUP_MSG: classpath = /soft/hadoop-2.7.3/etc/hadoop:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.
jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.ja
r:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.
jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/contrib/capacity-scheduler/*.jar STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z STARTUP_MSG: java = 1.8.0_131 ************************************************************/ 18/06/21 06:46:12 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 18/06/21 06:46:12 INFO namenode.NameNode: createNameNode [-format] 18/06/21 06:46:12 WARN common.Util: Path /home/yinzhengjie/ha/dfs/name1 should be specified as a URI in configuration files. Please update hdfs configuration. 18/06/21 06:46:12 WARN common.Util: Path /home/yinzhengjie/ha/dfs/name2 should be specified as a URI in configuration files. Please update hdfs configuration. 18/06/21 06:46:12 WARN common.Util: Path /home/yinzhengjie/ha/dfs/name1 should be specified as a URI in configuration files. Please update hdfs configuration. 18/06/21 06:46:12 WARN common.Util: Path /home/yinzhengjie/ha/dfs/name2 should be specified as a URI in configuration files. Please update hdfs configuration. Formatting using clusterid: CID-f4979818-1b00-4035-9ce6-6a84ab5a27e7 18/06/21 06:46:12 INFO namenode.FSNamesystem: No KeyProvider found. 
18/06/21 06:46:12 INFO namenode.FSNamesystem: fsLock is fair:true 18/06/21 06:46:12 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000 18/06/21 06:46:12 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true 18/06/21 06:46:12 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 18/06/21 06:46:12 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jun 21 06:46:12 18/06/21 06:46:12 INFO util.GSet: Computing capacity for map BlocksMap 18/06/21 06:46:12 INFO util.GSet: VM type = 64-bit 18/06/21 06:46:12 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB 18/06/21 06:46:12 INFO util.GSet: capacity = 2^21 = 2097152 entries 18/06/21 06:46:12 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false 18/06/21 06:46:12 INFO blockmanagement.BlockManager: defaultReplication = 3 18/06/21 06:46:12 INFO blockmanagement.BlockManager: maxReplication = 512 18/06/21 06:46:12 INFO blockmanagement.BlockManager: minReplication = 1 18/06/21 06:46:12 INFO blockmanagement.BlockManager: maxReplicationStreams = 2 18/06/21 06:46:12 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000 18/06/21 06:46:12 INFO blockmanagement.BlockManager: encryptDataTransfer = false 18/06/21 06:46:12 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000 18/06/21 06:46:12 INFO namenode.FSNamesystem: fsOwner = yinzhengjie (auth:SIMPLE) 18/06/21 06:46:12 INFO namenode.FSNamesystem: supergroup = supergroup 18/06/21 06:46:12 INFO namenode.FSNamesystem: isPermissionEnabled = true 18/06/21 06:46:12 INFO namenode.FSNamesystem: Determined nameservice ID: mycluster 18/06/21 06:46:12 INFO namenode.FSNamesystem: HA Enabled: true 18/06/21 06:46:12 INFO namenode.FSNamesystem: Append Enabled: true 18/06/21 06:46:13 INFO util.GSet: Computing capacity for map INodeMap 18/06/21 06:46:13 INFO util.GSet: VM type = 64-bit 18/06/21 06:46:13 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB 18/06/21 06:46:13 INFO util.GSet: capacity = 2^20 = 1048576 entries 18/06/21 06:46:13 INFO namenode.FSDirectory: ACLs enabled? false 18/06/21 06:46:13 INFO namenode.FSDirectory: XAttrs enabled? 
true 18/06/21 06:46:13 INFO namenode.FSDirectory: Maximum size of an xattr: 16384 18/06/21 06:46:13 INFO namenode.NameNode: Caching file names occuring more than 10 times 18/06/21 06:46:13 INFO util.GSet: Computing capacity for map cachedBlocks 18/06/21 06:46:13 INFO util.GSet: VM type = 64-bit 18/06/21 06:46:13 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB 18/06/21 06:46:13 INFO util.GSet: capacity = 2^18 = 262144 entries 18/06/21 06:46:13 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 18/06/21 06:46:13 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0 18/06/21 06:46:13 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000 18/06/21 06:46:13 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10 18/06/21 06:46:13 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10 18/06/21 06:46:13 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 18/06/21 06:46:13 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 18/06/21 06:46:13 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 18/06/21 06:46:13 INFO util.GSet: Computing capacity for map NameNodeRetryCache 18/06/21 06:46:13 INFO util.GSet: VM type = 64-bit 18/06/21 06:46:13 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB 18/06/21 06:46:13 INFO util.GSet: capacity = 2^15 = 32768 entries Re-format filesystem in Storage Directory /home/yinzhengjie/ha/dfs/name1 ? (Y or N) y Re-format filesystem in Storage Directory /home/yinzhengjie/ha/dfs/name2 ? (Y or N) y Re-format filesystem in QJM to [172.16.30.102:8485, 172.16.30.103:8485, 172.16.30.104:8485] ? (Y or N) y 18/06/21 06:46:18 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1075743885-172.16.30.101-1529588778298 18/06/21 06:46:18 INFO common.Storage: Storage directory /home/yinzhengjie/ha/dfs/name1 has been successfully formatted. 18/06/21 06:46:18 INFO common.Storage: Storage directory /home/yinzhengjie/ha/dfs/name2 has been successfully formatted. 18/06/21 06:46:18 INFO namenode.FSImageFormatProtobuf: Saving image file /home/yinzhengjie/ha/dfs/name2/current/fsimage.ckpt_0000000000000000000 using no compression 18/06/21 06:46:18 INFO namenode.FSImageFormatProtobuf: Saving image file /home/yinzhengjie/ha/dfs/name1/current/fsimage.ckpt_0000000000000000000 using no compression 18/06/21 06:46:18 INFO namenode.FSImageFormatProtobuf: Image file /home/yinzhengjie/ha/dfs/name2/current/fsimage.ckpt_0000000000000000000 of size 358 bytes saved in 0 seconds. 18/06/21 06:46:18 INFO namenode.FSImageFormatProtobuf: Image file /home/yinzhengjie/ha/dfs/name1/current/fsimage.ckpt_0000000000000000000 of size 358 bytes saved in 0 seconds. 18/06/21 06:46:18 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0 18/06/21 06:46:18 INFO util.ExitUtil: Exiting with status 0 18/06/21 06:46:18 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at s101/172.16.30.101 ************************************************************/ [yinzhengjie@s101 ~]$
[yinzhengjie@s101 ha]$ scp -r ~/ha yinzhengjie@s105:~
VERSION                                   100%  206     0.2KB/s   00:00
seen_txid                                 100%    2     0.0KB/s   00:00
fsimage_0000000000000000000.md5           100%   62     0.1KB/s   00:00
fsimage_0000000000000000000               100%  358     0.4KB/s   00:00
VERSION                                   100%  206     0.2KB/s   00:00
seen_txid                                 100%    2     0.0KB/s   00:00
fsimage_0000000000000000000.md5           100%   62     0.1KB/s   00:00
fsimage_0000000000000000000               100%  358     0.4KB/s   00:00
[yinzhengjie@s101 ha]$
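Copying ~/ha to s105 by hand seeds the standby NameNode's metadata. For reference, the built-in way to do the same thing is to run the bootstrap command on the standby (here s105) after the active NameNode has been formatted and started; a hedged sketch:
[yinzhengjie@s105 ~]$ hdfs namenode -bootstrapStandby
# pulls the current fsimage from the active NameNode into the standby's dfs.namenode.name.dir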
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18710 NameNode
19080 Jps
19022 DFSZKFailoverController
Command executed successfully
============= s102 jps ============
2265 QuorumPeerMain
7739 Jps
7661 JournalNode
Command executed successfully
============= s103 jps ============
7704 Jps
2266 QuorumPeerMain
7627 JournalNode
Command executed successfully
============= s104 jps ============
2274 QuorumPeerMain
7715 Jps
7637 JournalNode
Command executed successfully
============= s105 jps ============
7777 DFSZKFailoverController
7668 NameNode
7849 Jps
Command executed successfully
[yinzhengjie@s101 ~]$