Big Data Tutorial (11.3): Hadoop 2.9.1 Cluster HA (High Availability) Setup

           The previous post covered the NameNode's safemode; in this post I walk through the entire process of building a highly available (HA) Hadoop cluster.

    1. Cluster planning

           Hostname                  IP                    Installed software            Running processes
           centos-aaron-ha-01    192.168.29.149    jdk, hadoop                   NameNode, DFSZKFailoverController (zkfc)
           centos-aaron-ha-02    192.168.29.150    jdk, hadoop                   NameNode, DFSZKFailoverController (zkfc)
           centos-aaron-ha-03    192.168.29.151    jdk, hadoop                   ResourceManager
           centos-aaron-ha-04    192.168.29.152    jdk, hadoop                   ResourceManager
           centos-aaron-ha-05    192.168.29.153    jdk, hadoop, zookeeper        DataNode, NodeManager, JournalNode, QuorumPeerMain
           centos-aaron-ha-06    192.168.29.154    jdk, hadoop, zookeeper        DataNode, NodeManager, JournalNode, QuorumPeerMain
           centos-aaron-ha-07    192.168.29.155    jdk, hadoop, zookeeper        DataNode, NodeManager, JournalNode, QuorumPeerMain

    2. Basic server environment preparation (hostnames, IPs, hosts mapping, firewall off, passwordless SSH, JDK)

           (1) Clone seven CentOS 6.9 minimal installs

           (2) Basic network configuration (hostname, IP, NIC, hosts mapping)

               For details, see Big Data Tutorial (2.1): VMware virtual machine cloning (network configuration issues)

           (3) Disable the firewall

#disable selinux
sudo vi /etc/sysconfig/selinux
change enforcing to disabled
#check firewall status
sudo service iptables status
#stop the firewall
sudo service iptables stop
#disable the firewall permanently (survives reboot)
sudo chkconfig iptables off
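
Note that the selinux edit only takes effect after a reboot. As a quick hedged check (getenforce and setenforce ship with CentOS 6), you can confirm the current mode and drop to permissive without rebooting:

#show the current SELinux mode (Enforcing/Permissive/Disabled)
getenforce
#if it still reports Enforcing, switch to permissive for this boot
sudo setenforce 0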

           (4) hadoop account setup (hadoop/hadoop) and sudo privileges

#add the user
useradd hadoop
#a password must be set before the account can log in
passwd hadoop    (enter the password at the prompts)


#grant the user sudo privileges
as root, edit the sudoers file: vi /etc/sudoers
below the existing root entry, add one line for hadoop:
root    ALL=(ALL)       ALL
hadoop  ALL=(ALL)       ALL

The hadoop user can then run system-level commands with sudo:
[hadoop@shizhan ~]$ sudo useradd huangxiaoming

           (5) Passwordless SSH login setup (centos-aaron-ha-01 / centos-aaron-ha-03)

               Since centos-aaron-ha-01 is used to start HDFS, it needs passwordless SSH to the other servers; likewise centos-aaron-ha-03, which is used to start YARN, needs passwordless SSH to the other servers.

#note: run the following on every server, otherwise the remote copies and ssh below may not work
sudo rpm -qa|grep ssh                          check which ssh packages are already installed
sudo yum list|grep ssh                         check which ssh packages are available in the yum repos
sudo yum -y install openssh-server             install the ssh server
sudo yum -y install openssh-clients.x86_64     install the ssh client

               a. Configure the hosts mapping and distribute it to all servers

vi /etc/hosts
#append
192.168.29.149 centos-aaron-ha-01
192.168.29.150 centos-aaron-ha-02
192.168.29.151 centos-aaron-ha-03
192.168.29.152 centos-aaron-ha-04
192.168.29.153 centos-aaron-ha-05
192.168.29.154 centos-aaron-ha-06
192.168.29.155 centos-aaron-ha-07
#distribute to the other servers via scp
sudo scp /etc/hosts  root@192.168.29.150:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.151:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.152:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.153:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.154:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.155:/etc/hosts

               b. First, configure passwordless login from centos-aaron-ha-01 to centos-aaron-ha-01, centos-aaron-ha-02, centos-aaron-ha-03, centos-aaron-ha-04, centos-aaron-ha-05, centos-aaron-ha-06, and centos-aaron-ha-07

#generate a key pair on centos-aaron-ha-01
ssh-keygen -t rsa
#copy the public key to every node, including this one
ssh-copy-id hadoop@centos-aaron-ha-01
ssh-copy-id hadoop@centos-aaron-ha-02
ssh-copy-id hadoop@centos-aaron-ha-03
ssh-copy-id hadoop@centos-aaron-ha-04
ssh-copy-id hadoop@centos-aaron-ha-05
ssh-copy-id hadoop@centos-aaron-ha-06
ssh-copy-id hadoop@centos-aaron-ha-07

              c. Configure passwordless login from centos-aaron-ha-03 to centos-aaron-ha-03, centos-aaron-ha-04, centos-aaron-ha-05, centos-aaron-ha-06, and centos-aaron-ha-07

#generate a key pair on centos-aaron-ha-03
ssh-keygen -t rsa
#copy the public key to the other nodes
ssh-copy-id centos-aaron-ha-03				
ssh-copy-id centos-aaron-ha-04
ssh-copy-id centos-aaron-ha-05
ssh-copy-id centos-aaron-ha-06
ssh-copy-id centos-aaron-ha-07

              d. Note: the two NameNodes need passwordless SSH between each other, so don't forget to configure login from centos-aaron-ha-02 to centos-aaron-ha-01

#generate a key pair on centos-aaron-ha-02
ssh-keygen -t rsa
ssh-copy-id centos-aaron-ha-01
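
With all the keys distributed, it is worth confirming the trust before continuing. A minimal sketch (not part of the original steps) that, when run on centos-aaron-ha-01 as hadoop, should print each hostname without any password prompt:

for h in centos-aaron-ha-01 centos-aaron-ha-02 centos-aaron-ha-03 centos-aaron-ha-04 centos-aaron-ha-05 centos-aaron-ha-06 centos-aaron-ha-07
do
    ssh hadoop@$h hostname
done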

           (6) JDK 1.8 setup (starting from the main node, centos-aaron-ha-01)

#open the sftp upload session (Alt+p opens an sftp tab in SSH clients such as SecureCRT)
Alt+p 
lcd d:/
put jdk-8u191-linux-x64.tar.gz
sudo tar -zxvf jdk-8u191-linux-x64.tar.gz -C /usr/local
#edit the profile
sudo vi /etc/profile
#jump to the end of the file
shift+G
#open a new line for input
o
#append the following at the end
JAVA_HOME=/usr/local/jdk1.8.0_191/
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH
#save and quit (shift+z+z, or Esc -> :wq! -> Enter)
shift+z+z
#distribute to all servers
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.150:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.151:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.152:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.153:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.154:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.155:/usr/local/
sudo scp /etc/profile  root@192.168.29.150:/etc/profile
sudo scp /etc/profile  root@192.168.29.151:/etc/profile
sudo scp /etc/profile  root@192.168.29.152:/etc/profile
sudo scp /etc/profile  root@192.168.29.153:/etc/profile
sudo scp /etc/profile  root@192.168.29.154:/etc/profile
sudo scp /etc/profile  root@192.168.29.155:/etc/profile
#apply the configuration
source /etc/profile
#verify the jdk installation
java -version
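
Since scp-ing /etc/profile does not reload it in sessions that are already open, a hedged loop like the following (relying on the passwordless SSH set up earlier) can confirm the JDK is usable on every other node:

for h in centos-aaron-ha-02 centos-aaron-ha-03 centos-aaron-ha-04 centos-aaron-ha-05 centos-aaron-ha-06 centos-aaron-ha-07
do
    echo "== $h =="
    ssh $h "source /etc/profile; java -version" 2>&1 | grep version
done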

           (7) ZooKeeper installation (starting from centos-aaron-ha-05)

#upload zookeeper
scp zookeeper-3.4.13.tar.gz hadoop@192.168.29.153:/home/hadoop
#unpack zookeeper
mkdir /home/hadoop/apps/
tar -zxvf zookeeper-3.4.13.tar.gz -C apps/
#edit the configuration file
cd /home/hadoop/apps/zookeeper-3.4.13/conf/
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
#add the following and remove the default dataDir setting
dataDir=/home/hadoop/apps/zookeeper-3.4.13/data
dataLogDir=/home/hadoop/apps/zookeeper-3.4.13/log
server.1=192.168.29.153:2888:3888
server.2=192.168.29.154:2888:3888
server.3=192.168.29.155:2888:3888
#create the data and log directories with the right permissions
cd /home/hadoop/apps/zookeeper-3.4.13/
mkdir -m 755 data
mkdir -m 755 log
#create a myid file under /home/hadoop/apps/zookeeper-3.4.13/data whose content is the number after "server." for this host (adjust per server)
cd data
vi myid
or
echo "1" > myid
#distribute zookeeper to the other nodes
scp -r /home/hadoop/apps/zookeeper-3.4.13 hadoop@192.168.29.154:/home/hadoop/apps/
scp -r /home/hadoop/apps/zookeeper-3.4.13 hadoop@192.168.29.155:/home/hadoop/apps/
#adjust the config on the other machines
on 192.168.29.154: set myid to 2
on 192.168.29.155: set myid to 3
#start zookeeper (on each of the three nodes)
/home/hadoop/apps/zookeeper-3.4.13/bin/zkServer.sh start
#check cluster status
jps                                                            (check processes)
/home/hadoop/apps/zookeeper-3.4.13/bin/zkServer.sh status      (check cluster role: leader/follower)
#stop the cluster
/home/hadoop/apps/zookeeper-3.4.13/bin/zkServer.sh stop
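
Besides zkServer.sh status, ZooKeeper's four-letter commands make a handy liveness probe; a hedged sketch assuming nc (netcat) is installed:

for h in 192.168.29.153 192.168.29.154 192.168.29.155
do
    echo ruok | nc $h 2181
    echo ""
done
#every healthy server answers imok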

    3. Hadoop HA cluster setup

             (1) Upload centos6.9-hadoop-2.9.1.tar.gz

             (2) Unpack hadoop into /home/hadoop/apps/: tar -zxvf centos6.9-hadoop-2.9.1.tar.gz  -C /home/hadoop/apps/

             (3) Set JAVA_HOME in hadoop-env.sh and yarn-env.sh to the real JDK path

cd /home/hadoop/apps/hadoop-2.9.1/etc/hadoop
vi hadoop-env.sh
#set the following line to
export JAVA_HOME=/usr/local/jdk1.8.0_191

vi yarn-env.sh
#uncomment the export JAVA_HOME line and set it to
export JAVA_HOME=/usr/local/jdk1.8.0_191

             (4) Add hadoop to the environment variables

#edit the profile on centos-aaron-ha-01
vi /etc/profile
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.9.1/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

#distribute the configuration
sudo scp /etc/profile  root@192.168.29.150:/etc/profile
sudo scp /etc/profile  root@192.168.29.151:/etc/profile
sudo scp /etc/profile  root@192.168.29.152:/etc/profile
sudo scp /etc/profile  root@192.168.29.153:/etc/profile
sudo scp /etc/profile  root@192.168.29.154:/etc/profile
sudo scp /etc/profile  root@192.168.29.155:/etc/profile
#apply the configuration
source /etc/profile
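
A quick sanity check that the PATH change took effect:

hadoop version
#should report Hadoop 2.9.1 and the jar it was resolved from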

             (5) Configure core-site.xml

<configuration>
<!-- set the hdfs nameservice to bi -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://bi/</value>
</property>
<!-- hadoop temp directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hdpdata</value>
</property>
<!-- the zookeeper quorum address -->
<property>
<name>ha.zookeeper.quorum</name>
<value>centos-aaron-ha-05:2181,centos-aaron-ha-06:2181,centos-aaron-ha-07:2181</value>
</property>
</configuration>
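
Because fs.defaultFS points at the logical nameservice bi rather than a single host, clients never hard-code a NameNode address; once the cluster is up, the two commands below are equivalent (a usage sketch), with the failover proxy provider configured in hdfs-site.xml resolving whichever NameNode is currently active:

hdfs dfs -ls /
hdfs dfs -ls hdfs://bi/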

             (6) Configure hdfs-site.xml

<configuration>
<!-- the hdfs nameservice is bi; must match core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>bi</value>
</property>
<!-- nameservice bi has two NameNodes: nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.bi</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.bi.nn1</name>
<value>centos-aaron-ha-01:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.bi.nn1</name>
<value>centos-aaron-ha-01:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.bi.nn2</name>
<value>centos-aaron-ha-02:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.bi.nn2</name>
<value>centos-aaron-ha-02:50070</value>
</property>
<!-- where the NameNode edits metadata is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://centos-aaron-ha-05:8485;centos-aaron-ha-06:8485;centos-aaron-ha-07:8485/bi</value>
</property>
<!-- where each JournalNode keeps its data on local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/hadoop/journaldata</value>
</property>
<!-- enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- failover proxy provider used by clients to locate the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.bi</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- fencing methods; multiple methods are separated by newlines, one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless ssh to the peer NameNode -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<!-- ssh connect timeout for the sshfence method -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
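
The sshfence method kills the old active NameNode over SSH with the key named in dfs.ha.fencing.ssh.private-key-files, and shell(/bin/true) is the fallback so failover still proceeds when the dead node is unreachable. sshfence can only work if that key actually opens a session on the peer NameNode; a quick hedged check, run on each NameNode against the other:

#on centos-aaron-ha-01; repeat the mirror-image check on centos-aaron-ha-02
ssh -i /home/hadoop/.ssh/id_rsa hadoop@centos-aaron-ha-02 hostname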

             (7) Configure mapred-site.xml

<configuration>
<!-- run MapReduce on yarn -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>/home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/*, /home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/lib/*</value>
</property>
</configuration>

             (8) Configure yarn-site.xml

<configuration>
<!-- enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- the RM cluster id -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- logical ids of the two RMs -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- the hostname of each RM -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>centos-aaron-ha-03</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>centos-aaron-ha-04</value>
</property>
<!-- web UI address and port for rm1 and rm2; job status can be viewed at these addresses -->
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>centos-aaron-ha-03:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>centos-aaron-ha-04:8088</value>
</property>
<!-- the zookeeper quorum address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>centos-aaron-ha-05:2181,centos-aaron-ha-06:2181,centos-aaron-ha-07:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

             (9) Distribute the configured hadoop to the other servers

sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-02:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-03:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-04:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-05:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-06:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-07:/home/hadoop/apps

             (10) Edit slaves (the slaves file lists the worker nodes. Since HDFS is started on centos-aaron-ha-01 and YARN on centos-aaron-ha-03, the slaves file on centos-aaron-ha-01 determines where the DataNodes run, while the one on centos-aaron-ha-03 determines where the NodeManagers run)

vi /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/slaves 
#replace the contents with
centos-aaron-ha-05
centos-aaron-ha-06
centos-aaron-ha-07

             (11) Start the ZooKeeper cluster (on centos-aaron-ha-05, centos-aaron-ha-06, and centos-aaron-ha-07)

cd /home/hadoop/apps/zookeeper-3.4.13/bin/
./zkServer.sh start
#check status: one leader and two followers
./zkServer.sh status

             (12) Start the JournalNodes (run on centos-aaron-ha-05, centos-aaron-ha-06, and centos-aaron-ha-07)

cd /home/hadoop/apps/hadoop-2.9.1/
sbin/hadoop-daemon.sh start journalnode
#run jps to verify: centos-aaron-ha-05, centos-aaron-ha-06, and centos-aaron-ha-07 should now each show a JournalNode process
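
A hedged convenience loop to run this check from centos-aaron-ha-01 without logging into each box (jps is addressed by full path because a non-interactive ssh does not source /etc/profile):

for h in centos-aaron-ha-05 centos-aaron-ha-06 centos-aaron-ha-07
do
    echo "== $h =="
    ssh $h /usr/local/jdk1.8.0_191/bin/jps
done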

             (13) Format HDFS

#on centos-aaron-ha-01, run:
hdfs namenode -format
#formatting generates files under the hadoop.tmp.dir configured in core-site.xml (here /home/hadoop/hdpdata/); copy /home/hadoop/hdpdata/ to the same path on centos-aaron-ha-02:
scp -r hdpdata/ centos-aaron-ha-02:/home/hadoop/
##alternatively (recommended), run hdfs namenode -bootstrapStandby on centos-aaron-ha-02 [note: this requires the namenode on centos-aaron-ha-01 to be started first: hadoop-daemon.sh start namenode]
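
Spelled out, the bootstrapStandby alternative looks like this (a sketch; the first command runs on centos-aaron-ha-01, the second on centos-aaron-ha-02):

#on centos-aaron-ha-01: bring up the freshly formatted namenode
hadoop-daemon.sh start namenode
#on centos-aaron-ha-02: pull the namespace from the running namenode instead of scp-ing hdpdata
hdfs namenode -bootstrapStandby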

             (14) Format ZKFC (run once, on centos-aaron-ha-01)

hdfs zkfc -formatZK
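
Formatting creates a znode under /hadoop-ha in ZooKeeper; you can confirm it from any ZK node (a hedged check using the zkCli shipped with zookeeper-3.4.13):

/home/hadoop/apps/zookeeper-3.4.13/bin/zkCli.sh -server centos-aaron-ha-05:2181
#inside the zk shell:
ls /hadoop-ha
#expect [bi]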

             (15) Start HDFS (on centos-aaron-ha-01)

sh start-dfs.sh

             (16) Start YARN (note: run start-yarn.sh on centos-aaron-ha-03. The NameNodes and ResourceManagers are kept on separate machines for performance reasons: both consume a lot of resources, so they are split apart and must be started on their respective machines)

sh start-yarn.sh

             (17) Start the second ResourceManager on centos-aaron-ha-04

yarn-daemon.sh start resourcemanager

             (18) At this point hadoop-2.9.1 is fully configured; verify it in a browser:

http://centos-aaron-ha-01:50070
NameNode 'centos-aaron-ha-01:9000' (active)
http://centos-aaron-ha-02:50070
NameNode 'centos-aaron-ha-02:9000' (standby)

            (19) Verify HDFS HA

First, upload a file to hdfs:
hadoop fs -put /etc/profile /profile
hadoop fs -ls /
Then kill the active NameNode:
kill -9 <pid of NN>
Visit http://centos-aaron-ha-02:50070 in a browser:
NameNode 'centos-aaron-ha-02:9000' (active)
The NameNode on centos-aaron-ha-02 has now become active.
Run the listing again:
hadoop fs -ls /
-rw-r--r--   3 hadoop supergroup       2111 2019-01-06 14:07 /profile
The file uploaded earlier is still there!!!
Manually restart the NameNode that was killed:
hadoop-daemon.sh start namenode
Visit http://centos-aaron-ha-01:50070 in a browser:
NameNode 'centos-aaron-ha-01:9000' (standby)

            (20) Verify YARN: run the WordCount demo that ships with hadoop:

 hadoop jar /home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar wordcount hdfs://bi/profile /out

    4. Run results

[hadoop@centos-aaron-ha-03 hadoop]$ hadoop jar /home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar wordcount hdfs://bi/bxx /out2
19/01/08 18:43:16 INFO input.FileInputFormat: Total input files to process : 1
19/01/08 18:43:17 INFO mapreduce.JobSubmitter: number of splits:1
19/01/08 18:43:17 INFO Configuration.deprecation: yarn.resourcemanager.zk-address is deprecated. Instead, use hadoop.zk.address
19/01/08 18:43:17 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
19/01/08 18:43:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1546944151190_0001
19/01/08 18:43:17 INFO impl.YarnClientImpl: Submitted application application_1546944151190_0001
19/01/08 18:43:17 INFO mapreduce.Job: The url to track the job: http://centos-aaron-ha-03:8088/proxy/application_1546944151190_0001/
19/01/08 18:43:17 INFO mapreduce.Job: Running job: job_1546944151190_0001
19/01/08 18:43:26 INFO mapreduce.Job: Job job_1546944151190_0001 running in uber mode : false
19/01/08 18:43:26 INFO mapreduce.Job:  map 0% reduce 0%
19/01/08 18:43:33 INFO mapreduce.Job:  map 100% reduce 0%
19/01/08 18:43:39 INFO mapreduce.Job:  map 100% reduce 100%
19/01/08 18:43:39 INFO mapreduce.Job: Job job_1546944151190_0001 completed successfully
19/01/08 18:43:39 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=98
                FILE: Number of bytes written=401749
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=122
                HDFS: Number of bytes written=60
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=4081
                Total time spent by all reduces in occupied slots (ms)=3394
                Total time spent by all map tasks (ms)=4081
                Total time spent by all reduce tasks (ms)=3394
                Total vcore-milliseconds taken by all map tasks=4081
                Total vcore-milliseconds taken by all reduce tasks=3394
                Total megabyte-milliseconds taken by all map tasks=4178944
                Total megabyte-milliseconds taken by all reduce tasks=3475456
        Map-Reduce Framework
                Map input records=5
                Map output records=8
                Map output bytes=76
                Map output materialized bytes=98
                Input split bytes=78
                Combine input records=8
                Combine output records=8
                Reduce input groups=8
                Reduce shuffle bytes=98
                Reduce input records=8
                Reduce output records=8
                Spilled Records=16
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=176
                CPU time spent (ms)=1030
                Physical memory (bytes) snapshot=362827776
                Virtual memory (bytes) snapshot=4141035520
                Total committed heap usage (bytes)=139497472
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=44
        File Output Format Counters 
                Bytes Written=60
[hadoop@centos-aaron-ha-03 hadoop]$
[hadoop@centos-aaron-ha-03 hadoop]$ hdfs dfs -ls /out2
Found 2 items
-rw-r--r--   3 hadoop supergroup          0 2019-01-08 18:43 /out2/_SUCCESS
-rw-r--r--   3 hadoop supergroup         60 2019-01-08 18:43 /out2/part-r-00000
[hadoop@centos-aaron-ha-03 hadoop]$ hdfs dfs -cat /out2/part-r-00000
ddfsZZ  1
df      1
dsfsd   1
hello   1
sdfdsf  1
sdfsd   1
sdss    1
xxx     1
[hadoop@centos-aaron-ha-03 hadoop]$

Check the YARN cluster state:

[hadoop@centos-aaron-ha-03 ~]$ yarn rmadmin -getServiceState rm1
 active
[hadoop@centos-aaron-ha-04 ~]$ yarn rmadmin -getServiceState rm2
 standby

Check the HDFS cluster state:
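
Analogous to the YARN check above, the NameNode states can be queried with the nn1/nn2 ids defined in hdfs-site.xml:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
#one should report active and the other standby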

    5. Final summary

           A few problems came up during this build. For example, after the cluster was set up everything else worked, but MapReduce jobs would not run. That problem had to be diagnosed from the MR ApplicationMaster container logs; in my case it was caused by the hadoop classpath setting in mapred-site.xml and the webapp port setting in yarn-site.xml. When you hit problems, use the logs in the YARN web UI to locate them.

           That is all for this post. If you found it helpful, please give it a like; if you are interested in my other big-data articles or in the author, follow this blog and feel free to reach out to me at any time.
