第7章 YARN HA配置

[TOC]

ResourceManager (RM) is responsible for tracking the resources in the cluster and scheduling applications (for example, MapReduce jobs). Before Hadoop 2.4, a cluster had only a single ResourceManager, and when it went down the whole cluster was affected. The High Availability (HA) feature adds redundancy in the form of an active/standby ResourceManager pair, so that failover can take place.

The architecture of YARN HA is shown in the figure below. In this example, the roles assigned to each node are listed in the following table:

| Node | Roles |
| --- | --- |
| centos01 | ResourceManager, NodeManager |
| centos02 | ResourceManager, NodeManager |
| centos03 | NodeManager |

The configuration steps for YARN HA are explained below.

# 7.1 Configuring yarn-site.xml

(1) Edit the yarn-site.xml file and add the following content:

	<!-- YARN HA configuration -->
	<property>
	  <name>yarn.resourcemanager.ha.enabled</name>
	  <value>true</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.cluster-id</name>
	  <value>cluster1</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.ha.rm-ids</name>
	  <value>rm1,rm2</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.hostname.rm1</name>
	  <value>centos01</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.hostname.rm2</name>
	  <value>centos02</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.webapp.address.rm1</name>
	  <value>centos01:8088</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.webapp.address.rm2</name>
	  <value>centos02:8088</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.zk-address</name>
	  <value>centos01:2181,centos02:2181,centos03:2181</value>
	</property>
	<property><!-- Enable RM restart/recovery; defaults to false -->
	  <name>yarn.resourcemanager.recovery.enabled</name>
	  <value>true</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.store.class</name>
	  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
	</property>

Explanation of the configuration parameters above:

- yarn.resourcemanager.ha.enabled: enables the RM HA feature.
- yarn.resourcemanager.cluster-id: identifies the cluster; the elector uses it to ensure an RM does not take over as active for a different cluster.
- yarn.resourcemanager.ha.rm-ids: the list of logical IDs for the RMs. The IDs can be chosen freely; here they are set to "rm1,rm2", and later settings refer to them.
- yarn.resourcemanager.hostname.rm1: the hostname of the RM identified by rm1. Alternatively, each of an RM's individual service addresses can be set explicitly, as sketched below.
- yarn.resourcemanager.webapp.address.rm1: the web UI address of the RM identified by rm1.
- yarn.resourcemanager.zk-address: the address list of the ZooKeeper ensemble used by the RMs.
- yarn.resourcemanager.recovery.enabled: enables RM restart/recovery; defaults to false.
- yarn.resourcemanager.store.class: the class used for state storage. The default is org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore, an implementation backed by a Hadoop filesystem. org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore is the ZooKeeper-backed implementation, and it is the one specified here.
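
As mentioned for yarn.resourcemanager.hostname.rm1, each RM's individual service addresses can also be set explicitly instead of being derived from the hostname. A minimal sketch for rm1 (these properties are optional, and the ports shown are the YARN defaults; adjust them if your cluster uses different ones):

	<property>
	  <name>yarn.resourcemanager.address.rm1</name>
	  <value>centos01:8032</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.scheduler.address.rm1</name>
	  <value>centos01:8030</value>
	</property>
	<property>
	  <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
	  <value>centos01:8031</value>
	</property>

The same properties with the .rm2 suffix would point at centos02.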

(2) After yarn-site.xml is configured, distribute it to the other nodes in the cluster, as sketched below.
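
A minimal distribution sketch using scp (assuming Hadoop is installed at the same path on every node and passwordless SSH is configured; $PWD expands to the local hadoop-2.7.1 directory):

[hadoop@centos01 hadoop-2.7.1]$ scp etc/hadoop/yarn-site.xml hadoop@centos02:$PWD/etc/hadoop/
[hadoop@centos01 hadoop-2.7.1]$ scp etc/hadoop/yarn-site.xml hadoop@centos03:$PWD/etc/hadoop/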

(3) With HDFS from the previous chapter still running, continue by starting YARN. On centos01 and centos02 respectively, run the following command to start the ResourceManager:

[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start resourcemanager

On centos01, centos02, and centos03 respectively, run the following command to start the NodeManager:

[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start nodemanager

(4) After YARN has started, check the Java processes on each node:

[hadoop@centos01 hadoop-2.7.1]$ jps
3360 QuorumPeerMain
4080 DFSZKFailoverController
4321 NodeManager
4834 Jps
3908 JournalNode
3702 DataNode
4541 ResourceManager
3582 NameNode

[hadoop@centos02 hadoop-2.7.1]$ jps
4486 Jps
3815 DFSZKFailoverController
4071 NodeManager
4359 ResourceManager
3480 NameNode
3353 QuorumPeerMain
3657 JournalNode
3563 DataNode

[hadoop@centos03 hadoop-2.7.1]$ jps
3496 JournalNode
4104 Jps
3836 NodeManager
3293 QuorumPeerMain
3390 DataNode
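
You can also query each ResourceManager's HA state directly with the yarn rmadmin command, using the rm IDs configured earlier (the output below assumes rm1 on centos01 is currently active):

[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm1
active
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm2
standby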

At this point, enter the address http://centos01:8088 in a browser to access the active ResourceManager and check YARN's status, as shown in the figure below. If you access the standby ResourceManager's address, http://centos02:8088, you will find that it automatically redirects to http://centos01:8088. This is because the active ResourceManager is currently on centos01; requests to the standby node's ResourceManager are automatically redirected to the active node.

# 7.2 Testing YARN Automatic Failover

Run the built-in MapReduce WordCount example on centos01. While the map phase is executing, open a new SSH shell window, kill the ResourceManager process on centos01 (see the sketch after the command below), and observe how the job proceeds. The command to run the WordCount example is:

[hadoop@centos01 hadoop-2.7.1]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
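
While the job is still in the map phase, the kill step in the second shell might look like this (a sketch; the ResourceManager PID, 4541 in the jps output above, will differ on your system):

[hadoop@centos01 hadoop-2.7.1]$ jps | grep ResourceManager
4541 ResourceManager
[hadoop@centos01 hadoop-2.7.1]$ kill -9 4541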

The execution log is as follows:

[hadoop@centos01 hadoop-2.7.1]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
18/03/16 10:48:22 INFO input.FileInputFormat: Total input paths to process : 1
18/03/16 10:48:22 INFO mapreduce.JobSubmitter: number of splits:1
18/03/16 10:48:23 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1521168402181_0001
18/03/16 10:48:23 INFO impl.YarnClientImpl: Submitted application application_1521168402181_0001
18/03/16 10:48:23 INFO mapreduce.Job: The url to track the job: http://centos01:8088/proxy/application_1521168402181_0001/
18/03/16 10:48:23 INFO mapreduce.Job: Running job: job_1521168402181_0001
18/03/16 10:48:56 INFO mapreduce.Job: Job job_1521168402181_0001 running in uber mode : false
18/03/16 10:48:57 INFO mapreduce.Job:  map 0% reduce 0%
18/03/16 10:50:21 INFO mapreduce.Job:  map 100% reduce 0%
18/03/16 10:50:32 INFO mapreduce.Job:  map 100% reduce 100%
18/03/16 10:50:36 INFO mapreduce.Job: Job job_1521168402181_0001 completed successfully
18/03/16 10:50:37 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=1321
                FILE: Number of bytes written=239335
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1094
                HDFS: Number of bytes written=971
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=14130
                Total time spent by all reduces in occupied slots (ms)=7851
                Total time spent by all map tasks (ms)=14130
                Total time spent by all reduce tasks (ms)=7851
                Total vcore-seconds taken by all map tasks=14130
                Total vcore-seconds taken by all reduce tasks=7851
                Total megabyte-seconds taken by all map tasks=14469120
                Total megabyte-seconds taken by all reduce tasks=8039424
        Map-Reduce Framework
                Map input records=29
                Map output records=109
                Map output bytes=1368
                Map output materialized bytes=1321
                Input split bytes=101
                Combine input records=109
                Combine output records=86
                Reduce input groups=86
                Reduce shuffle bytes=1321
                Reduce input records=86
                Reduce output records=86
                Spilled Records=172
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=188
                CPU time spent (ms)=1560
                Physical memory (bytes) snapshot=278478848
                Virtual memory (bytes) snapshot=4195344384
                Total committed heap usage (bytes)=140480512
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=993
        File Output Format Counters 
                Bytes Written=971

As the log above shows, even though the ResourceManager process was killed, the YARN job still ran to completion, which demonstrates that automatic failover took effect: after the ResourceManager failure, the active role switched to centos02 and the job continued there. At this point, the web address of the former standby ResourceManager, http://centos02:8088, can be accessed successfully in a browser, and it shows that the job finished. The YARN HA cluster setup is now complete.
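
To restore redundancy after the test, restart the killed ResourceManager on centos01; it rejoins as a standby, which can be verified with yarn rmadmin (a sketch; the states shown assume rm2 stayed active after the failover):

[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start resourcemanager
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm1
standby
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm2
active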
