[Yarn ResourceManager FairScheduler]

1. Configuring yarn-site.xml

1.yarn.scheduler.fair.allocation.file: the path to the fair scheduler allocation file (defaults to fair-scheduler.xml on the classpath).

2.yarn.scheduler.fair.user-as-default-queue: when an application does not specify a queue name, whether to use the submitting user's name as the queue name. If set to false, applications that name no queue (or an unknown queue) are submitted to the default queue. Defaults to true, meaning a new queue named after the user is created automatically.
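Before either property matters, the ResourceManager has to be running the FairScheduler at all. A minimal sketch of the relevant yarn-site.xml entries; the scheduler class is the stock FairScheduler class shipped with Hadoop 2.7, while the allocation-file path is an assumption for this cluster layout:

    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <!-- assumed path; if unset, fair-scheduler.xml is looked up on the classpath -->
    <property>
        <name>yarn.scheduler.fair.allocation.file</name>
        <value>/home/hadoop/hadoop-2.7.1/etc/hadoop/fair-scheduler.xml</value>
    </property>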

  • 2.1 Default configuration (yarn-site.xml)
    • yarn.scheduler.fair.user-as-default-queue = true
      yarn.scheduler.fair.allow-undeclared-pools = true
    • [root@hftest0001 hadoop]# su hadoop1
    • [hadoop1@hftest0001 hadoop]$ hadoop jar /home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 2 5
    • A new queue root.hadoop1 is created, and the job is submitted to root.hadoop1.
    <configuration>
    <!-- Site specific YARN configuration properties -->
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>hftest0001.webex.com</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    </configuration>
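    • To confirm the placement, the ResourceManager scheduler page (http://hftest0001.webex.com:8088/cluster/scheduler on the default web port) shows the new root.hadoop1 queue while the job runs; the queue hierarchy can also be printed from the shell:
      [hadoop1@hftest0001 hadoop]$ mapred queue -list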
  • 2.2 Set yarn.scheduler.fair.user-as-default-queue=false
    • yarn.scheduler.fair.user-as-default-queue = false
      yarn.scheduler.fair.allow-undeclared-pools = true
    • Restart yarn
    • su hadoop2
    • [hadoop2@hftest0001 hadoop]$ hadoop jar /home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 2 5
    • The job is submitted to the root.default queue.
<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hftest0001.webex.com</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.scheduler.fair.user-as-default-queue</name>
        <value>false</value>
    </property>
</configuration>
  • 2.3 Set yarn.scheduler.fair.allow-undeclared-pools=false
    • yarn.scheduler.fair.user-as-default-queue = false
      yarn.scheduler.fair.allow-undeclared-pools = false
    • su hadoop3
    • [hadoop3@hftest0001 hadoop]$ hadoop jar /home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 2 5
    • The job is submitted to the root.default queue.
    • <configuration>
      <!-- Site specific YARN configuration properties -->
          <property>
              <name>yarn.resourcemanager.hostname</name>
              <value>hftest0001.webex.com</value>
          </property>
          <property>
              <name>yarn.nodemanager.aux-services</name>
              <value>mapreduce_shuffle</value>
          </property>
          <property>
              <name>yarn.scheduler.fair.user-as-default-queue</name>
              <value>false</value>
          </property>
          <property>
              <name>yarn.scheduler.fair.allow-undeclared-pools</name>
              <value>false</value>
          </property>
      </configuration>
  • 2.4 Set yarn.scheduler.fair.user-as-default-queue=true, yarn.scheduler.fair.allow-undeclared-pools=false
    • yarn.scheduler.fair.user-as-default-queue = true
      yarn.scheduler.fair.allow-undeclared-pools = false
    • su hadoop4
    • [hadoop4@hftest0001 hadoop]$ hadoop jar /home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 2 5
    • The job is submitted to the root.default queue: with allow-undeclared-pools=false the undeclared queue root.hadoop4 cannot be created, so user-as-default-queue=true has no effect here.
    • <configuration>
      <!-- Site specific YARN configuration properties -->
          <property>
              <name>yarn.resourcemanager.hostname</name>
              <value>hftest0001.webex.com</value>
          </property>
          <property>
              <name>yarn.nodemanager.aux-services</name>
              <value>mapreduce_shuffle</value>
          </property>
          <property>
              <name>yarn.scheduler.fair.user-as-default-queue</name>
              <value>true</value>
          </property>
          <property>
              <name>yarn.scheduler.fair.allow-undeclared-pools</name>
              <value>false</value>
          </property>
      </configuration>
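  • Summarizing the four runs above (the queue shown is where a job lands when no queue name is given):

    user-as-default-queue  allow-undeclared-pools  resulting queue
    true                   true                    root.<user> (auto-created)
    false                  true                    root.default
    false                  false                   root.default
    true                   false                   root.default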


  • 2.5 fair-scheduler.xml
    • [hadoop1@hftest0001 hadoop]$ hadoop jar /home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi -D mapreduce.job.queuename=root.q2 2 5
    • Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1485246179386_0001 to YARN : User hadoop1 cannot submit applications to queue root.q2
    <?xml version="1.0"?>
    <allocations>
    	<queue name="root">
    		<aclSubmitApps> </aclSubmitApps>
    		<aclAdministerApps> </aclAdministerApps>
    		<queue name="q1">
    			<minResources>2048 mb,2 vcores</minResources>
    			<maxResources>8192 mb,8 vcores</maxResources>
    			<maxRunningApps>4</maxRunningApps>
    			<schedulingPolicy>fair</schedulingPolicy>
    			<!-- ACL format: comma-separated users, a space, then comma-separated groups -->
    			<aclSubmitApps>hadoop1,hadoop2 hadoop1,hadoop2</aclSubmitApps>
    			<aclAdministerApps>hadoop1,hadoop2 hadoop1,hadoop2</aclAdministerApps>
    		</queue>
    		<queue name="q2">
    			<minResources>1024 mb,1 vcores</minResources>
    			<maxResources>4096 mb,4 vcores</maxResources>
    			<maxRunningApps>2</maxRunningApps>
    			<schedulingPolicy>fair</schedulingPolicy>
    			<aclSubmitApps>hadoop2,hadoop3 hadoop2,hadoop3</aclSubmitApps>
    			<aclAdministerApps>hadoop2,hadoop3 hadoop2,hadoop3</aclAdministerApps>
    		</queue>
    	</queue>
    </allocations>
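    The rejection above is expected: hadoop1 appears in root.q1's aclSubmitApps but not in root.q2's, where only hadoop2 and hadoop3 (and their groups) may submit. Assuming the same cluster and allocation file, resubmitting against a queue that does grant hadoop1 access should go through:

    [hadoop1@hftest0001 hadoop]$ hadoop jar /home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi -D mapreduce.job.queuename=root.q1 2 5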

2. YARN and MapReduce memory configuration

  • Yarn
    • yarn.scheduler.minimum-allocation-mb
    • yarn.scheduler.maximum-allocation-mb
    • yarn.nodemanager.vmem-pmem-ratio
    • yarn.nodemanager.resource.memory-mb
  • MapReduce
    • Map
      • mapreduce.map.memory.mb
      • mapreduce.map.java.opts
    • Reduce
      • mapreduce.reduce.memory.mb
      • mapreduce.reduce.java.opts

yarn.nodemanager.resource.memory-mb: the total amount of memory a single NodeManager makes available to YARN containers.

yarn.scheduler.maximum-allocation-mb: the maximum amount of memory a single container can be allocated (a NodeManager runs N containers in parallel, so the sum of their allocations is bounded by yarn.nodemanager.resource.memory-mb).

yarn.scheduler.minimum-allocation-mb: the minimum amount of memory a single container is allocated; smaller or unaligned requests are rounded up (see the examples below).
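A minimal sketch of these three settings in yarn-site.xml; the values are illustrative assumptions, not tuning advice:

    <!-- each NodeManager offers 8 GB to YARN containers (assumed value) -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>8192</value>
    </property>
    <!-- every container is allocated between 1 GB and 4 GB -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>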


mapreduce.map.memory.mb: the amount of memory a Map Task requests for its container.

            For example, with yarn.scheduler.minimum-allocation-mb = 1024 and mapreduce.map.memory.mb = 900, YARN rounds the request up and allocates 1024 MB to the Map Task.

            With yarn.scheduler.minimum-allocation-mb = 1024 and mapreduce.map.memory.mb = 1500, YARN rounds the request up to the next multiple of the minimum allocation and allocates 2048 MB to the Map Task.

mapreduce.reduce.memory.mb: the amount of memory a Reduce Task requests for its container.

Virtual memory:

yarn.nodemanager.vmem-pmem-ratio: defaults to 2.1, i.e. a container may use up to 2.1 times its physical-memory allocation as virtual memory before the NodeManager kills it (e.g. a 1024 MB container is limited to roughly 2150 MB of virtual memory).


mapreduce.map.java.opts: the JVM heap size for the Map Task, typically about 0.75 × mapreduce.map.memory.mb, leaving the remainder of the container for non-JVM memory; mapreduce.reduce.java.opts plays the same role relative to mapreduce.reduce.memory.mb.
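A sketch of a matching mapred-site.xml pairing, with assumed values that follow the 0.75 rule of thumb:

    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <!-- heap capped at ~0.75 x 2048 MB -->
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1536m</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx3072m</value>
    </property>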
