1. Configuring yarn-site.xml
1. yarn.scheduler.fair.allocation.file: the path to the Fair Scheduler allocation file (the allocations XML shown below) that defines the queue hierarchy.
2. yarn.scheduler.fair.user-as-default-queue: when an application does not specify a queue name, whether to use the submitting user's name as the queue name. If set to false, all applications without a specified queue are submitted to the default queue. The default is true, meaning a new queue named after the user is created automatically.
The baseline yarn-site.xml (ResourceManager host plus the shuffle aux-service):

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hftest0001.webex.com</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
With user-as-default-queue set to false, so apps that name no queue land in default:

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hftest0001.webex.com</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.scheduler.fair.user-as-default-queue</name>
        <value>false</value>
    </property>
</configuration>
Additionally setting yarn.scheduler.fair.allow-undeclared-pools to false, so apps can only be submitted to queues declared in the allocation file:

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hftest0001.webex.com</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.scheduler.fair.user-as-default-queue</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.scheduler.fair.allow-undeclared-pools</name>
        <value>false</value>
    </property>
</configuration>
The same, with user-as-default-queue switched back to true:

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hftest0001.webex.com</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.scheduler.fair.user-as-default-queue</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.scheduler.fair.allow-undeclared-pools</name>
        <value>false</value>
    </property>
</configuration>
The allocation file defines two sub-queues under root. In each ACL value, the first comma-separated list is users and the second (after the space) is groups:

<?xml version="1.0"?>
<allocations>
    <queue name="root">
        <aclSubmitApps> </aclSubmitApps>
        <aclAdministerApps> </aclAdministerApps>
        <queue name="q1">
            <minResources>2048 mb,2 vcores</minResources>
            <maxResources>8192 mb,8 vcores</maxResources>
            <maxRunningApps>4</maxRunningApps>
            <schedulingPolicy>fair</schedulingPolicy>
            <aclSubmitApps>hadoop1,hadoop2 hadoop1,hadoop2</aclSubmitApps>
            <aclAdministerApps>hadoop1,hadoop2 hadoop1,hadoop2</aclAdministerApps>
        </queue>
        <queue name="q2">
            <minResources>1024 mb,1 vcores</minResources>
            <maxResources>4096 mb,4 vcores</maxResources>
            <maxRunningApps>2</maxRunningApps>
            <schedulingPolicy>fair</schedulingPolicy>
            <aclSubmitApps>hadoop2,hadoop3 hadoop2,hadoop3</aclSubmitApps>
            <aclAdministerApps>hadoop2,hadoop3 hadoop2,hadoop3</aclAdministerApps>
        </queue>
    </queue>
</allocations>
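To steer a job into one of these queues, the standard MapReduce property mapreduce.job.queuename can be set; a minimal sketch, where root.q1 matches the allocation file above:

<!-- mapred-site.xml (or a per-job override): send jobs to q1 -->
<property>
    <name>mapreduce.job.queuename</name>
    <value>root.q1</value>
</property>

The same property can also be passed per job on the command line, e.g. -Dmapreduce.job.queuename=root.q1, for tools that go through GenericOptionsParser.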
2. Memory configuration
yarn.nodemanager.resource.memory-mb: the maximum amount of memory a single NodeManager can offer to YARN.
yarn.scheduler.maximum-allocation-mb: the maximum memory a single container can be allocated (a NodeManager can run N containers in parallel, bounded by its total memory).
yarn.scheduler.minimum-allocation-mb: the minimum memory a single container is allocated; requests are rounded up to a multiple of this value.
mapreduce.map.memory.mb: the memory each Map Task requests.
E.g., with yarn.scheduler.minimum-allocation-mb = 1024 and mapreduce.map.memory.mb = 900, YARN rounds the request up and allocates 1024 MB to the Map Task.
E.g., with yarn.scheduler.minimum-allocation-mb = 1024, yarn.scheduler.maximum-allocation-mb = 2048, and mapreduce.map.memory.mb = 1500, YARN rounds the request up and allocates 2048 MB to the Map Task.
mapreduce.reduce.memory.mb: the memory each Reduce Task requests.
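As a sketch, the second rounding example above corresponds to settings like these (illustrative values; the scheduler bounds go in yarn-site.xml, the task request in mapred-site.xml):

<!-- yarn-site.xml: container sizing bounds -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
</property>

<!-- mapred-site.xml: a 1500 MB request is rounded up to 2048 MB -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1500</value>
</property>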
Virtual memory:
yarn.nodemanager.vmem-pmem-ratio: defaults to 2.1, i.e., a container allocated 1024 MB of physical memory may use up to 1024 × 2.1 ≈ 2150 MB of virtual memory before the NodeManager kills it.
mapreduce.map.java.opts: the JVM heap size for the Map Task, typically 0.75 × mapreduce.map.memory.mb, reserving the remainder for non-JVM memory (similarly, mapreduce.reduce.java.opts for Reduce Tasks).
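A minimal mapred-site.xml sketch of that 0.75 rule (illustrative values): a 2048 MB container gets a 1536 MB JVM heap, leaving roughly 512 MB of non-JVM headroom:

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <!-- 0.75 × 2048 MB; the rest stays free for non-JVM memory -->
    <value>-Xmx1536m</value>
</property>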