Running the following command on a Hadoop 2.7.2 cluster:
spark-shell --master yarn --deploy-mode client
throws the following error:
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
Checking the state of the launched cluster in the YARN Web UI, the log shows:
Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current
usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
This happens because virtual memory usage exceeded the configured limit; it can be avoided by adjusting the configuration.
There is a check at the YARN level on the ratio of virtual to physical memory usage. The issue is not that the machine lacks sufficient physical memory; rather, the container's virtual memory usage is higher than expected for the physical memory it was given.
Note: this is known to happen on CentOS/RHEL 6 due to its aggressive allocation of virtual memory.
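For example, with the default yarn.nodemanager.vmem-pmem-ratio of 2.1, a container granted 1 GB of physical memory may use at most 1 GB × 2.1 = 2.1 GB of virtual memory; the container in the log above used 2.2 GB and was therefore killed.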
It can be resolved by adding either of the following properties to yarn-site.xml:
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers</description>
</property>

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
Then, restart YARN.
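A minimal sketch of the restart, assuming the standard scripts under $HADOOP_HOME/sbin and that the updated yarn-site.xml has been distributed to every node:

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh

Once the ResourceManager and NodeManagers are back up, re-run spark-shell --master yarn --deploy-mode client to confirm the container is no longer killed.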
Reference:
http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/
http://blog.chinaunix.net/uid-28311809-id-4383551.html
http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits