Failed to initialize SparkContext when running yarn-client in Spark YARN mode
17/09/27 16:17:54 INFO mapreduce.Job: Task Id : attempt_1428293579539_0001_m_000003_0, Status : FAILED Container [pid=7847,containerID=container_1428293579539_0001_01_000005] is running beyond virtual memory limits. Current usage: 123.5 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used. Killing container.
With JDK 1.7 this error did not occur, but with Java 1.8 it appears:
ERROR spark.SparkContext: Error initializing SparkContext. ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
The cause is likely the default container properties in the YARN configuration, which enforce hard memory limits on containers. Java 8 also tends to reserve noticeably more virtual address space per process than Java 7 (Metaspace, for example), which plausibly explains why the same job now trips YARN's virtual memory check.
You can try adding the following properties to the yarn-site.xml configuration file under Hadoop's conf directory:
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
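Disabling both checks is the bluntest fix. A gentler alternative is to keep the checks on and raise the virtual-to-physical memory ratio instead; its default of 2.1 is exactly where the "2.1 GB of virtual memory" cap in the log above comes from (the value 4 below is illustrative, not a recommendation):

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>

Either way, restart the NodeManagers (for example with stop-yarn.sh / start-yarn.sh) so the yarn-site.xml change takes effect.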
Beyond that, the Spark website's Spark Properties documentation also describes these settings and gives their default values.
My final solution was to copy spark-defaults.conf.template under SPARK_HOME/conf, rename it to spark-defaults.conf, and edit it (vim spark-defaults.conf) to raise the setting from its default to 1 GB:
spark.yarn.am.memory 1g
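In shell terms, the steps above are roughly the following (a sketch, assuming SPARK_HOME points at the Spark install):

cd $SPARK_HOME/conf
cp spark-defaults.conf.template spark-defaults.conf
# append the ApplicationMaster memory override from above
echo "spark.yarn.am.memory 1g" >> spark-defaults.conf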
After that, re-running the job command completed without problems:
[root@srv01 conf]# ./spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode client --master yarn --driver-memory 2g --queue default /usr/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar
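Note that in yarn-client mode the same setting can also be passed per job on the spark-submit command line, without touching spark-defaults.conf; for example, reusing the example jar from above:

./spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode client --master yarn --driver-memory 2g --conf spark.yarn.am.memory=1g --queue default /usr/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar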