spark-shell startup error: Yarn application has already ended! It might have been killed or unable to launch application master

spark-shell does not support yarn-cluster mode, so launch it in yarn-client mode:

spark-shell --master=yarn --deploy-mode=client

The startup log contains two notable messages, discussed below.


The message "Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME" is only a warning. The official documentation explains it roughly as follows:

In short: if neither spark.yarn.jars nor spark.yarn.archive is configured, Spark packages all the jars under $SPARK_HOME/jars into a zip archive and uploads it to each worker node, so packaging and distribution happen automatically and it is fine to leave these two parameters unset.
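If you want to silence the warning and avoid re-uploading the jars on every launch, one common approach is to pre-stage them on HDFS and point spark.yarn.jars at that location. A minimal sketch, assuming HDFS is running and the path /spark/jars is free to use (both the path and the one-time upload step are assumptions, adjust to your cluster):

# upload the Spark jars to HDFS once
hdfs dfs -mkdir -p /spark/jars
hdfs dfs -put $SPARK_HOME/jars/* /spark/jars/

# then reference them in $SPARK_HOME/conf/spark-defaults.conf
spark.yarn.jars  hdfs:///spark/jars/*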


"Yarn application has already ended! It might have been killed or unable to launch application master",這個但是一個異常,打開mr管理頁面,個人是 http://192.168.128.130/8088 ,oop

The key part is in the red box of the screenshot: actual virtual memory usage is 2.2 GB, which exceeds the 2.1 GB limit. In other words, the container went over its virtual memory limit, so YARN killed it, and since all the actual work runs inside containers, nothing can run once the container is gone.
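The numbers line up with YARN's defaults: the limit is the container's physical memory allocation multiplied by the vmem-pmem ratio.

virtual memory limit = container physical memory x yarn.nodemanager.vmem-pmem-ratio
                     = 1 GB x 2.1 (default ratio)
                     = 2.1 GB, which is less than the 2.2 GB actually used, so the container is killed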

Solution

yarn-site.xml 增長配置:code

Either one of the two settings below is enough.

 1 <!--如下爲解決spark-shell 以yarn client模式運行報錯問題而增長的配置,估計spark-summit也會有這個問題。2個配置只用配置一個便可解決問題,固然都配置也沒問題-->
 2 <!--虛擬內存設置是否生效,若實際虛擬內存大於設置值 ,spark 以client模式運行可能會報錯,"Yarn application has already ended! It might have been killed or unable to l"-->
 3 <property>
 4     <name>yarn.nodemanager.vmem-check-enabled</name>
 5     <value>false</value>
 6     <description>Whether virtual memory limits will be enforced for containers</description>
 7 </property>
 8 <!--配置虛擬內存/物理內存的值,默認爲2.1,物理內存默認應該是1g,因此虛擬內存是2.1g-->
 9 <property>
10     <name>yarn.nodemanager.vmem-pmem-ratio</name>
11     <value>4</value>
12     <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
13 </property>
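An alternative that leaves the NodeManager settings alone is to request a larger container for the application master, which raises the virtual memory cap while keeping the same 2.1 ratio. A sketch, assuming the default 512 MB AM size is what pushes the 1 GB container over its limit:

spark-shell --master=yarn --deploy-mode=client --conf spark.yarn.am.memory=1g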


After making the change, restart Hadoop so YARN picks up the new yarn-site.xml, then launch spark-shell again.
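A minimal sketch of that restart, assuming a standard setup with the Hadoop sbin scripts on the PATH:

stop-yarn.sh
start-yarn.sh
spark-shell --master=yarn --deploy-mode=client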
