Spark: "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory" when submitting a job

When submitting a job to the Spark cluster with spark-submit, the following exception appears:

Exception 1: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

Checking the Spark master log file spark-Spark-org.apache.spark.deploy.master.Master-1-hadoop1.out shows:

The Spark Web UI at this point looks as follows:

Reason:

Previously, in the Spark configuration file spark-env.sh, SPARK_LOCAL_IP was set to localhost and the worker memory to 512M. As a result, every Worker shown in the Spark UI mapped to localhost, the default on host hadoop1, differing only in the port assigned to each worker. The maximum memory available was 2.9G, so 5 workers * 512M exceeded what the machine could provide, hence the error above.
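Based on that description, the problematic spark-env.sh probably looked roughly like the sketch below; the exact values are assumptions reconstructed from the text above:

# spark-env.sh (before the fix) -- a sketch; values are assumptions based on the description above
export SPARK_LOCAL_IP=localhost      # every worker binds to localhost on hadoop1
export SPARK_WORKER_MEMORY=512m      # 512M per worker; several workers together exceed what the host can give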

Solution:

Modify the Spark configuration file spark-env.sh: change SPARK_LOCAL_IP from localhost to the corresponding hostname (hadoop1, hadoop2, ...), and set SPARK_WORKER_MEMORY to a value smaller than the memory actually allocated to that machine.
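A minimal sketch of the corrected settings, assuming the node is hadoop1 and can spare about 1g for its worker; the hostname and memory size are assumptions and must be adjusted per machine:

# spark-env.sh (after the fix) -- hostname and memory size are assumptions
export SPARK_LOCAL_IP=hadoop1        # the real hostname of this node, not localhost
export SPARK_WORKER_MEMORY=1g        # must stay below the memory this node can actually provide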

 

Submit the job (WordCount) to the Spark cluster; the corresponding script is as follows:
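A sketch of what such a runSpark.sh submit script typically contains is given below; the class name, jar path, master URL, memory setting, and input path are all assumptions:

#!/bin/bash
# runSpark.sh (sketch) -- class name, jar path, master URL and input path are assumptions
spark-submit \
  --class com.example.WordCount \
  --master spark://hadoop1:7077 \
  --executor-memory 512m \
  --total-executor-cores 2 \
  /home/hadoop/wordcount.jar \
  hdfs://hadoop1:9000/input/words.txt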

Run the script to execute the Spark job; the process is as follows:

./runSpark.sh

The corresponding WordCount result is as follows:

The corresponding Spark UI while the job runs is as follows:
