The first time I saw Spark crash: an OOM in the Spark Shell!



I wanted to try Spark graph computation, so I used Google's web-Google.txt dataset, 71.8 MB in size.


Running the command:

val graph = GraphLoader.edgeListFile(sc, "hdfs://192.168.0.10:9000/input/graph/web-Google.txt")


While building the graph, it churned for a long time and then dropped straight back to the console.
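A plausible mitigation (an assumption on my part, since the post never shows the shell's launch flags) is to restart spark-shell with larger driver and executor heaps. GraphLoader.edgeListFile materializes the whole edge list as JVM objects, so a ~72 MB text file can easily exceed the default heap once deserialized:

```shell
# Sketch, not from the original post: relaunch the shell with more memory.
# --driver-memory and --executor-memory are standard spark-shell flags;
# the master URL and 4g sizes below are assumptions for this cluster.
./bin/spark-shell \
  --master spark://192.168.0.10:7077 \
  --driver-memory 4g \
  --executor-memory 4g
```

This is a configuration sketch rather than a guaranteed fix; the right sizes depend on how much RAM the worker machines actually have.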

What the console showed:


scala> val graph = GraphLoader.edgeListFile(sc,"hdfs://192.168.0.10:9000/input/graph/web-Google.txt")

[Stage 0:>                                                          (0 + 2) / 2]./bin/spark-shell: line 44:  3592 Killed                  "${SPARK_HOME}"/bin/spark-submit --class org.apache.spark.repl.Main --name "Spark shell" "$@"


The second Spark crash!

I ran a PageRank test, again on Google's web-Google.txt dataset, 71.8 MB in size.


val rank = graph.pageRank(0.01).vertices

The Spark cluster has one master and two workers. The result: slave2 died.



16/11/14 09:51:44 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20161114084026-0000/1333 on hostPort 192.168.0.5:42898 with 4 cores, 1024.0 MB RAM

16/11/14 09:51:44 INFO client.AppClient$ClientEndpoint: Executor updated: app-20161114084026-0000/1333 is now RUNNING

16/11/14 09:51:44 INFO client.AppClient$ClientEndpoint: Executor updated: app-20161114084026-0000/1333 is now EXITED (Command exited with code 1)

16/11/14 09:51:44 INFO cluster.SparkDeploySchedulerBackend: Executor app-20161114084026-0000/1333 removed: Command exited with code 1

16/11/14 09:51:44 INFO cluster.SparkDeploySchedulerBackend: Asked to remove non-existent executor 1333

16/11/14 09:51:44 INFO client.AppClient$ClientEndpoint: Executor added: app-20161114084026-0000/1334 on worker-20161114083607-192.168.0.5-42898 (192.168.0.5:42898) with 4 cores
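Note the "Granted executor ID ... with 4 cores, 1024.0 MB RAM" line: each executor is launched with only the 1 GB default and then exits with code 1, so the master keeps re-granting executors in a loop. One way to raise the defaults cluster-wide is conf/spark-defaults.conf; the values below are illustrative assumptions, not settings from the post:

```shell
# conf/spark-defaults.conf (sketch; 4g is an assumed size, tune to the workers' RAM)
spark.driver.memory     4g
spark.executor.memory   4g
```

With per-executor memory raised, PageRank's iterative shuffles are less likely to kill the worker's executors outright.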

The master then just kept trying to reconnect to slave2, over and over. Heh.

