System: CentOS 6.2
Nodes: 1 master, 16 worker nodes
Spark version: 0.8.0
Kernel version: 2.6.32
Below are the problems encountered and how they were resolved:
1. Cause: unknown.
Solution: restart and reconnect.
2. Cause: after the cluster was shut down, the TaskTracker processes on the worker nodes were not stopped.
Solution: after shutting down the cluster, manually find the TaskTracker process on each worker node and kill it.
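A minimal sketch of cleaning up the stray daemon (assuming a Hadoop 1.x worker where the process shows up as TaskTracker in jps):
# run on each worker node after the cluster has been stopped
jps | grep TaskTracker                                # prints "<pid> TaskTracker" if the daemon is still alive
kill $(jps | grep TaskTracker | awk '{print $1}')     # kill it by PID; use -9 only if it refuses to exit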
3. Starting the master fails with "Could not find or load main class":
[root@hw024 spark-0.8.0-incubating]# ./bin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /home/zhangqianlong/spark-0.8.0-incubating/bin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-hw024.out
failed to launch org.apache.spark.deploy.master.Master:
Error: Could not find or load main class org.apache.spark.deploy.master.Master
full log in /home/zhangqianlong/spark-0.8.0-incubating/bin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-hw024.out
Solution: run sbt/sbt clean assembly to rebuild the Spark assembly jar.
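A minimal rebuild-and-restart sequence (assuming $SPARK_HOME is the spark-0.8.0-incubating directory shown in the log above):
cd $SPARK_HOME
sbt/sbt clean assembly       # rebuild the assembly jar that the start scripts put on the classpath
./bin/start-master.sh        # the master should now start without the "Could not find or load main class" error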
4. Running the SparkPi example succeeds, but the worker-side stderr reports an error.
cd $SPARK_HOME
./run-example org.apache.spark.examples.SparkPi spark://hw024:7077
Standard output:
……
14/03/20 11:13:02 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:39) finished in 1.642 s
14/03/20 11:13:02 INFO cluster.ClusterScheduler: Remove TaskSet 0.0 from pool
14/03/20 11:13:02 INFO spark.SparkContext: Job finished: reduce at SparkPi.scala:39, took 1.708775428 s
Pi is roughly 3.13434
However, the stderr on the worker node, /home/zhangqianlong/spark-0.8.0-incubating-bin-hadoop1/work/app-20140320111300-0008/8/stderr, shows the following:
Spark Executor Command: "java" "-cp" ":/home/zhangqianlong/spark-0.8.0-incubating-bin-hadoop1/conf:/home/zhangqianlong/spark-0.8.0-incubating-bin-hadoop1/assembly/target/scala-2.9.3/spark-assembly_2.9.3-0.8.0-incubating-hadoop1.0.4.jar" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.StandaloneExecutorBackend" "akka://spark@hw024:60929/user/StandaloneScheduler" "8" "hw018" "24"
====================================
14/03/20 11:05:15 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
14/03/20 11:05:15 INFO executor.StandaloneExecutorBackend: Connecting to driver: akka://spark@hw024:60929/user/StandaloneScheduler
14/03/20 11:05:15 INFO executor.StandaloneExecutorBackend: Successfully registered with driver
14/03/20 11:05:15 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
14/03/20 11:05:15 INFO spark.SparkEnv: Connecting to BlockManagerMaster: akka://spark@hw024:60929/user/BlockManagerMaster
14/03/20 11:05:15 INFO storage.MemoryStore: MemoryStore started with capacity 323.9 MB.
14/03/20 11:05:15 INFO storage.DiskStore: Created local directory at /tmp/spark-local-20140320110515-9151
14/03/20 11:05:15 INFO network.ConnectionManager: Bound socket to port 59511 with id = ConnectionManagerId(hw018,59511)
14/03/20 11:05:15 INFO storage.BlockManagerMaster: Trying to register BlockManager
14/03/20 11:05:15 INFO storage.BlockManagerMaster: Registered BlockManager
14/03/20 11:05:15 INFO spark.SparkEnv: Connecting to MapOutputTracker: akka://spark@hw024:60929/user/MapOutputTracker
14/03/20 11:05:15 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-81a80beb-fd56-4573-9afe-ca9310d3ea8d
14/03/20 11:05:15 INFO server.Server: jetty-7.x.y-SNAPSHOT
14/03/20 11:05:15 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:56230
14/03/20 11:05:16 ERROR executor.StandaloneExecutorBackend: Driver terminated or disconnected! Shutting down.
This problem had been bugging me for a whole week, F**K!
After several rounds of discussion with other engineers, the conclusion is that this error can be ignored: as long as the job finishes normally and hadoop fs -cat /XX/part-XXX shows the expected output, everything is fine. My guess is that it comes from a timeout-related setting.
5. Cause: too many temporary files are opened during iterative computation.
Solution: raise the open-file limit on every node in /etc/security/limits.conf (note: do not log in remotely over ssh, delete the file and then copy a replacement into place, as that will leave the system unable to log in). The change takes effect after Spark is restarted, which resolves the problem; an example follows.
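A minimal example of the limits.conf change (the 65535 limit is only an illustrative assumption; choose a value that fits your workload):
# append to /etc/security/limits.conf on every node
*    soft    nofile    65535
*    hard    nofile    65535
# after logging in again, verify the new limit
ulimit -n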
6. A program running on Spark crashes because its input data is too large (it runs successfully when the input is small).
The error messages are as follows:
14/04/15 16:14:33 INFO cluster.ClusterTaskSetManager: Starting task 2.0:92 as TID 594 on executor 24: hw028 (ANY)
14/04/15 16:14:33 INFO cluster.ClusterTaskSetManager: Serialized task 2.0:92 as 2119 bytes in 0 ms
14/04/15 16:14:33 INFO client.Client$ClientActor: Executor updated: app-20140415151451-0000/23 is now FAILED (Command exited with code 137)
14/04/15 16:14:33 INFO cluster.SparkDeploySchedulerBackend: Executor app-20140415151451-0000/23 removed: Command exited with code 137
14/04/15 16:14:33 ERROR client.Client$ClientActor: Master removed our application: FAILED; stopping client
14/04/15 16:14:33 ERROR cluster.SparkDeploySchedulerBackend: Disconnected from Spark cluster!
14/04/15 16:14:33 INFO cluster.ClusterScheduler: Remove TaskSet 2.0 from pool
14/04/15 16:14:33 INFO cluster.ClusterScheduler: Ignoring update from TID 590 because its task set is gone
14/04/15 16:14:33 INFO cluster.ClusterScheduler: Ignoring update from TID 593 because its task set is gone
14/04/15 16:14:33 INFO scheduler.DAGScheduler: Failed to run count at PageRank.scala:43
Exception in thread "main" org.apache.spark.SparkException: Job failed: Error: Disconnected from Spark cluster
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:760)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:758)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:758)
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:379)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:441)
at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:149)
Cause: the RDDs are too large; each node holds many RDD partitions and runs out of memory.
Solution: modify the launch command or spark-env.sh and add the parameter -Dspark.akka.frameSize=10000 (the value is in MB); see the sketch below.
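A minimal spark-env.sh sketch (assuming the Spark 0.8 convention of passing JVM system properties via SPARK_JAVA_OPTS; 10000 is the value quoted above):
# in $SPARK_HOME/conf/spark-env.sh on the driver and all worker nodes
export SPARK_JAVA_OPTS="$SPARK_JAVA_OPTS -Dspark.akka.frameSize=10000"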
7. Worker nodes are dropped with "no recent heart beats", caused by input data that is too large or a poor network.
Symptom: WARN storage.BlockManagerMasterActor: Removing BlockManager BlockManagerId(0, hw032, 39782, 0) with no recent heart beats: 46910ms exceeds 45000ms
Cause: because the network is slow or the data volume is too large, a worker node does not report back to the master within the timeout (45 s by default), so the master assumes it has died.
Solution: modify the launch command or spark-env.sh and add the parameter -Dspark.storage.blockManagerHeartBeatMs=60000 (the value is in ms, i.e. 60 seconds); see the sketch below.
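The same kind of spark-env.sh sketch for the heartbeat timeout (again assuming the SPARK_JAVA_OPTS convention; 60000 ms is the value quoted above, raise it further on a very slow network):
# in $SPARK_HOME/conf/spark-env.sh on the driver and all worker nodes
export SPARK_JAVA_OPTS="$SPARK_JAVA_OPTS -Dspark.storage.blockManagerHeartBeatMs=60000"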