Problem description:
A colleague on the project committed code consisting of an abstract class holding SparkContext and SparkSession objects, three subclasses that instantiate this abstract class (each handling one of three tasks), and a single Main class that calls each subclass's task-processing method to run the computation. It runs fine locally (local mode), but once deployed to the test server it throws the following exception:
18/07/03 14:11:58 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, emr-worker-1.cluster-65494, executor 1): java.lang.ExceptionInInitializerError
    at task.api_monitor.HttpStatusTask$$anonfun$2.apply(HttpStatusTask.scala:91)
    at task.api_monitor.HttpStatusTask$$anonfun$2.apply(HttpStatusTask.scala:85)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:193)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: A master URL must be set in your configuration
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:376)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:918)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:910)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:910)
    at task.AbstractApiMonitorTask.<init>(AbstractApiMonitorTask.scala:22)
    at task.api_monitor.HttpStatusTask$.<init>(HttpStatusTask.scala:18)
    at task.api_monitor.HttpStatusTask$.<clinit>(HttpStatusTask.scala)
    ... 12 more
Analyzing the exception shows that the subclasses fail to initialize because no master URL was specified in their configuration.
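The structure that triggers this looks roughly like the following. This is a minimal sketch, not the actual project code; the class and member names are modeled on the ones visible in the stack trace:

```scala
import org.apache.spark.sql.SparkSession

// Problematic pattern: the SparkSession is built when the abstract class
// (and therefore each subclass object) is initialized. When an executor
// deserializes a task closure that references HttpStatusTask, it triggers
// this same static initialization, but no master URL is configured on the
// executor side, so initialization fails with ExceptionInInitializerError.
abstract class AbstractApiMonitorTask {
  val spark: SparkSession = SparkSession.builder()
    .appName("api-monitor")
    .getOrCreate()
  val sc = spark.sparkContext
}

object HttpStatusTask extends AbstractApiMonitorTask {
  def process(): Unit = {
    sc.parallelize(Seq("http://example.com"))
      // Referencing a member of this object inside the closure forces
      // the executor to initialize HttpStatusTask$, reproducing the error.
      .map(url => (url, spark.sparkContext.appName))
      .collect()
  }
}
```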
Solution: after searching online and reviewing our code structure, I found that the Spark run logs (running in yarn mode) contain three yarn.Client entries, meaning each subclass task gets its own corresponding driver; in other words, each subclass task instantiates its own SparkSession when it starts. But one Spark application corresponds to one main function running inside one driver, and that driver holds the single corresponding instance (the SparkContext). The driver is responsible for distributing resources and data to each node, so if you create the instance outside the main function, the driver has no way to distribute it. That is why code written this way succeeds in local mode but fails in distributed mode. (Reference: https://blog.csdn.net/sinat_33761963/article/details/51723175) Therefore, restructuring the code so that the resources shared through the abstract class are created inside the main function solved the problem.
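One way to restructure it, sketched under the same hypothetical names as above: build the SparkSession once in main and pass it into the tasks, so no Spark objects live in class or object initializers.

```scala
import org.apache.spark.sql.SparkSession

// Fixed pattern: task classes hold only logic; the single SparkSession is
// created inside main and passed in explicitly.
abstract class AbstractApiMonitorTask(spark: SparkSession) {
  protected val sc = spark.sparkContext
  def process(): Unit
}

class HttpStatusTask(spark: SparkSession) extends AbstractApiMonitorTask(spark) {
  def process(): Unit = {
    // Copy needed values into locals so the closure does not capture
    // the enclosing object (and with it the SparkSession).
    val urls = Seq("http://example.com")
    sc.parallelize(urls).map(url => (url, url.length)).collect()
  }
}

object Main {
  def main(args: Array[String]): Unit = {
    // On the cluster the master URL comes from spark-submit (--master yarn),
    // so it only needs to be set explicitly for local runs.
    val spark = SparkSession.builder().appName("api-monitor").getOrCreate()
    new HttpStatusTask(spark).process()
    spark.stop()
  }
}
```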
Summary: the problem above came down to an incomplete understanding of how Spark runs in distributed mode; still more to learn!