Spark Streaming throws OffsetOutOfRangeException when reading data from Kafka

Reference: http://www.jianshu.com/p/791137760c14

 

After the Spark Streaming job had been running for a while, it failed with the following exception:

ERROR JobScheduler: Error running job streaming job 1496767480000 ms.0
org.apache.spark.SparkException: 
Job aborted due to stage failure:
Task
13 in stage 37560.0 failed 4 times,
most recent failure: Lost task 13.3 in stage 37560.0
(TID 3891416, 192.169.2.33, executor 1):
kafka.common.OffsetOutOfRangeException

If a message body is too large and exceeds the default fetch size limit, fetch.message.max.bytes=1m (1 MB), Spark Streaming throws an OffsetOutOfRangeException and the job stops.

 

Solution: in the Kafka consumer configuration, set fetch.message.max.bytes to a larger value.

 

For example, set it to 50 MB (1024 * 1024 * 50):

fetch.message.max.bytes=52428800
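In a Spark Streaming job this property is passed through the kafkaParams map when creating the stream. Below is a minimal sketch using the Kafka 0.8 direct-stream API (the one that fetch.message.max.bytes applies to); the broker address and topic name are placeholders, not values from this incident:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("KafkaLargeMessageExample")
val ssc = new StreamingContext(conf, Seconds(10))

val kafkaParams = Map[String, String](
  "metadata.broker.list"    -> "broker-host:9092",            // placeholder broker
  // raise the per-fetch limit from the 1 MB default to 50 MB
  "fetch.message.max.bytes" -> (1024 * 1024 * 50).toString    // 52428800
)

val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("my-topic"))                          // placeholder topic
```

Note that fetch.message.max.bytes must be at least as large as the broker's maximum allowed message size, otherwise the consumer can still get stuck on an oversized message.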
