Spark: fixing org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow

The error

This error came up while processing data with Spark SQL:

Exception in thread "main" java.sql.SQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3107 in stage 308.0 failed 4 times, most recent failure: Lost task 3107.3 in stage 308.0 (TID 620318, XXX): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 1572864, required: 3236381
Serialization trace:
values (org.apache.spark.sql.catalyst.expressions.GenericInternalRow). To avoid this, increase spark.kryoserializer.buffer.max value.
        at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:299)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
        at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
        at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
        at com.peopleyuqing.tool.SparkJDBC.excuteQuery(SparkJDBC.java:64)
        at com.peopleyuqing.main.ContentSubThree.main(ContentSubThree.java:24)

The fix

import org.apache.spark.SparkConf

// Raise the maximum Kryo buffer (the "Available" size in the error message)
// above the "required" size reported for the failing record.
val sparkConf = new SparkConf()
  .setAppName(Constants.SPARK_NAME_APP)
  .set("spark.kryoserializer.buffer.max", "128m")

The cause

Cause analysis: RDD extends scala.AnyRef with scala.Serializable, and Kryo serializes each task's data into a fixed-capacity buffer. In the error message, "Available" (1,572,864 bytes here) is the configured maximum buffer size and "required" (3,236,381 bytes) is the size of the object being serialized, so any single record larger than spark.kryoserializer.buffer.max aborts the task. When textFile, table reads, and similar operations create large numbers of new RDD, DataFrame, or Dataset objects with wide rows, make sure to raise this value accordingly; see the sketch below.
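To put the related settings in one place, here is a minimal, self-contained sketch (the app name and HDFS path are hypothetical placeholders) showing the serializer itself, the initial buffer size, and the ceiling the buffer may grow to before "Buffer overflow" is thrown:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("kryo-buffer-demo") // placeholder app name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer", "1m")       // initial buffer, default 64k
  .set("spark.kryoserializer.buffer.max", "128m") // growth ceiling, default 64m

val sc = new SparkContext(conf)
// A task whose serialized data exceeds buffer.max fails with
// "Kryo serialization failed: Buffer overflow".
val lines = sc.textFile("hdfs:///path/to/large/table") // placeholder path
println(lines.count())
sc.stop()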
