MapReduce error: Split metadata size exceeded 10000000

Error message:

Failure Info: Job initialization failed: java.io.IOException: Split metadata size exceeded 10000000. Aborting job job_201205162059_1073852
        at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:48)
        at org.apache.hadoop.mapred.JobInProgress.createSplits(JobInProgress.java:817)
        at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:711)
        at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4028)
        at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

Cause: the job's job.splitmetainfo file exceeded the allowed size limit.

1. job.splitmetainfo records the metadata of the job's input splits, i.e. the mapping job split ----> HDFS block && slave node, so the file grows with the number of splits.

   It is stored under: ${hadoop.tmp.dir}/mapred/staging/${user.name}/.staging/jobId/ (see the sketch below for checking its size).
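
You can check how large the file actually is before resubmitting. The sketch below is not from the original post; it assumes the staging layout quoted above, takes the failing job's staging directory as a command-line argument, and the class name is illustrative only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SplitMetaInfoSize {
        public static void main(String[] args) throws Exception {
            // args[0]: the failing job's staging directory, e.g.
            // ${hadoop.tmp.dir}/mapred/staging/${user.name}/.staging/<jobId>
            Configuration conf = new Configuration();
            Path stagingDir = new Path(args[0]);
            FileSystem fs = stagingDir.getFileSystem(conf);

            // job.splitmetainfo sits directly under the job's staging directory
            Path metaInfo = new Path(stagingDir, "job.splitmetainfo");
            FileStatus status = fs.getFileStatus(metaInfo);

            System.out.println(metaInfo + ": " + status.getLen()
                + " bytes (default limit is 10000000)");
        }
    }

If the reported length is above 10000000 bytes, the job will be rejected at initialization exactly as in the stack trace above.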

2. The parameter mapreduce.jobtracker.split.metainfo.maxsize controls the maximum allowed size of this file; the default is 10000000 (about 10 MB). See the sketch below for raising the limit.
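
The stack trace above comes from the Hadoop 1.x JobTracker, which reads this property from its own configuration, so the usual fix is to raise the value (or set it to -1 to disable the check) in mapred-site.xml on the JobTracker and restart it; setting it only on the client job configuration generally does not take effect in MR1. The sketch below is only meant to illustrate the property name and value, and the class is hypothetical.

    import org.apache.hadoop.conf.Configuration;

    public class RaiseSplitMetaInfoLimit {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            // Raise the limit to 50 MB; a value of -1 disables the size check.
            conf.setLong("mapreduce.jobtracker.split.metainfo.maxsize", 50000000L);

            // On an MR1 cluster the JobTracker enforces the value from its own
            // mapred-site.xml, so the cluster-side equivalent is:
            //   <property>
            //     <name>mapreduce.jobtracker.split.metainfo.maxsize</name>
            //     <value>50000000</value>
            //   </property>
            // followed by a JobTracker restart.
            System.out.println(conf.getLong(
                "mapreduce.jobtracker.split.metainfo.maxsize", 10000000L));
        }
    }

The other direction, if raising the limit is not an option, is to produce fewer splits in the first place (for example by merging many small input files), since the metainfo file grows with the split count.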
