hadoop: java.io.FileNotFoundException: File does not exist

1. When running an M/R program through the Hadoop Eclipse plugin, you sometimes get:

Exception in thread "main" java.lang.IllegalArgumentException: Pathname /D:/hadoop/hadoop-2.2.0/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar from hdfs://uat84:49100/D:/hadoop/hadoop-2.2.0/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar is not a valid DFS filename

The error reported on the server side is:

[2014-05-11 19:09:40,019] ERROR [main] (UserGroupInformation.java:1494) org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://uat84:49100/usr/local/bigdata/hbase/lib/hadoop-common-2.2.0.jar

Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://uat84:49100/usr/local/bigdata/hbase/lib/hadoop-common-2.2.0.jar
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheMa

What the hell kind of error is this?! I compared against someone else's working program bit by bit and found it came down to this one line:

Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://uat84:49100");

What does this mean? If you are running locally, i.e. without pulling in the mapred-site, yarn-site and core-site config files, then you must not set this value either. You are running the M/R program locally (fs.default.name defaults to file:///, the local file system), yet this line tells Hadoop to fetch the jars it needs from HDFS, so of course you get the error above. For a local run, simply delete this line and everything works.
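
For reference, here is a minimal sketch of a local driver (the class name LocalRunDriver and the in/out paths are made up for illustration) that leaves fs.default.name at its default so everything resolves against the local file system:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LocalRunDriver {
    public static void main(String[] args) throws Exception {
        // Local run: leave fs.default.name at its default (file:///), so the input,
        // the output and the job's classpath jars are all resolved on local disk.
        Configuration conf = new Configuration();
        // conf.set("fs.default.name", "hdfs://uat84:49100"); // <-- do NOT set this for a local run

        Job job = Job.getInstance(conf, "local-test");
        job.setJarByClass(LocalRunDriver.class);
        // Mapper/Reducer are left at the identity defaults; this sketch only cares about the file system setting.
        FileInputFormat.addInputPath(job, new Path("in"));    // local input directory
        FileOutputFormat.setOutputPath(job, new Path("out")); // local output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}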

Conversely, if you are submitting to the cluster and have pulled in mapred-site and yarn-site but not core-site, and have not set fs.default.name either, then Hadoop does not know the namenode address and cannot upload job.jar to the cluster, so you get the following error:

[2014-05-13 16:35:03,625] INFO [main] (Job.java:1358) org.apache.hadoop.mapreduce.Job - Job job_1397132528617_2814 failed with state FAILED due to: Application application_1397132528617_2814 failed 2 times due to AM Container for appattempt_1397132528617_2814_000002 exited with  exitCode: -1000 due to: File file:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1397132528617_2814/job.jar does not exist
.Failing this attempt.. Failing the application.

Nice, right? So all we need to do is tell Hadoop our namenode address. Pulling in core-site or setting fs.default.name comes to exactly the same thing.
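
And for a cluster submission, a sketch of those two equivalent options (ClusterRunDriver and the core-site.xml path are made up; the address hdfs://uat84:49100 is the one from the logs above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class ClusterRunDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Option 1: put the cluster's config files (core-site.xml, mapred-site.xml,
        // yarn-site.xml) on the classpath, or add core-site.xml explicitly:
        // conf.addResource(new Path("/usr/local/bigdata/hadoop/etc/hadoop/core-site.xml")); // hypothetical path

        // Option 2: set the namenode address directly. fs.default.name still works in 2.2.0,
        // but it is deprecated in favour of fs.defaultFS.
        conf.set("fs.defaultFS", "hdfs://uat84:49100");

        Job job = Job.getInstance(conf, "cluster-submit");
        job.setJarByClass(ClusterRunDriver.class);
        // ... the rest of the mapper/reducer/input/output setup is unchanged ...
    }
}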

 

2. When running the HDFS-to-HBase code, the following error came up:

java.lang.RuntimeException: java.lang.NoSuchMethodException: CopyOfHive2Hbase$Redcuer.<init>()
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:405)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:445)
Caused by: java.lang.NoSuchMethodException: CopyOfHive2Hbase$Redcuer.<init>()
        at java.lang.Class.getConstructor0(Class.java:2715)
        at java.lang.Class.getDeclaredConstructor(Class.java:1987)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:125)
        ... 3 more

Seriously, just from this, who could possibly tell what is wrong with the Reducer?!

Again I compared against someone else's working code line by line and found the cause: my Reducer inner class was not declared static, so Hadoop's reflection could not find a no-argument constructor for it!!!! I could have died. The fixed version:

// Declared static so Hadoop can instantiate it through its no-arg constructor.
public static class Redcuer extends TableReducer<Text, Text, NullWritable> {

    private String[] columname;

    public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        System.out.println("!!!!!!!!!!!!!!!!!!!reduce!!!!!!!" + key.toString());
        Put put = new Put(Bytes.toBytes("test1234"));
        put.add(Bytes.toBytes("fc"), Bytes.toBytes("1"), Bytes.toBytes("2"));
        context.write(NullWritable.get(), put);
    }
}
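
To see why the non-static version blows up, here is a small standalone sketch (not part of the job; the class names are made up): a non-static inner class's constructor secretly takes the enclosing instance as a parameter, so when Hadoop's ReflectionUtils calls Class.getDeclaredConstructor() with no arguments, it hits exactly the NoSuchMethodException from the stack trace above.

public class InnerClassCtorDemo {
    class NonStaticReducer { }        // implicit constructor is NonStaticReducer(InnerClassCtorDemo outer)
    static class StaticReducer { }    // has a genuine no-arg constructor

    public static void main(String[] args) throws Exception {
        System.out.println(StaticReducer.class.getDeclaredConstructor());    // prints the no-arg constructor
        System.out.println(NonStaticReducer.class.getDeclaredConstructor()); // throws NoSuchMethodException
    }
}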
