Using the Eclipse Plugin for Hadoop 2.6.2

Reposting is welcome, but please credit the source and put a clearly visible link to the original article on the page.

Original post: http://www.cnblogs.com/zdfjf/p/5178197.html

First, the Eclipse plugin can be downloaded here: http://download.csdn.net/download/zdfjf/9421244

  • 1. Installing the plugin

After downloading the plugin, put it in the plugins folder under your Eclipse installation directory and restart Eclipse. You will then see a new DFS Locations entry in the Project Explorer window, which corresponds to the files stored in HDFS. No directory tree shows up under it yet; don't worry, it will appear once the configuration in the next step is done.
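For reference, this means the jar ends up directly inside Eclipse's plugins directory, along the lines of the path below (the exact jar file name depends on which build you downloaded, so treat it as illustrative):

    <eclipse-install-dir>/plugins/hadoop-eclipse-plugin-2.6.2.jar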

 

I just remembered that there is an article on cnblogs that covers this configuration part very well, and I don't think I could write it any better, so I won't waste time repeating it. Follow the Xiapi Studio (蝦皮工作室) post at http://www.cnblogs.com/xia520pi/archive/2012/05/20/2510723.html to complete this part of the configuration. What I want to cover below are the problems that can still keep your program from running after the configuration is finished. After a lot of debugging, here are the code and the corresponding settings that worked for me.

  • 2. The code
 1 /**
 2  * Licensed to the Apache Software Foundation (ASF) under one
 3  * or more contributor license agreements.  See the NOTICE file
 4  * distributed with this work for additional information
 5  * regarding copyright ownership.  The ASF licenses this file
 6  * to you under the Apache License, Version 2.0 (the
 7  * "License"); you may not use this file except in compliance
 8  * with the License.  You may obtain a copy of the License at
 9  *
10  *     http://www.apache.org/licenses/LICENSE-2.0
11  *
12  * Unless required by applicable law or agreed to in writing, software
13  * distributed under the License is distributed on an "AS IS" BASIS,
14  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15  * See the License for the specific language governing permissions and
16  * limitations under the License.
17  */
18 package org.apache.hadoop.examples;
19 
20 import java.io.IOException;
21 import java.util.StringTokenizer;
22 
23 import org.apache.hadoop.conf.Configuration;
24 import org.apache.hadoop.fs.Path;
25 import org.apache.hadoop.io.IntWritable;
26 import org.apache.hadoop.io.Text;
27 import org.apache.hadoop.mapreduce.Job;
28 import org.apache.hadoop.mapreduce.Mapper;
29 import org.apache.hadoop.mapreduce.Reducer;
30 import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
31 import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
32 import org.apache.hadoop.util.GenericOptionsParser;
33 
34 public class WordCount {
35 
36   public static class TokenizerMapper 
37        extends Mapper<Object, Text, Text, IntWritable>{
38     
39     private final static IntWritable one = new IntWritable(1);
40     private Text word = new Text();
41       
42     public void map(Object key, Text value, Context context
43                     ) throws IOException, InterruptedException {
44       StringTokenizer itr = new StringTokenizer(value.toString());
45       while (itr.hasMoreTokens()) {
46         word.set(itr.nextToken());
47         context.write(word, one);
48       }
49     }
50   }
51   
52   public static class IntSumReducer 
53        extends Reducer<Text,IntWritable,Text,IntWritable> {
54     private IntWritable result = new IntWritable();
55 
56     public void reduce(Text key, Iterable<IntWritable> values, 
57                        Context context
58                        ) throws IOException, InterruptedException {
59       int sum = 0;
60       for (IntWritable val : values) {
61         sum += val.get();
62       }
63       result.set(sum);
64       context.write(key, result);
65     }
66   }
67 
68   public static void main(String[] args) throws Exception {
69     System.setProperty("HADOOP_USER_NAME", "hadoop");
70     Configuration conf = new Configuration();
71     conf.set("mapreduce.framework.name", "yarn");
72     conf.set("yarn.resourcemanager.address", "192.168.0.1:8032");
73     conf.set("mapreduce.app-submission.cross-platform", "true");
74     String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
75     if (otherArgs.length < 2) {
76       System.err.println("Usage: wordcount <in> [<in>...] <out>");
77       System.exit(2);
78     }
79     Job job = new Job(conf, "word count1");
80     job.setJarByClass(WordCount.class);
81     job.setMapperClass(TokenizerMapper.class);
82     job.setCombinerClass(IntSumReducer.class);
83     job.setReducerClass(IntSumReducer.class);
84     job.setOutputKeyClass(Text.class);
85     job.setOutputValueClass(IntWritable.class);
86     for (int i = 0; i < otherArgs.length - 1; ++i) {
87       FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
88     }
89     FileOutputFormat.setOutputPath(job,
90       new Path(otherArgs[otherArgs.length - 1]));
91     System.exit(job.waitForCompletion(true) ? 0 : 1);
92   }
93 }

About line 69: my username on Windows is frank while the user on the cluster is hadoop, so I set HADOOP_USER_NAME to hadoop here. Lines 71 and 72 are there because the configuration files were not taking effect; without these two lines the job runs in local mode instead of being submitted to the cluster. Line 73 is there because the submission is cross-platform (Windows -> Linux).
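If you would rather not hard-code the addresses from lines 71 and 72 in the source, an alternative that should also work is to load the cluster's own configuration files on the client side, either by putting the *-site.xml files on the project's classpath or by adding them explicitly. A minimal sketch, assuming the cluster's configuration files have been copied to a local folder D:/hadoop-conf (a hypothetical path); it would stand in for lines 70-73 in main(), using the same Configuration and Path classes already imported at the top of the listing:

    // Build the Configuration from copies of the cluster's config files instead of
    // setting each property by hand. D:/hadoop-conf is a hypothetical local folder.
    Configuration conf = new Configuration();
    conf.addResource(new Path("D:/hadoop-conf/core-site.xml"));
    conf.addResource(new Path("D:/hadoop-conf/hdfs-site.xml"));
    conf.addResource(new Path("D:/hadoop-conf/mapred-site.xml"));
    conf.addResource(new Path("D:/hadoop-conf/yarn-site.xml"));
    // The cross-platform flag from line 73 is still needed for Windows -> Linux submission.
    conf.set("mapreduce.app-submission.cross-platform", "true");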

Then comes the most important step. Attention, attention, attention: important things get said three times.

The plugin is supposed to package the project into a jar automatically, upload it, and run it. But that is broken, and it no longer builds the jar for you. So export the project as a jar yourself, add that jar to the project's build path as an external dependency, and then right-click and choose Run As -> Run on Hadoop. The job should then run successfully.
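A related workaround I have seen for the same packaging problem is to export the jar and then point the job at it in code, so the submission uses that jar even though the plugin does not build one. A minimal sketch, assuming the exported jar sits at D:/wordcount.jar (a hypothetical path); the property would go next to the other conf.set calls (lines 71-73), before the Job is created:

    // Point the submission at the jar exported from Eclipse by hand, since the
    // plugin no longer builds and uploads it automatically. The path is hypothetical.
    conf.set("mapreduce.job.jar", "D:/wordcount.jar");

Also remember that with Run As -> Run on Hadoop the program arguments still have to be supplied (Run Configurations -> Arguments), and they should be HDFS paths reachable from the cluster, for example an input directory such as hdfs://192.168.0.1:9000/user/hadoop/input and an output path that does not exist yet (the 9000 port is an assumption; use whatever fs.defaultFS is on your cluster).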

ps: This is just the approach that worked for me. The problems you run into while configuring this vary a lot, and so do their causes. So search more, think more, and work the problems out.
