(An original post by 番茄醬 on 博客園)
On my system, hadoop-2.5.1 is installed under /opt/lib64/hadoop-2.5.1, hadoop-2.2.0 lives at /home/hadoop/下載/hadoop-2.2.0, and Eclipse is installed at /opt/programming/atd-bundle/eclipse.
Our teacher wants us to write MapReduce programs, so I need to set up the Hadoop plugin for Eclipse. Installing Hadoop on Windows kept running into inexplicable problems, so I simply did everything on Linux instead; it actually turns out to be simpler there.
Here is how to configure it.
This setup does not actually build a plugin for hadoop-2.5.1 directly; it builds the hadoop-2.2.0 plugin, which is compatible with hadoop-2.5.1. (What this means is that the plugin source in step 1 below was developed against hadoop-2.2.0 and depends on it at build time, which is why we also need to download hadoop-2.2.0.) So there are three things to download: the Hadoop plugin source, ant (installed online on Fedora 20), and hadoop-2.2.0.tar.gz.
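The build step itself was only shown in screenshots, so here is roughly what it looks like. I am assuming the plugin source is the commonly used hadoop2x-eclipse-plugin project; the directory layout and ant properties below come from that project, and the paths match my machine, so adjust them for yours:

```shell
# Build the Eclipse plugin jar against hadoop-2.2.0 (paths are specific to my setup)
cd hadoop2x-eclipse-plugin/src/contrib/eclipse-plugin
ant jar -Dversion=2.2.0 \
    -Dhadoop.home=/home/hadoop/下載/hadoop-2.2.0 \
    -Declipse.home=/opt/programming/atd-bundle/eclipse

# Drop the resulting jar into Eclipse's plugins directory, then restart Eclipse
cp build/contrib/eclipse-plugin/hadoop-eclipse-plugin-2.2.0.jar \
   /opt/programming/atd-bundle/eclipse/plugins/
```

The exact location of the built jar can differ between versions of the plugin source, so check ant's output if the cp path does not exist.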
Open Eclipse and make a few configuration changes.
First, select the Hadoop installation path,
then click OK.
Then open the Hadoop Location view and create a new location: right-click in the view and choose to define a new Hadoop location. The DFS Master host and port should match fs.defaultFS in your core-site.xml; on my machine that is localhost:9000.
At this point the Hadoop plugin for Eclipse is fully configured. If Hadoop is running, you can browse and operate on DFS directly from within Eclipse.
By the way, if you want to run the WordCount program, you need to find WordCount.java in the Hadoop source package. There are plenty of examples in the directory hadoop-2.5.1-src/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples.
Here is the code, with the package declaration omitted:
```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length < 2) {
      System.err.println("Usage: wordcount <in> [<in>...] <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    for (int i = 0; i < otherArgs.length - 1; ++i) {
      FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
    }
    FileOutputFormat.setOutputPath(job,
        new Path(otherArgs[otherArgs.length - 1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```
Before running, configure the run arguments: hdfs://localhost:9000/input hdfs://localhost:9000/output. Then choose Run As -> Run on Hadoop (make sure Hadoop is started first).
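If you prefer the terminal over Eclipse's run dialog, the equivalent invocation would look something like this (the jar name here is hypothetical; export WordCount to a jar from Eclipse first):

```shell
# Run the exported jar against the same HDFS paths used in the Eclipse run config
cd /opt/lib64/hadoop-2.5.1
bin/hadoop jar wordcount.jar WordCount \
    hdfs://localhost:9000/input hdfs://localhost:9000/output
```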
Place two files in the input folder, say file1.txt and file2.txt; after the run finishes, the program creates an output folder containing the results.
file1.txt contents (screenshot)
file2.txt contents (screenshot)
Contents of the files under output (screenshot)
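Since the result screenshot is just an image, here is a quick local simulation of what the output looks like. The sample file contents are made up; the real job writes one word, a tab, and its count per line into output/part-r-00000:

```shell
# Made-up sample inputs standing in for file1.txt / file2.txt
printf 'hello hadoop\nhello world\n' > file1.txt
printf 'goodbye hadoop\n' > file2.txt

# Approximate WordCount's output format locally: word<TAB>count, sorted by word
cat file1.txt file2.txt | tr -s ' ' '\n' | sort | uniq -c | awk '{print $2 "\t" $1}'
# prints (tab-separated): goodbye 1 / hadoop 2 / hello 2 / world 1
```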