2 - Hadoop Learning Journey - MapReduce

MapReduce Design Philosophy

  • Move the computation, not the data.

MapReduce Hello World (Word Count) Processing Flow

(figure: Word Count processing flow)

MapReduce Split Size

  • max.split (200M)
  • min.split (50M)
  • block (128M)
  • split size = max(min.split, min(max.split, block)) = 128M
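In Hadoop this rule is implemented by FileInputFormat.computeSplitSize(blockSize, minSize, maxSize). Below is a minimal standalone sketch of the same formula, using the 200M/50M/128M values from the example above (the class name and sample values are illustrative only):

// SplitSizeDemo.java - a sketch of the split-size rule.
public class SplitSizeDemo {
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long MB = 1024L * 1024L;
        long maxSplit = 200 * MB; // max.split
        long minSplit = 50 * MB;  // min.split
        long block    = 128 * MB; // HDFS block size
        // max(50M, min(200M, 128M)) = 128M: the block size wins here
        System.out.println(computeSplitSize(block, minSplit, maxSplit) / MB + "M");
    }
}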

Mapper

  • The core idea of MapReduce is "divide and conquer"
    • The Mapper is responsible for the "divide" step: it breaks a complex task into a number of "simple tasks" to execute (see WCMapper in the WordCount example at the end)
  • "Simple tasks" here means several things:
    • The data or computation involved is much smaller in scale than the original task;
    • Computation happens close to the data, i.e., each task is assigned to a node that stores the data it needs;
    • These small tasks can run in parallel, with almost no dependencies among them.

(figure: the Mapper stage)

Reduce

  • Aggregates the results of the map stage;
  • The number of reducers is determined by the mapred.reduce.tasks property in mapred-site.xml. The default is 1, and users can override it (usually by adjusting it in the program, as sketched below, rather than changing the xml default)

(figure: the Reduce stage)
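A minimal sketch of overriding the reducer count from the driver (the count 4 and the class name are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReducerCountDemo {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        Job job = Job.getInstance(config, "reducer count demo");
        // Overrides the mapred.reduce.tasks default (1) for this job only;
        // the cluster-wide xml setting stays untouched.
        job.setNumReduceTasks(4);
        System.out.println("reduce tasks: " + job.getNumReduceTasks());
    }
}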

Shuffle (the most complex stage)

  • Reference: "MapReduce: 詳解Shuffle過程" (a detailed walkthrough of the Shuffle process)
  • A step that sits between the mapper and the reducer
  • It re-partitions and regroups the mapper's output into N parts by key, routing the group whose keys fall within a given range to a specific reducer for processing (see the Partitioner sketch below)
  • This simplifies the reduce stage

(figure: the Shuffle process)
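The routing-by-key-range step is pluggable: Hadoop's default is HashPartitioner, and a custom Partitioner can implement an explicit range rule. A hypothetical sketch (RangePartitioner is not a Hadoop class):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes words starting with a-m to reducer 0 and all other keys to
// reducer 1, so each reducer receives a contiguous key range.
public class RangePartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (numPartitions < 2 || key.getLength() == 0) {
            return 0; // a single reducer (or an empty key) gets partition 0
        }
        char first = Character.toLowerCase(key.toString().charAt(0));
        return (first >= 'a' && first <= 'm') ? 0 : 1;
    }
}

It would be wired in with job.setPartitionerClass(RangePartitioner.class) together with job.setNumReduceTasks(2).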

Appendix: WordCount (Hello World) Design

//WCJob.java

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.StringUtils;

/**
 * MapReduce Hello World program: word count.
 *
 * WCJob
 * @since V1.0.0
 * Created by SET on 2016-09-11 11:35:15
 */
public class WCJob {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        config.set("fs.defaultFS", "hdfs://master:8020");
        config.set("yarn-resourcemanager.hostname", "slave2");

        FileSystem fs = FileSystem.newInstance(config);

        Job job = Job.getInstance(config); // new Job(config) is deprecated

        job.setJobName("word count");

        job.setJarByClass(WCJob.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setMapperClass(WCMapper.class);
        job.setReducerClass(WCReducer.class);
        
        // The reducer doubles as a combiner: word counts are partial sums,
        // which are safe to pre-aggregate on the map side.
        job.setCombinerClass(WCReducer.class);

        FileInputFormat.addInputPath(job, new Path("/user/wc/wc"));
        Path outputpath = new Path("/user/wc/output");
        // The job fails if the output directory already exists, so clear
        // any leftovers from a previous run first.
        if(fs.exists(outputpath)) {
            fs.delete(outputpath, true);
        }
        FileOutputFormat.setOutputPath(job, outputpath);


        boolean flag = job.waitForCompletion(true);
        if(flag) {
            System.out.println("Job success@!");
        }
    }

    public static class WCMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            /*
             * Input line format: "hadoop hello world"
             * The map task receives one line at a time and splits it on spaces.
             */
            String[] strs = StringUtils.split(value.toString(), ' ');
            for(String word : strs) {
                context.write(new Text(word), new IntWritable(1));
            }
        }
    }

    public static class WCReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            // Sum the 1s emitted for this word across all mappers.
            int sum = 0;

            for(IntWritable intWritable : values) {
                sum += intWritable.get();
            }
            context.write(key, new IntWritable(sum)); // key is already a Text
        }
    }
}
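To run it, the class would typically be packaged into a jar and submitted through the hadoop launcher (the jar name wc.jar is hypothetical):

hadoop jar wc.jar WCJob

The input directory /user/wc/wc and the output directory /user/wc/output are hard-coded in main(), so the input must already exist on the configured HDFS; the output directory is deleted and recreated on each run.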