Exporting HBase Data to HDFS

1. Purpose

Export a copy of the data in an HBase table to HDFS.

Two approaches are covered here: writing our own MapReduce program, and using the export tool that ships with HBase.

2. Exporting HBase data to HDFS with a custom MapReduce program

2.1 First, look at the data currently in the t1 table in HBase. The table has a single column family f1 with the columns name, age, gender, and birthday (the original screenshot of the table contents is not reproduced here).

2.2 The MapReduce code is as follows:

The most important statements are:

job.setNumReduceTasks(0);// Why set the number of reducers to 0? Left for the reader to think about.
TableMapReduceUtil.initTableMapperJob(args[0], new Scan(), HBaseToHdfsMapper.class, Text.class, Text.class, job);// This line specifies which HBase table feeds the job; the Scan can apply filter operations to that table.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class HBaseToHdfs {
	public static void main(String[] args) throws Exception {
		Configuration conf = HBaseConfiguration.create();
		Job job = Job.getInstance(conf, HBaseToHdfs.class.getSimpleName());
		job.setJarByClass(HBaseToHdfs.class);

		job.setMapperClass(HBaseToHdfsMapper.class);
		job.setMapOutputKeyClass(Text.class);
		job.setMapOutputValueClass(Text.class);

		// Map-only job: with 0 reducers, mapper output is written straight to HDFS.
		job.setNumReduceTasks(0);

		// args[0] names the source table; the Scan can restrict columns,
		// apply filters, or limit the time range.
		TableMapReduceUtil.initTableMapperJob(args[0], new Scan(),
				HBaseToHdfsMapper.class, Text.class, Text.class, job);
		//TableMapReduceUtil.addDependencyJars(job);

		job.setOutputFormatClass(TextOutputFormat.class);
		FileOutputFormat.setOutputPath(job, new Path(args[1]));

		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}

	public static class HBaseToHdfsMapper extends TableMapper<Text, Text> {
		private static final byte[] CF = Bytes.toBytes("f1");
		private final Text outKey = new Text();
		private final Text outValue = new Text();

		@Override
		protected void map(ImmutableBytesWritable key, Result value, Context context)
				throws IOException, InterruptedException {
			// key here is the HBase row key
			outKey.set(key.get());
			outValue.set(latest(value, "name") + "\t" + latest(value, "age")
					+ "\t" + latest(value, "gender") + "\t" + latest(value, "birthday"));
			context.write(outKey, outValue);
		}

		// Returns the newest value of f1:qualifier, or "NULL" if the cell
		// is absent or empty. A null check replaces the original empty
		// try/catch blocks, which silently swallowed NullPointerExceptions.
		private static String latest(Result result, String qualifier) {
			Cell cell = result.getColumnLatestCell(CF, Bytes.toBytes(qualifier));
			if (cell == null) {
				return "NULL";
			}
			byte[] v = CellUtil.cloneValue(cell);
			return v.length == 0 ? "NULL" : Bytes.toString(v);
		}
	}
}
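The mapper's null handling can be seen in isolation. Below is a minimal, HBase-free sketch of the same formatting rule (the class and method names RowFormatter, toField, and toLine are illustrative, not part of the job above):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.stream.Collectors;

public class RowFormatter {
	// Mirrors the mapper: a missing or empty cell value becomes the literal "NULL".
	static String toField(byte[] value) {
		return (value == null || value.length == 0)
				? "NULL"
				: new String(value, StandardCharsets.UTF_8);
	}

	// Joins the column values into one tab-separated output line.
	static String toLine(byte[]... values) {
		return Arrays.stream(values)
				.map(RowFormatter::toField)
				.collect(Collectors.joining("\t"));
	}

	public static void main(String[] args) {
		byte[] name = "zhangsan".getBytes(StandardCharsets.UTF_8);
		byte[] age = "10".getBytes(StandardCharsets.UTF_8);
		// gender and birthday are absent for this row
		// prints zhangsan, 10, NULL, NULL separated by tabs
		System.out.println(toLine(name, age, null, null));
	}
}
```

Keeping this rule in one place makes the "NULL" placeholder convention easy to change later, e.g. to an empty string or a Hive \N marker.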

2.3 Package the job into a jar and run it:

hadoop jar hbaseToDfs.jar com.lanyun.hadoop2.HBaseToHdfs t1 /t1

2.4 Inspect the files on HDFS:

(my_python_env)[root@hadoop26 ~]# hadoop fs -cat /t1/part*
1    zhangsan    10    male    NULL
2    lisi    NULL    NULL    NULL
3    wangwu    NULL    NULL    NULL
4    zhaoliu    NULL    NULL    1993
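Because TextOutputFormat writes plain key\tvalue lines, a downstream consumer needs nothing more than string splitting to read this export back. A minimal sketch (the column order matches the mapper's output; ExportLineParser and parseRow are hypothetical names, not part of the job above):

```java
import java.util.HashMap;
import java.util.Map;

public class ExportLineParser {
	// Column order produced by the export job: row key, then the four f1 columns.
	private static final String[] COLUMNS = {"rowkey", "name", "age", "gender", "birthday"};

	// Splits one exported line into a column -> value map; "NULL" marks an absent cell.
	static Map<String, String> parseRow(String line) {
		String[] fields = line.split("\t", -1);
		Map<String, String> row = new HashMap<>();
		for (int i = 0; i < COLUMNS.length && i < fields.length; i++) {
			row.put(COLUMNS[i], fields[i]);
		}
		return row;
	}

	public static void main(String[] args) {
		Map<String, String> row = parseRow("4\tzhaoliu\tNULL\tNULL\t1993");
		// prints: zhaoliu born 1993
		System.out.println(row.get("name") + " born " + row.get("birthday"));
	}
}
```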

At this point the export has succeeded.

3. Exporting with HBase's built-in tool

The tool that ships with HBase is org.apache.hadoop.hbase.mapreduce.Export.

3.1 How do we use it? Run it without arguments to print the usage message:

(my_python_env)[root@hadoop26 ~]# hbase org.apache.hadoop.hbase.mapreduce.Export
ERROR: Wrong number of arguments: 0
Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> [<starttime> [<endtime>]] [^[regex pattern] or [Prefix] to filter]]

3.2 Export with the tool:

hbase org.apache.hadoop.hbase.mapreduce.Export t1 /t2

This completes the export. Note that unlike the custom job above, Export writes serialized Result objects as SequenceFiles, intended to be loaded back with the companion org.apache.hadoop.hbase.mapreduce.Import tool rather than read as plain text.
