In Pig, the default input/output delimiter is the tab character \t, while in Hive the default is the octal \001, i.e. ASCII Ctrl-A:
Oct  Dec  Hex  ASCII_Char
001  1    01   SOH (start of heading)
The official explanation is that Ctrl-A was chosen to minimize collisions with characters that actually appear in the data. A single-character delimiter can be specified in Hive via row format delimited fields terminated by '#'; Pig's single-character delimiter can likewise be specified via PigStorage.
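For concreteness, here is a minimal sketch of specifying a single-character delimiter in each tool; the table, column, and file names are made up for illustration:

-- Hive: load '#'-separated text (hypothetical table/file names)
CREATE TABLE demo_sharp (id string, name string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '#'
STORED AS TEXTFILE;
LOAD DATA LOCAL INPATH 'demo.txt' OVERWRITE INTO TABLE demo_sharp;
-- Pig equivalent: A = LOAD 'demo.txt' USING PigStorage('#');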
But what about multi-character delimiters? Pig fails outright with an error, while Hive only honors the first character and silently ignores the rest.
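A hedged illustration of that limitation (hypothetical table name; the exact behavior may vary by Hive version):

-- Hive accepts this DDL, but per the behavior described above it
-- effectively splits on '|' alone, not on the full '|||'
CREATE TABLE bad_delim (a string, b string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|||';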
Solutions:
Pig supports custom load functions: extend LoadFunc and override a few methods, and you are done.
See: http://my.oschina.net/leejun2005/blog/83825
In Hive, there are two ways to implement multi-character delimiter strings.
The first is RegexSerDe, a serializer/deserializer that ships with Hive and parses rows using a regular expression.
RegexSerDe takes three main parameters:
input.regex
output.format.string
input.regex.case.insensitive
A complete example follows:
add jar /home/june/hadoop/hive-0.8.1-bin/lib/hive_contrib.jar;

CREATE TABLE b (
  c0 string,
  c1 string,
  c2 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'input.regex' = '([^,]*),,,,([^,]*),,,,([^,]*)',
  'output.format.string' = '%1$s %2$s %3$s')
STORED AS TEXTFILE;

cat b.txt
1,,,,2,,,,3
a,,,,b,,,,c
9,,,,5,,,,7

load data local inpath 'b.txt' overwrite into table b;
select * from b;
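With that regex, the three capture groups map to c0, c1, c2, so the select returns (1,2,3), (a,b,c) and (9,5,7). As a side note of mine, not from the original post: newer Hive releases (0.10+, if I recall correctly) ship a built-in org.apache.hadoop.hive.serde2.RegexSerDe that needs no contrib jar and only the input.regex property:

-- Sketch for newer Hive versions (assumes the built-in RegexSerDe is available)
CREATE TABLE b2 (c0 string, c1 string, c2 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES ('input.regex' = '([^,]*),,,,([^,]*),,,,([^,]*)')
STORED AS TEXTFILE;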
REF:
http://grokbase.com/t/hive/user/115sw9ant2/hive-create-table
The second approach: to split fields on a multi-character delimiter, you need to implement a custom InputFormat.

package org.apache.hadoop.mapred;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class MyDemoInputFormat extends TextInputFormat {

    @Override
    public RecordReader<LongWritable, Text> getRecordReader(
            InputSplit genericSplit, JobConf job, Reporter reporter)
            throws IOException {
        reporter.setStatus(genericSplit.toString());
        MyDemoRecordReader reader = new MyDemoRecordReader(
                new LineRecordReader(job, (FileSplit) genericSplit));
        return reader;
    }

    public static class MyDemoRecordReader implements
            RecordReader<LongWritable, Text> {

        LineRecordReader reader;
        Text text;

        public MyDemoRecordReader(LineRecordReader reader) {
            this.reader = reader;
            text = reader.createValue();
        }

        @Override
        public void close() throws IOException {
            reader.close();
        }

        @Override
        public LongWritable createKey() {
            return reader.createKey();
        }

        @Override
        public Text createValue() {
            return new Text();
        }

        @Override
        public long getPos() throws IOException {
            return reader.getPos();
        }

        @Override
        public float getProgress() throws IOException {
            return reader.getProgress();
        }

        @Override
        public boolean next(LongWritable key, Text value) throws IOException {
            // Rewrite each line: replace the multi-character delimiter "|||"
            // with Hive's default field delimiter \001
            Text txtReplace;
            while (reader.next(key, text)) {
                txtReplace = new Text();
                txtReplace.set(text.toString().toLowerCase()
                        .replaceAll("\\|\\|\\|", "\001"));
                value.set(txtReplace.getBytes(), 0, txtReplace.getLength());
                return true;
            }
            return false;
        }
    }
}

The corresponding CREATE TABLE statement is:

create external table IF NOT EXISTS test (
  id string,
  name string
)
partitioned by (day string)
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.MyDemoInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '/log/dw_srclog/test';
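One usage note, with a hypothetical jar path: the class must be compiled against the old mapred API, packaged, and placed on Hive's classpath before the DDL above can resolve it:

add jar /path/to/my-inputformat.jar;  -- hypothetical path to the jar containing MyDemoInputFormat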
Collecting logs into Hive: http://blog.javachen.com/2014/07/25/collect-log-to-hive/
References:
Processing logs in Hive with a custom InputFormat:
http://running.iteye.com/blog/907806
http://superlxw1234.iteye.com/blog/1744970
The principle is simple: Hive's internal field delimiter is \001, so all you need to do is replace your delimiter with \001.
If you need to customize other serialization properties, for example writing NULLs as an empty string, you can likewise configure them through the SerDe:
hive> CREATE TABLE sunwg02 (id int, name STRING)
    > ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
    > WITH SERDEPROPERTIES (
    >   'field.delim'='\t',
    >   'escape.delim'='\\',
    >   'serialization.null.format'=''
    > ) STORED AS TEXTFILE;
OK
Time taken: 0.046 seconds
hive> insert overwrite table sunwg02 select * from sunwg00;
Loading data to table sunwg02
2 Rows loaded to sunwg02
OK
Time taken: 18.756 seconds

Check sunwg02's file on HDFS:

[hjl@sunwg src]$ hadoop fs -cat /hjl/sunwg02/attempt_201105020924_0013_m_000000_0
mary 101
tom

The NULL value was not written out as '\N'.
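As an aside of mine (not in the original post), the same property can also be changed on an existing table with standard Hive DDL:

ALTER TABLE sunwg02 SET SERDEPROPERTIES ('serialization.null.format'='');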
PS:
Come to think of it, this feature is really quite simple; it is unclear why the authors never supported it directly. Perhaps a future version will.
1|JOHN|abu1/abu21|key1:1'\004'2'\004'3/key12:6'\004'7'\004'8
2|Rain|abu2/abu22|key2:2'\004'2'\004'3/key22:6'\004'7'\004'8
3|Lisa|abu3/abu23|key3:3'\004'2'\004'3/key32:6'\004'7'\004'8
Looking at the file above, the array fields (the third field, and the arrays nested inside the map in the fourth field) are all arrays, but to keep the array delimiter from colliding with the delimiter of the arrays nested inside the map,
two different delimiters are used: one is /, the other is \004. Why \004?
Because Hive supports 8 levels of delimiters by default, \001 through \008; users may only override \001 through \003, while the remaining levels are recognized and parsed by Hive itself.
So for this example, the CREATE TABLE statement is as follows:
create EXTERNAL table IF NOT EXISTS testSeparator (
  id string,
  name string,
  itemList array<String>,
  kvMap map<string, array<int>>
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
COLLECTION ITEMS TERMINATED BY '/'
MAP KEYS TERMINATED BY ':'
LINES TERMINATED BY '\n'
LOCATION '/tmp/dsap/rawdata/ooxx/3';
Queried in Hive, the arrays and the nested map come back parsed as expected.
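A minimal query sketch, assuming the sample file above has been placed at the table's LOCATION; the expected values are read off row 1 of the data:

-- itemList[0] -> 'abu1'; kvMap['key1'] is the array [1,2,3], so [1] -> 2
SELECT id, itemList[0], kvMap['key1'][1] FROM testSeparator;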
For more on this topic, see Hadoop: The Definitive Guide, Chapter 12 (Hive), pp. 433-434.
[1] HIVE nested ARRAY in MAP data type
http://stackoverflow.com/questions/18812025/hive-nested-array-in-map-data-type