Hive file storage formats fall into the following categories:
TEXTFILE
SEQUENCEFILE
RCFILE
Custom formats
TEXTFILE is the default format: if no format is specified when a table is created, this is what it gets, and loading data simply copies the data file to HDFS without any processing.
Tables stored as SequenceFile or RCFile cannot be loaded directly from local files; the data must first be loaded into a TEXTFILE table and then written into the SequenceFile or RCFile table with an INSERT ... SELECT.
TEXTFILE, the default format, stores data uncompressed, so disk usage is high and so is the cost of parsing the data.
It can be combined with Gzip or Bzip2 compression (Hive detects the codec and decompresses automatically when a query runs), but Hive will not split data stored this way, so it cannot be processed in parallel (see the compressed-load sketch after the example below).
Example:
> create table test1(str STRING)
> STORED AS TEXTFILE;
OK
Time taken: 0.786 seconds
# Generate a file of random strings with a script, then load it:
> LOAD DATA LOCAL INPATH '/home/work/data/test.txt' INTO TABLE test1;
Copying data from file:/home/work/data/test.txt
Copying file: file:/home/work/data/test.txt
Loading data to table default.test1
OK
Time taken: 0.243 seconds
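As noted above, a TEXTFILE table can also hold compressed files directly. A minimal sketch (the .gz file name is hypothetical): Hive recognizes the extension and decompresses at query time, but such a file is not split, so a single mapper reads it.

-- Load a gzip-compressed file into the same TEXTFILE table;
-- Hive decompresses it automatically when the table is queried,
-- but the file cannot be split for parallel processing.
LOAD DATA LOCAL INPATH '/home/work/data/test.txt.gz' INTO TABLE test1;
SELECT count(*) FROM test1;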
SequenceFile is a binary file format provided by the Hadoop API; it is easy to use, splittable, and compressible.
SequenceFile supports three compression options: NONE, RECORD, and BLOCK. RECORD compression achieves a low compression ratio, so BLOCK compression is generally recommended.
Example:
> create table test2(str STRING)
> STORED AS SEQUENCEFILE;
OK
Time taken: 5.526 seconds
hive> SET hive.exec.compress.output=true;
hive> SET io.seqfile.compression.type=BLOCK;
hive> INSERT OVERWRITE TABLE test2 SELECT * FROM test1;
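Besides the BLOCK compression type set above, the output codec can also be chosen explicitly. A sketch, assuming the Gzip codec is available on the cluster:

SET hive.exec.compress.output=true;
SET io.seqfile.compression.type=BLOCK;
-- Choose the codec used for the SequenceFile blocks (Gzip here).
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
INSERT OVERWRITE TABLE test2 SELECT * FROM test1;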
RCFILE combines row and columnar storage. First, it partitions the data into row groups, guaranteeing that a record stays within a single block, so reading one record never requires reading multiple blocks. Second, within each block the data is stored column by column, which benefits compression and fast column access. RCFILE example:
> create table test3(str STRING)
> STORED AS RCFILE;
OK
Time taken: 0.184 seconds
> INSERT OVERWRITE TABLE test3 SELECT * FROM test1;
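To check the storage claim discussed next, the on-disk size of the TEXTFILE and RCFILE tables can be compared directly; a sketch assuming the default Hive warehouse location:

hive> dfs -du /user/hive/warehouse/test1;
hive> dfs -du /user/hive/warehouse/test3;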
In practice RCFile has shown no performance advantage so far; it only saves about 10% of storage space, which even its authors admit, and Facebook uses it essentially just for storage. RCFile does not apply any special compression techniques such as arithmetic coding or suffix trees, and it cannot skip large amounts of I/O the way InfoBright does.
ORC is an upgraded version of RCFile with greatly improved performance. Data can be stored compressed, with a compression ratio similar to LZO; compared with a plain text file it can save up to roughly 70% of space. Read performance is also very high, allowing efficient queries. For details see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC. The CREATE TABLE statements are shown below; they also change the NULL representation in the ORC table from the default \N to the empty string ''.
Option 1:
hive> show create table test_orc;
CREATE TABLE `test_orc`(
  `advertiser_id` string,
  `ad_plan_id` string,
  `cnt` bigint)
PARTITIONED BY (
  `day` string,
  `type` tinyint COMMENT '0 as bid, 1 as win, 2 as ck',
  `hour` tinyint)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://namenode/hivedata/warehouse/pmp.db/test_orc'
TBLPROPERTIES (
  'last_modified_by'='pmp_bi',
  'last_modified_time'='1465992624',
  'transient_lastDdlTime'='1465992624')
Option 2:
drop table test_orc;
create table if not exists test_orc(
  advertiser_id string,
  ad_plan_id string,
  cnt BIGINT
)
partitioned by (day string, type TINYINT COMMENT '0 as bid, 1 as win, 2 as ck', hour TINYINT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
with serdeproperties('serialization.null.format' = '')
STORED AS ORC;

Check the result:

hive> show create table test_orc;
CREATE TABLE `test_orc`(
  `advertiser_id` string,
  `ad_plan_id` string,
  `cnt` bigint)
PARTITIONED BY (
  `day` string,
  `type` tinyint COMMENT '0 as bid, 1 as win, 2 as ck',
  `hour` tinyint)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://namenode/hivedata/warehouse/pmp.db/test_orc'
TBLPROPERTIES (
  'transient_lastDdlTime'='1465992726')
Option 3:
drop table test_orc;
create table if not exists test_orc(
  advertiser_id string,
  ad_plan_id string,
  cnt BIGINT
)
partitioned by (day string, type TINYINT COMMENT '0 as bid, 1 as win, 2 as ck', hour TINYINT)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS ORC;

Check the result:

hive> show create table test_orc;
CREATE TABLE `test_orc`(
  `advertiser_id` string,
  `ad_plan_id` string,
  `cnt` bigint)
PARTITIONED BY (
  `day` string,
  `type` tinyint COMMENT '0 as bid, 1 as win, 2 as ck',
  `hour` tinyint)
ROW FORMAT DELIMITED
  NULL DEFINED AS ''
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://namenode/hivedata/warehouse/pmp.db/test_orc'
TBLPROPERTIES (
  'transient_lastDdlTime'='1465992916')
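Beyond the NULL handling above, ORC's compression codec can also be chosen per table via TBLPROPERTIES. A minimal sketch; the 'orc.compress' key and its values (NONE, ZLIB, SNAPPY) come from the ORC wiki page linked earlier, and the table name test_orc_snappy is illustrative:

drop table if exists test_orc_snappy;
create table if not exists test_orc_snappy(
  advertiser_id string,
  ad_plan_id string,
  cnt BIGINT
)
STORED AS ORC
TBLPROPERTIES ('orc.compress'='SNAPPY');  -- NONE, ZLIB (default) or SNAPPY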
When the format of a user's data files cannot be recognized by the current version of Hive, a custom file format can be used.
Users can define custom input and output formats by implementing an InputFormat and an OutputFormat; reference code: .\hive-0.8.1\src\contrib\src\java\org\apache\hadoop\hive\contrib\fileformat\base64
Example:
> create table test4(str STRING)
> stored as
> inputformat 'org.apache.hadoop.hive.contrib.fileformat.base64.Base64TextInputFormat'
> outputformat 'org.apache.hadoop.hive.contrib.fileformat.base64.Base64TextOutputFormat';
$ cat test1.txt
aGVsbG8saGl2ZQ==
aGVsbG8sd29ybGQ=
aGVsbG8saGFkb29w
The test1.txt file contains Base64-encoded content; decoded, the data is:
hello,hive
hello,world
hello,hadoop
Load the data and query it:
hive> LOAD DATA LOCAL INPATH '/home/work/test1.txt' INTO TABLE test4;
Copying data from file:/home/work/test1.txt
Copying file: file:/home/work/test1.txt
Loading data to table default.test4
OK
Time taken: 4.742 seconds
hive> select * from test4;
OK
hello,hive
hello,world
hello,hadoop
Time taken: 1.953 seconds
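Writes go through the table's OutputFormat as well, so rows inserted into test4 should be stored Base64-encoded on HDFS while queries still return plain text. A hedged sketch reusing the earlier test1 table:

-- Insert plain-text rows; Base64TextOutputFormat is expected to
-- Base64-encode them in the files under the test4 table directory.
INSERT OVERWRITE TABLE test4 SELECT str FROM test1;
-- Reading back decodes transparently through Base64TextInputFormat.
SELECT * FROM test4;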
Compared with TEXTFILE and SEQUENCEFILE, RCFILE's columnar storage makes data loading more expensive, but it offers a better compression ratio and faster query response. Since a data warehouse workload is typically write-once, read-many, RCFILE has a clear overall advantage over the other two formats.