Hive Learning Notes

Hive typically processes very large input volumes. Writing Hive queries requires some MapReduce knowledge and solid HQL skills; a single mistake can waste several hours of run time.

Running HQL without entering the Hive CLI:
bin/hive -e 'select * from t1'

Running an HQL script in non-interactive mode:
bin/hive -f hive.sql

Running an HQL initialization script and then entering interactive mode:
bin/hive -i hive.sql
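
For comparison, a minimal sketch of the three invocation modes, assuming a hypothetical script file count_t1.sql that contains a single query:

-- count_t1.sql (hypothetical)
SELECT count(*) FROM t1;

bin/hive -e 'SELECT count(*) FROM t1'    # run the query inline and exit
bin/hive -f count_t1.sql                 # run the script and exit
bin/hive -i count_t1.sql                 # run the script, then stay in the interactive shell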

HQL data types
data_type
: primitive_type
| array_type
| map_type
| struct_type

primitive_type
: TINYINT
| SMALLINT
| INT
| BIGINT
| BOOLEAN
| FLOAT
| DOUBLE
| STRING

array_type
: ARRAY < data_type >

map_type
: MAP < primitive_type, data_type >

struct_type
: STRUCT < col_name : data_type [COMMENT col_comment], ...>
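
To make the complex types concrete, here is a hedged sketch of a table that uses all three; the table and column names are made up:

CREATE TABLE employee (
  name    STRING,
  skills  ARRAY<STRING>,
  scores  MAP<STRING, INT>,
  address STRUCT<city:STRING, zip:STRING>
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  COLLECTION ITEMS TERMINATED BY ','
  MAP KEYS TERMINATED BY ':';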

HQL syntax:

Creating a table
Creating a table from scratch:
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
[(col_name data_type [COMMENT col_comment], ...)]
[COMMENT table_comment]
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
[ROW FORMAT row_format]
[STORED AS file_format]
[LOCATION hdfs_path]
[TBLPROPERTIES (property_name=property_value, ...)]
[AS select_statement]

[EXTERNAL] marks an external table: Hive does not copy the data into its own warehouse; the data stays at the path given by LOCATION, or at the default location if no LOCATION is specified.
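
A hedged sketch of an external table, assuming a hypothetical HDFS directory /data/access_log that already holds tab-delimited files:

CREATE EXTERNAL TABLE IF NOT EXISTS access_log (
  ip  STRING,
  url STRING,
  ts  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/access_log';

Dropping an external table removes only the metadata; the files under /data/access_log are left in place.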

PARTITIONED BY declares partition columns, which Hive adds to the table automatically. Once a table is partitioned, every load or insert must specify the partition values, in the form PARTITION (p1=1, p2=2).
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS] controls bucketing: it splits the data of each partition into multiple files, which can then serve as input to multiple mappers (see the sketch below).
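
A hedged sketch that combines partitioning and bucketing; the table and column names are made up, and raw_page_view is assumed to be an existing staging table with the same columns plus dt:

CREATE TABLE page_view (
  userid BIGINT,
  url    STRING
)
PARTITIONED BY (dt STRING)
CLUSTERED BY (userid) SORTED BY (userid ASC) INTO 32 BUCKETS;

-- every write must name the partition values
-- (older Hive releases may also need: SET hive.enforce.bucketing = true;)
INSERT OVERWRITE TABLE page_view PARTITION (dt='2014-01-01')
SELECT userid, url FROM raw_page_view WHERE dt='2014-01-01';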

ROW FORMAT specifies the field, collection, and line delimiters, or a custom SerDe (a SerDe sketch follows the grammar below):
: DELIMITED [FIELDS TERMINATED BY char] [COLLECTION ITEMS TERMINATED BY char]
[MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
| SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]
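
For the SERDE branch, a hedged sketch that splits space-delimited log lines with the regex SerDe; in older releases the class ships in hive-contrib as org.apache.hadoop.hive.contrib.serde2.RegexSerDe, so adjust the name to your build, and note that this SerDe expects all columns to be STRING:

CREATE TABLE space_log (
  host    STRING,
  request STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "(\\S+) (\\S+)",
  "output.format.string" = "%1$s %2$s"
)
STORED AS TEXTFILE;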

STORED AS
: SEQUENCEFILE
| TEXTFILE
| INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname

AS select_statement
select_statement is a SELECT statement; the new table is created from, and populated with, its result (create-table-as-select). For example:
CREATE TABLE new_key_value_store
ROW FORMAT SERDE "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"
STORED AS RCFile AS
SELECT (key % 1024) new_key, concat(key, value) key_value_pair
FROM key_value_store
SORT BY new_key, key_value_pair;

Creating from an existing table:
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
LIKE existing_table_name
[LOCATION hdfs_path]
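
This copies only the schema and storage format of the existing table, not its data. A hedged usage sketch, cloning the hypothetical page_view table from above:

CREATE TABLE IF NOT EXISTS page_view_copy
LIKE page_view;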

Dropping a table
DROP TABLE [IF EXISTS] table_name

Hive UDFs
Hive can call Java methods directly through reflection, using the reflect() UDF. For example:
SELECT reflect("java.lang.String", "valueOf", 1)
FROM src LIMIT 1;

"java.lang.String" is the fully qualified class name,
"valueOf" is a public static method of that class, and
1 is the argument; further arguments follow in order (param1, param2, ...).
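
A hedged variant that passes two arguments to a static method (src is the same helper table as above; any table with at least one row works):

SELECT reflect("java.lang.Math", "max", 2, 3)
FROM src LIMIT 1;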

Number of map/reduce jobs in a join:

SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key1)
One job: when every ON clause uses the same column of one table (here b.key1), the joins are compiled into a single map/reduce job.

SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key2)
Two jobs: b.key1 and b.key2 are different columns, so each additional join on a different key adds another map/reduce job.
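
When in doubt, EXPLAIN shows how many stages a query compiles into; a hedged sketch against the tables above:

EXPLAIN
SELECT a.val, b.val, c.val
FROM a JOIN b ON (a.key = b.key1)
JOIN c ON (c.key = b.key2);

For this query the plan should list two map/reduce stages, while the b.key1-only variant compiles to one.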
