Using Hive

    Hive is a data warehouse tool built on the Hadoop platform, used mainly for statistical analysis of massive datasets.

    1. Execution modes (cluster and local)

        1.1 Cluster mode: >SET mapred.job.tracker=cluster

        1.2 Local mode: >SET mapred.job.tracker=local
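
            For illustration, a minimal hive-shell sketch that runs one query as a local job and then switches back (trade_detail is the sample table created in section 5.4.1; on a real cluster, mapred.job.tracker would normally be set back to the JobTracker's host:port rather than the literal word "cluster"):

                hive> SET mapred.job.tracker=local;
                hive> SELECT COUNT(*) FROM trade_detail;
                hive> SET mapred.job.tracker=cluster;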

    2. Three ways to access Hive

        2.1 Terminal access

            #hive  or  #hive --service cli

        2.2 Web access, port 9999

            #hive --service hwi &

        2.3 Hive remote service, port 10000

            #hive --service hiveserver &

    3. Data types

        3.1 Primitive data types:

            Type        Size
            tinyint     1 byte  (-128 ~ 127)
            smallint    2 bytes (-2^15 ~ 2^15-1)
            int         4 bytes (-2^31 ~ 2^31-1)
            bigint      8 bytes (-2^63 ~ 2^63-1)
            float       4 bytes, single precision
            double      8 bytes, double precision
            string      variable-length text
            boolean     true/false
        3.2 Complex data types: ARRAY, MAP, STRUCT, UNION
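
            As a hedged sketch of how the complex types are declared and accessed (the table t_complex, its columns, and the delimiters are illustrative, not from the original text):

                create table t_complex(
                    name string,
                    scores array<int>,
                    props map<string,string>,
                    addr struct<city:string, street:string>
                )
                row format delimited fields terminated by '\t'
                collection items terminated by ','
                map keys terminated by ':';

                select name, scores[0], props['city_code'], addr.city from t_complex;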

    4. Data storage

        4.1 Based on HDFS

        4.2 Storage structures: database, table, file, view

        4.3 Data files are parsed simply by specifying the row and column delimiters

    5. Basic operations

        5.1 Create a database: >create database db_name

        5.2 Select a database: >use db_name

        5.3 List tables: show tables;

        5.4 Creating tables

                5.4.1 Internal (managed) table, the default: create table table_name(param_name type1, param_name2 type2, ...) row format delimited fields terminated by '<delimiter>';

                Example: create table trade_detail(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t';

                An internal table is similar to an ordinary database table and is stored on HDFS (the location is given by the hive.metastore.warehouse.dir parameter; every table except external tables is stored there). When the table is dropped, its data and metadata are deleted together.

                Loading data: load data local inpath 'path' into table table_name;

                5.4.2 Partitioned table: create table table_name(param_name type1, param_name2 type2, ...) partitioned by (param_name type) row format delimited fields terminated by '<delimiter>';

                Example: create table td_part(id bigint, account string, income double, expenses double, time string) partitioned by (logdate string) row format delimited fields terminated by '\t';

                Difference from an ordinary table: the data is divided into separate partition files; each partition of the table corresponds to a subdirectory under the table's directory.

                Loading data: load data local inpath 'path' into table table_name partition (parti_param1='value', parti_param2='value', ...);

                Adding a partition: alter table partition_table add partition (daytime='2013-02-04', city='bj');

                Dropping a partition: alter table partition_table drop partition (daytime='2013-02-04', city='bj'); the metadata and data files are deleted, but the directory remains

                5.4.3 External table: create external table td_ext(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t' location 'hdfs_path';

                Loading data: load data inpath 'hdfs_path' into table table_name;

                5.4.4 Bucketed table: hashes the data and stores it across multiple files.
                Creating the table: create table bucket_table(id string) clustered by(id) into 4 buckets;

                Loading data:

                        set hive.enforce.bucketing = true;

                        (the statement above must be executed before data can be loaded)
                        insert into table bucket_table select name from stu;
                        insert overwrite table bucket_table select name from stu;

                When data is loaded into a bucketed table, Hive hashes the bucketing column and takes the value modulo the number of buckets, then writes each row to the corresponding file.

                Sampling the data: select * from bucket_table tablesample(bucket 1 out of 4 on id);
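
                In tablesample(bucket x out of y on col), Hive returns every y-th bucket starting from bucket x; for this 4-bucket table, "bucket 1 out of 2" returns buckets 1 and 3, i.e. roughly half the data. For example:

                        select * from bucket_table tablesample(bucket 1 out of 2 on id);
                        select * from bucket_table tablesample(bucket 2 out of 4 on id);
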
    6. Creating a view: CREATE VIEW v1 AS select * from t1;

    7. Altering a table: alter table tb_name add columns (param_name type);
    8. Dropping a table: drop table tb_name;
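
        For instance, using the trade_detail table from section 5.4.1 (the column name remark and the table name tmp_table are illustrative):

            alter table trade_detail add columns (remark string);
            drop table if exists tmp_table;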

    9. Importing data

        9.1 Loading files: LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2, ...)]

                When data is loaded this way, Hive does not transform it; the LOAD operation simply copies (or, for an HDFS path, moves) the files to the location corresponding to the Hive table.
        9.2 Copying between Hive tables: INSERT OVERWRITE TABLE tablename [PARTITION (partcol1=val1, partcol2=val2, ...)] select_statement FROM from_statement
        9.3 create ... as select: CREATE TABLE [IF NOT EXISTS] table_name AS SELECT * FROM TB_NAME; (the new table's schema is derived from the SELECT)
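
        As a brief sketch tying 9.2 and 9.3 together (td_part and trade_detail are the tables from section 5.4; the partition value and the table name trade_backup are illustrative):

            INSERT OVERWRITE TABLE td_part PARTITION (logdate='2013-02-04')
            SELECT id, account, income, expenses, time FROM trade_detail;

            CREATE TABLE trade_backup AS SELECT * FROM trade_detail;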

    10. Queries

        10.1 Syntax

                        SELECT [ALL | DISTINCT] select_expr, select_expr, ...
                        FROM table_reference
                        [WHERE where_condition]
                        [GROUP BY col_list]
                        [ CLUSTER BY col_list | [DISTRIBUTE BY col_list] [SORT BY col_list] | [ORDER BY col_list] ]
                        [LIMIT number]

                        ALL and DISTINCT: ALL (the default) returns every matching row; DISTINCT removes duplicate rows from the result

        10.2 Partition queries

                Queries exploit partition pruning (input pruning), which works like a "partition index"; pruning only takes effect when the WHERE clause references the partition column
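
                For example, against the td_part table from section 5.4.2, the following query scans only the directory of the matching partition (the date value is illustrative):

                        select * from td_part where logdate='2013-02-04';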

        10.3 LIMIT clause

                LIMIT restricts the number of records returned; the rows are chosen arbitrarily. Syntax: SELECT * FROM t1 LIMIT 5
        10.4 Top N

                SET mapred.reduce.tasks = 1;
                SELECT * FROM sales SORT BY amount DESC LIMIT 5;

    11. Joins

        11.1 Inner join: select b.name, a.* from dim_ac a join acinfo b on (a.ac=b.acip) limit 10;
        11.2 Left outer join: select b.name, a.* from dim_ac a left outer join acinfo b on a.ac=b.acip limit 10;

    12. Java client

        12.1 Start the remote service: #hive --service hiveserver

        12.2 Sample code

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Register the JDBC driver for the (old) HiveServer1 protocol
Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
Connection con = DriverManager.getConnection("jdbc:hive://192.168.1.102:10000/wlan_dw", "", "");
Statement stmt = con.createStatement();
String querySQL = "SELECT * FROM wlan_dw.dim_m order by flux desc limit 10";

ResultSet res = stmt.executeQuery(querySQL);

// Print each row: one string column followed by four long columns
while (res.next()) {
    System.out.println(res.getString(1) + "\t" + res.getLong(2) + "\t" + res.getLong(3) + "\t" + res.getLong(4) + "\t" + res.getLong(5));
}

    13. User-defined functions (UDF)

        13.1 A UDF can be used directly in a SELECT statement to format the query results before they are output.
        13.2 Note the following points when writing a UDF:
            a) A custom UDF must extend org.apache.hadoop.hive.ql.exec.UDF.
            b) It must implement an evaluate method; evaluate supports overloading.

        13.3 Steps
            a) Package the program as a jar and copy it to the target machine;
            b) Enter the hive client and add the jar: hive>add jar /run/jar/udf_test.jar;
            c) Create a temporary function: hive>CREATE TEMPORARY FUNCTION add_example AS 'hive.udf.Add';
            d) Use it in HQL queries:
                SELECT add_example(8, 9) FROM scores;
                SELECT add_example(scores.math, scores.art) FROM scores;
                SELECT add_example(6, 7, 8, 6.8) FROM scores;
            e) Drop the temporary function: hive> DROP TEMPORARY FUNCTION add_example;
            Note: a UDF maps one input row to one output row; for many-rows-in, one-row-out aggregation, implement a UDAF instead

        13.4 Code

            

package cn.itheima.bigdata.hive;

import java.util.HashMap;

import org.apache.hadoop.hive.ql.exec.UDF;

public class AreaTranslationUDF extends UDF{
    
    private static HashMap<String, String> areaMap = new HashMap<String, String>();
    
    static{
        
        areaMap.put("138", "beijing");
        areaMap.put("139", "shanghai");
        areaMap.put("137", "guangzhou");
        areaMap.put("136", "niuyue");
        
    }

    // Translates a phone number into its home area; evaluate must be public, otherwise Hive cannot invoke it
    public String evaluate(String phonenbr) {

        String area = areaMap.get(phonenbr.substring(0,3));
        return area==null?"other":area;

    }
    
    // Computes the sum of two columns
    public int evaluate(int x,int y){
        
        return x+y;
    }

}
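
Following the steps in 13.3, this UDF could be used from the hive shell roughly as follows (the jar path, the function name getarea, and the table flow_log with its phonenbr column are illustrative assumptions):

    hive> add jar /run/jar/udf_test.jar;
    hive> CREATE TEMPORARY FUNCTION getarea AS 'cn.itheima.bigdata.hive.AreaTranslationUDF';
    hive> SELECT getarea(phonenbr), phonenbr FROM flow_log;
    hive> DROP TEMPORARY FUNCTION getarea;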