Hadoop in Practice: Working with Hive

Hive Table Type Tests

Internal Tables

Data preparation: first create a comma-separated text file and upload it to the /test directory on HDFS, then create the table in Hive, giving it the same name as the file.

$ cat /tmp/table_test.csv 
1,user1,1000
2,user2,2000
3,user3,3000
4,user4,4000
5,user5,5000

Create the table in Hive

hive> CREATE TABLE table_test (
  id int,
  name string,
  value INT
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ;

The first half is much like ordinary SQL syntax; the trailing ROW FORMAT clause tells Hive to import data using ',' as the field delimiter.
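You can double-check how Hive recorded the table definition, including the delimiter, with the standard SHOW CREATE TABLE command (output omitted here):

hive> SHOW CREATE TABLE table_test;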

Load local data into Hive

$ hive -e "load data local inpath '/tmp/table_test.csv' into table db_test.table_test"
Loading data to table db_test.table_test
OK
Time taken: 0.148 seconds

The same file can be loaded multiple times (each load appends the data), and every load adds another file under the table's data directory on HDFS. Also note the LOCAL keyword: it means the file is read from the local filesystem; without LOCAL, the path refers to HDFS.
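For comparison, here are the two forms side by side, reusing the paths from this article (a minimal sketch):

-- from the local filesystem: the file is copied into the warehouse
hive> LOAD DATA LOCAL INPATH '/tmp/table_test.csv' INTO TABLE db_test.table_test;
-- from HDFS: the file is moved into the warehouse
hive> LOAD DATA INPATH '/test/table_test.csv' INTO TABLE db_test.table_test;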

Query the data in Hive

hive> select * from table_test;
OK
1       user1   1000
2       user2   2000
3       user3   3000
4       user4   4000
5       user5   5000
Time taken: 0.058 seconds, Fetched: 5 row(s)

You can also run select id from table_test, but note that in Hive, apart from select * from table (which is answered with a plain full-table scan), every other query runs as a MapReduce job.
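For example, even projecting a single column launches a job (a sketch; the MapReduce log is omitted):

hive> select id from table_test;
OK
1
2
3
4
5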

Inspect the HDFS data file

[hadoop@hadoop-nn ~]$ hdfs dfs -ls /user/hive/warehouse/db_test.db/table_test/
Found 1 items
-rwxrwxrwx   2 root supergroup         65 2017-06-15 22:27 /user/hive/warehouse/db_test.db/table_test/table_test.csv

Note that the file owner is root. That is because I entered hive as the root user; normally you should enter the hive command line as the hadoop user to create tables.

Loading data from HDFS into Hive. First upload the data to the HDFS cluster:

[hadoop@hadoop-nn ~]$ hdfs dfs -mkdir /test
[hadoop@hadoop-nn ~]$ hdfs dfs -put /tmp/table_test.csv /test/table_test.csv

Create the table

[hadoop@hadoop-nn ~]$ hive
hive> CREATE TABLE hdfs_table (
  id int,
  name string,
  value INT
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ;

Load the data

hive> LOAD DATA INPATH '/test/table_test.csv' OVERWRITE INTO TABLE db_test.hdfs_table;
Loading data to table db_test.hdfs_table
OK
Time taken: 0.343 seconds
hive> select * from db_test.hdfs_table;
OK
1       user1   1000
2       user2   2000
3       user3   3000
4       user4   4000
5       user5   5000
Time taken: 0.757 seconds, Fetched: 5 row(s)

Note that after loading data from HDFS into Hive, the original file no longer exists at its old HDFS path: Hive moves it (rather than copying it) into the table's warehouse directory.

[hadoop@hadoop-nn ~]$ hdfs dfs -ls /test/table_test.csv
ls: `/test/table_test.csv': No such file or directory

Inspect the HDFS data files

[hadoop@hadoop-nn ~]$ hdfs dfs -ls /user/hive/warehouse/db_test.db/hdfs_table/
Found 1 items
-rwxrwxrwx   2 hadoop supergroup         65 2017-06-15 22:54 /user/hive/warehouse/db_test.db/hdfs_table/table_test.csv

Now upload one more file into the table's directory (/user/hive/warehouse/db_test.db/hdfs_table):

[hadoop@hadoop-nn ~]$ cat /tmp/table_test.csv 
6,user6,6000

[hadoop@hadoop-nn ~]$ hdfs dfs -put /tmp/table_test.csv /user/hive/warehouse/db_test.db/hdfs_table/table_test_20170616.csv

Query the Hive table again

hive> select * from db_test.hdfs_table;
OK
1       user1   1000
2       user2   2000
3       user3   3000
4       user4   4000
5       user5   5000
6       user6   6000
Time taken: 0.053 seconds, Fetched: 6 row(s)

As you can see, the row from the file we appended shows up as well.
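You can confirm the extra file without leaving the CLI; the Hive shell passes dfs commands straight through to HDFS:

hive> dfs -ls /user/hive/warehouse/db_test.db/hdfs_table/;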

Partitioned Tables

When creating a partitioned table you need to declare a partition column. The partition column is not one of the regular columns; Hive appends it to the table schema automatically at creation time. Hive's partitioning concept is similar to MySQL's. Below we create a table partitioned by day.

CREATE TABLE par_table (
  id int,
  name string,
  value INT
) partitioned by (day int) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

Inspect the table definition

hive> desc par_table;
OK
id                      int                                         
name                    string                                      
value                   int                                         
day                     int                                         
                 
# Partition Information          
# col_name              data_type               comment             
                 
day                     int                                         
Time taken: 0.023 seconds, Fetched: 9 row(s)

To load data into a partitioned table, you must name the target partition:

hive> LOAD DATA LOCAL INPATH '/tmp/table_test.csv' OVERWRITE INTO TABLE db_test.par_table PARTITION (day='22');
Loading data to table db_test.par_table partition (day=22)
OK
Time taken: 0.267 seconds
 
hive> LOAD DATA LOCAL INPATH '/tmp/table_test.csv' OVERWRITE INTO TABLE db_test.par_table PARTITION (day='23');
Loading data to table db_test.par_table partition (day=23)
OK
Time taken: 0.216 seconds

Look at how the data files are laid out on HDFS

[hadoop@hadoop-nn ~]$ hdfs dfs -ls /user/hive/warehouse/db_test.db/par_table/
Found 2 items
drwxrwxrwx   - hadoop supergroup          0 2017-06-16 01:12 /user/hive/warehouse/db_test.db/par_table/day=22
drwxrwxrwx   - hadoop supergroup          0 2017-06-16 01:12 /user/hive/warehouse/db_test.db/par_table/day=23

You can see the corresponding partition directories have appeared.
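Partitions can also be registered by hand with ALTER TABLE, which is useful when files are placed into HDFS by an external process (a sketch; day=24 is made up):

hive> ALTER TABLE db_test.par_table ADD PARTITION (day=24);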

Querying works slightly differently. If the WHERE clause filters on the partition column, Hive reads only that partition instead of the whole table (partition pruning); if you filter on a non-partition column, every partition has to be scanned. Two examples:

hive> select * from db_test.par_table;
OK
6       user6   6000    22
6       user6   6000    23
Time taken: 0.054 seconds, Fetched: 2 row(s)
 
hive> select * from db_test.par_table where day=22;
OK
6       user6   6000    22
Time taken: 0.068 seconds, Fetched: 1 row(s)
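To list the partitions a table currently has, use the standard SHOW PARTITIONS command, which here should report the two partitions we just loaded:

hive> show partitions db_test.par_table;
OK
day=22
day=23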

External Tables

Hive also supports external tables, which differ from internal and partitioned tables: as long as the files already exist in HDFS, you can create a table that points at their directory and query the data right away, without any load step. Let's test it:

First create the directory in HDFS and upload a file:

[hadoop@hadoop-nn ~]$ hdfs dfs -mkdir -p /hive/external
[hadoop@hadoop-nn ~]$ hdfs dfs -put /tmp/table_test.csv /hive/external/ext_table.csv

Then create the table directly in Hive:

CREATE EXTERNAL TABLE ext_table (
  id int,
  name string,
  value INT
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION '/hive/external';

Now the table can be queried directly, with no data load needed:

hive> select * from ext_table;
OK
6       user6   6000
Time taken: 0.042 seconds, Fetched: 1 row(s)
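Because the table just points at the directory, dropping another file into /hive/external makes its rows visible on the next query, again with no load step (a sketch; the file name is hypothetical):

[hadoop@hadoop-nn ~]$ hdfs dfs -put /tmp/more_rows.csv /hive/external/more_rows.csv
hive> select * from ext_table;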

Hive also supports bucketed tables. They are rarely used, so I won't cover them here; a short sketch follows for the curious, and the documentation has the details.
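A bucketed table only needs a CLUSTERED BY clause at creation time (a minimal, untested sketch):

CREATE TABLE bucket_table (
  id int,
  name string,
  value INT
) CLUSTERED BY (id) INTO 4 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';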

Finally, let's watch a Hive query go through MapReduce:

hive> select count(*) from table_test;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = hadoop_20170616021047_9c0dc1bf-383f-49ad-83e2-e2e5dfdcb20c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Starting Job = job_1497424827481_0004, Tracking URL = http://master:8088/proxy/application_1497424827481_0004/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1497424827481_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-06-16 02:10:52,914 Stage-1 map = 0%,  reduce = 0%
2017-06-16 02:10:57,062 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.11 sec
2017-06-16 02:11:02,204 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.53 sec
MapReduce Total cumulative CPU time: 2 seconds 530 msec
Ended Job = job_1497424827481_0004
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.53 sec   HDFS Read: 7980 HDFS Write: 102 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 530 msec
OK
10
Time taken: 15.254 seconds, Fetched: 1 row(s)

It's worth reading through this output carefully. Since this is a test environment, the MapReduce job takes quite a while.
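The log above also shows the knobs for tuning parallelism; for example, to pin the number of reducers for the current session (the value is illustrative):

hive> set mapreduce.job.reduces=2;
hive> select count(*) from table_test;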

Views

Hive also supports views, and they are very easy to use:

hive> create view view_test as select * from table_test;
OK
Time taken: 0.054 seconds
 
hive> select * from view_test;
OK
d1      user1   1000
d1      user2   2000
d1      user3   3000
d2      user4   4000
d2      user5   5000
Time taken: 0.057 seconds, Fetched: 5 row(s)
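A view stores only its defining query, not data, so dropping it leaves the underlying table untouched:

hive> drop view view_test;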

Hive Metadata

Now let's look at the Hive metastore tables. In MySQL, the DBS table in the hive database stores information about the databases created in Hive:

mysql> select * from DBS;
+-------+-----------------------+---------------------------------------------------+---------+------------+------------+
| DB_ID | DESC                  | DB_LOCATION_URI                                   | NAME    | OWNER_NAME | OWNER_TYPE |
+-------+-----------------------+---------------------------------------------------+---------+------------+------------+
|     1 | Default Hive database | hdfs://master:8020/user/hive/warehouse            | default | public     | ROLE       |
|     6 | NULL                  | hdfs://master:8020/user/hive/warehouse/db_test.db | db_test | hadoop     | USER       |
+-------+-----------------------+---------------------------------------------------+---------+------------+------------+
2 rows in set (0.00 sec)

DB_ID: database ID, unique.

DESC: database description.

DB_LOCATION_URI: the database's location URI on HDFS.

NAME: database name.

OWNER_NAME: the database's owner, i.e. whichever system user was logged in when it was created in Hive; as noted earlier, you should normally log into Hive as the hadoop user.

OWNER_TYPE: the owner's type.
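Since the metastore is plain MySQL, you can filter these tables with ordinary SQL; treat them as read-only and never modify them by hand (an illustrative query):

mysql> select DB_ID, NAME, DB_LOCATION_URI from DBS where NAME='db_test';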

The TBLS table in the hive database stores the metadata of the tables we created:

mysql> select * from TBLS;
+--------+-------------+-------+------------------+--------+-----------+-------+------------+----------------+--------------------+--------------------+
| TBL_ID | CREATE_TIME | DB_ID | LAST_ACCESS_TIME | OWNER  | RETENTION | SD_ID | TBL_NAME   | TBL_TYPE       | VIEW_EXPANDED_TEXT | VIEW_ORIGINAL_TEXT |
+--------+-------------+-------+------------------+--------+-----------+-------+------------+----------------+--------------------+--------------------+
|     11 |  1497579800 |     6 |                0 | root   |         0 |    11 | table_test | MANAGED_TABLE  | NULL               | NULL               |
|     16 |  1497581548 |     6 |                0 | hadoop |         0 |    16 | hdfs_table | MANAGED_TABLE  | NULL               | NULL               |
|     26 |  1497584489 |     6 |                0 | hadoop |         0 |    26 | par_table  | MANAGED_TABLE  | NULL               | NULL               |
|     28 |  1497591914 |     6 |                0 | hadoop |         0 |    31 | ext_table  | EXTERNAL_TABLE | NULL               | NULL               |
+--------+-------------+-------+------------------+--------+-----------+-------+------------+----------------+--------------------+--------------------+
4 rows in set (0.00 sec)

A few of the important columns explained:

TBL_ID: table ID, unique.

CREATE_TIME: table creation time.

DB_ID: ID of the database the table belongs to.

LAST_ACCESS_TIME: time of last access.

OWNER: the table's owner, i.e. whichever system user was logged in to Hive when the table was created; again, this should normally be the hadoop user.

TBL_NAME: table name.

TBL_TYPE: the table type. MANAGED_TABLE means a managed table (internal, partitioned, and bucketed tables); EXTERNAL_TABLE means an external table. The key difference: DROP TABLE on a managed table deletes the Hive metadata and the HDFS data together, while DROP TABLE on an external table deletes only the metadata and leaves the HDFS data in place.
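A quick way to see this difference, continuing the examples above (a sketch; output abbreviated):

hive> drop table ext_table;
[hadoop@hadoop-nn ~]$ hdfs dfs -ls /hive/external/ext_table.csv
(the file is still there: only the metadata was removed)

hive> drop table db_test.hdfs_table;
[hadoop@hadoop-nn ~]$ hdfs dfs -ls /user/hive/warehouse/db_test.db/hdfs_table/
(the directory is gone: the data was deleted along with the metadata)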

Original article: https://www.linuxprobe.com/hadoop-hive.html
