Hive Basic Statements

Basic Hive concepts

Hive is a data-warehouse tool built on top of Hadoop. It currently supports only simple query and modification operations similar to the SQL of a traditional relational database. Hive translates SQL directly into MapReduce programs, so developers do not have to learn to write MR jobs themselves, which improves development efficiency.

Example: in a Hive environment backed by MySQL, the Hive metadata (the Hive tables, each table's column attributes, and so on) is stored in the MySQL database, while the table data itself is stored on HDFS, by default under /user/hive/warehouse/hive.db.

DDL statements

With MySQL as the metadata store, the directory structure of the database (hive):


Creating a table

hive> create table test (id int, name string);

Hive introduces the concept of partitions: a select in Hive generally scans the entire table, which wastes a lot of time, so partitioning is used to narrow the scan.

hive> create table test2 (id int, name string) partitioned by (ds string);
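As a sketch of how the partition is actually used (the file path is the same sample file used later in this article), data is loaded into one specific partition, and a query that filters on the partition column only scans that partition's directory instead of the whole table:

```sql
-- Load a local file into a single partition of test2;
-- ds is the partition column declared in the CREATE TABLE above
LOAD DATA LOCAL INPATH '/home/hadoop/test.txt'
  OVERWRITE INTO TABLE test2 PARTITION (ds='2014-08-26');

-- Filtering on ds lets Hive read only the ds=2014-08-26 directory
select * from test2 where test2.ds = '2014-08-26';
```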

Listing tables

hive> show tables;

A regular expression can be used, similar to the LIKE functionality:

hive> show tables '.*t';

Inspecting a table's schema

hive> DESCRIBE test; (or the shorthand: desc test;)

Altering or dropping a table

hive>alter table test rename to test3;

hive> alter table test3 add columns (new_column string comment 'column comment');

hive> drop table test3;
DML statements

1. Loading data

LOAD DATA LOCAL INPATH '/home/hadoop/test.txt' OVERWRITE INTO TABLE test;

LOCAL means the file is read from the local filesystem; if it is omitted, the path is resolved on HDFS by default. OVERWRITE means the loaded data replaces the table's existing data; if it is omitted, the data is appended.
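For contrast, a minimal sketch of the other variant (the HDFS path here is hypothetical, not from the original article):

```sql
-- No LOCAL: '/user/hadoop/test.txt' is resolved on HDFS, and the file
-- is moved (not copied) into the table's warehouse directory.
-- No OVERWRITE: the data is appended to whatever the table already holds.
LOAD DATA INPATH '/user/hadoop/test.txt' INTO TABLE test;
```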

2. Running a query

select * from test2 where test2.ds='2014-08-26';

3. Note that select count(*) from test2 differs from the record-count queries we normally run against a relational database: it executes an MR job.

hive> select count(*) from test2;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1411720827309_0004, Tracking URL = http://master:8031/proxy/application_1411720827309_0004/
Kill Command = /usr/local/cloud/hadoop/bin/hadoop job -kill job_1411720827309_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
Stage-1 map = 0%, reduce = 0%
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.93 sec
Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.3 sec
Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.3 sec
MapReduce Total cumulative CPU time: 2 seconds 300 msec
Ended Job = job_1411720827309_0004
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 2.3 sec HDFS Read: 245 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 300 msec
OK
3
Time taken: 27.508 seconds, Fetched: 1 row(s)
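Not every statement pays this MapReduce cost. A plain scan like the one below can typically be served by a simple fetch task without launching a job (exact behavior depends on the Hive version and the hive.fetch.task.conversion setting):

```sql
-- Usually answered directly from the table's files on HDFS,
-- with no MapReduce job launched
select * from test2 limit 10;
```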
