Preface
Hive is a data-warehouse interface on top of Hadoop. With its SQL-like syntax, Hive lets data analysts get productive quickly, makes the Java-centric Hadoop world feel simple and lightweight, and has brought Hadoop to people beyond programmers.
Starting with Hive, analysts too can play with big data.
System environment
Once the Hadoop environment is ready, we can install Hive on the namenode machine (c1).
For setting up Hadoop, see the series "Running Hadoop in the Cloud" and "RHadoop in Practice, Part 1: Building the Hadoop Environment".
Download: hive-0.9.0.tar.gz
Extract to: /home/cos/toolkit/hive-0.9.0
Configure Hive
~ cd /home/cos/toolkit/hive-0.9.0
~ cp hive-default.xml.template hive-site.xml
~ cp hive-log4j.properties.template hive-log4j.properties
Edit the hive-site.xml configuration file
Store Hive's metadata in MySQL:
~ vi conf/hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://c1:3306/hive_metadata?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
Edit hive-log4j.properties (Hadoop 1.x moved the EventCounter class, so point the appender at its new location):
#log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
Set environment variables
~ sudo vi /etc/environment

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/cos/toolkit/ant184/bin:/home/cos/toolkit/jdk16/bin:/home/cos/toolkit/maven3/bin:/home/cos/toolkit/hadoop-1.0.3/bin:/home/cos/toolkit/hive-0.9.0/bin"
JAVA_HOME=/home/cos/toolkit/jdk16
ANT_HOME=/home/cos/toolkit/ant184
MAVEN_HOME=/home/cos/toolkit/maven3
HADOOP_HOME=/home/cos/toolkit/hadoop-1.0.3
HIVE_HOME=/home/cos/toolkit/hive-0.9.0
CLASSPATH=/home/cos/toolkit/jdk16/lib/dt.jar:/home/cos/toolkit/jdk16/lib/tools.jar
Create directories on HDFS
$HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
Create the database in MySQL
create database hive_metadata;
grant all on hive_metadata.* to hive@'%' identified by 'hive';
grant all on hive_metadata.* to hive@localhost identified by 'hive';
ALTER DATABASE hive_metadata CHARACTER SET latin1;
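Before wiring Hive up to MySQL, it is worth confirming the grants work; a quick check (assuming MySQL is reachable on c1 as configured above):

# should list hive_metadata among the databases
~ mysql -uhive -phive -h c1 -e "show databases;"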
Manually place the MySQL JDBC driver in hive/lib
~ ls /home/cos/toolkit/hive-0.9.0/lib
mysql-connector-java-5.1.22-bin.jar
Start Hive
# Start the metastore service
~ bin/hive --service metastore &
Starting Hive Metastore Server
# Start the hiveserver service
~ bin/hive --service hiveserver &
Starting Hive Thrift Server
# Start the Hive client
~ bin/hive shell
Logging initialized using configuration in file:/root/hive-0.9.0/conf/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201211141845_1864939641.txt
hive> show tables;
OK
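To verify that both background services came up, you can check their listening ports; 9083 (metastore) and 10000 (hiveserver) are the defaults and may differ in your setup:

~ netstat -nl | grep -E '9083|10000'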
Query the metadata in the MySQL database
~ mysql -uroot -p
mysql> use hive_metadata;
Database changed
mysql> show tables;
+-------------------------+
| Tables_in_hive_metadata |
+-------------------------+
| BUCKETING_COLS          |
| CDS                     |
| COLUMNS_V2              |
| DATABASE_PARAMS         |
| DBS                     |
| IDXS                    |
| INDEX_PARAMS            |
| PARTITIONS              |
| PARTITION_KEYS          |
| PARTITION_KEY_VALS      |
| PARTITION_PARAMS        |
| PART_COL_PRIVS          |
| PART_PRIVS              |
| SDS                     |
| SD_PARAMS               |
| SEQUENCE_TABLE          |
| SERDES                  |
| SERDE_PARAMS            |
| SORT_COLS               |
| TABLE_PARAMS            |
| TBLS                    |
| TBL_COL_PRIVS           |
| TBL_PRIVS               |
+-------------------------+
23 rows in set (0.00 sec)
Hive is now successfully installed; what follows is the Hive usage guide.
1. Enter the Hive console
~ cd /home/cos/toolkit/hive-0.9.0
~ bin/hive shell
Logging initialized using configuration in file:/home/cos/toolkit/hive-0.9.0/conf/hive-log4j.properties
Hive history file=/tmp/cos/hive_job_log_cos_201307160003_95040367.txt
hive>
Create a new table
# Create the data (fields separated by tabs)
~ vi /home/cos/demo/t_hive.txt
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
# Create the new table
hive> CREATE TABLE t_hive (a int, b int, c int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
OK
Time taken: 0.489 seconds
# Load t_hive.txt into table t_hive
hive> LOAD DATA LOCAL INPATH '/home/cos/demo/t_hive.txt' OVERWRITE INTO TABLE t_hive ;
Copying data from file:/home/cos/demo/t_hive.txt
Copying file: file:/home/cos/demo/t_hive.txt
Loading data to table default.t_hive
Deleted hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive
OK
Time taken: 0.397 seconds
View tables and data
# List tables
hive> show tables;
OK
t_hive
Time taken: 0.099 seconds
# Match table names against a pattern
hive> show tables '*t*';
OK
t_hive
Time taken: 0.065 seconds
# View the table data
hive> select * from t_hive;
OK
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
Time taken: 0.264 seconds
# View the table structure
hive> desc t_hive;
OK
a       int
b       int
c       int
Time taken: 0.1 seconds
Alter a table
# Add a column
hive> ALTER TABLE t_hive ADD COLUMNS (new_col String);
OK
Time taken: 0.186 seconds
hive> desc t_hive;
OK
a       int
b       int
c       int
new_col string
Time taken: 0.086 seconds
# Rename the table
hive> ALTER TABLE t_hive RENAME TO t_hadoop;
OK
Time taken: 0.45 seconds
hive> show tables;
OK
t_hadoop
Time taken: 0.07 seconds
Drop a table
hive> DROP TABLE t_hadoop;
OK
Time taken: 0.767 seconds
hive> show tables;
OK
Time taken: 0.064 seconds
2. Importing data

We again use the t_hive table from above as the example.
# Create the table structure
hive> CREATE TABLE t_hive (a int, b int, c int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
Load data from the local filesystem (LOCAL)
hive> LOAD DATA LOCAL INPATH '/home/cos/demo/t_hive.txt' OVERWRITE INTO TABLE t_hive ;
Copying data from file:/home/cos/demo/t_hive.txt
Copying file: file:/home/cos/demo/t_hive.txt
Loading data to table default.t_hive
Deleted hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive
OK
Time taken: 0.612 seconds
# Check the freshly imported data on HDFS
~ hadoop fs -cat /user/hive/warehouse/t_hive/t_hive.txt
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
Load data from HDFS
# Create table t_hive2
hive> CREATE TABLE t_hive2 (a int, b int, c int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
# Load data from HDFS
hive> LOAD DATA INPATH '/user/hive/warehouse/t_hive/t_hive.txt' OVERWRITE INTO TABLE t_hive2;
Loading data to table default.t_hive2
Deleted hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive2
OK
Time taken: 0.325 seconds
# View the data
hive> select * from t_hive2;
OK
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
Time taken: 0.287 seconds
Import data from another table
hive> INSERT OVERWRITE TABLE t_hive2 SELECT * FROM t_hive ;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201307131407_0002, Tracking URL = http://c1.wtmart.com:50030/jobdetails.jsp?jobid=job_201307131407_0002
Kill Command = /home/cos/toolkit/hadoop-1.0.3/libexec/../bin/hadoop job -Dmapred.job.tracker=hdfs://c1.wtmart.com:9001 -kill job_201307131407_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-07-16 10:32:41,979 Stage-1 map = 0%, reduce = 0%
2013-07-16 10:32:48,034 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:49,050 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:50,068 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:51,082 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:52,093 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:53,102 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.03 sec
2013-07-16 10:32:54,112 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.03 sec
MapReduce Total cumulative CPU time: 1 seconds 30 msec
Ended Job = job_201307131407_0002
Ended Job = -314818888, job is filtered out (removed at runtime).
Moving data to: hdfs://c1.wtmart.com:9000/tmp/hive-cos/hive_2013-07-16_10-32-31_323_5732404975764014154/-ext-10000
Loading data to table default.t_hive2
Deleted hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive2
Table default.t_hive2 stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 56, raw_data_size: 0]
7 Rows loaded to t_hive2
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 1.03 sec   HDFS Read: 273 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 30 msec
OK
Time taken: 23.227 seconds
hive> select * from t_hive2;
OK
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
Time taken: 0.134 seconds
Create a table and import data from another table
# Drop the table
hive> DROP TABLE t_hive;
# Create a table and populate it from another table
hive> CREATE TABLE t_hive AS SELECT * FROM t_hive2 ;
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201307131407_0003, Tracking URL = http://c1.wtmart.com:50030/jobdetails.jsp?jobid=job_201307131407_0003
Kill Command = /home/cos/toolkit/hadoop-1.0.3/libexec/../bin/hadoop job -Dmapred.job.tracker=hdfs://c1.wtmart.com:9001 -kill job_201307131407_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-07-16 10:36:48,612 Stage-1 map = 0%, reduce = 0%
2013-07-16 10:36:54,648 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:55,657 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:56,666 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:57,673 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:58,683 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.13 sec
2013-07-16 10:36:59,691 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.13 sec
MapReduce Total cumulative CPU time: 1 seconds 130 msec
Ended Job = job_201307131407_0003
Ended Job = -670956236, job is filtered out (removed at runtime).
Moving data to: hdfs://c1.wtmart.com:9000/tmp/hive-cos/hive_2013-07-16_10-36-39_986_1343249562812540343/-ext-10001
Moving data to: hdfs://c1.wtmart.com:9000/user/hive/warehouse/t_hive
Table default.t_hive stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 56, raw_data_size: 0]
7 Rows loaded to hdfs://c1.wtmart.com:9000/tmp/hive-cos/hive_2013-07-16_10-36-39_986_1343249562812540343/-ext-10000
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 1.13 sec   HDFS Read: 272 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 130 msec
OK
Time taken: 20.13 seconds
hive> select * from t_hive;
OK
16      2       3
61      12      13
41      2       31
17      21      3
71      2       31
1       12      34
11      2       34
Time taken: 0.109 seconds
Copy only the table structure, without the data
hive> CREATE TABLE t_hive3 LIKE t_hive;
hive> select * from t_hive3;
OK
Time taken: 0.077 seconds
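A quick check that only the schema came across (the columns are those defined for t_hive above):

hive> desc t_hive3;
OK
a       int
b       int
c       int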
Import data from a MySQL database
We will cover this when we introduce Sqoop.
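As a preview, a minimal sketch of such an import (the database name test and the target table name t_hive_from_mysql are placeholders; Sqoop must be installed and on the PATH):

# import a MySQL table straight into a Hive table
~ sqoop import --connect jdbc:mysql://c1:3306/test \
    --username hive --password hive \
    --table t_hive --hive-import --hive-table t_hive_from_mysql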
3. Exporting data

Copy from HDFS to another HDFS location
~ hadoop fs -cp /user/hive/warehouse/t_hive /
~ hadoop fs -ls /t_hive
Found 1 items
-rw-r--r--   1 cos supergroup         56 2013-07-16 10:41 /t_hive/000000_0
~ hadoop fs -cat /t_hive/000000_0
1623
611213
41231
17213
71231
11234
11234

(The columns appear mashed together because t_hive was rebuilt with CREATE TABLE ... AS SELECT, so its data file uses Hive's default field delimiter \001, which does not display in the terminal.)
Export to the local filesystem through Hive
hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/t_hive' SELECT * FROM t_hive;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201307131407_0005, Tracking URL = http://c1.wtmart.com:50030/jobdetails.jsp?jobid=job_201307131407_0005
Kill Command = /home/cos/toolkit/hadoop-1.0.3/libexec/../bin/hadoop job -Dmapred.job.tracker=hdfs://c1.wtmart.com:9001 -kill job_201307131407_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-07-16 10:46:24,774 Stage-1 map = 0%, reduce = 0%
2013-07-16 10:46:30,823 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:31,833 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:32,844 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:33,856 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:34,865 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:35,873 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.87 sec
2013-07-16 10:46:36,884 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 0.87 sec
MapReduce Total cumulative CPU time: 870 msec
Ended Job = job_201307131407_0005
Copying data to local directory /tmp/t_hive
Copying data to local directory /tmp/t_hive
7 Rows loaded to /tmp/t_hive
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 0.87 sec   HDFS Read: 271 HDFS Write: 56 SUCCESS
Total MapReduce CPU Time Spent: 870 msec
OK
Time taken: 23.369 seconds
# Check on the local filesystem
hive> ! cat /tmp/t_hive/000000_0;
1623
611213
41231
17213
71231
11234
11234
hive>
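As with the HDFS copy above, the exported file uses Hive's invisible default delimiter \001, and Hive 0.9 does not let INSERT OVERWRITE DIRECTORY specify a different one. A common workaround is to run the query through the CLI, which prints tab-separated columns:

# tab-delimited export via the hive CLI
~ bin/hive -e "SELECT * FROM t_hive" > /tmp/t_hive.txt
~ cat /tmp/t_hive.txt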
4. Querying data

Note: the examples below omit the map/reduce log output.
Basic queries: ordering, column aliases, nested subqueries
hive> FROM (
    >   SELECT b, c AS c2 FROM t_hive
    > ) t
    > SELECT t.b, t.c2
    > WHERE b > 2
    > LIMIT 2;
12      13
21      3
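The heading above also mentions ordering; a minimal ORDER BY example on the same seven rows (MapReduce log omitted, per the note above):

hive> SELECT a, b, c FROM t_hive ORDER BY a DESC LIMIT 3;
71      2       31
61      12      13
41      2       31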
Join queries: JOIN
hive> SELECT t1.a, t1.b, t2.a, t2.b
    > FROM t_hive t1 JOIN t_hive2 t2 ON t1.a = t2.a
    > WHERE t1.c > 10;
1       12      1       12
11      2       11      2
41      2       41      2
61      12      61      12
71      2       71      2
Aggregate query 1: count, avg
hive> SELECT count(*), avg(a) FROM t_hive;
7       31.142857142857142
Aggregate query 2: count, distinct
hive> SELECT count(DISTINCT b) FROM t_hive;
3
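Other aggregates follow the same pattern; for example, on the same seven rows (values computed from the data above):

hive> SELECT sum(a), max(a), min(a) FROM t_hive;
218     71      1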
Aggregate query 3: GROUP BY, HAVING
# GROUP BY
hive> SELECT avg(a), b, sum(c) FROM t_hive GROUP BY b, c;
16.0    2       3
56.0    2       62
11.0    2       34
61.0    12      13
1.0     12      34
17.0    21      3
# HAVING
hive> SELECT avg(a), b, sum(c) FROM t_hive GROUP BY b, c HAVING sum(c) > 30;
56.0    2       62
11.0    2       34
1.0     12      34
5. Hive views

Views in Hive are the same concept as views in a relational database; we again use t_hive as the example.

Create a view
hive> CREATE VIEW v_hive AS SELECT a, b FROM t_hive WHERE c > 30;
hive> select * from v_hive;
41      2
71      2
1       12
11      2
Drop a view
hive> DROP VIEW IF EXISTS v_hive;
OK
Time taken: 0.495 seconds
6. Partitioned tables

Partitioned tables are a basic database concept, and with small data volumes we often never need them. Hive, however, is OLAP data-warehouse software built for very large data volumes, so in that setting partitioned tables become very important!
Below we define a new table structure from scratch: t_hft
Create the data
~ vi /home/cos/demo/t_hft_20130627.csv
000001,092023,9.76
000002,091947,8.99
000004,092002,9.79
000005,091514,2.2
000001,092008,9.70
000001,092059,9.45

~ vi /home/cos/demo/t_hft_20130628.csv
000001,092023,9.76
000002,091947,8.99
000004,092002,9.79
000005,091514,2.2
000001,092008,9.70
000001,092059,9.45
Create the data table
DROP TABLE IF EXISTS t_hft;
CREATE TABLE t_hft(
  SecurityID STRING,
  tradeTime STRING,
  PreClosePx DOUBLE
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
Create the partitioned data table
Based on the business requirements, the partition design is by trading day (the table below is partitioned on tradeDate only; SecurityID remains an ordinary column).
DROP TABLE IF EXISTS t_hft;
CREATE TABLE t_hft(
  SecurityID STRING,
  tradeTime STRING,
  PreClosePx DOUBLE
) PARTITIONED BY (tradeDate INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
Import the data
# 20130627
hive> LOAD DATA LOCAL INPATH '/home/cos/demo/t_hft_20130627.csv' OVERWRITE INTO TABLE t_hft PARTITION (tradeDate=20130627);
Copying data from file:/home/cos/demo/t_hft_20130627.csv
Copying file: file:/home/cos/demo/t_hft_20130627.csv
Loading data to table default.t_hft partition (tradedate=20130627)
# 20130628
hive> LOAD DATA LOCAL INPATH '/home/cos/demo/t_hft_20130628.csv' OVERWRITE INTO TABLE t_hft PARTITION (tradeDate=20130628);
Copying data from file:/home/cos/demo/t_hft_20130628.csv
Copying file: file:/home/cos/demo/t_hft_20130628.csv
Loading data to table default.t_hft partition (tradedate=20130628)
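On HDFS, each partition becomes its own subdirectory under the table's warehouse path, named after the partition key (assuming the default warehouse location configured earlier):

# expect subdirectories such as /user/hive/warehouse/t_hft/tradedate=20130627
~ hadoop fs -ls /user/hive/warehouse/t_hft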
Show the partitions
hive> SHOW PARTITIONS t_hft;
tradedate=20130627
tradedate=20130628
Time taken: 0.082 seconds
Query the data
hive> select * from t_hft where securityid='000001';
000001  092023  9.76    20130627
000001  092008  9.7     20130627
000001  092059  9.45    20130627
000001  092023  9.76    20130628
000001  092008  9.7     20130628
000001  092059  9.45    20130628
hive> select * from t_hft where tradedate=20130627 and PreClosePx<9;
000002  091947  8.99    20130627
000005  091514  2.2     20130627
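Note how the second query restricts tradedate, so Hive only scans the 20130627 partition. Partitions can also be declared ahead of loading any data; a sketch for a hypothetical next trading day:

hive> ALTER TABLE t_hft ADD PARTITION (tradeDate=20130629);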
That completes the basics of using Hive; these are the day-to-day operations. In follow-up posts I will cover HiveQL optimization and Hive operations and maintenance.