Hive is a data warehouse analysis system built on top of Hadoop. It provides a rich SQL-like query interface for analyzing data stored in the Hadoop distributed file system. Within the Hadoop architecture it acts as a SQL parsing layer: it accepts user statements through its external interface, analyzes them, compiles them into an execution plan made up of MapReduce stages, generates the corresponding MapReduce jobs according to that plan, submits them to the Hadoop cluster for processing, and collects the final results. Metadata, such as table schemas, is stored in a database called the metastore.
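As a small illustration of this SQL-to-MapReduce translation, the sketch below assumes the finished installation described later and uses a made-up table name (demo_logs); EXPLAIN only prints the plan Hive compiles, it does not run a job:

-- hypothetical table name, for illustration only
hive> create table demo_logs (ip string, url string);
hive> explain select url, count(*) from demo_logs group by url;
-- the EXPLAIN output lists the stages Hive would submit: a MapReduce stage for the
-- group-by aggregation, followed by a fetch stage that returns the result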
192.168.186.128 hadoop-master
192.168.186.129 hadoop-slave

MySQL is installed on the master machine, and the Hive server is also installed on the master.
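If the two hostnames are not already resolvable on both machines, a minimal sketch of mapping them in /etc/hosts (run as root on hadoop-master and hadoop-slave alike; the IPs are the ones listed above):

[root@hadoop-master ~]# cat >> /etc/hosts <<EOF
192.168.186.128 hadoop-master
192.168.186.129 hadoop-slave
EOF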
Download the Hive binary package; the latest version can be obtained from the official site.
[hadoop@hadoop-master ~]$ wget http://mirrors.cnnic.cn/apache/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz
[hadoop@hadoop-master ~]$ tar -zxf apache-hive-1.2.1-bin.tar.gz
[hadoop@hadoop-master ~]$ ls
apache-hive-1.2.1-bin  apache-hive-1.2.1-bin.tar.gz  dfs  hadoop-2.7.1  Hsource  tmp
Configure the environment variables.
[root@hadoop-master hadoop]# vi /etc/profile
HIVE_HOME=/home/hadoop/apache-hive-1.2.1-bin
PATH=$PATH:$HIVE_HOME/bin
export HIVE_HOME PATH
[root@hadoop-master hadoop]# source /etc/profile
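An optional quick check that the variables took effect in the current shell:

[hadoop@hadoop-master ~]$ echo $HIVE_HOME      # should print /home/hadoop/apache-hive-1.2.1-bin
[hadoop@hadoop-master ~]$ hive --version       # should report Hive 1.2.1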
The metastore is the central repository for Hive metadata. It consists of two parts: a service and the backing data store. There are three ways to configure the metastore: embedded, local, and remote.
In this setup MySQL is used as the remote metastore database, deployed on the hadoop-master node. The Hive server side is also installed on hadoop-master, and the Hive client, i.e. hadoop-slave, accesses the Hive server.
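The hive-site.xml below assumes a MySQL account named hive with password hive that may create and use a database called hive. If no such account exists yet, a minimal sketch of preparing it (the '%' host wildcard and the trivial password are for illustration only; tighten both for a real deployment):

[root@hadoop-master ~]# mysql -u root -p
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
mysql> FLUSH PRIVILEGES;
-- the hive database itself is created on first use because of
-- createDatabaseIfNotExist=true in the JDBC URL below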
Go into Hive's configuration directory, copy hive-default.xml.template to hive-site.xml, and edit it so that it contains the following metastore connection parameters:
[hadoop@hadoop-master conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@hadoop-master conf]$ pwd
/home/hadoop/apache-hive-1.2.1-bin/conf
[hadoop@hadoop-master conf]$ vi hive-site.xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop-master:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
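With createDatabaseIfNotExist=true, DataNucleus creates the metastore tables automatically on first use. Optionally, the schema can instead be initialized explicitly with the schematool utility shipped with Hive 1.2.1; a sketch, which assumes the MySQL JDBC driver from the next step is already in $HIVE_HOME/lib:

[hadoop@hadoop-master ~]$ schematool -dbType mysql -initSchema
[hadoop@hadoop-master ~]$ schematool -dbType mysql -info      # prints the schema version as a check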
Download the MySQL JDBC driver (Connector/J) and copy it into Hive's lib directory:

[hadoop@hadoop-master ~]$ wget http://cdn.mysql.com/Downloads/Connector-J/mysql-connector-java-5.1.36.tar.gz
[hadoop@hadoop-master ~]$ tar -zxf mysql-connector-java-5.1.36.tar.gz
[hadoop@hadoop-master ~]$ cp mysql-connector-java-5.1.36/mysql-connector-java-5.1.36-bin.jar apache-hive-1.2.1-bin/lib/
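A quick check that the driver jar actually landed where Hive looks for it:

[hadoop@hadoop-master ~]$ ls apache-hive-1.2.1-bin/lib/ | grep mysql-connector
mysql-connector-java-5.1.36-bin.jar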
Copy the Hive installation to the slave node and point the client's hive-site.xml at the metastore service on the master:

[hadoop@hadoop-master ~]$ scp -r apache-hive-1.2.1-bin/ hadoop@hadoop-slave:/home/hadoop
[hadoop@hadoop-slave conf]$ vi hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://hadoop-master:9083</value>
  </property>
</configuration>
On the Hive server side, start the metastore service:
[hadoop@hadoop-master ~]$ hive --service metastore &
[hadoop@hadoop-master ~]$ jps
10288 RunJar              # the extra process: the Hive metastore service
9365 NameNode
9670 SecondaryNameNode
11096 Jps
9944 NodeManager
9838 ResourceManager
9471 DataNode
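Optionally, verify that the metastore's Thrift service is listening on its default port 9083, which is the port the client configuration above points at:

[hadoop@hadoop-master ~]$ netstat -nlt | grep 9083     # a LISTEN entry here means the metastore is reachable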
Test on the server side (hadoop-master):

[hadoop@hadoop-master ~]$ hive
Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> show databases;
OK
default
src
Time taken: 1.332 seconds, Fetched: 2 row(s)
hive> use src;
OK
Time taken: 0.037 seconds
hive> create table test1(id int);
OK
Time taken: 0.572 seconds
hive> show tables;
OK
abc
test
test1
Time taken: 0.057 seconds, Fetched: 3 row(s)
hive>
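To confirm that queries are really compiled into MapReduce jobs and run on the cluster (show tables and create table only touch the metastore), an optional follow-up on the table just created; even on an empty table the count launches a job:

hive> select count(*) from test1;
-- the CLI prints the submitted MapReduce job id and its map/reduce progress
-- before returning the result (0 for the empty table)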
Test on the client side (hadoop-slave):

[hadoop@hadoop-slave conf]$ hive
Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> show databases;
OK
default
src
Time taken: 1.022 seconds, Fetched: 2 row(s)
hive> use src;
OK
Time taken: 0.057 seconds
hive> show tables;
OK
abc
test
test1
Time taken: 0.218 seconds, Fetched: 3 row(s)
hive> create table test2(id int, name string);
OK
Time taken: 5.518 seconds
hive> show tables;
OK
abc
test
test1
test2
Time taken: 0.102 seconds, Fetched: 4 row(s)
hive>
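Since both nodes share the same remote metastore, the tables created from either side should also be visible directly in MySQL. A sketch of checking the metastore's TBLS table, assuming the hive/hive account from earlier:

[hadoop@hadoop-master ~]$ mysql -u hive -phive hive -e "select TBL_NAME from TBLS;"
# expected to list abc, test, test1 and test2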
That completes the test; the installation is working.
Error description: after entering Hive, databases can be created, but creating a table fails:
hive> create table table_test(id string, name string);
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes
Solution: log in to MySQL and change the character set of the hive metastore database. With utf8, MySQL reserves up to 3 bytes per character, so the metastore's longer index columns exceed InnoDB's 767-byte key limit; switching the database to latin1 avoids this.
mysql> alter database hive character set latin1;
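To confirm the change took effect, the database's default character set can be inspected afterwards:

mysql> show create database hive;
-- the printed CREATE DATABASE statement should now include DEFAULT CHARACTER SET latin1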
Reference: http://yanliu.org/2015/08/13/Hadoop%E9%9B%86%E7%BE%A4%E4%B9%8BHive%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE/