Hive Installation on Windows (Repost)

1. Install Hadoop.

2. Download mysql-connector-java-5.1.26-bin.jar (or another version) from Maven and place it in the lib folder under the Hive directory.

3. Configure the Hive environment variable: HIVE_HOME=F:\hadoop\apache-hive-2.1.1-bin
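On Windows this can be done from a command prompt, for example with setx (the path below matches the install location used in this article; adjust it to your own layout, and note that setx truncates values longer than 1024 characters, in which case the System Properties dialog is safer):

```shell
:: Set HIVE_HOME permanently for the current user
setx HIVE_HOME "F:\hadoop\apache-hive-2.1.1-bin"

:: Append Hive's bin directory to PATH so the hive command is found
setx PATH "%PATH%;%HIVE_HOME%\bin"
```

Open a new command prompt afterwards so the updated variables take effect.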

4. Configure Hive to use MySQL.

Hive's configuration files live under $HIVE_HOME/conf, which ships with four default configuration templates:

hive-default.xml.template             default configuration template

hive-env.sh.template                  default hive-env.sh configuration

hive-exec-log4j.properties.template   default exec log configuration

hive-log4j.properties.template        default log configuration

Hive can run without any changes: by default its metadata is stored in an embedded Derby database. Since most people are not familiar with Derby, we switch to MySQL for metadata storage, and we also need to change the data and log locations, so we must configure our own environment. The steps below show how.

(1) Create the configuration files from the templates:

$HIVE_HOME/conf/hive-default.xml.template  -> $HIVE_HOME/conf/hive-site.xml

$HIVE_HOME/conf/hive-env.sh.template  -> $HIVE_HOME/conf/hive-env.sh

$HIVE_HOME/conf/hive-exec-log4j.properties.template ->  $HIVE_HOME/conf/hive-exec-log4j.properties

$HIVE_HOME/conf/hive-log4j.properties.template  -> $HIVE_HOME/conf/hive-log4j.properties
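From a command prompt, the four templates above can be copied in one go (paths assume the install directory used in this article):

```shell
cd /d F:\hadoop\apache-hive-2.1.1-bin\conf

copy hive-default.xml.template hive-site.xml
copy hive-env.sh.template hive-env.sh
copy hive-exec-log4j.properties.template hive-exec-log4j.properties
copy hive-log4j.properties.template hive-log4j.properties
```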

(2) Edit hive-env.sh:

export HADOOP_HOME=F:\hadoop\hadoop-2.7.2
export HIVE_CONF_DIR=F:\hadoop\apache-hive-2.1.1-bin\conf
export HIVE_AUX_JARS_PATH=F:\hadoop\apache-hive-2.1.1-bin\lib

(3) Edit hive-site.xml:

<!-- Modified settings -->

<property>
  <name>hive.metastore.warehouse.dir</name>
  <!-- Hive data warehouse directory; this path is on HDFS -->
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>

<property>
  <name>hive.exec.scratchdir</name>
  <!-- Hive scratch (temporary data) directory; this path is on HDFS -->
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>

<property>
  <name>hive.exec.local.scratchdir</name>
  <!-- local directory -->
  <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.downloaded.resources.dir</name>
  <!-- local directory -->
  <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <!-- local directory -->
  <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>

<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>

<!-- Added settings -->

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?characterEncoding=UTF-8</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
</property>

<!-- Fixes: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables" -->

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
</property>

<property>
  <name>datanucleus.autoCreateTables</name>
  <value>true</value>
</property>

<property>
  <name>datanucleus.autoCreateColumns</name>
  <value>true</value>
</property>

<!-- Fixes: Caused by: MetaException(message:Version information not found in metastore.) -->

<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
  <description>
    Enforce metastore schema version consistency.
    True: Verify that version information stored in metastore matches with one from Hive jars. Also disable automatic
          schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
          proper metastore schema migration. (Default)
    False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
  </description>
</property>

Note: the HDFS directories referenced above must be created on Hadoop in advance.
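For example (the directory names match the warehouse and scratchdir values configured above; the chmod mirrors the 733 write-all permission mentioned in the hive.exec.scratchdir description):

```shell
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -mkdir -p /tmp/hive
hadoop fs -chmod 733 /tmp/hive
```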

 

(4) Log file configuration: omitted.

 

5. MySQL setup

(1) Create the hive database: create database hive default character set latin1;

(2)grant all on hive.* to hive@'localhost'  identified by 'hive'; 

 flush privileges;

-- I am using the root user, so this step is skipped.
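As an alternative to relying on the datanucleus.autoCreate* settings above, Hive 2.x ships a schematool service that initializes the metastore schema in MySQL explicitly (assuming schematool works in your Windows build of Hive; -dbType mysql matches the JDBC settings in hive-site.xml):

```shell
hive --service schematool -dbType mysql -initSchema
```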

6. Start Hive
(1) Start Hadoop: start-all.cmd
(2) Start the metastore service: hive --service metastore
(3) Start Hive: hive
If Hive starts successfully, the local-mode installation is complete.


7. Inspect the MySQL database:
use hive; show tables;

8. Create a table in Hive: CREATE TABLE xp(id INT, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
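To sanity-check the table, you can load a small tab-separated file and query it from the command line (F:/tmp/xp.txt is a hypothetical file with one id, a tab, and a name per line):

```shell
hive -e "LOAD DATA LOCAL INPATH 'F:/tmp/xp.txt' INTO TABLE xp; SELECT * FROM xp;"
```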

9. Check in MySQL: select * from TBLS;

Problems encountered during installation

 

(1) Hive fails to start with: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. See http://宋亞飛.中國/post/98

(2) Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient. See http://blog.csdn.net/freedomboy319/article/details/44828337

(3) Caused by: MetaException(message:Version information not found in metastore.) See http://blog.csdn.net/youngqj/article/details/19987727

(4) Creating a table fails with "Specified key was too long; max key length is 767 bytes". See http://blog.csdn.net/niityzu/article/details/46606581

Other reference articles:

http://www.cnblogs.com/hbwxcw/p/5960551.html — hive-1.2.1 installation steps

http://blog.csdn.net/jdplus/article/details/46493553 — Hive local-mode installation, problems encountered and solutions

http://www.coin163.com/it/x8681464370981050716/spark-Hive — problems and fixes for installing Hive on pseudo-distributed CentOS 7

http://www.bogotobogo.com/Hadoop/BigData_hadoop_Hive_Install_On_Ubuntu_16_04.php — APACHE HADOOP : HIVE 2.1.0 INSTALL ON UBUNTU 16.04
