Prerequisites: vim, ssh, Java, and a working Hadoop environment.
1. Install MySQL
# Remove any old MySQL installation first (very important)
sudo apt-get autoremove --purge mysql-server-5.0
sudo apt-get remove mysql-server
sudo apt-get autoremove mysql-server
sudo apt-get remove mysql-common
# Purge leftover package state
dpkg -l | grep ^rc | awk '{print $2}' | sudo xargs dpkg -P
# Install MySQL
sudo apt-get install mysql-server
sudo apt-get install mysql-client
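The cleanup pipeline above is worth unpacking: in `dpkg -l` output, packages that were removed but still have configuration files on disk carry the status `rc`, and the pipeline reduces those lines to bare package names for `dpkg -P` to purge. A small illustration on canned input (the two sample lines are made up for the demo):

```shell
# Sample dpkg -l style lines: "ii" = installed, "rc" = removed but
# config files remain. Only the "rc" line should survive the filter,
# and awk then keeps just the package name (second column).
printf 'ii  bash          5.0   amd64  GNU Bourne Again SHell\nrc  mysql-common  5.7   all    MySQL database common files\n' \
  | grep ^rc | awk '{print $2}'
```

This prints `mysql-common`, the name that `sudo xargs dpkg -P` would then purge.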
2. During installation you will be prompted to set a password for the root account.
3. Log in to MySQL from the command line and create the user and database
mysql -uroot -p123   # -u <username>, -p<password> (no space after -p)
create database metastore;
grant all on metastore.* to hadoop@'%' identified by '123';
grant all on metastore.* to hadoop@'localhost' identified by '123';
flush privileges;
exit
Create the database. Here hadoop is the username and 123 is the password.
hadoop@hadoop:/usr/local$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.21-0ubuntu0.16.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database metastore;
Query OK, 1 row affected (0.00 sec)

mysql> grant all on metastore.* to hadoop@'%' identified by '123';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> grant all on metastore.* to hadoop@'localhost' identified by '123';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye
1. Unpack Hive under /usr/local and give your user ownership of it
hadoop@hadoop:/usr/local$ sudo tar -xzvf hive-2.3.2.tar.gz
hadoop@hadoop:/usr/local$ sudo mv hive-2.3.2 hive   # rename
hadoop@hadoop:/usr/local$ sudo chown -R hadoop hive
2. Download the mysql-connector-java-5.1.44 jar
https://dev.mysql.com/downloads/connector/j/
Unpack it:
sudo tar -xzvf mysql-connector-java-5.1.44.tar.gz
3. Copy the MySQL connector jar into /usr/local/hive/lib
sudo cp mysql-connector-java-5.1.44-bin.jar /usr/local/hive/lib
4. Add the Hive environment variables
vim ~/.bashrc   # sudo is not needed for your own .bashrc
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.
export CLASSPATH=$CLASSPATH:/usr/local/hive/lib/*:.
source ~/.bashrc
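After sourcing, it is worth confirming that the variables took effect. A minimal sketch (it simply re-applies the two exports and checks that the Hive bin directory is on PATH):

```shell
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin
# PATH is colon-separated; wrapping it in colons makes the match exact
case ":$PATH:" in
  *:/usr/local/hive/bin:*) echo "hive bin on PATH" ;;
  *)                       echo "hive bin missing" ;;
esac
```

In an interactive shell you could also just run `echo $HIVE_HOME` and confirm it prints /usr/local/hive.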
5. Hive configuration files
hadoop@hadoop:/usr/local$ cd ./hive/conf
hadoop@hadoop:/usr/local/hive/conf$ sudo cp hive-default.xml.template hive-default.xml
hadoop@hadoop:/usr/local/hive/conf$ sudo cp hive-default.xml.template hive-site.xml
hadoop@hadoop:/usr/local/hive/conf$ sudo cp hive-env.sh.template hive-env.sh
hadoop@hadoop:/usr/local/hive/conf$ sudo vim hive-env.sh
hadoop@hadoop:/usr/local/hive/conf$ sudo vim hive-site.xml
hive-env.sh
sudo vim hive-env.sh
export HADOOP_HEAPSIZE=1024
export HADOOP_HOME=/usr/local/hadoop   # point this at your Hadoop install directory; mine is under /usr/local
export HIVE_CONF_DIR=/usr/local/hive/conf
export HIVE_AUX_JARS_PATH=/usr/local/hive/lib
Create hive-site.xml
Here the username is the account I created in MySQL, and the password is the one I set for it.
If you instead start from the original template file, it is best to replace every ${system:java.io.tmpdir}/${system:user.name} with a directory of your own; I use /user/hive.
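Doing that substitution by hand throughout a large template is error-prone; a sed one-liner can handle it. The sketch below runs on a throwaway sample file (/tmp/demo-site.xml is just a scratch path for illustration) so you can inspect the result before pointing the same command at your real hive-site.xml:

```shell
# A single template-style line to transform
echo '<value>${system:java.io.tmpdir}/${system:user.name}</value>' > /tmp/demo-site.xml
# Replace the tmpdir pattern with the fixed directory; using '#' as the
# delimiter means the slashes in the paths need no escaping
sed -i 's#\${system:java.io.tmpdir}/\${system:user.name}#/user/hive#g' /tmp/demo-site.xml
cat /tmp/demo-site.xml
```

This prints `<value>/user/hive</value>`.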
sudo vim hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- & must be escaped as &amp; inside XML -->
    <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123</value>
  </property>
</configuration>
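If you kept the full template rather than this minimal file, the tmpdir-derived settings are the usual properties to pin to real directories. The fragment below is a sketch: the property names come from the Hive 2.x template, but the /user/hive/... values are assumptions matching the directories used in this walkthrough.

```xml
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/user/hive/tmp</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/user/hive/hive_resources</value>
</property>
```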
6. Check whether MySQL is running
sudo netstat -tap | grep mysql
sudo /etc/init.d/mysql restart
7. Initialize the Hive metastore; either of the two invocations works
schematool -initSchema -dbType mysql
# alternative form: schematool -dbType mysql -initSchema hadoop 123
8. Run Hive
Error: the temp directory has not been created
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-2.3.2.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: Unable to create temp directory /user/hive/tmp
Create the /user/hive/tmp directory
sudo mkdir -p /user/hive/tmp
Error: insufficient permissions
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-2.3.2.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Permission denied
Add permissions
hadoop@hadoop:/user$ sudo chmod -R 777 /user/hive/tmp/
hadoop@hadoop:/user$ sudo chmod -R 777 /user/hive/hive_resources/
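To confirm the mode bits actually landed, you can read them back with stat. The sketch below reproduces the fix on a scratch directory (/tmp/hive_perm_demo is an arbitrary illustration path, not one Hive uses):

```shell
mkdir -p /tmp/hive_perm_demo/tmp
chmod -R 777 /tmp/hive_perm_demo
# stat's %a format prints the octal permission bits
stat -c '%a' /tmp/hive_perm_demo/tmp
```

This prints `777`; run the same stat check on /user/hive/tmp after the chmod above.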
Hive now runs correctly.
1. Edit /usr/local/hadoop/etc/hadoop/core-site.xml
<!-- Replace hadoop with your own username -->
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.lina.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.lina.groups</name>
  <value>*</value>
</property>
2. Create a new Java project
Add the Hive jar library
Right-click the project > Build Path > Configure Build Path > Libraries > Add Library > User Library > New (create a user library for the Hive jars) > OK > Add External JARs > select every jar file under /usr/local/hive/lib and add it to the Hive user library.
3. Start the Hive listener, hiveserver2 (for example with hive --service hiveserver2)
Write the client code
package com.hive.createDemo;

import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class Hivetest1 {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    /**
     * @param args
     * @throws SQLException
     */
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Replace "hadoop" below with the host name (or localhost, or an IP
        // address) of the machine running hiveserver2, plus the Hive user
        // name and password.
        Connection con = DriverManager.getConnection("jdbc:hive2://hadoop:10000/default", "hadoop", "123");
        Statement stmt = con.createStatement();
        String tableName = "testHiveDriverTable";
        stmt.execute("drop table if exists " + tableName);
        stmt.execute("create table " + tableName + " (key int, value string)");
        // show tables
        String sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        ResultSet res = stmt.executeQuery(sql);
        if (res.next()) {
            System.out.println(res.getString(1));
        }
        // describe table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }
        // load data into table
        // NOTE: filepath has to be local to the hive server
        // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
        String filepath = "/tmp/a.txt";
        sql = "load data local inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running: " + sql);
        stmt.execute(sql);
        // select * query
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
        }
        // regular hive query
        sql = "select count(1) from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
        con.close();
    }
}
Run the code.