Streaming Big Data Computing in Practice (7) ---- Installing Hive

1. Preface

1. In this article we learn how to install and use Hive.

2. Introduction to Hive and Installation

Introduction to Hive: Hive is a data warehouse tool built on top of Hadoop. It lets you work with data stored on HDFS through HQL statements (similar to SQL); under the hood it translates the HQL you write into MapReduce jobs, so developers do not have to write tedious MapReduce programs themselves and can write simple HQL instead, which greatly lowers the learning cost. Because Hive ultimately runs MapReduce, its queries are relatively slow, so it is not suited to real-time computing tasks.
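For instance, once Hive is installed, a single HQL statement like the one below is compiled by Hive into a MapReduce job that scans the table and aggregates its rows (the table name my_words is hypothetical, purely for illustration):

hive -e "SELECT word, COUNT(*) AS cnt FROM my_words GROUP BY word;"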

1. Download the Hive tarball and extract it

tar zxvf /work/soft/installer/apache-hive-2.3.4-bin.tar.gz
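If you still need to fetch the tarball, something like the following should work (the download URL points at the Apache archive and the /work/soft target directory matches the paths used later in this article; both are assumptions you may need to adjust):

wget https://archive.apache.org/dist/hive/hive-2.3.4/apache-hive-2.3.4-bin.tar.gz -P /work/soft/installer
tar zxvf /work/soft/installer/apache-hive-2.3.4-bin.tar.gz -C /work/soft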

2. Configure the environment variables

vim /etc/profile

#set hive env
export HIVE_HOME=/work/soft/apache-hive-2.3.4-bin
export PATH=$PATH:$HIVE_HOME/bin

source /etc/profile
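To confirm that the new environment variables took effect, you can print the Hive version (this works even before the metastore is configured):

hive --version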

3. Edit the configuration files (go into Hive's conf directory)

(1) First copy the template configuration file and edit it (we will configure a few directories and switch the metastore database to MySQL, so a MySQL environment is required)

cp hive-default.xml.template hive-site.xml

(2) Manually create the HDFS directories referenced in the configuration

hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -mkdir -p /user/hive/tmp
hadoop fs -mkdir -p /user/hive/log

(3) Replace every occurrence of ${system:java.io.tmpdir} in the configuration file with /work/tmp (remember to create that directory)

(4) Replace every occurrence of ${system:user.name} in the configuration file with ${user.name}
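A minimal sketch of steps (3) and (4), assuming hive-site.xml is in the current conf directory and GNU sed is available (the commands edit the file in place, so keep a backup of the template first):

mkdir -p /work/tmp
sed -i 's#${system:java.io.tmpdir}#/work/tmp#g' hive-site.xml
sed -i 's#${system:user.name}#${user.name}#g' hive-site.xml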

(5) In the configuration below, set the MySQL driver class name; if, like me, you are using a newer driver version, note that the class name is com.mysql.cj.jdbc.Driver

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log/hadoop</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.3.123:3306/myhive?createDatabaseIfNotExist=true&amp;serverTimezone=UTC</value>
    <description> JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>

(6) Download the MySQL driver jar (mysql-connector-java-8.0.13.jar) and put it in Hive's lib directory
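For example, assuming the jar was downloaded into the same installer directory as the Hive tarball (the source path is an assumption):

cp /work/soft/installer/mysql-connector-java-8.0.13.jar /work/soft/apache-hive-2.3.4-bin/lib/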

(7) Next, edit the startup script configuration; again, copy the template and edit it

cp hive-env.sh.template hive-env.sh

HADOOP_HOME=/work/soft/hadoop-2.6.4
export HIVE_CONF_DIR=/work/soft/apache-hive-2.3.4-bin/conf

3. Starting Hive

1. First initialize the metastore schema in MySQL: go into Hive's bin directory and run the initialization command

bash schematool -initSchema -dbType mysql

2. When you see the completion message in the output, the initialization succeeded
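As an extra check, schematool can also report the schema version it recorded in the metastore (run from the same bin directory):

bash schematool -info -dbType mysql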

3. Before starting Hive, set the HDFS directory permissions to 777 (readable, writable, and executable)

hadoop fs -chmod -R 777 /

4. Run the following commands to start Hive; if they succeed, Hive is up and running

hive

show databases;
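As a quick end-to-end smoke test (the database and table names are hypothetical, purely for illustration), you can also create a table and list it from the shell:

hive -e "CREATE DATABASE IF NOT EXISTS demo;"
hive -e "CREATE TABLE IF NOT EXISTS demo.t_user (id INT, name STRING);"
hive -e "SHOW TABLES IN demo;"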
