Hive is an open-source data warehouse tool built on Hadoop for storing and processing massive amounts of structured data. It keeps the data in the Hadoop file system rather than in a database, but provides a database-like mechanism for storing and processing it, and uses HQL (a SQL-like language) to manage and process the data automatically. We can treat the structured data in Hive as a collection of tables, while the data is actually stored distributed across HDFS. Hive parses and transforms the HQL statements, ultimately generating a series of Hadoop map/reduce jobs, and completes the data processing by executing those jobs.
Hive grew out of Facebook's log-analysis needs. Faced with massive structured data, Hive accomplished at a low cost what previously required a large-scale database, with a relatively low learning curve and flexible, efficient application development.
Barely a year has passed since Hive's first official stable release, 0.3.0, on 2009-04-29, and it is still gradually maturing; the related material that can be found online is quite scarce, Chinese material even more so. This article explores applying Hive to real business workloads and sums up that experience, in the hope that readers can avoid some of the same detours.
JDK: 1.8
Hadoop Release: 2.7.4
CentOS: 7.3

node1 (master): 192.168.252.121
node2 (slave1): 192.168.252.122
node3 (slave2): 192.168.252.123
node4 (mysql):  192.168.252.124
Install Apache Hive
The prerequisite is that a Hadoop cluster is already installed. Hive only needs to be installed on the NameNode side of the Hadoop cluster (install it on every NameNode); it does not have to be installed on the DataNode machines. Note also that although editing the configuration files does not require Hadoop to be running, this article uses the hadoop hdfs commands, and Hadoop must be up while you execute them; starting Hive likewise requires Hadoop to be running normally, so it is best to start the Hadoop cluster first.
Install MySQL
MySQL is used to store Hive's metadata. (Hive's bundled embedded database Derby could also be used, but production Hive deployments generally do not use Derby.) A standalone MySQL instance is enough here; if you want high availability, you can also deploy MySQL in master-slave mode.
Hadoop
Hadoop-2.7.4 cluster quick setup
MySQL (either of the two will do)
Installing the MySQL 5.7.19 binary distribution on CentOS 7.3
Setting up MySQL 5.7.19 master-slave replication, with an analysis of how replication works
su hadoop
cd /home/hadoop/
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-2.3.0/apache-hive-2.3.0-bin.tar.gz
tar -zxvf apache-hive-2.3.0-bin.tar.gz
mv apache-hive-2.3.0-bin hive-2.3.0
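A quick sanity check that the unpack succeeded (a minimal sketch; bin, conf and lib are among the directories shipped in the 2.3.0 binary tarball):

ls /home/hadoop/hive-2.3.0
# expect to see bin/ conf/ lib/ among other entries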
If the variables should take effect for all users, edit the /etc/profile file; if only for the current user, edit the ~/.bashrc file.
sudo vi /etc/profile
#hive
export HIVE_HOME=/home/hadoop/hive-2.3.0/
export PATH=${HIVE_HOME}/bin:$PATH
Make the environment variables take effect by running:

source /etc/profile
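To confirm the new variables are visible in the current shell, a quick check (a minimal sketch; which simply resolves the hive launcher via the updated PATH):

echo $HIVE_HOME    # should print /home/hadoop/hive-2.3.0/
which hive         # should resolve to /home/hadoop/hive-2.3.0/bin/hive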
Generate hive-site.xml from the bundled template:

cd /home/hadoop/hive-2.3.0/conf
cp hive-default.xml.template hive-site.xml
Create the HDFS directories using hadoop, because hive-site.xml contains the following default configuration:
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>
Go into the hadoop installation directory, run the hadoop commands to create the /user/hive/warehouse directory (plus tmp and log), and open up their permissions; these directories are used to store Hive's files:
cd /home/hadoop/hadoop-2.7.4
bin/hadoop fs -mkdir -p /user/hive/warehouse
bin/hadoop fs -mkdir -p /user/hive/tmp
bin/hadoop fs -mkdir -p /user/hive/log
bin/hadoop fs -chmod -R 777 /user/hive/warehouse
bin/hadoop fs -chmod -R 777 /user/hive/tmp
bin/hadoop fs -chmod -R 777 /user/hive/log
Check that the directories were created with the following command:
bin/hadoop fs -ls /user/hive
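If creation succeeded, the listing should look roughly like this (owner, group and timestamps will of course differ on your cluster):

Found 3 items
drwxrwxrwx   - hadoop supergroup          0 2017-09-22 01:00 /user/hive/log
drwxrwxrwx   - hadoop supergroup          0 2017-09-22 01:00 /user/hive/tmp
drwxrwxrwx   - hadoop supergroup          0 2017-09-22 01:00 /user/hive/warehouse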
Search for hive.exec.scratchdir and change its value to /user/hive/tmp:
<property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
</property>
Search for hive.querylog.location and change its value to /user/hive/log/hadoop:
<property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log/hadoop</value>
    <description>Location of Hive run time structured log file</description>
</property>
Search for javax.jdo.option.ConnectionURL and change its value to the MySQL address:
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.252.124:3306/hive?createDatabaseIfNotExist=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
</property>
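Before going further it is worth confirming that node1 can actually reach the MySQL instance on node4. A minimal sketch, assuming a mysql client is installed on node1 and using the credentials configured below:

mysql -h 192.168.252.124 -P 3306 -uroot -p -e "SELECT VERSION();"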
Search for javax.jdo.option.ConnectionDriverName and change its value to the MySQL driver class name:
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>
Search for javax.jdo.option.ConnectionUserName and change its value to the MySQL login user:
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
</property>
Search for javax.jdo.option.ConnectionPassword and change its value to the MySQL login password:
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>mima</value>
    <description>password to use against metastore database</description>
</property>
Create a local tmp directory:

mkdir /home/hadoop/hive-2.3.0/tmp
Then, in hive-site.xml, replace every occurrence of ${system:java.io.tmpdir} with /home/hadoop/hive-2.3.0/tmp, and every occurrence of ${system:user.name} with ${user.name}.
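The placeholders appear several times, so it is easier to let sed do the substitution than to edit by hand. A sketch (back up hive-site.xml first; the single quotes keep the shell from expanding the ${...} placeholders):

cd /home/hadoop/hive-2.3.0/conf
cp hive-site.xml hive-site.xml.bak
sed -i 's#${system:java.io.tmpdir}#/home/hadoop/hive-2.3.0/tmp#g' hive-site.xml
sed -i 's#${system:user.name}#${user.name}#g' hive-site.xml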
Create hive-env.sh from its template and set the following variables:

cp hive-env.sh.template hive-env.sh
vi hive-env.sh

HADOOP_HOME=/home/hadoop/hadoop-2.7.4/
export HIVE_CONF_DIR=/home/hadoop/hive-2.3.0/conf
export HIVE_AUX_JARS_PATH=/home/hadoop/hive-2.3.0/lib
Download the MySQL JDBC driver into Hive's lib directory:

cd /home/hadoop/hive-2.3.0/lib
wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
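Confirm the connector jar actually landed where Hive will look for it (if it is missing, schematool will fail to load the com.mysql.jdbc.Driver class configured above):

ls -lh /home/hadoop/hive-2.3.0/lib/mysql-connector-java-5.1.38.jar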
First make sure the hive database exists in MySQL (since the JDBC URL above carries createDatabaseIfNotExist=true, it will be created automatically on the first connection), then initialize the metastore schema:
cd /home/hadoop/hive-2.3.0/bin
./schematool -initSchema -dbType mysql
If you see output like the following, the initialization succeeded:
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed
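schematool can also report the schema version it recorded, which is a handy cross-check (the -info switch is part of the standard schematool options):

./schematool -info -dbType mysql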
Log in to MySQL and verify what was created:

/usr/local/mysql/bin/mysql -uroot -p
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)
mysql> use hive;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+---------------------------+
| Tables_in_hive            |
+---------------------------+
| AUX_TABLE                 |
| BUCKETING_COLS            |
| CDS                       |
| COLUMNS_V2                |
| COMPACTION_QUEUE          |
| COMPLETED_COMPACTIONS     |
| COMPLETED_TXN_COMPONENTS  |
| DATABASE_PARAMS           |
| DBS                       |
| DB_PRIVS                  |
| DELEGATION_TOKENS         |
| FUNCS                     |
| FUNC_RU                   |
| GLOBAL_PRIVS              |
| HIVE_LOCKS                |
| IDXS                      |
| INDEX_PARAMS              |
| KEY_CONSTRAINTS           |
| MASTER_KEYS               |
| NEXT_COMPACTION_QUEUE_ID  |
| NEXT_LOCK_ID              |
| NEXT_TXN_ID               |
| NOTIFICATION_LOG          |
| NOTIFICATION_SEQUENCE     |
| NUCLEUS_TABLES            |
| PARTITIONS                |
| PARTITION_EVENTS          |
| PARTITION_KEYS            |
| PARTITION_KEY_VALS        |
| PARTITION_PARAMS          |
| PART_COL_PRIVS            |
| PART_COL_STATS            |
| PART_PRIVS                |
| ROLES                     |
| ROLE_MAP                  |
| SDS                       |
| SD_PARAMS                 |
| SEQUENCE_TABLE            |
| SERDES                    |
| SERDE_PARAMS              |
| SKEWED_COL_NAMES          |
| SKEWED_COL_VALUE_LOC_MAP  |
| SKEWED_STRING_LIST        |
| SKEWED_STRING_LIST_VALUES |
| SKEWED_VALUES             |
| SORT_COLS                 |
| TABLE_PARAMS              |
| TAB_COL_STATS             |
| TBLS                      |
| TBL_COL_PRIVS             |
| TBL_PRIVS                 |
| TXNS                      |
| TXN_COMPONENTS            |
| TYPES                     |
| TYPE_FIELDS               |
| VERSION                   |
| WRITE_SET                 |
+---------------------------+
57 rows in set (0.00 sec)
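Among these, the VERSION table holds the schema version schematool just wrote; querying it is another quick way to confirm the initialization (in the stock 2.3.0 schema you should see a single row whose SCHEMA_VERSION is 2.3.0):

mysql> select * from VERSION;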
Start Hive
cd /home/hadoop/hive-2.3.0/bin
./hive
Create a database
hive> create database ymq;
OK
Time taken: 0.742 seconds
Switch to the database
hive> use ymq;
OK
Time taken: 0.036 seconds
Create a table
hive> create table test (mykey string,myval string);
OK
Time taken: 0.569 seconds
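You can double-check the table definition with desc (a minimal check; output abridged):

hive> desc test;
OK
mykey                   string
myval                   string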
Insert data
hive> insert into test values("1","www.ymq.io");
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = hadoop_20170922011126_abadfa44-8ebe-4ffc-9615-4241707b3c03
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1506006892375_0001, Tracking URL = http://node1:8088/proxy/application_1506006892375_0001/
Kill Command = /home/hadoop/hadoop-2.7.4//bin/hadoop job -kill job_1506006892375_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2017-09-22 01:12:12,763 Stage-1 map = 0%, reduce = 0%
2017-09-22 01:12:20,751 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.24 sec
MapReduce Total cumulative CPU time: 1 seconds 240 msec
Ended Job = job_1506006892375_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://node1:9000/user/hive/warehouse/ymq.db/test/.hive-staging_hive_2017-09-22_01-11-26_242_8022847052615616955-1/-ext-10000
Loading data to table ymq.test
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 1.24 sec   HDFS Read: 4056 HDFS Write: 77 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 240 msec
OK
Time taken: 56.642 seconds
Query the data
hive> select * from test;
OK
1	www.ymq.io
Time taken: 0.253 seconds, Fetched: 1 row(s)
Finally, view the freshly written data in the HDFS web UI.
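The same check can be done from the command line: a Hive table is just files under the warehouse directory, so the inserted row can be read straight out of HDFS (the path comes from the job log above; 000000_0 is the usual output file name for a single-mapper job, but may differ on your run):

cd /home/hadoop/hadoop-2.7.4
bin/hadoop fs -ls /user/hive/warehouse/ymq.db/test
bin/hadoop fs -cat /user/hive/warehouse/ymq.db/test/000000_0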