Installing CDH 5.5 Hive and Impala with yum

Part 1: Installing Hive

Role assignment:

172.16.57.75  bd-ops-test-75  mysql-server
172.16.57.77  bd-ops-test-77  Hiveserver2 HiveMetaStore

1. Install Hive

Install Hive on node 77:

# yum install hive hive-metastore hive-server2 hive-jdbc hive-hbase -y

On the remaining nodes you can install just the client packages:

# yum install hive hive-server2 hive-jdbc hive-hbase -y

2. Install MySQL

Install MySQL with yum:

# yum install mysql mysql-devel mysql-server mysql-libs -y

Start the database:

# enable the service at boot, then start it
# chkconfig mysqld on
# service mysqld start

Install the JDBC driver:

# yum install mysql-connector-java
# ln -s /usr/share/java/mysql-connector-java.jar /usr/lib/hive/lib/mysql-connector-java.jar

Set the initial MySQL root password to bigdata:

# mysqladmin -uroot password 'bigdata'

Log in to MySQL and run the following:

CREATE DATABASE metastore;
USE metastore;
SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-1.1.0.mysql.sql;
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
FLUSH PRIVILEGES;

Note: the user created here is hive with password hive; change these to suit your needs.

Edit the following in hive-site.xml:

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://172.16.57.75:3306/metastore?useUnicode=true&amp;characterEncoding=UTF-8</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
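The metastore also needs the credentials of the MySQL user created earlier. A sketch, assuming the hive/hive account from the SQL step above; adjust to whatever user and password you actually created:

```xml
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
</property>
```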

3. Configure Hive

Edit /etc/hadoop/conf/hadoop-env.sh and add the HADOOP_MAPRED_HOME environment variable. Without it, MapReduce jobs submitted through YARN fail with an UNKNOWN RPC TYPE exception:

export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce

Create the Hive data warehouse directory in HDFS. Note the following:

  • Hive's warehouse defaults to /user/hive/warehouse in HDFS. It is recommended to set its permissions to 1777 so that every user can create and access tables, but cannot delete tables that are not their own.
  • Every user who queries Hive must have a home directory in HDFS (under /user, e.g. /user/root for the root user).
  • /tmp on the Hive node must be world-writable.

Create the directories and set permissions:

# sudo -u hdfs hadoop fs -mkdir /user/hive
# sudo -u hdfs hadoop fs -chown hive /user/hive

# sudo -u hdfs hadoop fs -mkdir /user/hive/warehouse
# sudo -u hdfs hadoop fs -chmod 1777 /user/hive/warehouse
# sudo -u hdfs hadoop fs -chown hive /user/hive/warehouse
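Mode 1777 is world-writable with the sticky bit set: anyone can create files, but only the owner may delete them. The same bits behave identically on a local directory, which is a quick way to see the semantics:

```shell
# Demonstrate mode 1777 (world-writable + sticky bit) on a local directory;
# HDFS applies the same permission model to /user/hive/warehouse.
mkdir -p /tmp/warehouse_demo
chmod 1777 /tmp/warehouse_demo
stat -c '%a' /tmp/warehouse_demo
# → 1777
```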

Set the JDK environment variable in hive-env.sh:

# vim /etc/hive/conf/hive-env.sh
export JAVA_HOME=/opt/programs/jdk1.7.0_67

Start hive-metastore and hive-server2:

# service hive-metastore start
# service hive-server2 start

4. Test

$ hive -e'create table t(id int);'
$ hive -e'select * from t limit 2;'
$ hive -e'select id from t;'

Connect with Beeline:

$ beeline
beeline> !connect jdbc:hive2://localhost:10000

5. Integrate with HBase

First install hive-hbase:

# yum install hive-hbase -y

If you are on CDH 4, run the following in the hive shell to add the jars:

ADD JAR /usr/lib/hive/lib/zookeeper.jar;
ADD JAR /usr/lib/hive/lib/hbase.jar;
ADD JAR /usr/lib/hive/lib/hive-hbase-handler-<hive_version>.jar;
-- use the guava version actually installed
ADD JAR /usr/lib/hive/lib/guava-11.0.2.jar;

If you are on CDH 5, run the following in the hive shell instead:

ADD JAR /usr/lib/hive/lib/zookeeper.jar;
ADD JAR /usr/lib/hive/lib/hive-hbase-handler.jar;
ADD JAR /usr/lib/hbase/lib/guava-12.0.1.jar;
ADD JAR /usr/lib/hbase/hbase-client.jar;
ADD JAR /usr/lib/hbase/hbase-common.jar;
ADD JAR /usr/lib/hbase/hbase-hadoop-compat.jar;
ADD JAR /usr/lib/hbase/hbase-hadoop2-compat.jar;
ADD JAR /usr/lib/hbase/hbase-protocol.jar;
ADD JAR /usr/lib/hbase/hbase-server.jar;

Alternatively, you can configure these jars via the hive.aux.jars.path property in hive-site.xml, or via export HIVE_AUX_JARS_PATH= in hive-env.sh.
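A sketch of the hive-site.xml variant, using paths from the CDH 5 jar list above (jar versions and the exact set of jars are assumptions; match them to what is actually installed):

```xml
<property>
    <name>hive.aux.jars.path</name>
    <value>file:///usr/lib/hive/lib/zookeeper.jar,file:///usr/lib/hive/lib/hive-hbase-handler.jar,file:///usr/lib/hbase/lib/guava-12.0.1.jar,file:///usr/lib/hbase/hbase-client.jar,file:///usr/lib/hbase/hbase-common.jar,file:///usr/lib/hbase/hbase-server.jar</value>
</property>
```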

 

Part 2: Installing Impala

Like Hive, Impala can interact directly with HDFS and HBase. The difference is that Hive and other frameworks built on MapReduce are suited to long-running batch jobs, such as batch extract-transform-load (ETL) work, while Impala is aimed at real-time, interactive queries.

Role assignment:

172.16.57.74  bd-ops-test-74  impala-state-store impala-catalog impala-server 
172.16.57.75  bd-ops-test-75  impala-server
172.16.57.76  bd-ops-test-76  impala-server
172.16.57.77  bd-ops-test-77  impala-server

1. Install

Install on node 74:

# yum install impala-state-store impala-catalog impala-server -y

Install on nodes 75, 76 and 77:

# yum install impala-server -y

2. Configure

2.1 Edit the configuration files

Check the installation paths:

# find / -name impala
	/var/run/impala
	/var/lib/alternatives/impala
	/var/log/impala
	/usr/lib/impala
	/etc/alternatives/impala
	/etc/default/impala
	/etc/impala

The impalad configuration directory is set by the IMPALA_CONF_DIR environment variable and defaults to /usr/lib/impala/conf. Impala's default settings live in /etc/default/impala; in that file, set IMPALA_CATALOG_SERVICE_HOST and IMPALA_STATE_STORE_HOST:

IMPALA_CATALOG_SERVICE_HOST=bd-ops-test-74
IMPALA_STATE_STORE_HOST=bd-ops-test-74
IMPALA_STATE_STORE_PORT=24000
IMPALA_BACKEND_PORT=22000
IMPALA_LOG_DIR=/var/log/impala

IMPALA_CATALOG_ARGS=" -log_dir=${IMPALA_LOG_DIR} -sentry_config=/etc/impala/conf/sentry-site.xml"
IMPALA_STATE_STORE_ARGS=" -log_dir=${IMPALA_LOG_DIR} -state_store_port=${IMPALA_STATE_STORE_PORT}"
IMPALA_SERVER_ARGS=" \
    -log_dir=${IMPALA_LOG_DIR} \
    -use_local_tz_for_unix_timestamp_conversions=true \
    -convert_legacy_hive_parquet_utc_timestamps=true \
    -catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
    -state_store_port=${IMPALA_STATE_STORE_PORT} \
    -use_statestore \
    -state_store_host=${IMPALA_STATE_STORE_HOST} \
    -be_port=${IMPALA_BACKEND_PORT} \
    -server_name=server1\
    -sentry_config=/etc/impala/conf/sentry-site.xml"

ENABLE_CORE_DUMPS=false

# LIBHDFS_OPTS=-Djava.library.path=/usr/lib/impala/lib
# MYSQL_CONNECTOR_JAR=/usr/share/java/mysql-connector-java.jar
# IMPALA_BIN=/usr/lib/impala/sbin
# IMPALA_HOME=/usr/lib/impala
# HIVE_HOME=/usr/lib/hive
# HBASE_HOME=/usr/lib/hbase
# IMPALA_CONF_DIR=/etc/impala/conf
# HADOOP_CONF_DIR=/etc/impala/conf
# HIVE_CONF_DIR=/etc/impala/conf
# HBASE_CONF_DIR=/etc/impala/conf

To cap the memory Impala may use, append -mem_limit=70% to the IMPALA_SERVER_ARGS value above.

To set the maximum number of requests per queue, append -default_pool_max_requests=-1 to IMPALA_SERVER_ARGS; a value of -1 means no limit.
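Combining both of the above, the tail of IMPALA_SERVER_ARGS in /etc/default/impala would end up looking like this (most flags from the full listing above omitted for brevity):

```shell
# /etc/default/impala -- IMPALA_SERVER_ARGS with both optional flags appended
IMPALA_SERVER_ARGS=" \
    -log_dir=${IMPALA_LOG_DIR} \
    -sentry_config=/etc/impala/conf/sentry-site.xml \
    -mem_limit=70% \
    -default_pool_max_requests=-1"
```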

On node 74, create symlinks for hive-site.xml, core-site.xml and hdfs-site.xml into /etc/impala/conf, then add the following to hdfs-site.xml:

<property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
</property>
 
<property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hadoop-hdfs/dn._PORT</value>
</property>

<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>

Sync these files to the other nodes.

2.2 Create the socket path

On every node, create /var/run/hadoop-hdfs:

# mkdir -p /var/run/hadoop-hdfs

2.3 User requirements

The Impala packages create a user and group named impala during installation; do not delete them.

If you want Impala to work together with YARN and Llama, add the impala user to the hdfs group.

When Impala executes DROP TABLE, it moves the files to the HDFS trash, so you need to create an HDFS directory /user/impala that is writable by the impala user. Likewise, Impala needs to read data under the Hive warehouse, so add the impala user to the hive group.

Impala cannot run as root, because direct reads are not permitted for the root user.

Create the impala user's home directory and set its ownership:

# sudo -u hdfs hadoop fs -mkdir /user/impala
# sudo -u hdfs hadoop fs -chown impala /user/impala

Check the groups the impala user belongs to:

# groups impala
impala : impala hadoop hdfs hive

As shown above, the impala user belongs to the impala, hadoop, hdfs and hive groups.

2.4 Start the services

Start the statestore and catalog services on node 74:

# service impala-state-store start
# service impala-catalog start

Then, on every node where the impala-server package is installed (74 through 77), start the impalad service:

# service impala-server start

2.5 Use impala-shell

Start the Impala shell with impala-shell, connect to node 74, and refresh the metadata:

#impala-shell 
Starting Impala Shell without Kerberos authentication
Connected to bd-dev-hadoop-70:21000
Server version: impalad version 2.3.0-cdh5.5.1 RELEASE (build 73bf5bc5afbb47aa7eab06cfbf6023ba8cb74f3c)
***********************************************************************************
Welcome to the Impala shell. Copyright (c) 2015 Cloudera, Inc. All rights reserved.
(Impala Shell v2.3.0-cdh5.5.1 (73bf5bc) built on Wed Dec  2 10:39:33 PST 2015)

After running a query, type SUMMARY to see a summary of where time was spent.
***********************************************************************************
[bd-dev-hadoop-70:21000] > invalidate metadata;

After creating tables in Hive, run the INVALIDATE METADATA statement the first time you start impala-shell so that Impala recognizes the newly created tables. (In Impala 1.2 and later you only need to run INVALIDATE METADATA on one node, not on every Impala node.)

You can also pass other options; to see what is available:

#impala-shell -h
Usage: impala_shell.py [options]

Options:
  -h, --help            show this help message and exit
  -i IMPALAD, --impalad=IMPALAD
                        <host:port> of impalad to connect to
                        [default: bd-dev-hadoop-70:21000]
  -q QUERY, --query=QUERY
                        Execute a query without the shell [default: none]
  -f QUERY_FILE, --query_file=QUERY_FILE
                        Execute the queries in the query file, delimited by ;
                        [default: none]
  -k, --kerberos        Connect to a kerberized impalad [default: False]
  -o OUTPUT_FILE, --output_file=OUTPUT_FILE
                        If set, query results are written to the given file.
                        Results from multiple semicolon-terminated queries
                        will be appended to the same file [default: none]
  -B, --delimited       Output rows in delimited mode [default: False]
  --print_header        Print column names in delimited mode when pretty-
                        printed. [default: False]
  --output_delimiter=OUTPUT_DELIMITER
                        Field delimiter to use for output in delimited mode
                        [default: \t]
  -s KERBEROS_SERVICE_NAME, --kerberos_service_name=KERBEROS_SERVICE_NAME
                        Service name of a kerberized impalad [default: impala]
  -V, --verbose         Verbose output [default: True]
  -p, --show_profiles   Always display query profiles after execution
                        [default: False]
  --quiet               Disable verbose output [default: False]
  -v, --version         Print version information [default: False]
  -c, --ignore_query_failure
                        Continue on query failure [default: False]
  -r, --refresh_after_connect
                        Refresh Impala catalog after connecting
                        [default: False]
  -d DEFAULT_DB, --database=DEFAULT_DB
                        Issues a use database command on startup
                        [default: none]
  -l, --ldap            Use LDAP to authenticate with Impala. Impala must be
                        configured to allow LDAP authentication.
                        [default: False]
  -u USER, --user=USER  User to authenticate with. [default: root]
  --ssl                 Connect to Impala via SSL-secured connection
                        [default: False]
  --ca_cert=CA_CERT     Full path to certificate file used to authenticate
                        Impala's SSL certificate. May either be a copy of
                        Impala's certificate (for self-signed certs) or the
                        certificate of a trusted third-party CA. If not set,
                        but SSL is enabled, the shell will NOT verify Impala's
                        server certificate [default: none]
  --config_file=CONFIG_FILE
                        Specify the configuration file to load options. File
                        must have case-sensitive '[impala]' header. Specifying
                        this option within a config file will have no effect.
                        Only specify this as a option in the commandline.
                        [default: /root/.impalarc]
  --live_summary        Print a query summary every 1s while the query is
                        running. [default: False]
  --live_progress       Print a query progress every 1s while the query is
                        running. [default: False]
  --auth_creds_ok_in_clear
                        If set, LDAP authentication may be used with an
                        insecure connection to Impala. WARNING: Authentication
                        credentials will therefore be sent unencrypted, and
                        may be vulnerable to attack. [default: none]

Export data with impala-shell:

impala-shell -i '172.16.57.74:21000' -r -q "select * from test" -B --output_delimiter="\t" -o result.txt
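With -B and a tab output delimiter, the exported file is plain tab-separated text, so it can be post-processed with standard tools. A sketch; the sample rows below are made up to stand in for the real query output:

```shell
# Sample rows in the same TSV shape that impala-shell -B produces
# (contents are hypothetical, for illustration only).
printf '1\talice\n2\tbob\n' > result.txt

# Extract the second column with awk.
awk -F'\t' '{ print $2 }' result.txt
# → alice
# → bob
```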