HiveServer2 connection error: Could not open client transport with JDBC Uri: jdbc:hive2://hadoop01:10000

The HiveServer2 connection fails with the following error: Error: Could not open client transport with JDBC Uri: jdbc:hive2://hadoop01:10000: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
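"Connection refused" means nothing accepted the TCP connection on hadoop01:10000, so the checklist below walks through every service that has to be up. As a quick first check, you can probe the port directly (the host and port here come from the error message above; adjust them to your setup):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or host unresolvable
        return False

# e.g. port_open("hadoop01", 10000) — False reproduces this error's symptom
```

If this returns False, HiveServer2 is not listening: not started, bound to a different interface, or blocked by a firewall.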

1. Check whether the HiveServer2 service is running

[root@hadoop01 ~]# jps 
5101 RunJar            # running normally

2. Check whether Hadoop safe mode is off

[root@hadoop01 ~]# hdfs dfsadmin -safemode get
Safe mode is OFF     # this is normal

If it shows "Safe mode is ON", see https://www.cnblogs.com/-xiaoyu-/p/11399287.html for how to handle it (usually `hdfs dfsadmin -safemode leave`).
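If you script these checks, the safe-mode state is easy to parse from the command's output (a minimal sketch, assuming the output format shown above):

```python
def safe_mode_off(dfsadmin_output: str) -> bool:
    """Parse the output of `hdfs dfsadmin -safemode get`."""
    return dfsadmin_output.strip().endswith("OFF")
```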

3. Open http://hadoop01:50070/ in a browser to check whether the Hadoop cluster is up

4. Check whether the MySQL service is running

[root@hadoop01 ~]# service mysqld status
Redirecting to /bin/systemctl status mysqld.service
● mysqld.service - MySQL 8.0 database server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-01-05 23:30:18 CST; 8min ago
  Process: 5463 ExecStartPost=/usr/libexec/mysql-check-upgrade (code=exited, status=0/SUCCESS)
  Process: 5381 ExecStartPre=/usr/libexec/mysql-prepare-db-dir mysqld.service (code=exited, status=0/SUCCESS)
  Process: 5357 ExecStartPre=/usr/libexec/mysql-check-socket (code=exited, status=0/SUCCESS)
 Main PID: 5418 (mysqld)
   Status: "Server is operational"
    Tasks: 46 (limit: 17813)
   Memory: 512.5M
   CGroup: /system.slice/mysqld.service
           └─5418 /usr/libexec/mysqld --basedir=/usr

Jan 05 23:29:55 hadoop01 systemd[1]: Starting MySQL 8.0 database server...
Jan 05 23:30:18 hadoop01 systemd[1]: Started MySQL 8.0 database server.

The line "Active: active (running) since Sun 2020-01-05 23:30:18 CST; 8min ago" means MySQL started normally.

If it is not running, start it with: service mysqld start

Important:

Be sure to connect to the MySQL server with a local MySQL client tool first and verify that the connection actually works (this is only a check).

If you cannot connect, check the following:

Configure MySQL so that the root user + password can log in from any host.
1. Enter mysql
[root@hadoop102 mysql-libs]# mysql -uroot -p000000
2. Show the databases
mysql>show databases;
3. Use the mysql database
mysql>use mysql;
4. Show all tables in the mysql database
mysql>show tables;
5. Show the structure of the user table
mysql>desc user;
6. Query the user table (on MySQL 8.0 the column is authentication_string instead of Password)
mysql>select User, Host, Password from user;
7. Update the user table, changing Host to %
mysql>update user set host='%' where host='localhost';
8. Delete root's other host entries
mysql>delete from user where Host='hadoop102';
mysql>delete from user where Host='127.0.0.1';
mysql>delete from user where Host='::1';
9. Flush privileges
mysql>flush privileges;
10. Quit
mysql>quit;
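The point of step 7 is that HiveServer2 reaches MySQL over the network, so a root account restricted to Host='localhost' will reject it. MySQL matches the connecting client's host against the Host column, where % is a wildcard; a simplified sketch of that matching logic (illustration only, not MySQL's full algorithm):

```python
from fnmatch import fnmatchcase

def can_connect(user_rows, user, client_host):
    """Simplified MySQL host matching: user_rows is a list of
    (User, Host) tuples from mysql.user; '%' matches any host."""
    return any(
        u == user and fnmatchcase(client_host, host.replace("%", "*"))
        for u, host in user_rows
    )
```

With only ('root', 'localhost') in the table, a connection from hadoop01 is denied; after step 7 changes Host to '%', it is allowed.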

Check that the MySQL JDBC driver jar (from mysql-connector-java-5.1.27.tar.gz) has already been placed in /root/servers/hive-apache-2.3.6/lib.

<value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>

#Check whether MySQL contains the database "hive" specified above; if MySQL does not have the database, see step 7
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| hive               |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

The "hive" after 3306 is the metastore database; you can name it whatever you like, for example:

<value>jdbc:mysql://hadoop01:3306/metastore?createDatabaseIfNotExist=true</value>
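A wrong connection URL makes both schematool and HiveServer2 fail in confusing ways, so it is worth sanity-checking the URL's pieces. A small sketch that splits a jdbc:mysql:// URL into host, port, and database (illustration only; the real JDBC driver does far more):

```python
from urllib.parse import urlsplit, parse_qs

def parse_mysql_jdbc_url(url: str) -> dict:
    """Split a jdbc:mysql:// URL into host, port, database, and query params."""
    assert url.startswith("jdbc:")
    parts = urlsplit(url[len("jdbc:"):])  # strip "jdbc:" to get a normal URL
    return {
        "host": parts.hostname,
        "port": parts.port or 3306,  # MySQL default port
        "database": parts.path.lstrip("/"),
        "params": parse_qs(parts.query),
    }
```

Check that the host resolves, the port is MySQL's, and the database name matches what `show databases;` lists.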

5. Check whether Hadoop's core-site.xml contains the following configuration

<property>
		<!-- "root" is the current Linux user; mine is root -->
		<name>hadoop.proxyuser.root.hosts</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.root.groups</name>
		<value>*</value>
	</property>
	


If your Linux user has a different name, e.g. xiaoyu,
then configure it as follows:

	<property>
		<name>hadoop.proxyuser.xiaoyu.hosts</name>  
		<value>*</value> 
	</property>
	<property>
		<name>hadoop.proxyuser.xiaoyu.groups</name>
		<value>*</value>
	</property>
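Both variants follow the same pattern: hadoop.proxyuser.&lt;user&gt;.hosts and hadoop.proxyuser.&lt;user&gt;.groups, where &lt;user&gt; is whichever Linux user runs HiveServer2. A small sketch that generates the pair for any user name (illustration only):

```python
def proxyuser_properties(user: str) -> str:
    """Generate the two core-site.xml <property> blocks that let `user`
    impersonate other users (required for HiveServer2 connections)."""
    return "\n".join(
        f"<property>\n"
        f"  <name>hadoop.proxyuser.{user}.{suffix}</name>\n"
        f"  <value>*</value>\n"
        f"</property>"
        for suffix in ("hosts", "groups")
    )
```

Remember to restart Hadoop after changing core-site.xml, or the new proxyuser settings will not take effect.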

6. Other issues

# HDFS file permission problem (add to hdfs-site.xml)

  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>

7. org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.

Initialize the metastore schema with:

schematool -dbType mysql -initSchema

8. One last thing: don't download the wrong package

Apache Hive 2.3.6 download: http://mirror.bit.edu.cn/apache/hive/hive-2.3.6/

Index of /apache/hive/hive-2.3.6
apache-hive-2.3.6-bin.tar.gz   23-Aug-2019 02:53   221M   (download this one)
apache-hive-2.3.6-src.tar.gz   23-Aug-2019 02:53   20M

9. Important

If you have checked everything and it still fails: use jps on every machine to find all running processes, stop them all, reboot the machines, and then bring services back up in order:

Start ZooKeeper (if you have it)

Start the Hadoop cluster

Start the MySQL service

Start HiveServer2

Connect with beeline (e.g. beeline -u jdbc:hive2://hadoop01:10000 -n root)

The configuration files below are for reference only; your actual configuration takes precedence.

hive-site.xml

<configuration>
        <property>
                <name>javax.jdo.option.ConnectionURL</name>
                <value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>
        </property>

        <property>
                <name>javax.jdo.option.ConnectionDriverName</name>
                <value>com.mysql.jdbc.Driver</value>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionUserName</name>
                <value>root</value>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionPassword</name>
                <value>12345678</value>
        </property>
        <property>
                <name>hive.cli.print.current.db</name>
                <value>true</value>
        </property>
        <property>
                <name>hive.cli.print.header</name>
                <value>true</value>
        </property>
<property>
                <name>hive.server2.thrift.bind.host</name>
                <value>hadoop01</value>
        </property>

        <property>
                <name>hive.metastore.schema.verification</name>
                <value>false</value>
        </property>
        <property>
                <name>datanucleus.schema.autoCreateAll</name>
                <value>true</value>
        </property>
<!--
        <property>
                <name>hive.metastore.uris</name>
                <value>thrift://node03.hadoop.com:9083</value>
        </property>
-->
</configuration>

core-site.xml

<configuration>
<!-- Address of the NameNode in HDFS -->
	<property>
			<name>fs.defaultFS</name>
		  <value>hdfs://hadoop01:9000</value>
	</property>

<!-- Storage directory for files Hadoop produces at runtime -->
	<property>
			<name>hadoop.tmp.dir</name>
			<value>/root/servers/hadoop-2.8.5/data/tmp</value>
	</property>
	
	<property>
		<name>hadoop.proxyuser.root.hosts</name>
		<value>*</value>
	</property>
	<property>
		<name>hadoop.proxyuser.root.groups</name>
		<value>*</value>
	</property>

</configuration>

hdfs-site.xml

<configuration>
	<property>
		<name>dfs.replication</name>
		<value>3</value>
	</property>

<!-- Secondary NameNode host (the third machine) -->
	<property>
		  <name>dfs.namenode.secondary.http-address</name>
		  <value>hadoop03:50090</value>
	</property>
	<property>
		<name>dfs.permissions</name>
		<value>false</value>
	</property>
</configuration>

mapred-site.xml

<configuration>
<!-- Run MapReduce on YARN -->
	<property>
			<name>mapreduce.framework.name</name>
			<value>yarn</value>
	</property>
	
<!-- Job history server address (the third machine) -->
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>hadoop03:10020</value>
	</property>
<!-- Job history server web UI address -->
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>hadoop03:19888</value>
	</property>
</configuration>

yarn-site.xml

<configuration>

	<!-- Site specific YARN configuration properties -->
<!-- How reducers fetch data -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>

<!-- Address of the YARN ResourceManager (the second machine) -->
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>hadoop02</value>
	</property>
	
<!-- Enable log aggregation -->
	<property>
		<name>yarn.log-aggregation-enable</name>
		<value>true</value>
	</property>

<!-- Retain aggregated logs for 7 days -->
	<property>
		<name>yarn.log-aggregation.retain-seconds</name>
		<value>604800</value>
	</property>
</configuration>

Original post: https://www.cnblogs.com/-xiaoyu-/p/12158984.html
