```
java.lang.ExceptionInInitializerError
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:190)
    at org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher.init(JDBCStatsPublisher.java:265)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:412)
Caused by: java.lang.SecurityException: sealing violation: package org.apache.derby.impl.jdbc.authentication is sealed
    at java.net.URLClassLoader.getAndVerifyPackage(URLClassLoader.java:388)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:417)
```
######Solution:
```
Copy the mysql-connector-java-5.1.6-bin.jar package into the $HIVE_HOME/lib directory
```
</br>
</br>
```
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
    at jline.TerminalFactory.create(TerminalFactory.java:101)
    at jline.TerminalFactory.get(TerminalFactory.java:158)
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
    at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
```
######Solution:
```
Copy jline-2.12.jar from the current Hive version's $HIVE_HOME/lib directory into the $HADOOP_HOME/share/hadoop/yarn/lib directory, and delete the older Hive version's jline jar from $HADOOP_HOME/share/hadoop/yarn/lib
```
</br>
</br>
```
Exception in thread "main" java.lang.RuntimeException: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
    at org.apache.hadoop.fs.Path.initialize(Path.java:206)
    at org.apache.hadoop.fs.Path.<init>(Path.java:172)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
    at java.net.URI.checkPath(URI.java:1804)
    at java.net.URI.<init>(URI.java:752)
    at org.apache.hadoop.fs.Path.initialize(Path.java:203)
    ... 11 more
```
######Solution:
```
1. Inspect hive-site.xml; several configuration values contain "system:java.io.tmpdir"
2. Create the directory ${HIVE_HOME}/logs
3. Change the configuration items whose values contain "system:java.io.tmpdir" to paths under ${HIVE_HOME}/logs, i.e. set the properties to:
```
```
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>${HIVE_HOME}/logs/HiveJobsLog</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>${HIVE_HOME}/logs/ResourcesLog</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>${HIVE_HOME}/logs/HiveRunLog</value>
    <description>Location of Hive run time structured log file</description>
</property>
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>${HIVE_HOME}/logs/OpertitionLog</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
```
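The directories referenced by these properties must exist before Hive starts. A minimal sketch that creates them, assuming `/tmp/hive-demo` as a hypothetical stand-in when `HIVE_HOME` is not set in the environment:

```shell
# Create the four log directories; /tmp/hive-demo is a hypothetical
# stand-in used only when HIVE_HOME is not set in the environment.
HIVE_HOME=${HIVE_HOME:-/tmp/hive-demo}
mkdir -p "$HIVE_HOME/logs/HiveJobsLog" \
         "$HIVE_HOME/logs/ResourcesLog" \
         "$HIVE_HOME/logs/HiveRunLog" \
         "$HIVE_HOME/logs/OpertitionLog"
ls "$HIVE_HOME/logs"
```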
</br>
</br>
```
Caused by: java.sql.SQLException: Access denied for user 'root'@'master' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:812)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3269)
```
######Solution:
```
The MySQL password is wrong; check that the password configured in hive-site.xml matches the MySQL password
```
</br>
</br>
```
FAILED: RuntimeException org.apache.hadoop.security.AccessControlException: Permission denied: user=services02, access=EXECUTE, inode="/tmp":services01:supergroup:drwx------
```
######Solution:
```
When user=services02 differs from inode="/tmp":services01:supergroup, the host Hive is logged in on is not the host in HDFS active state. Make the user=services02 host the active HDFS host.
```
</br>
</br>
</br>
When creating a Parquet table with:
```
create table parquet_test(x int, y string)
row format serde 'parquet.hive.serde.ParquetHiveSerDe'
stored as
inputformat 'parquet.hive.DeprecatedParquetInputFormat'
outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
```
the following error is reported:
```
FAILED: SemanticException [Error 10055]: Output Format must implement HiveOutputFormat, otherwise it should be either IgnoreKeyTextOutputFormat or SequenceFileOutputFormat
```
######Solution:
```
The parquet.hive.DeprecatedParquetOutputFormat class is not on Hive's CLASSPATH.
Download the parquet-hive-1.2.5.jar package separately (the class lives under $IMPALA_HOME/lib) and create a symlink to it under $HIVE_HOME/lib:
```
```
cd $HIVE_HOME/lib
ln -s /home/hadoop/soft/gz.zip/parquet-hive-1.2.5.jar
```
</br>
</br>
Executing the same create table statement:
```
create table parquet_test(x int, y string)
row format serde 'parquet.hive.serde.ParquetHiveSerDe'
stored as
inputformat 'parquet.hive.DeprecatedParquetInputFormat'
outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
```
reports:
```
Exception in thread "main" java.lang.NoClassDefFoundError: parquet/hadoop/api/WriteSupport
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:247)
```
######Solution:
```
Install Parquet through yum:
    sudo yum -y install parquet
The parquet jars are installed under /usr/lib/parquet. Copy all the jars there (except the javadoc.jar and sources.jar ones) into $HIVE_HOME/lib.
If yum cannot find a parquet package, a yum repository providing it must be configured first; look up the details yourself.
```
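The "every jar except the javadoc and sources ones" copy can be done with a single `find`. A sketch using scratch directories as hypothetical stand-ins for `/usr/lib/parquet` and `$HIVE_HOME/lib`:

```shell
# Simulate the jar layout with scratch directories (stand-ins for
# /usr/lib/parquet and $HIVE_HOME/lib).
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/parquet-hadoop.jar" \
      "$src/parquet-hadoop-javadoc.jar" \
      "$src/parquet-hadoop-sources.jar"
# Copy every jar except the javadoc and sources ones.
find "$src" -name '*.jar' ! -name '*javadoc.jar' ! -name '*sources.jar' \
    -exec cp {} "$dst/" \;
ls "$dst"    # only parquet-hadoop.jar is copied
```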
</br>
</br>
Executing:
```
create table parquet_test(x int, y string)
row format serde 'parquet.hive.serde.ParquetHiveSerDe'
stored as
inputformat 'parquet.hive.DeprecatedParquetInputFormat'
outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
```
reports:
```
FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
```
######Solution:
```
Start the metastore service first: hive --service metastore
```
</br>
</br>
```
Error: java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:426)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    ... 9 more
Caused by: java.lang.RuntimeException: Map operator initialization failed
    at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:134)
    ... 22 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.initializeOp(FileSinkOperator.java:386)
    at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:377)
    ... 22 more
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.initializeOp(FileSinkOperator.java:323)
    ... 34 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
```
######Solution:
```
Add the following to hive-env.sh:
JAVA_HOME=/home/hadoop/soft/jdk1.7.0_67
HADOOP_HOME=/home/hadoop/soft/hadoop-2.4.1
HIVE_HOME=/home/hadoop/soft/hive-0.12.0
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_AUX_JARS_PATH=$HIVE_HOME/lib
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$HADOOP_HOME/lib:$HIVE_HOME/lib
```
</br>
</br>
```
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"time":1471928602,"uid":687994,"billid":1004,"archiveid":null,"year":"2016","mouth":"2016-08","day":"2016-08-23","hour":"2016-08-23-04"}
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.ArrayWritable
```
######Solution:
```
1. In the hive-env.sh file, add:
       JAVA_HOME=/home/hadoop/soft/jdk1.7.0_67
       HADOOP_HOME=/home/hadoop/soft/hadoop-2.4.1
       HIVE_HOME=/home/hadoop/soft/hive-0.12.0
       export HIVE_CONF_DIR=$HIVE_HOME/conf
       export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$HADOOP_HOME/lib:$HIVE_HOME/lib
2. Install Parquet through yum: sudo yum -y install parquet
3. The parquet jars are installed under /usr/lib/parquet; copy all the jars there (except the javadoc.jar and sources.jar ones) into $HIVE_HOME/lib.
4. File compression format: before running the insert, set the compression format. Three formats are available; SNAPPY is the usual choice:
       CompressionCodecName.UNCOMPRESSED
       CompressionCodecName.SNAPPY
       CompressionCodecName.GZIP
5. In the Hive command line, execute set parquet.compression=SNAPPY; then perform the insert.
```
</br>
</br>
```
FAILED: SemanticException [Error 10055]: Output Format must implement HiveOutputFormat, otherwise it should be either IgnoreKeyTextOutputFormat or SequenceFileOutputFormat
```
######Solution:
```
Check the Hive version. If Hive is below 0.13, create the table with:
    stored as
    inputformat 'parquet.hive.DeprecatedParquetInputFormat'
    outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
If Hive is 0.13 or above, create the table with:
    stored as parquet;
```
</br>
</br>
```
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
    at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:226)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1654)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
Caused by: MetaException(message:Hive Schema version 2.1.0 does not match metastore schema version 1.2.0 Metastore is not upgraded or corrupt)
    at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7768)
    at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7731)
```
######Solution:
```
(1) Delete Hive's data on HDFS along with the hive database:
    hadoop fs -rm -r -f /tmp/hive
    hadoop fs -rm -r -f /user/hive
(2) Delete Hive's metadata in MySQL:
    mysql -uroot -p
    drop database hive;
```
```
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
    at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:226)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:366)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1654)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
Caused by: MetaException(message:Version information not found in metastore. )
    at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7753)
    at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7731)
```
######Solution:
```
Initialize Hive, using MySQL as Hive's metadata database:
    schematool -dbType mysql -initSchema
```
</br>
</br>
```
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: java.net.ConnectException: Connection refused
```
######Solution:
Option 1: start the HiveServer2 service:
```
cd $HIVE_HOME/bin
./hiveserver2
```
Option 2: configure the HPL/SQL connection profile, then start HiveServer2 and connect through Beeline:
```
<property>
    <name>hplsql.conn.default</name>
    <value>hive2conn</value>
    <description>The default connection profile</description>
</property>
<property>
    <name>hplsql.conn.hive2conn</name>
    <value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://m1:10000</value>
    <description>HiveServer2 JDBC connection</description>
</property>
```
```
cd $HIVE_HOME/bin
./hiveserver2
```
```
cd $HIVE_HOME/bin
./beeline
!connect jdbc:hive2://m1:10000
```
</br>
</br>
```
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:209)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
Caused by: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
    at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:266)
Caused by: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
    at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:336)
    at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:279)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
    at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:89)
    at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:526)
Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException:User: centos is not allowed to impersonate hive
    at org.apache.hadoop.ipc.Client.call(Client.java:1470)
    at org.apache.hadoop.ipc.Client.call(Client.java:1401)
```
######Solution:
In the $HADOOP_HOME/etc/hadoop/core-site.xml file, add the following configuration. Note: the `centos` part of the `hadoop.proxyuser.centos.hosts` property name is the username taken from the `User: *` part of the error message:
```
<property>
    <name>hadoop.proxyuser.centos.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.centos.hosts</name>
    <value>*</value>
</property>
```
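Because the property names embed the username, they have to be generated per user. A hedged sketch (the `user` variable and the heredoc are illustrative helpers, not part of Hadoop):

```shell
# Emit the two proxyuser properties for a given username; "centos" here
# is just the example user taken from the error message above.
user=centos
cat <<EOF
<property>
    <name>hadoop.proxyuser.${user}.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.${user}.hosts</name>
    <value>*</value>
</property>
EOF
```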
Then distribute the updated core-site.xml file to the other hosts.
</br>
</br>
```
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Permission denied: user=anonymous, access=EXECUTE, inode="/tmp":centos:supergroup:drwx------
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
```
######Solution:
```
1. Fix the permissions on the offending directory:
       hadoop fs -chmod 755 /tmp
2. In hive-site.xml, change the following configuration item:
       <property>
           <name>hive.scratch.dir.permission</name>
           <value>755</value>
       </property>
```
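The `drwx------` in the error is mode 700: only the owner may traverse the directory, which is what denies other users EXECUTE access. A local sketch of the difference, using plain `chmod` on a scratch directory as a stand-in for `hadoop fs -chmod`:

```shell
# Mode 700 (drwx------) blocks everyone but the owner; 755 (drwxr-xr-x)
# grants the EXECUTE (traverse) bit to group and others.
d=$(mktemp -d)
chmod 700 "$d"
stat -c '%a' "$d"    # prints 700
chmod 755 "$d"
stat -c '%a' "$d"    # prints 755
```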
</br>
</br>
In HPL/SQL 2.2.1, when a from clause is used, HPL/SQL always looks up the value of the hplsql.dual.table configuration item in the configuration file, and always errors that the dual table cannot be found, or that a column specified in the select statement cannot be found.

HPL/SQL official documentation: http://www.hplsql.org/doc
######Solution:
```
Note: it must be HPL/SQL version 0.3.17 or above.
Download the HPL/SQL 0.3.17 tar.gz file, extract it, place the hplsql-0.3.17.jar package under $HIVE_HOME, and rename it to match the hive-hplsql-*.jar pattern, e.g. hive-hplsql-0.3.17.jar
```
</br>
</br>
</br>
Official configuration reference:
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-Spark
Guide to resolving common pitfalls:
http://www.cnblogs.com/breg/p/5552342.html
Setup tutorial with solutions to some of the pitfalls:
http://www.cnblogs.com/linbingdong/p/5806329.html
</br>
</br>
#####1.Error when starting Hive:
```
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
    at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:226)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:366)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1654)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
    at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:477)
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
```
######Solution:
```
Start the metastore service: hive --service metastore
```
---
</br>
</br>
#####2.When running with the Spark engine, the error is:
```
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
```
######Solution:
```
When the log shows "Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask", it is mostly because the Spark package being used bundles the Hive dependencies; in this case, compile a Spark package by hand instead.
```
---
</br>
</br>
#####3.When running with the Spark engine, the error is:
```
2016-12-19T20:19:15,491 ERROR [main] client.SparkClientImpl: Error while waiting for client to connect.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Cancel client 'dcee57ba-ea77-4e92-bd43-640e8385e2e7'.
Error: Child process exited before connecting back with error log
Warning: Ignoring non-spark config property: hive.spark.client.server.connect.timeout=200000
Warning: Ignoring non-spark config property: hive.spark.client.rpc.threads=8
Warning: Ignoring non-spark config property: hive.spark.client.connect.timeout=1000
Warning: Ignoring non-spark config property: hive.spark.client.secret.bits=256
Warning: Ignoring non-spark config property: hive.spark.client.rpc.max.size=52428800
16/12/19 20:19:15 INFO client.RemoteDriver: Connecting to: m1:48286
Exception in thread "main" java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS
    at org.apache.hive.spark.client.rpc.RpcConfiguration.<clinit>(RpcConfiguration.java:45)
    at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:134)
Caused by: java.lang.RuntimeException: Cancel client 'dcee57ba-ea77-4e92-bd43-640e8385e2e7'.
Error: Child process exited before connecting back with error log
Warning: Ignoring non-spark config property: hive.spark.client.server.connect.timeout=200000
Warning: Ignoring non-spark config property: hive.spark.client.rpc.threads=8
Warning: Ignoring non-spark config property: hive.spark.client.connect.timeout=1000
Warning: Ignoring non-spark config property: hive.spark.client.secret.bits=256
Warning: Ignoring non-spark config property: hive.spark.client.rpc.max.size=52428800
16/12/19 20:19:15 INFO client.RemoteDriver: Connecting to: m1:48286
Exception in thread "main" java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS
    at org.apache.hive.spark.client.rpc.RpcConfiguration.<clinit>(RpcConfiguration.java:45)
    at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:134)
```
######Solution:
```
Reference: http://www.cnblogs.com/breg/p/5552342.html
```
When the log contains `Exception in thread "main" java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS`, it is mostly because the **spark** package being used bundles the **Hive** dependencies; in this case, compile a **spark** package by hand.
---
</br>
</br>
#####4.After installing a pre-compiled spark package, starting the master reports:
```
Spark Command: /home/centos/soft/jdk1.7.0_67/bin/java -cp /home/centos/soft/spark/conf/:/home/centos/soft/spark/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/home/centos/soft/hadoop/etc/hadoop/:/home/centos/soft/hadoop/etc/hadoop/:/home/centos/soft/hadoop/lib/spark-assembly-1.6.0-hadoop2.6.0.jar -Xms1g -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.master.Master --ip m1 --port 7077 --webui-port 8080
========================================
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
    at java.lang.Class.getMethod0(Class.java:2813)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
```
######Solution:
```
Problems of this kind come from errors made while compiling the Spark source; building with Maven is recommended.
```
---
</br>
</br>
#####5.After installing a pre-compiled spark package, starting the master reports:
```
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/Logger
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
    at java.lang.Class.getMethod0(Class.java:2813)
Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
```
######Solution:
```
Problems of this kind come from errors made while compiling the Spark source; building with Maven is recommended.
```
---
</br>
</br>
#####6.Hive on Spark reports an error when using an hplsql stored procedure. The error in **Beeline**:
```
Unhandled exception in HPL/SQL
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://m1:10000: null
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:209)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
Caused by: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
```
The error in **Hive-log.log**:
```
ERROR [HiveServer2-Handler-Pool: Thread-43] server.TThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
    at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
```
######Solution:
Change the `hive.server2.authentication` configuration item to `NONE`:
```
<property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
</property>
```
---
</br>
</br>
#####7.Submitting a job with spark-submit --master yarn works fine, but submitting a Spark job through an HPL/SQL stored procedure reports:
```
16/12/26 16:45:01 WARN cluster.ClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered
16/12/26 16:45:16 WARN cluster.ClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered
```
######Solution:
There are many possible causes for this problem; the fix in my case was to comment out the `127.0.0.1` line in `/etc/hosts` and set a new value:
```
sudo vi /etc/hosts
#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1 localhost
```
---
</br>
</br>
#####8.The error message is:
```
java.lang.StackOverflowError at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:53)
```
######Solution:
The `where` condition of the SQL statement is too long, overflowing the string stack.
---
</br>
</br>
#####9.The error message is:
```
Error: Could not find or load main class org.apache.hive.beeline.BeeLine
```
######Solution:
Recompile Hive with the `-Phive-thriftserver` parameter.
---
</br>
</br>
#####10.The error message is:
```
check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
```
######Solution:
Use a newer `mysql-connector`.
---
</br>
</br>
#####11.The error message is:
```
java.lang.NoSuchMethodError: org.apache.parquet.schema.Types$MessageTypeBuilder.addFields([Lorg/apache/parquet/schema/Type;)Lorg/apache/parquet/schema/Types$BaseGroupBuilder;
```
######Solution:
A version conflict; align the versions of the `parquet` components used by `hive` and `spark`.
---
</br>
</br>
#####12.The error message is:
```
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
```
######Solution:
Do not use the pre-built Spark from the official site. Download the Spark source and recompile it yourself, making sure the compiled Spark version satisfies the major-version requirement for Spark in Hive's pom.xml file. For Hive on Spark, build without any Phive-related parameters.
---
</br>
</br>
#####13.The error message is:
```
java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS at org.apache.hive.spark.client.rpc.RpcConfiguration.<clinit>(RpcConfiguration.java:45)
```
######Solution:
Do not use the pre-built Spark from the official site. Download the Spark source and recompile it yourself, making sure the compiled Spark version satisfies the major-version requirement for Spark in Hive's pom.xml file. For Hive on Spark, build without any Phive-related parameters.
---
</br>
</br>
</br>