[Repost] Problems encountered with Ambari and Hadoop during installation

5. Problems encountered during installation

5.1 Running ambari-server start fails with ERROR: Exiting with exit code -1.

5.1.1 REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information

 

Solution:

This error appears on a reinstall, because /etc/init.d/postgresql initdb fails when the old database is still present. The fix:

First remove PostgreSQL with yum -y remove postgresql*

Then delete everything under /var/lib/pgsql/data

Then configure PostgreSQL again (follow section 1.6)

Then install Ambari again (follow section 3)
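Putting the steps together, a minimal sketch of the cleanup (the init-script commands assume the stock CentOS 6 PostgreSQL packages):

yum -y remove postgresql*            # remove the old packages
rm -rf /var/lib/pgsql/data/*         # wipe the old data directory
# reinstall and configure PostgreSQL as in section 1.6, then re-initialize and start it:
/etc/init.d/postgresql initdb
/etc/init.d/postgresql start
# finally repeat the Ambari installation from section 3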

 

5.1.2 The log contains the following error (the fix is described in section 5.15): ERROR [main] AmbariServer:820 - Failed to run the Ambari Server

 

com.google.inject.ProvisionException: Guice provision errors:

1) Error injecting method, java.lang.NullPointerException

  at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:243)

  at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:125)

  while locating org.apache.ambari.server.api.services.AmbariMetaInfo

    for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:145)

  at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:145)

  while locating org.apache.ambari.server.controller.AmbariServer

 

1 error

        at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)

        at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)

        at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:813)

Caused by: java.lang.NullPointerException

        at org.apache.ambari.server.stack.StackModule.processRepositories(StackModule.java:665)

        at org.apache.ambari.server.stack.StackModule.resolve(StackModule.java:158)

        at org.apache.ambari.server.stack.StackManager.fullyResolveStacks(StackManager.java:201)

        at org.apache.ambari.server.stack.StackManager.(StackManager.java:119)

        at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance()

        at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)

        at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)

        at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)

        at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

        at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)

        at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

        at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)

        at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)

        at com.sun.proxy.$Proxy26.create(Unknown Source)

        at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:247)

5.2 Installing HDFS and HBase fails with /usr/hdp/current/hadoop-client/conf doesn't exist

5.2.1 The /etc/hadoop/conf link exists

This happens because /etc/hadoop/conf and /usr/hdp/current/hadoop-client/conf are linked to each other, creating a circular link, so one of the links must be changed:

cd /etc/hadoop

rm -rf conf

ln -s /etc/hadoop/conf.backup /etc/hadoop/conf

 

HBase can hit the same problem; fix it the same way:

cd /etc/hbase

rm -rf conf

ln -s /etc/hbase/conf.backup /etc/hbase/conf

 

ZooKeeper can hit the same problem; fix it the same way:

cd /etc/zookeeper

rm -rf conf

ln -s /etc/zookeeper/conf.backup /etc/zookeeper/conf
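A quick way to confirm the loop is gone is to resolve each link (a sanity check, not part of the original fix):

readlink -f /etc/hadoop/conf        # should resolve to /etc/hadoop/conf.backup
readlink -f /etc/hbase/conf         # should resolve to /etc/hbase/conf.backup
readlink -f /etc/zookeeper/conf     # should resolve to /etc/zookeeper/conf.backup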

 

5.2.2 The /etc/hadoop/conf link does not exist

Comparing against a correct installation shows that two directories, config.backup and 2.4.0.0-169, are missing; copy them into /etc/hadoop.
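A sketch of the copy, assuming another node still has an intact /etc/hadoop (goodnode below is a placeholder hostname):

scp -r root@goodnode:/etc/hadoop/config.backup /etc/hadoop/
scp -r root@goodnode:/etc/hadoop/2.4.0.0-169 /etc/hadoop/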

 

 

Recreate the conf link under /etc/hadoop:

cd /etc/hadoop

rm -rf conf

ln -s /usr/hdp/current/hadoop-client/conf conf

 

Problem solved.

 

5.3 During host confirmation (Confirm Hosts): Ambari agent machine hostname (localhost) does not match expected ambari server hostname

During the Confirm Hosts step of the Ambari setup, a very strange problem kept coming up; registration always failed with:

Ambari agent machine hostname (localhost.localdomain) does not match expected ambari server hostname (xxx).

The fix was to modify the /etc/hosts file.

 

Before the change:

127.0.0.1   localhost dsj-kj1
::1         localhost dsj-kj1

10.13.39.32     dsj-kj1

10.13.39.33     dsj-kj2

10.13.39.34     dsj-kj3

10.13.39.35     dsj-kj4

10.13.39.36     dsj-kj5

After the change:

127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6

 

10.13.39.32     dsj-kj1

10.13.39.33     dsj-kj2

10.13.39.34     dsj-kj3

10.13.39.35     dsj-kj4

10.13.39.36     dsj-kj5

The hostname appears to have been resolved through the loopback entries (possibly via the IPv6 line), which is odd; after removing the cluster hostname from the 127.0.0.1 and ::1 lines, registration succeeded.
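Before re-running Confirm Hosts, the name each host will report can be checked; by default the Ambari agent resolves its hostname with Python's socket.getfqdn(), so both of the following should print the expected FQDN (dsj-kj1 and so on) rather than localhost.localdomain:

hostname -f
python -c "import socket; print(socket.getfqdn())"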

5.4 Reinstalling ambari-server

Remove the old installation with the cleanup script.

Note that after removal the following system packages must be reinstalled:

yum -y install ruby*

yum -y install redhat-lsb*

yum -y install snappy*

 

Then reinstall following section 3.
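The cleanup script itself is not reproduced here; as a rough manual equivalent (a sketch only, adjust paths and hosts to your environment), the server and agent packages and their leftover state can be removed before reinstalling per section 3:

ambari-server stop
ambari-agent stop
yum -y remove ambari-server ambari-agent
rm -rf /var/lib/ambari-server /var/lib/ambari-agent /var/log/ambari-server /var/log/ambari-agent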

 

5.5 Configuring Ambari to connect to MySQL

On the master node, copy the MySQL JDBC connector jar into /var/lib/ambari-server/resources and rename it mysql-jdbc-driver.jar:

cp /usr/share/java/mysql-connector-java-5.1.17.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar

 

Then start Hive from the web UI.
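On Ambari releases that support it, the driver can also be registered with ambari-server setup instead of copying the jar by hand (a hedged alternative; check that the option exists on your version):

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java-5.1.17.jar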

5.6 During host registration (Confirm Hosts): Failed to start ping port listener of: [Errno 98] Address already in use

 

The agent's ping port is still being held by a stale process.

Solution:
It turned out that a df command had been running without ever completing and was holding port 8670:

[root@testserver1 ~]# netstat -lanp|grep 8670
tcp        0      0 0.0.0.0:8670                0.0.0.0:*                   LISTEN      2587/df

[root@testserver1 ~]# kill -9 2587
After killing it, restart ambari-agent and the problem is resolved:

[root@testserver1 ~]# service ambari-agent restart
Verifying Python version compatibility...
Using python  /usr/bin/python2.6
ambari-agent is not running. No PID found at /var/run/ambari-agent/ambari-agent.pid
Verifying Python version compatibility...
Using python  /usr/bin/python2.6
Checking for previously running Ambari Agent...
Starting ambari-agent
Verifying ambari-agent process status...
Ambari Agent successfully started
Agent PID at: /var/run/ambari-agent/ambari-agent.pid
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log

5.7 During host registration (Confirm Hosts): The following hosts have Transparent HugePages (THP) enabled. THP should be disabled to avoid potential Hadoop performance issues


Solution:
Run the following on each Linux host:

echo never >/sys/kernel/mm/redhat_transparent_hugepage/defrag

echo never >/sys/kernel/mm/redhat_transparent_hugepage/enabled

echo never >/sys/kernel/mm/transparent_hugepage/enabled

echo never >/sys/kernel/mm/transparent_hugepage/defrag
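These echo commands do not survive a reboot. One common way to make the setting persistent is to append the same lines to /etc/rc.local (assuming rc.local is executed at boot on your distribution):

cat >> /etc/rc.local <<'EOF'
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF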

 

5.8 Starting Hive fails with the error unicodedecodeerror ambari in position 117

 

Checking /etc/sysconfig/i18n shows:

LANG="zh_CN.UTF8"

The system character set had been set to Chinese; change it to the following and the problem is solved:

LANG="en_US.UTF-8"

 

 

5.9 Installing Ambari Metrics fails because the packages cannot be found

1.failure: Updates-ambari-2.2.1.0/ambari/ambari-metrics-monitor-2.2.1.0-161.x86_64.rpm from HDP-UTILS-1.1.0.20: [Errno 256] No more mirrors to try.

 

On the ftp/http repository server, run:

cd /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6

mkdir Updates-ambari-2.2.1.0

cp -r /var/www/html/ambari/Updates-ambari-2.2.1.0/ambari /var/www/html/ambari/HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0

 

Then regenerate the repodata:

cd /var/www/html/ambari

rm -rf repodata

createrepo ./

 

2.failure: HDP-UTILS-1.1.0.20/repos/centos6/Updates-ambari-2.2.1.0/ambari/ambari-metrics-monitor-2.2.1.0-161.x86_64.rpm from HDP-UTILS-1.1.0.20: [Errno 256] No more mirrors to try.

 

Delete mnt.repo from /etc/yum.repos.d and clear the yum cache with yum clean all:

cd /etc/yum.repos.d

rm -rf mnt.repo

yum clean all
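After cleaning the cache, the repositories and the missing package can be checked before retrying the install (a quick verification, not part of the original fix):

yum repolist
yum info ambari-metrics-monitor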

 

5.11 How to resolve jps reporting process information unavailable

4791 -- process information unavailable

 

Solution:

Go into the /tmp directory:

cd /tmp

and delete the directories named hsperfdata_{username}.

Then run jps again and the stale entries are gone.

 

Script:

cd /tmp

ls | grep hsperf | xargs rm -rf

ls -l | grep hsperf

 

5.12 NameNode fails to start; the log contains ERROR namenode.NameNode (NameNode.java:main(1712)) - Failed to start namenode

The log also contains java.net.BindException: Port in use: gmaster:50070

Caused by: java.net.BindException: Address already in use

The cause is that port 50070 was not released by the previous run and is still occupied.
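To confirm which process is still holding the port, the same approach as in section 5.6 can be used (the PID is whatever netstat reports on your machine), then restart the NameNode from Ambari:

netstat -lanp | grep 50070
kill -9 <pid>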

 

About TCP connections in the TIME_WAIT state (as shown by netstat):
1. This is the state a connection passes through just before it is fully closed.
2. It normally takes about 4 minutes (on Windows Server) before such a connection is fully closed.
3. Connections in this state still hold handles, ports and other resources, and the server spends resources maintaining them.
4. The only way to deal with a large number of TIME_WAIT connections is to let the server recycle and reuse those resources quickly. On Windows, add the DWORD values TcpTimedWaitDelay=30 (30 is also the value Microsoft recommends; the default is 2 minutes) and MaxUserPort=65534 (valid range 5000 - 65534) under the registry key [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters].
5. Further TCP/IP tuning parameters are described at http://technet.microsoft.com/zh-tw/library/cc776295%28v=ws.10%29.aspx
6. On Linux:
vi /etc/sysctl.conf
Add the following:
net.ipv4.tcp_tw_reuse = 1 
net.ipv4.tcp_tw_recycle = 1 
net.ipv4.tcp_syncookies=1 

net.ipv4.tcp_fin_timeout=30

net.ipv4.tcp_keepalive_time=1800

net.ipv4.tcp_max_syn_backlog=8192


Apply the kernel parameters:
[root@web02 ~]# sysctl -p
Notes:
net.ipv4.tcp_syncookies=1 enables SYN cookies, which protect against SYN queue overflow.
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1 enable fast recycling and reuse of TIME_WAIT sockets, which is very effective for servers handling many connections.
net.ipv4.tcp_fin_timeout=30 shortens the time a connection stays in FIN-WAIT-2, so the system can handle more connections.
net.ipv4.tcp_keepalive_time=1800 shortens the TCP keepalive probe interval so dead connections are detected sooner.
net.ipv4.tcp_max_syn_backlog=8192 increases the SYN queue length so the system can handle more concurrent connection attempts.

 

5.13 A service fails to start with resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh

The log contains the following:

2016-03-31 13:55:28,090 INFO  security.ShellBasedIdMapping (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static UID/GID mapping because '/etc/nfs.map' does not exist.

2016-03-31 13:55:28,096 INFO  nfs3.WriteManager (WriteManager.java:(92)) - Stream timeout is 600000ms.

2016-03-31 13:55:28,096 INFO  nfs3.WriteManager (WriteManager.java:(100)) - Maximum open streams is 256

2016-03-31 13:55:28,096 INFO  nfs3.OpenFileCtxCache (OpenFileCtxCache.java:(54)) - Maximum open streams is 256

2016-03-31 13:55:28,259 INFO  nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:(205)) - Configured HDFS superuser is

2016-03-31 13:55:28,261 INFO  nfs3.RpcProgramNfs3 (RpcProgramNfs3.java:clearDirectory(231)) - Delete current dump directory /tmp/.hdfs-nfs

2016-03-31 13:55:28,269 WARN  fs.FileUtil (FileUtil.java:deleteImpl(187)) - Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.

This shows that the hdfs user has no write permission on /tmp.

Grant ownership of /tmp to the hdfs user:

chown  hdfs:hadoop /tmp

 

Start it again and the problem is solved.

5.14 Installing the Ranger component fails: the rangeradmin user cannot connect to the MySQL database and cannot be granted privileges

First drop all existing rangeradmin users in the database, using the DROP USER command:

drop user 'rangeradmin'@'%';

drop user 'rangeradmin'@'localhost';

drop user 'rangeradmin'@'gmaster';

drop user 'rangeradmin'@'gslave1';

drop user 'rangeradmin'@'gslave2';

FLUSH PRIVILEGES;

 

Then recreate the users (note that gmaster is the hostname of the machine where Ranger is installed):

CREATE USER 'rangeradmin'@'%' IDENTIFIED BY 'rangeradmin';

GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'%'  with grant option;

CREATE USER 'rangeradmin'@'localhost' IDENTIFIED BY 'rangeradmin';

GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'localhost'  with grant option;

CREATE USER 'rangeradmin'@'gmaster' IDENTIFIED BY 'rangeradmin';

GRANT ALL PRIVILEGES ON *.* TO 'rangeradmin'@'gmaster'  with grant option;

FLUSH PRIVILEGES;

 

Then check the privileges:

SELECT DISTINCT CONCAT('User: ''',user,'''@''',host,''';') AS query FROM mysql.user;

 

select * from mysql.user where user='rangeradmin' \G;

 

Problem solved.

5.15 Ambari fails to start with: AmbariServer:820 - Failed to run the Ambari Server

This problem took a long time to track down; the cause was finally found by reading the source code.

/var/log/ambari-server/ambari-server.log contains the error:

13 Apr 2016 14:16:01,723  INFO [main] StackDirectory:458 - Stack '/var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS' doesn't contain an upgrade directory

13 Apr 2016 14:16:01,723  INFO [main] StackDirectory:468 - Stack '/var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS' doesn't contain config upgrade pack file

13 Apr 2016 14:16:01,744  INFO [main] StackDirectory:484 - Role command order info was loaded from file: /var/lib/ambari-server/resources/stacks/HDP/2.1.GlusterFS/role_command_order.json

13 Apr 2016 14:16:01,840  INFO [main] StackDirectory:484 - Role command order info was loaded from file: /var/lib/ambari-server/resources/stacks/HDP/2.4/role_command_order.json

13 Apr 2016 14:16:01,927 ERROR [main] AmbariServer:820 - Failed to run the Ambari Server

com.google.inject.ProvisionException: Guice provision errors:

 

1) Error injecting method, java.lang.NullPointerException

  at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:243)

  at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:125)

  while locating org.apache.ambari.server.api.services.AmbariMetaInfo

    for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:145)

  at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:145)

  while locating org.apache.ambari.server.controller.AmbariServer

 

1 error

         at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)

         at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)

         at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:813)

Caused by: java.lang.NullPointerException

         at org.apache.ambari.server.stack.StackModule.processRepositories(StackModule.java:665)

         at org.apache.ambari.server.stack.StackModule.resolve(StackModule.java:158)

         at org.apache.ambari.server.stack.StackManager.fullyResolveStacks(StackManager.java:201)

         at org.apache.ambari.server.stack.StackManager.(StackManager.java:119)

         at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance()

         at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)

         at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)

         at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)

         at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

         at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)

         at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

         at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)

         at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)

         at com.sun.proxy.$Proxy26.create(Unknown Source)

         at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:247)

         at org.apache.ambari.server.api.services.AmbariMetaInfo$$FastClassByGuice$$202844bc.invoke()

         at com.google.inject.internal.cglib.reflect.$FastMethod.invoke(FastMethod.java:53)

         at com.google.inject.internal.SingleMethodInjector$1.invoke(SingleMethodInjector.java:56)

         at com.google.inject.internal.SingleMethodInjector.inject(SingleMethodInjector.java:90)

         at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)

         at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)

         at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

         at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)

         at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

         at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)

         at com.google.inject.Scopes$1$1.get(Scopes.java:65)

         at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)

         at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)

         at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)

         at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)

         at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)

         at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)

         at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)

         at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)

         at com.google.inject.Scopes$1$1.get(Scopes.java:65)

         at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)

         at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)

         at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)

         at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)

         ... 2 more

 

Solution:

The cause turned out to be the os line in /var/lib/ambari-server/resources/stacks/HDP/2.4/repos/repoinfo.xml. Its original content was as follows:

Change it to:

Problem solved.

 

5.16 Starting Hive fails with Error: Duplicate key name 'PCS_STATS_IDX' (state=42000,code=1061)

On the machine where Hive is installed, the logs under /var/lib/ambari-agent/data report the following error:

Traceback (most recent call last):

  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in

    HiveMetastore().execute()

  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute

    method(env)

  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 58, in start

    self.configure(env)

  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 72, in configure

    hive(name = 'metastore')

  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk

    return fn(*args, **kwargs)

  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 292, in hive

    user = params.hive_user

  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__

    self.env.run()

  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run

    self.run_action(resource, action)

  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action

    provider_action()

  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run

    tries=self.resource.tries, try_sleep=self.resource.try_sleep)

  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner

    result = function(command, **kwargs)

  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call

    tries=tries, try_sleep=try_sleep)

  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper

    result = _call(command, **kwargs_copy)

  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call

    raise Fail(err_msg)

resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. WARNING: Use "yarn jar" to launch YARN applications.

Metastore connection URL:        jdbc:mysql://a2slave1/hive?createDatabaseIfNotExist=true

Metastore Connection Driver :    com.mysql.jdbc.Driver

Metastore connection User:       hive

Starting metastore schema initialization to 1.2.1000

Initialization script hive-schema-1.2.1000.mysql.sql

Error: Duplicate key name 'PCS_STATS_IDX' (state=42000,code=1061)

org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!

*** schemaTool failed ***

 

Solution:

Copy the hive-schema-1.2.1000.mysql.sql script from the Hive host to the machine running the metastore MySQL database:

[root@a2master /]# scp /usr/hdp/2.4.0.0-169/hive/scripts/metastore/upgrade/mysql/hive-schema-1.2.1000.mysql.sql root@a2slave1:/usr/local/mysql

hive-schema-1.2.1000.mysql.sql                                                                   100%   34KB  34.4KB/s   00:00

 

On that machine, log in to MySQL as the hive user, switch to the hive database, and run the script:

[root@a2slave1 conf.server]# mysql -uhive -p

Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 505

Server version: 5.6.26-log MySQL Community Server (GPL)

 

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

 

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

 

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

 

mysql> use hive;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> source /usr/local/mysql/hive-schema-1.2.1000.mysql.sql;

Problem solved.

 

5.17 Starting Hive fails with Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

The hiveserver.log file under /var/log/hive records:

2016-04-15 10:45:20,446 INFO  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(405)) - Starting HiveServer2

2016-04-15 10:45:20,573 INFO  [main]: metastore.ObjectStore (ObjectStore.java:initialize(294)) - ObjectStore, initialize called

2016-04-15 10:45:20,585 INFO  [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:(140)) - Using direct SQL, underlying DB is MYSQL

2016-04-15 10:45:20,585 INFO  [main]: metastore.ObjectStore (ObjectStore.java:setConf(277)) - Initialized ObjectStore

2016-04-15 10:45:20,590 WARN  [main]: metastore.ObjectStore (ObjectStore.java:getDatabase(577)) - Failed to get database default, returning NoSuchObjectException

2016-04-15 10:45:20,591 ERROR [main]: bonecp.ConnectionHandle (ConnectionHandle.java:markPossiblyBroken(388)) - Database access problem. Killing off this connection and all remaining connections in the connection pool. SQL State = HY000

2016-04-15 10:45:20,600 WARN  [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultDB(623)) - Retrying creating default database after error: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

         at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:549)

         at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)

         at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)

         at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:530)

         at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)

         at com.sun.proxy.$Proxy6.createDatabase(Unknown Source)

         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:605)

         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:621)

         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)

         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:66)

         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)

         at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)

         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:199)

         at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:74)

         at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source)

         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

         at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)

         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)

         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)

         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)

         at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)

         at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)

         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)

         at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)

         at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)

         at org.apache.hive.service.CompositeService.init(CompositeService.java:59)

         at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:104)

         at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:411)

         at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:78)

         at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:654)

         at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:527)

         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

NestedThrowablesStackTrace:

Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

         at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:261)

         at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:162)

         at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197)

         at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105)

         at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2005)

         at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1386)

         at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3827)

         at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2571)

         at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:513)

         at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:232)

         at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1414)

         at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2218)

         at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065)

         at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913)

         at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)

         at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)

         at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)

         at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:530)

         at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)

         at com.sun.proxy.$Proxy6.createDatabase(Unknown Source)

         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:605)

         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:621)

         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)

         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:66)

         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)

         at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)

         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:199)

         at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:74)

         at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source)

         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

         at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)

         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)

         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)

         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)

         at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)

         at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)

         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)

         at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)

         at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)

         at org.apache.hive.service.CompositeService.init(CompositeService.java:59)

         at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:104)

         at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:411)

         at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:78)

         at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:654)

         at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:527)

         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

2016-04-15 10:45:20,607 WARN  [main]: metastore.ObjectStore (ObjectStore.java:getDatabase(577)) - Failed to get database default, returning NoSuchObjectException

2016-04-15 10:45:20,609 ERROR [main]: bonecp.ConnectionHandle (ConnectionHandle.java:markPossiblyBroken(388)) - Database access problem. Killing off this connection and all remaining connections in the connection pool. SQL State = HY000

2016-04-15 10:45:20,617 INFO  [main]: server.HiveServer2 (HiveServer2.java:stop(371)) - Shutting down HiveServer2

2016-04-15 10:45:20,618 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(442)) - Error starting HiveServer2 on attempt 29, will retry in 60 seconds

java.lang.RuntimeException: Error applying authorization policy on hive configuration: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

         at org.apache.hive.service.cli.CLIService.init(CLIService.java:114)

         at org.apache.hive.service.CompositeService.init(CompositeService.java:59)

         at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:104)

         at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:411)

         at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:78)

         at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:654)

         at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:527)

         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:494)

         at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)

         at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)

         ... 12 more

Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

         at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1533)

         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)

         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)

         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)

         at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)

         at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)

         at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)

         ... 14 more

Caused by: java.lang.reflect.InvocationTargetException

         at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown Source)

         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

         at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)

         ... 20 more

Caused by: javax.jdo.JDOUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

NestedThrowables:

org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

         at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:549)

         at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732)

         at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752)

         at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:530)

         at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)

         at com.sun.proxy.$Proxy6.createDatabase(Unknown Source)

         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:605)

         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:625)

         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)

         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:66)

         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)

         at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5789)

         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:199)

         at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:74)

         ... 24 more

Caused by: org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

         at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.createRepository(TableGenerator.java:261)

         at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:162)

         at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:197)

         at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:105)

         at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2005)

         at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1386)

         at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3827)

         at org.datanucleus.state.JDOStateManager.setIdentity(JDOStateManager.java:2571)

         at org.datanucleus.state.JDOStateManager.initialiseForPersistentNew(JDOStateManager.java:513)

         at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:232)

         at org.datanucleus.ExecutionContextImpl.newObjectProviderForPersistentNew(ExecutionContextImpl.java:1414)

         at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2218)

         at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065)

         at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913)

         at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)

         at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)

         ... 39 more

The cause turned out to be an incorrect binlog_format setting in MySQL:

it was set to STATEMENT and needs to be MIXED.

To change it, add binlog_format=MIXED to /etc/my.cnf,

then restart MySQL.
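A sketch of the my.cnf entry and a quick check after the restart that the new value is in effect (placing it in the [mysqld] section is the usual convention):

# /etc/my.cnf
[mysqld]
binlog_format=MIXED

# verify after restarting MySQL
mysql -uroot -p -e "SHOW VARIABLES LIKE 'binlog_format';"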

Problem solved.

 

5.18 Entering Hive with the hive command fails with Permission denied: user=root, access=WRITE, inode="/user/root":hdfs:hdfs:drwxr-xr-x

Solution:

1. Use the HDFS command line to change the permissions on the target directory: hadoop fs -chmod 777 /user. The path after the command is wherever the file is being written, so it varies by case: if the target is hdfs://namenode/user/xxx.doc this change is enough; if it is hdfs://namenode/java/xxx.doc, run hadoop fs -chmod 777 /java (create the /java directory in HDFS first) or hadoop fs -chmod 777 / to adjust the root directory.

 

Script:

su - hdfs

hadoop fs -chmod 777 /user

 

 

2. Add export HADOOP_USER_NAME=hdfs to /etc/profile as a system environment variable, or set it as a Java JVM variable (the Hadoop user that Ambari uses is hdfs). The exact value depends on your setup: it is the Linux user name that jobs on Hadoop will run as.

 

export HADOOP_USER_NAME=hdfs
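A narrower alternative to opening /user with chmod 777 is to give root its own home directory in HDFS (a sketch; substitute whichever user runs the hive command):

su - hdfs
hdfs dfs -mkdir -p /user/root
hdfs dfs -chown root:root /user/root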

5.19 The Spark Thrift Server component shuts down by itself after starting

The Spark Thrift Server component stops shortly after being started. The log file spark-hive-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-a2master.out under /var/log/spark contains:

16/04/18 10:26:10 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.

16/04/18 10:26:10 INFO Client: Requesting a new application from cluster with 3 NodeManagers

16/04/18 10:26:10 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (512 MB per container)

16/04/18 10:26:10 ERROR SparkContext: Error initializing SparkContext.

java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (512 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.

        at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:283)

        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:139)

        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)

        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)

        at org.apache.spark.SparkContext.(SparkContext.scala:530)

        at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:56)

        at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:76)

        at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:606)

        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)

        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)

        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)

        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)

        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

16/04/18 10:26:11 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}

Analysis: the YARN maximum container memory (512 MB) is smaller than the memory Spark's executors request (1024+384 MB).

Solution: in the Ambari web UI, raise the memory setting to 1536 MB, push the updated configuration to the other hosts, and restart the Spark service.
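The two YARN properties named in the error control that 512 MB ceiling. They can be inspected on a worker node before changing them in the Ambari UI (the file path assumes the usual HDP client layout):

grep -A1 yarn.scheduler.maximum-allocation-mb /etc/hadoop/conf/yarn-site.xml
grep -A1 yarn.nodemanager.resource.memory-mb /etc/hadoop/conf/yarn-site.xml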

 

 

5.20 HBase fails to start with Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461130860883":hdfs:hdfs:drwxr-xr-x

The log contains the following:

2016-04-20 15:42:11,640 INFO  [regionserver/gslave2/192.168.1.253:16020] hfile.CacheConfig: Allocating LruBlockCache size=401.60 MB, blockSize=64 KB

2016-04-20 15:42:11,648 INFO  [regionserver/gslave2/192.168.1.253:16020] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=433080, freeSize=420675048, maxSize=421108128, heapSize=433080, minSize=400052704, minFactor=0.95, multiSize=200026352, multiFactor=0.5, singleSize=100013176, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false

2016-04-20 15:42:11,704 INFO  [regionserver/gslave2/192.168.1.253:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider

2016-04-20 15:42:11,729 INFO  [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: STOPPED: Failed initialization

2016-04-20 15:42:11,729 ERROR [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: Failed init

org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

         at java.security.AccessController.doPrivileged(Native Method)

         at javax.security.auth.Subject.doAs(Subject.java:415)

         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

 

         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)

         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)

         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2589)

         at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2558)

         at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:820)

         at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:816)

         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:816)

         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:809)

         at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)

         at org.apache.hadoop.hbase.regionserver.wal.FSHLog.(FSHLog.java:488)

         at org.apache.hadoop.hbase.wal.DefaultWALProvider.init(DefaultWALProvider.java:97)

         at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:147)

         at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:179)

         at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1624)

         at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1362)

         at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)

         at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

         at java.security.AccessController.doPrivileged(Native Method)

         at javax.security.auth.Subject.doAs(Subject.java:415)

         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

 

         at org.apache.hadoop.ipc.Client.call(Client.java:1411)

         at org.apache.hadoop.ipc.Client.call(Client.java:1364)

         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

         at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)

         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)

         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

         at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)

         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:508)

         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)

         at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)

         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2587)

         ... 15 more

2016-04-20 15:42:11,732 FATAL [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: ABORTING region server gslave2,16020,1461138130424: Unhandled: Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

         at java.security.AccessController.doPrivileged(Native Method)

         at javax.security.auth.Subject.doAs(Subject.java:415)

         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

 

org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

         at java.security.AccessController.doPrivileged(Native Method)

         at javax.security.auth.Subject.doAs(Subject.java:415)

         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

 

         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)

         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)

         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)

         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2589)

         at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2558)

         at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:820)

         at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:816)

         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:816)

         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:809)

         at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)

         at org.apache.hadoop.hbase.regionserver.wal.FSHLog.(FSHLog.java:488)

         at org.apache.hadoop.hbase.wal.DefaultWALProvider.init(DefaultWALProvider.java:97)

         at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:147)

         at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:179)

         at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1624)

         at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1362)

         at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:899)

         at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/apps/hbase/data/WALs/gslave2,16020,1461138130424":hdfs:hdfs:drwxr-xr-x

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)

         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)

         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)

         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3905)

         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1048)

         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)

         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)

         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)

         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)

         at java.security.AccessController.doPrivileged(Native Method)

         at javax.security.auth.Subject.doAs(Subject.java:415)

         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

 

         at org.apache.hadoop.ipc.Client.call(Client.java:1411)

         at org.apache.hadoop.ipc.Client.call(Client.java:1364)

         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

         at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)

         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)

         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

         at com.sun.proxy.$Proxy15.mkdirs(Unknown Source)

         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:508)

         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

         at java.lang.reflect.Method.invoke(Method.java:606)

         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)

         at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)

         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2587)

         ... 15 more

2016-04-20 15:42:11,732 FATAL [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []

2016-04-20 15:42:11,744 INFO  [regionserver/gslave2/192.168.1.253:16020] regionserver.HRegionServer: Dump of metrics as JSON on abort: {

 

Solution:

su - hdfs

hdfs dfs -chown -R hbase:hbase /apps/hbase
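A quick check that the ownership change took effect (the path comes from the error message; the owner column should now show hbase):

hdfs dfs -ls /apps/hbase/data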
