Big Data --- Hadoop Troubleshooting Summary, Ultimate Edition --- continuously updated

1. Software Environment

Software: RHEL6, jdk-8u45, hadoop-2.8.1.tar.gz, ssh

IP address      Role    Hostname
xx.xx.xx.xx     NN      hadoop1
xx.xx.xx.xx     DN      hadoop2
xx.xx.xx.xx     DN      hadoop3
xx.xx.xx.xx     DN      hadoop4
xx.xx.xx.xx     DN      hadoop5

This article covers a pseudo-distributed deployment, so only host hadoop1 is used.

 

2. SSH Key Trust Issues at Startup

Starting HDFS:

[hadoop@hadoop01 hadoop]$ ./sbin/start-dfs.sh
Starting namenodes on [hadoop01]
The authenticity of host 'hadoop01 (172.16.18.133)' can't be established.
RSA key fingerprint is 8f:e7:6c:ca:6e:40:78:b8:df:6a:b4:ca:52:c7:01:4b.
Are you sure you want to continue connecting (yes/no)? yes
hadoop01: Warning: Permanently added 'hadoop01' (RSA) to the list of known hosts.
hadoop01: chown: changing ownership of `/opt/software/hadoop-2.8.1/logs': Operation not permitted
hadoop01: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop01.out
hadoop01: /opt/software/hadoop-2.8.1/sbin/hadoop-daemon.sh: line 159: /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop01.out: Permission denied
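
Apart from the password prompt, the log above also shows chown ... Operation not permitted and Permission denied on the logs directory, which suggests the hadoop user cannot write under /opt/software/hadoop-2.8.1. A minimal sketch of one possible fix, assuming the whole installation should be owned by hadoop:hadoop (run as root):

[root@hadoop01 ~]# chown -R hadoop:hadoop /opt/software/hadoop-2.8.1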

If startup prompts you interactively for a password, and refusing to enter one leads to permission errors, it is because we have not configured passwordless SSH trust.

Even for a pseudo-distributed deployment on a single machine, we still need to configure passwordless SSH login to the machine itself.

For non-root users, the public key file (authorized_keys) must have 600 permissions (root is the exception).

Configure passwordless SSH login as the hadoop user:

[hadoop@hadoop01 .ssh]$ cat id_rsa.pub  > authorized_keys
[hadoop@hadoop01 .ssh]$ chmod 600 authorized_keys

[hadoop@hadoop01 hadoop]$ ssh hadoop01 date
[hadoop@hadoop01 .ssh]$
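
If no key pair exists yet under ~/.ssh, generate one first. A minimal end-to-end sketch, assuming an RSA key with an empty passphrase (matching the RSA fingerprint shown in the log above):

[hadoop@hadoop01 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa         # generate the key pair non-interactively
[hadoop@hadoop01 ~]$ cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys   # authorize our own public key
[hadoop@hadoop01 ~]$ chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
[hadoop@hadoop01 ~]$ ssh hadoop01 date                                # should print the date without prompting for a password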

[hadoop@hadoop01 hadoop]$ ./sbin/start-dfs.sh
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop01.out
hadoop01: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop01.out
Starting secondary namenodes [hadoop01]
hadoop01: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop01.out
[hadoop@hadoop01 hadoop]$ jps
1761 Jps
1622 SecondaryNameNode
1388 DataNode
1276 NameNode

 

3. The "process information unavailable" Problem

There are two cases:

1. The process no longer exists, but jps still lists it with "process information unavailable".

2. The process still exists, but jps reports "process information unavailable".

For the first case:

[hadoop@hadoop01 sbin]$ jps
3108 DataNode
4315 Jps
4156 SecondaryNameNode
2990 NameNode

[hadoop@hadoop01 hsperfdata_hadoop]$ ls
5295  5415  5640
[hadoop@hadoop01 hsperfdata_hadoop]$ ll
total 96
-rw------- 1 hadoop hadoop 32768 Apr 27 09:35 5295
-rw------- 1 hadoop hadoop 32768 Apr 27 09:35 5415
-rw------- 1 hadoop hadoop 32768 Apr 27 09:35 5640
[hadoop@hadoop01 hsperfdata_hadoop]$ pwd
/tmp/hsperfdata_hadoop

The /tmp/hsperfdata_hadoop directory records the process IDs that jps displays. If at this point jps reports errors like:

[hadoop@hadoop01 tmp]$ jps
3330 SecondaryNameNode -- process information unavailable
3108 DataNode                         -- process information unavailable
3525 Jps
2990 NameNode                      -- process information unavailable

Check whether the abnormal process still exists:

[hadoop@hadoop01 tmp]$ ps -ef |grep 3330
hadoop    3845  2776  0 09:29 pts/6    00:00:00 grep 3330

For a process that no longer exists, simply go to /tmp/hsperfdata_xxx, delete the corresponding files, and restart the process, for example:
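
A minimal sketch of the cleanup, assuming the stale PIDs 2990, 3108 and 3330 from the jps output above:

[hadoop@hadoop01 ~]$ rm -f /tmp/hsperfdata_hadoop/2990 /tmp/hsperfdata_hadoop/3108 /tmp/hsperfdata_hadoop/3330   # remove the stale perf-data files
[hadoop@hadoop01 ~]$ /opt/software/hadoop-2.8.1/sbin/start-dfs.sh                                                # restart the HDFS daemons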

 

jps reads only the files under hsperfdata_<current user>/ belonging to the current user:
[root@hadoop01 ~]# jps
7153 -- process information unavailable
8133 -- process information unavailable
7495 -- process information unavailable
8489 Jps
[root@hadoop01 ~]# ps -ef |grep 7153   --- check whether the abnormal process exists
hadoop    7153     1  2 09:47 ?        00:00:17 /usr/java/jdk1.8.0_45/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/software/hadoop-2.8.1/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/software/hadoop-2.8.1 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/software/hadoop-2.8.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/software/hadoop-2.8.1/logs -Dhadoop.log.file=hadoop-hadoop-namenode-hadoop01.log -Dhadoop.home.dir=/opt/software/hadoop-2.8.1 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/opt/software/hadoop-2.8.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
root      8505  2752  0 09:58 pts/6    00:00:00 grep 7153

If the process does exist, jps run as the current user still shows process information unavailable. In that case run ps -ef |grep <pid> to see which user the process actually runs as; if it is not the current user, switch to that user.

[hadoop@hadoop01 hadoop]$ jps             ----- switch to the hadoop user and check the processes
7153 NameNode
8516 Jps
8133 DataNode
7495 SecondaryNameNode

After switching users, all the processes turn out to be normal.
This is simply a case of jps being run by the wrong user, not the user that owns the services. No action is needed; the services are running normally.

Summary: to handle the process information unavailable error:

1. Check whether the process exists (if it does not, delete the files under /tmp/hsperfdata_xxx and restart the process).

2. If the process exists, check which user it runs as; if it is not the current user, switch to that user and re-run jps (see the sketch below).
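
As a quick check for case 2, a minimal sketch run as root, assuming PID 7153 from the example above:

[root@hadoop01 ~]# ps -o user= -p 7153      # print only the user the JVM runs as
hadoop
[root@hadoop01 ~]# su - hadoop -c jps       # re-run jps as that user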
