Hadoop throws "All datanodes xxx.xxx.xxx.xxx:xxx are bad. Aborting…" when running a MapReduce example

While running a MapReduce example on Hadoop, the job aborted with the following exceptions:

 

java.io.IOException: All datanodes xxx.xxx.xxx.xxx:xxx are bad. Aborting…
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2158)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)

java.io.IOException: Could not get block locations. Aborting…
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)

Investigation showed that the error was caused by the Linux machines having too many files open.

Running ulimit -n shows that the default open-file limit on Linux is 1024.
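
A quick way to confirm the diagnosis on each node is to compare that limit with the number of descriptors the running DataNode actually holds. The commands below are only a sketch; they assume a Linux system with /proc and that pgrep -f can find the DataNode process by name:

    # current open-file limit for this shell (often the default 1024)
    ulimit -n

    # rough count of descriptors held by the DataNode process
    # (assumes pgrep -f DataNode matches exactly one process)
    ls /proc/$(pgrep -f DataNode)/fd | wc -l

If the second number is close to the first, writes will start failing with the "All datanodes … are bad" error above.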

The fix: edit /etc/security/limits.conf and add a soft nofile limit of 65535 for the hadoop user, then re-run the program (ideally the change is made on every datanode). This resolved the problem.
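
For reference, a minimal sketch of the limits.conf entries, assuming the Hadoop daemons run as a user named hadoop; the user name, the value 65535, and the extra hard entry are assumptions to adapt to the cluster:

    # /etc/security/limits.conf
    hadoop soft nofile 65535
    hadoop hard nofile 65535

limits.conf is applied by PAM at login, so the higher limit only takes effect for sessions started after the change; restart the Hadoop daemons from a fresh login on each node and check ulimit -n again before re-running the job.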
