1. Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
2016-01-05 23:03:32,967 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [192.168.10.31:8485, 192.168.10.32:8485, 192.168.10.33:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.10.31:8485: Call From bdata4/192.168.10.34 to bdata1:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.10.33:8485: Call From bdata4/192.168.10.34 to bdata3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.10.32:8485: Call From bdata4/192.168.10.34 to bdata2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:182)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:436)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1394)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1151)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1658)
at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1536)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1335)
at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
2016-01-05 23:03:32,968 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
Cause of the error:
When start-dfs.sh runs, the default startup order is namenode > datanode > journalnode > zkfc. If the JournalNodes and the NameNode are not started on the same machine, network latency can easily prevent the NN from connecting to the JNs and completing the election; as a result, of the two freshly started NameNodes, the active one suddenly dies, leaving only a standby. The NN does have a retry mechanism at startup to wait for the JNs, but the number of retries is limited, and on a bad network the retries can be exhausted before startup succeeds. There are three workarounds:
A: Manually start the NameNode that should become active, skipping the network-latency wait for the JournalNodes; once both NameNodes have connected to the JournalNodes and the election has completed, the failure no longer occurs (see the shell sketch after option C's config below).
B: Start the JournalNodes first, then run start-dfs.sh (also covered in the sketch below).
C: Raise the NN's fault tolerance toward the JNs (retry count or time) so that normal startup delays and network delays can be absorbed.
For option C, add the following to hdfs-site.xml. It controls how many times the NN retries the connection to the JNs; the default is 10 retries at 1000 ms each, so it should be increased when the network is poor. Here it is set to 30:
<property>
  <name>ipc.client.connect.max.retries</name>
  <value>30</value>
</property>
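
A minimal shell sketch of options A and B, assuming a standard Hadoop 2.x layout under $HADOOP_HOME and the hosts from the log above (bdata1/2/3 run the JournalNodes, bdata4 runs a NameNode):

# Option B: bring the JournalNodes up first, on each of bdata1, bdata2, bdata3:
$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
# ...then start HDFS as usual:
$HADOOP_HOME/sbin/start-dfs.sh
# Option A: if the active NameNode has already died, restart just that daemon
# on its host (bdata4 in the log) once the JournalNodes are reachable:
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode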
2. org.apache.hadoop.security.AccessControlException: Permission denied
On the master node, edit hdfs-site.xml and add the following:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
This disables permission checking. I added it to resolve the error reported after configuring the map/reduce connection while setting up Eclipse on a Windows machine to connect to the Hadoop server.
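
Note that dfs.permissions=false turns checking off for the whole cluster. A hedged alternative under simple authentication (no Kerberos) is to make the Windows client act as the HDFS user instead; in either case the NameNode must be restarted for the hdfs-site.xml change to take effect. A sketch, assuming bdata is the user that owns the target paths:

set HADOOP_USER_NAME=bdata    (run in Windows cmd before launching Eclipse; simple auth only)

# On the master, restart the NameNode so the config change takes effect:
$HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode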
3. At runtime, the following warning is logged: [org.apache.hadoop.security.ShellBasedUnixGroupsMapping]-[WARN] got exception trying to get groups for user bdata
On the master node, edit hdfs-site.xml and add the following:
<property>
  <name>dfs.web.ugi</name>
  <value>bdata,supergroup</value>
</property>
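
The warning comes from ShellBasedUnixGroupsMapping, which shells out to the OS to resolve a user's groups and fails when the host has no local account named bdata. Besides the dfs.web.ugi workaround above, a hedged alternative is to create the account so the lookup succeeds (sketch, run as root on the NameNode host):

# Create the local group and user that the shell-based mapping looks up:
groupadd supergroup
useradd -g supergroup bdata
# Verify what Hadoop now resolves for the user:
hdfs groups bdata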