We deployed an Ambari HDP cluster in a test environment, with HDFS 2.7 NameNode HA implemented via JournalNodes + ZKFailoverController. One day the active NameNode died on its own and did not recover automatically, so I traced the cause.
The NameNode's last log entries before it died:
2018-10-17 02:02:05,981 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for required journal (JournalAndStream(mgr=QJM to [10.213.33.71:8485, 10.213.33.72:8485, 10.213.33.70:8485], stream=QuorumOutputStream starting at txid 15384179))
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
at org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:107)
at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:107)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$8.apply(JournalSet.java:533)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.access$100(JournalSet.java:57)
at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.flush(JournalSet.java:529)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:707)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:641)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4192)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1109)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:645)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
2018-10-17 02:02:05,982 WARN client.QuorumJournalManager (QuorumOutputStream.java:abort(72)) - Aborting QuorumOutputStream starting at txid 15384179
The immediate cause is that the NameNode timed out while flushing the edit log to the JournalNodes and shut itself down. Digging further into why the journal write timed out, the JournalNode log shows:
2018-10-17 01:37:04,312 INFO ipc.Server (Server.java:logException(2435)) - IPC Server handler 2 on 8485, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.journal from 10.213.33.71:47256 Call#1181544 Retry#0
java.io.IOException: IPC's epoch 16 is less than the last promised epoch 17
at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:430)
at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:153)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
2018-10-17 01:37:04,454 INFO namenode.TransferFsImage (TransferFsImage.java:copyFileToStream(395)) - Sending fileName: /hadoop/hdfs/journal/NN1/current/edits_0000000000015383967-0000000000015384178, fileSize: 27886. Sent total: 27886 bytes. Size of last segment intended to send: -1 bytes.
2018-10-17 01:58:36,681 WARN server.Journal (Journal.java:journal(398)) - Sync of transaction range 15389759-15389759 took 6360ms
2018-10-17 02:00:34,791 WARN server.Journal (Journal.java:journal(398)) - Sync of transaction range 15390699-15390699 took 6783ms
2018-10-17 02:02:04,867 WARN server.Journal (Journal.java:journal(398)) - Sync of transaction range 15391171-15391171 took 18885ms
Putting the logs together: a NameNode active/standby failover occurred (the active NameNode could not reach ZooKeeper, which had restarted; that cause is traced further below). When the NameNode wrote to the journal again after the failover, the JournalNode found that the epoch it presented (16) was behind the latest promised epoch (17), so a recovery/sync between the NameNode and the JournalNodes was initiated. That sync took 18.885s, and the client-facing write timed out waiting for a journal quorum. The ZooKeeper server log from the same window shows why the ZooKeeper connection was lost in the first place:
2018-10-17 02:02:04,191 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2018-10-17 02:02:04,191 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1033] - Closed socket connection for client /10.213.33.71:36564 (no session established for client)
2018-10-17 02:02:04,765 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.213.33.70:38640
2018-10-17 02:02:04,765 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2018-10-17 02:02:04,765 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1033] - Closed socket connection for client /10.213.33.70:38640 (no session established for client)
2018-10-17 02:02:04,867 - WARN [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 24133ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
2018-10-17 02:02:04,868 - WARN [SyncThread:1:SendAckRequestProcessor@64] - Closing connection to leader, exception during packet send
java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:116)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.zookeeper.server.quorum.Learner.writePacket(Learner.java:139)
at org.apache.zookeeper.server.quorum.SendAckRequestProcessor.flush(SendAckRequestProcessor.java:62)
at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:204)
at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:131)
2018-10-17 02:02:04,869 - INFO [SyncThread:1:SyncRequestProcessor@187] - SyncRequestProcessor exited!
2018-10-17 02:02:04,891 - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:SendAckRequestProcessor@64] - Closing connection to leader, exception during packet send
java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:116)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
Network instability broke communication between the ZooKeeper nodes, and the ZooKeeper service could no longer serve requests.
Fixes:
1. This kind of timeout is easy to hit in a real production environment as well, so raise the default 20s quorum write timeout to a larger value, e.g. 60s.
Add the following property to hdfs-site.xml under hadoop/etc/hadoop:
<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>60000</value>
</property>
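In my understanding, the other dfs.qjournal.*.timeout.ms settings share the same 20s default, so the ones on the write path may be worth raising together. This is only a sketch; verify the property names against hdfs-default.xml for your Hadoop version:

<!-- additional QJM timeouts (assumed names, check hdfs-default.xml) -->
<property>
  <name>dfs.qjournal.start-segment.timeout.ms</name>
  <value>60000</value>
</property>
<property>
  <name>dfs.qjournal.finalize-segment.timeout.ms</name>
  <value>60000</value>
</property>

If the cluster is managed by Ambari, the same keys can be added under HDFS > Configs > Custom hdfs-site; the NameNodes need a restart for the change to take effect.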
2. Remove the ZKFailoverController and switch the active/standby NameNode manually instead (a sketch of the procedure follows below).
Set dfs.ha.automatic-failover.enabled=false
When a switch is needed, run the following as the hdfs user (su hdfs): hdfs haadmin -failover --forcefence --forceactive nn2 nn1
Before switching, make sure the fsimage and edit logs are in sync between the active and standby NameNodes.
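A minimal sketch of that manual switch, assuming the NameNode IDs are nn1 and nn2 as configured in dfs.ha.namenodes.<nameservice> and that the commands run as the hdfs user; adjust the IDs to your own configuration:

# check which NameNode is currently active and which is standby
# (nn1/nn2 are assumed IDs from dfs.ha.namenodes.<nameservice>)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# fail over from nn2 (active) to nn1 (standby), forcing fencing
hdfs haadmin -failover --forcefence --forceactive nn2 nn1

# confirm the roles switched
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

Note that hdfs haadmin -failover is only accepted when automatic failover is disabled, which matches the dfs.ha.automatic-failover.enabled=false setting above.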