How much free space does an HDFS DataNode actually need? Below is a record of my investigation into this question.
Yesterday, someone in the discussion group posted the following exception:
- op@odbtest bin]$ hadoop fs -put ../tmp/file3 /user/hadoop/in2
- 14/01/15 02:14:09 WARN hdfs.DFSClient: DataStreamer Exception
- org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/in2/file3._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
- at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
- at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
- at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
- at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
- at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:396)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
This exception is thrown only in the NameNode's log; there is nothing related in the DataNode's log. That tells us the check happens on the NameNode side, when it allocates blocks.
This situation is usually caused either by a DataNode going dead or by the DataNode running out of disk space.
I therefore suggested that the poster free up some space under the DataNode's data directory, and after that the operation succeeded.
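To tell which of the two causes applies, it helps to look at both the NameNode's view of the cluster and the local disk on the DataNode. A minimal sketch, assuming the data directory path below is only a placeholder for whatever dfs.datanode.data.dir points to:
- $ hdfs dfsadmin -report                 # lists live/dead DataNodes and DFS Remaining per node
- $ df -h /path/to/datanode/data          # shows how much space the local filesystem still has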
However, the poster then provided the following report data:
- [hadoop@odbtest bin]$ hdfs dfsadmin -report
- Configured Capacity: 8210259968 (7.65 GB)
- Present Capacity: 599728128 (571.95 MB)
- DFS Remaining: 599703552 (571.92 MB)
- DFS Used: 24576 (24 KB)
- DFS Used%: 0.00%
- Under replicated blocks: 0
- Blocks with corrupt replicas: 0
- Missing blocks: 0
-
- -------------------------------------------------
- Datanodes available: 1 (1 total, 0 dead)
-
- Live datanodes:
- Name: 192.168.136.128:50010 (odbtest)
- Hostname: odbtest
- Decommission Status : Normal
- Configured Capacity: 8210259968 (7.65 GB)
- DFS Used: 24576 (24 KB)
- Non DFS Used: 7610531840 (7.09 GB)
- DFS Remaining: 599703552 (571.92 MB)
- DFS Used%: 0.00%
- DFS Remaining%: 7.30%
- Last contact: Tue Jan 14 23:47:26 PST 2014
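The three per-node figures add up exactly to the configured capacity, which confirms that almost all of the disk is occupied by non-DFS data rather than by HDFS blocks:
- 24576 (DFS Used) + 7610531840 (Non DFS Used) + 599703552 (DFS Remaining) = 8210259968 (Configured Capacity)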
According to the report, DFS still has 571.92 MB remaining, which should be enough to create the file, yet the exception was thrown, so there must be a minimum free-space requirement on the DataNode. Checking the Hadoop 2.2.0 source, the isGoodTarget method of
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault checks the DataNode's remaining capacity:
- long remaining = node.getRemaining() -
-                  (node.getBlocksScheduled() * blockSize);
-
- if (blockSize * HdfsConstants.MIN_BLOCKS_FOR_WRITE > remaining) {
-   if (LOG.isDebugEnabled()) {
-     threadLocalBuilder.get().append(node.toString()).append(": ")
-       .append("Node ").append(NodeBase.getPath(node))
-       .append(" is not chosen because the node does not have enough space ");
-   }
-   return false;
- }
As the code shows, when the remaining capacity is less than blockSize * HdfsConstants.MIN_BLOCKS_FOR_WRITE, the node is rejected (isGoodTarget returns false). With the defaults, blockSize * HdfsConstants.MIN_BLOCKS_FOR_WRITE = 128 MB * 5 = 640 MB, which is greater than the 571.92 MB remaining, and that explains why the exception occurred.
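Besides freeing space under the data directory, another way to get below the threshold is to write the file with a smaller block size, since the check compares blockSize * MIN_BLOCKS_FOR_WRITE against the node's remaining space. A hedged sketch only, with 16 MB chosen purely as an example value passed to this one command via the generic -D option:
- $ hadoop fs -D dfs.blocksize=16777216 -put ../tmp/file3 /user/hadoop/in2
- # with a 16 MB block size the threshold becomes 16 MB * 5 = 80 MB, well below the 571.92 MB remaining
This only sidesteps the placement check; on a node this full, the proper fix is still to clear non-DFS data off the disk, as suggested above.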