Java client cannot upload files to HDFS

2019-07-01 16:45:24,933 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 58.211.111.42:63048 Call#3 Retry#0
java.io.IOException: File /a1.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1620)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3350)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:678)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:491)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2141)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1835)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2135)

While learning Hadoop I ran into this problem. I searched through a lot of material online; most of it says the cause is the NameNode and DataNode being out of sync, or the firewall not having port 50010 open, or the NameNode and DataNode being unable to communicate.

In fact, everything works fine through the command line. With remote calls you can create directories and files, but when it comes to writing content into a file, the write fails with the error above.
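This split makes sense given the stack trace: directory and file creation are metadata-only RPCs to the NameNode, while writing file contents triggers addBlock and a direct connection to a DataNode. A minimal sketch of the failing client, where "namenode-host" and the user "hadoop" are placeholders for your own setup:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(new URI("hdfs://namenode-host:8020"), conf, "hadoop");
fs.mkdirs(new Path("/test"));            // succeeds: metadata-only call to the NameNode
try (FSDataOutputStream out = fs.create(new Path("/a1.txt"))) {
    out.writeBytes("hello");             // fails: writing a block needs a DataNode connection
}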

Configure the local hosts file properly, then add the following code:

Configuration configuration = new Configuration();
// Connect to DataNodes by the hostname they registered with, not their IP.
configuration.set("dfs.client.use.datanode.hostname", "true");

The gist is this: on a pseudo-distributed HDFS, the DataNode registers with the NameNode under the local IP 127.0.0.1. When a remote client asks the NameNode for the DataNode's address, it gets back 127.0.0.1, which it naturally cannot connect to. The setting above forces the local Java client to connect to the DataNode by hostname instead, so the connection succeeds.
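Putting it together, a self-contained sketch of the upload client with the fix applied; "namenode-host" and the user "hadoop" are assumptions, so substitute the hostname mapped in your hosts file and the user that owns the target HDFS directory:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUpload {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Resolve DataNodes by hostname instead of the loopback IP
        // they registered with the NameNode.
        configuration.set("dfs.client.use.datanode.hostname", "true");

        FileSystem fs = FileSystem.get(
                new URI("hdfs://namenode-host:8020"), configuration, "hadoop");
        try (FSDataOutputStream out = fs.create(new Path("/a1.txt"))) {
            out.writeBytes("hello hdfs");
        }
        fs.close();
    }
}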

Firewall port 50010 must also be open, because the DataNode uses this port for data transfer.
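One quick way to verify this from the client machine is a plain socket connect against the DataNode transfer port (50010 is the Hadoop 2.x default for dfs.datanode.address; "datanode-host" is a placeholder):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

try (Socket socket = new Socket()) {
    // 3-second timeout; use the hostname your hosts file maps to the DataNode.
    socket.connect(new InetSocketAddress("datanode-host", 50010), 3000);
    System.out.println("DataNode port 50010 is reachable");
} catch (IOException e) {
    System.out.println("Cannot reach DataNode port 50010: " + e.getMessage());
}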

