How to fix a datanode that fails to start on the slave nodes of a fully distributed Hadoop setup

When setting up Hadoop in fully distributed mode, the datanode on the slave nodes failed to come up.

 

The fix below follows this article: https://blog.csdn.net/u013310025/article/details/52796233

 

Summary: in fully distributed mode, after distributing the Hadoop files to a slave node with scp -r ~/training/hadoop-2.7.3 root@bigdata112:~/training/, you also need to run hdfs namenode -format on each node; otherwise, when Hadoop starts, the datanode on that node fails to come up and reports the error shown at the end of this post. (This conclusion still needs to be verified later.)
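For concreteness, a minimal sketch of that distribution step, using the hostnames and paths from this cluster (bigdata111 is the master here; adjust both to your own setup):

# On the master: copy the Hadoop installation to a slave node
scp -r ~/training/hadoop-2.7.3 root@bigdata112:~/training/

# Per the (still unverified) conclusion above, then on the slave:
hdfs namenode -format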

 

Fix (I went with method 2; I tried method 1 but it did not work):

Method 1: go into tmp/dfs and edit the VERSION file, making its contents (in particular the clusterID) on the slave node consistent with the master's.
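A sketch of method 1, assuming hadoop.tmp.dir is /root/training/hadoop-2.7.3/tmp (the storage path that appears in the log below):

# On the master: note the namenode's clusterID
grep clusterID /root/training/hadoop-2.7.3/tmp/dfs/name/current/VERSION

# On the failing slave: edit the datanode's VERSION file so its
# clusterID matches the value printed above
vi /root/training/hadoop-2.7.3/tmp/dfs/data/current/VERSION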

Method 2: simply delete tmp/dfs, then format HDFS (./hdfs namenode -format); this regenerates a fresh dfs directory under tmp.
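A sketch of method 2 under the same path assumption; note that removing tmp/dfs discards any HDFS data already stored on that node:

# Stop HDFS first (on the master)
stop-dfs.sh

# On each affected node: delete the stale storage directory
rm -rf /root/training/hadoop-2.7.3/tmp/dfs

# On the master: reformat and restart; tmp/dfs is regenerated
hdfs namenode -format
start-dfs.sh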

 

 

2018-04-20 23:41:33,881 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool (Datanode Uuid unassigned) service to bigdata111/169.254.169.111:9000 starting to offer service
2018-04-20 23:41:34,013 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2018-04-20 23:41:34,072 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-04-20 23:41:36,251 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2018-04-20 23:41:36,290 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /root/training/hadoop-2.7.3/tmp/dfs/data/in_use.lock acquired by nodename 48801@bigdata111
2018-04-20 23:41:36,293 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/root/training/hadoop-2.7.3/tmp/dfs/data/
java.io.IOException: Incompatible clusterIDs in /root/training/hadoop-2.7.3/tmp/dfs/data: namenode clusterID = CID-53071357-d7bd-4fd4-badc-b7b9851c3c82; datanode clusterID = CID-0c92e0ca-b7c2-4a66-ad48-842788bbe4d3
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
        at java.lang.Thread.run(Thread.java:745)
2018-04-20 23:41:36,296 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to bigdata111/169.254.169.111:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
