Errors when running against a pseudo-distributed Hadoop cluster from Eclipse

While learning Hadoop recently, I ran into several errors when writing files to HDFS from Java. I'm collecting them here so the same problems can be avoided later.

First, the code. It simply writes a file to HDFS and checks whether the file exists:

package com.xiaoxing.hadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
public class HdfsDemo {
	
	public static void main(String[] args) throws IOException {
		writeFile("helloWorldTest","hello world,hadoop!");
		if (exist("helloWorldTest")) {
			System.out.println("File exists");
		} else {
			System.out.println("File does not exist");
		}
	}
	
	
	/**
	 * Write the given content to the file named fileName on HDFS
	 * @param fileName
	 * @param context
	 * @throws IOException 
	 */
	private static void writeFile(String fileName,String context) throws IOException {
		Configuration conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.1.109:9000");
		conf.set("fs.hdfs.impl","org.apache.hadoop.hdfs.DistributedFileSystem");
		FileSystem fs = FileSystem.get(conf);
		byte[] buffer = context.getBytes();
		FSDataOutputStream os = fs.create(new Path(fileName));
		os.write(buffer,0,buffer.length);
		os.close();
		fs.close();
	}
	
	/**
	 * Check whether the given file exists on HDFS
	 * @param fileName
	 * @return
	 * @throws IOException 
	 */
	private static boolean exist(String fileName) throws IOException {
		Configuration conf = new Configuration();
		conf.set("fs.defaultFS", "hdfs://192.168.1.109:9000");
		conf.set("fs.hdfs.impl","org.apache.hadoop.hdfs.DistributedFileSystem");
		FileSystem fs = FileSystem.get(conf);
		boolean isExist = fs.exists(new Path(fileName));
		fs.close();
		return isExist;
	}
	
}

The first problem when running the code: connection refused. My environment is a pseudo-distributed Hadoop cluster installed on an Ubuntu machine (IP 192.168.1.109), while the local machine running Eclipse is Windows 10 (IP 192.168.1.108, as seen in the exception below).

Exception in thread "main" java.net.ConnectException: Call From AFAAW-704030720/192.168.1.108 to 192.168.1.109:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

It turned out that fs.defaultFS in Hadoop's core-site.xml was set to hdfs://localhost:9000. Changing it to the cluster machine's IP fixes the problem: since my Hadoop machine's IP is 192.168.1.109, I changed the value to hdfs://192.168.1.109:9000.
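
For reference, the relevant entry in core-site.xml (located in your Hadoop configuration directory; the exact path depends on the installation) ends up looking roughly like this:

<property>
	<name>fs.defaultFS</name>
	<value>hdfs://192.168.1.109:9000</value>
</property>

Restart the cluster after changing it so the NameNode listens on the new address.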

The second problem: a permission error when writing to HDFS (the Windows user running Eclipse has no write permission on the target HDFS directory).

Solutions:

1. On the machine running Eclipse, add a system environment variable HADOOP_USER_NAME=hadoop (where hadoop is a user with permission to use the Hadoop cluster).

2. Loosen the permissions: hadoop fs -chmod 777 /user/hadoop, where /user/hadoop is the target path in HDFS.

Pick one of the two methods above; the first is recommended, since the second (opening up the permissions) creates a security risk. A code-level alternative to the environment variable is sketched below.
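
If you would rather not touch environment variables, the same effect can be achieved in code by opening the FileSystem explicitly as the authorized user. This is only a minimal sketch under my setup's assumptions (NameNode at 192.168.1.109:9000, HDFS user named hadoop); the class name HdfsAsUserDemo is made up for illustration:

package com.xiaoxing.hadoop;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAsUserDemo {

	public static void main(String[] args) throws IOException, InterruptedException {
		Configuration conf = new Configuration();
		// Open the FileSystem as the "hadoop" user instead of the local
		// Windows account, so no chmod on the HDFS directory is needed.
		FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.1.109:9000"), conf, "hadoop");
		System.out.println(fs.exists(new Path("helloWorldTest")));
		fs.close();
	}
}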

The third problem: There are 0 datanode(s) running and no node(s) are excluded in this operation. This usually means the DataNode process did not come up, typically because its stored clusterID no longer matches the NameNode's after an earlier re-format.

Solution:

1. Run ./stop-dfs.sh to stop HDFS.

2. Delete the current folder under the directory configured by dfs.namenode.name.dir (this property normally lives in hdfs-site.xml, not core-site.xml).

3. Re-format the NameNode: ./bin/hdfs namenode -format

4. Restart HDFS: ./start-dfs.sh (the full sequence is consolidated in the sketch below).
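
Put together, and assuming Hadoop is installed under /usr/local/hadoop with dfs.namenode.name.dir pointing at /usr/local/hadoop/tmp/dfs/name (both paths are assumptions; adjust them to your installation), the recovery sequence looks like:

cd /usr/local/hadoop
./sbin/stop-dfs.sh                  # stop HDFS
rm -rf tmp/dfs/name/current         # remove stale NameNode metadata (assumed path)
./bin/hdfs namenode -format         # re-format the NameNode
./sbin/start-dfs.sh                 # start HDFS again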
