Official documentation:
https://hadoop.apache.org/docs/r2.9.2/hadoop-project-dist/hadoop-common/SingleCluster.html
Configure passwordless SSH login, used for communication between the NameNode and DataNodes:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Verify SSH: you should now be able to log in without entering a password. After logging in, run exit to leave.
ssh localhost
exit
etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.3.127:8020</value>
    </property>
</configuration>
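The fs.defaultFS value is the address every HDFS client connects to. As a quick connectivity check, here is a minimal Java sketch (class name is illustrative; it assumes the IP and port above and that HDFS is already running):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPing {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Same address as fs.defaultFS in core-site.xml
        FileSystem fileSystem = FileSystem.get(
                new URI("hdfs://192.168.3.127:8020"), configuration);
        // Reading the root directory proves the NameNode is reachable
        System.out.println(fileSystem.exists(new Path("/")));
        fileSystem.close();
    }
}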
etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>file:/home/hdfs/name</value>
        <description>Where the NameNode stores the HDFS namespace metadata</description>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>file:/home/hdfs/data</value>
        <description>Physical storage location of data blocks on the DataNode</description>
    </property>
</configuration>
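dfs.replication is set to 1 because a single-node cluster has only one DataNode; the default of 3 would leave every block permanently under-replicated.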
Open the firewall ports:
firewall-cmd --add-port=8020/tcp --permanent    # NameNode RPC (fs.defaultFS)
firewall-cmd --add-port=50010/tcp --permanent   # DataNode data transfer
firewall-cmd --add-port=50070/tcp --permanent   # NameNode web UI
firewall-cmd --reload
1. java.lang.IllegalArgumentException: URI has an authority component
This error is raised while running `bin/hdfs namenode -format`.
Check that the hdfs-site.xml configuration is correct. A common cause of this error is writing the paths with a double slash, file://home/hdfs/name, which makes the URI parser treat home as an authority component; the values must use a single slash:
<property>
    <name>dfs.name.dir</name>
    <value>file:/home/hdfs/name</value>
    <description>Where the NameNode stores the HDFS namespace metadata</description>
</property>
<property>
    <name>dfs.data.dir</name>
    <value>file:/home/hdfs/data</value>
    <description>Physical storage location of data blocks on the DataNode</description>
</property>
2. java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
Extract hadoop-2.9.2.tar.gz to D:\app\ and point hadoop.home.dir at it:
System.setProperty("hadoop.home.dir", "D:\\app\\hadoop-2.9.2");
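The property has to be set before the first Hadoop class loads, since Hadoop resolves it in a static initializer. Alternatively, define a HADOOP_HOME environment variable pointing at the same directory (the error message names either as acceptable), which avoids hard-coding the path.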
3. java.io.FileNotFoundException: Could not locate Hadoop executable: D:\app\hadoop-2.9.2\bin\winutils.exe
Download winutils.exe and place it under {HADOOP_HOME}\bin\.
4. Permission denied: user=xxx, access=WRITE, inode="/":root:supergroup:drwxr-xr-x
/**
 * Fix the permission error by specifying the Linux user name
 * on the remote Hadoop machine.
 */
private static final String USER = "root";

fileSystem = FileSystem.get(new URI(HDFS_PATH), configuration, USER);
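This works because, without Kerberos, HDFS runs in simple authentication mode and trusts whatever user name the client supplies; passing root (the owner of / shown in the error message) therefore satisfies the permission check. Alternatively, relax the permissions on the target directory with hdfs dfs -chmod.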
5. java.net.ConnectException: Connection timed out: no further information and org.apache.hadoop.ipc.RemoteException: File /hello-hadoop.md could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
# Open the DataNode port
firewall-cmd --add-port=50010/tcp --permanent
firewall-cmd --reload
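The reason: the client only talks to the NameNode on port 8020; the actual block data is written directly to the DataNode on port 50010. With that port blocked, the NameNode can still allocate the block, but the client cannot reach the DataNode to write it, so the DataNode is marked as excluded and replication to even a single node fails.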
6. No FileSystem for scheme "hdfs"
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>${org.apache.hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${org.apache.hadoop.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${org.apache.hadoop.version}</version>
</dependency>
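With the dependencies and the fixes above in place, a complete round trip looks roughly like the sketch below. It assumes the cluster address and user name used throughout this article, and writes the same /hello-hadoop.md file that appears in the issue 5 error message:

import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteDemo {
    private static final String HDFS_PATH = "hdfs://192.168.3.127:8020";
    private static final String USER = "root"; // see issue 4

    public static void main(String[] args) throws Exception {
        // See issue 2: required when running the client from Windows
        System.setProperty("hadoop.home.dir", "D:\\app\\hadoop-2.9.2");

        Configuration configuration = new Configuration();
        FileSystem fileSystem = FileSystem.get(new URI(HDFS_PATH), configuration, USER);

        // Write a small file; this is the step that fails with the
        // issue 5 error when DataNode port 50010 is blocked
        Path path = new Path("/hello-hadoop.md");
        try (FSDataOutputStream out = fileSystem.create(path, true)) {
            out.write("hello hadoop".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("exists: " + fileSystem.exists(path));
        fileSystem.close();
    }
}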
If you run into problems, feel free to leave a comment.
Tech discussion group: 282575808
--------------------------------------
Notice: this is an original article; reproduction without permission is prohibited!
--------------------------------------