Today I installed Hadoop on the company's CentOS 7 server, following this installation tutorial and also this blog.
The installation process is roughly as follows:
1. Standalone installation
> mkdir /opt/hadoop/input
> cp $HADOOP_HOME/*.txt /opt/hadoop/input
> hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.5.jar wordcount /opt/hadoop/input /opt/hadoop/output
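In standalone mode the job writes to the local file system, so the result can be inspected directly. A quick check, assuming the job succeeded (part-r-00000 is the usual name of a single reducer's output file):
> cat /opt/hadoop/output/part-r-00000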
2. Pseudo-distributed installation (just modify the configuration on top of the standalone setup)
In etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/usr/local/jdk1.8.0_181
In etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
In etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
    </property>
</configuration>
In etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
In etc/hadoop/mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
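With those four files edited, HDFS has to be formatted and the daemons started before the example below will run; a minimal sketch using the same commands these notes use later for the cluster:
hadoop namenode -format    # format HDFS (first run only)
start-dfs.sh               # start NameNode, SecondaryNameNode, DataNode
start-yarn.sh              # start ResourceManager, NodeManager
jps                        # in pseudo-distributed mode all five daemons run on this one machine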
hadoop fs -mkdir /user/hadoop/inputs    # create the directory
hadoop fs -put /opt/hadoop/input/*.txt /user/hadoop/inputs    # upload the files into HDFS
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.5.jar wordcount inputs outputs    # inputs and outputs are directories in HDFS
3. Fully distributed installation (build on the pseudo-distributed setup; two machines are used as the example)
Machine 01: IP 10.12.28.27; machine 02: IP 10.12.28.144. To save typing, use hostnames instead of IPs. On both machines run sudo vi /etc/hosts and add:
10.12.28.27 master
10.12.28.144 slave1
Now logging in from 01 to 02 can be done with ssh hadoop@slave1, which is equivalent to ssh hadoop@10.12.28.144.
Logging in from 01 to 02 still requires the user's password. To set up passwordless (key-based) communication:
Generate an SSH key pair: ssh-keygen -t rsa (the public and private keys are written to ~/.ssh)
Send the public key to 02: ssh-copy-id hadoop@10.12.28.144
Do the same on 02 so that 02 can also log in to 01 without a password.
Java and Hadoop can be copied straight from 01 to 02 instead of being reinstalled:
scp -r /opt/hadoop/hadoop-2.8.5 hadoop@10.12.28.144:/opt/hadoop/
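For start-dfs.sh to bring up the DataNode on slave1 as described next, the master must also list its workers. In Hadoop 2.x that is the etc/hadoop/slaves file; a minimal sketch, assuming slave1 is the only worker:
echo slave1 > /opt/hadoop/hadoop-2.8.5/etc/hadoop/slaves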
hadoop namenode -format    # run the format command again
start-dfs.sh               # start HDFS
jps on master now shows NameNode and SecondaryNameNode; jps on slave1 shows DataNode.
start-yarn.sh              # start YARN
master gains a ResourceManager process, and slave1 gains a NodeManager.
Below is a record of some problems encountered during installation:
1. JDK version problem
With the latest JDK 11, running Hadoop produces messages like the following and may fail outright. It is recommended to install JDK 8 or earlier.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.ibatis.reflection.Reflector (file:/C:/Users/jiangcy/.m2/repository/org/mybatis/mybatis/3.4.5/mybatis-3.4.5.jar) to method java.lang.Object.finalize()
WARNING: Please consider reporting this to the maintainers of org.apache.ibatis.reflection.Reflector
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2. ~/.bashrc configuration problem (environment variables)
After installing the JDK, ~/.bashrc needs to be configured. The tutorial referenced above gives this configuration:
export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=PATH:$JAVA_HOME/bin
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.5
After running source ~/.bashrc to apply it, other commands stop working, e.g. '-bash: ls: command not found' (the missing $ before PATH clobbers the original search path).
It should instead be:
export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.5
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
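A quick sanity check after reloading the file; the expected versions follow from the paths above:
source ~/.bashrc
java -version     # should report 1.8.0_181
hadoop version    # should report 2.8.5
ls                # ordinary commands must work again, i.e. the system PATH survived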
3. SSH setup and key generation
Configure passwordless login to the local machine. Strictly speaking, passwordless login is not required for the Hadoop installation itself, but without it every Hadoop start-up prompts for a password to log in to each DataNode, which becomes a real pain once the cluster grows. A sketch of the local setup follows.
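A minimal sketch using the standard OpenSSH recipe (the chmod matters because sshd ignores key files with loose permissions):
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost    # should now log in without prompting for a password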
4. hadoop fs -mkdir /user/input reports an error
Inserting data into HDFS requires creating a directory first, but running the command above fails with: hadoop fs -mkdir: No such file or directory.
Cause: HDFS's default working directory is /user/<your login username>, but a fresh HDFS file system may contain only the root directory. Note that HDFS directories and local directories are not the same thing. (Reference)
So the following operations are available (the specific fix is sketched after this list):
Common HDFS operations (the command format is hadoop fs <option>):
1.1 -ls     list the next level of an HDFS directory
1.2 -lsr    list an HDFS directory recursively
1.3 -mkdir  create a directory
1.4 -put    upload a file from Linux to HDFS
1.5 -get    download a file from HDFS to Linux
1.6 -text   view a file's contents
1.7 -rm     delete a file; -rm -r deletes a directory
1.8 -rmr    delete recursively
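For this particular error the missing parent directory is the culprit, so creating the full path with -mkdir's -p flag (which creates parents as needed) fixes it:
hadoop fs -mkdir -p /user/hadoop/inputs
hadoop fs -ls /user/hadoop    # confirm the directory now exists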
5. Running Python scripts via MapReduce streaming
First run find / -name 'hadoop-streaming*.jar' to locate the streaming application in the Hadoop installation; different versions may keep it in different directories. Mine is at /opt/hadoop/hadoop-2.8.5/share/hadoop/tools/lib/hadoop-streaming-2.8.5.jar
Run the command:
hadoop jar hadoop-streaming-2.8.5.jar -input inputs -output py_outs -mapper /opt/hadoop/mapper.py -reducer /opt/hadoop/reducer.py
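One caveat: written this way, the mapper and reducer must exist at those absolute paths (and be executable via chmod +x) on every node that runs a task. The streaming jar's -file option ships them with the job instead; a sketch of the same command in that form:
hadoop jar hadoop-streaming-2.8.5.jar \
    -file /opt/hadoop/mapper.py -mapper mapper.py \
    -file /opt/hadoop/reducer.py -reducer reducer.py \
    -input inputs -output py_outs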
The Python word-count scripts:
mapper.py
# !/usr/bin/env python
# -*- coding:UTF-8 -*-
import sys

# input arrives on standard input (stdin)
for line in sys.stdin:
    # strip leading and trailing whitespace
    line = line.strip()
    # split the line into a list of words on whitespace
    words = line.split()
    for word in words:
        # emit every word as "word<TAB>1" to serve as the reducer's input
        print('%s\t%s' % (word, 1))
reducer.py
# !/usr/bin/env python
# -*- coding:UTF-8 -*-
import sys

current_word = None
current_count = 0
word = None

# read standard input, i.e. the output of mapper.py
for line in sys.stdin:
    # strip leading and trailing whitespace
    line = line.strip()
    # parse mapper.py's output, using tab as the separator
    word, count = line.split('\t', 1)
    # convert count from string to int
    try:
        count = int(count)
    except ValueError:
        # count is not a number; ignore this line
        continue
    # this relies on the mapper output being sorted so that equal words are
    # adjacent; Hadoop sorts by key automatically before the reduce phase
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # emit the tally for the previous word to standard output
            print('%s\t%s' % (current_word, current_count))
        current_count = count
        current_word = word

# emit the tally for the last word
if current_word == word:
    print('%s\t%s' % (current_word, current_count))
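The pair can be dry-run locally before touching Hadoop; sort stands in for the shuffle phase, which is exactly the adjacency the reducer depends on:
echo 'foo foo quux labs foo bar quux' | python mapper.py | sort -k1,1 | python reducer.py
# expected output: bar 1, foo 3, labs 1, quux 2 (tab-separated, one pair per line)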
6. Start-up error after cluster installation
slave1: /opt/hadoop/hadoop-2.8.5/bin/hdfs: line 305: /usr/local/jdk1.8.0_181/bin/java: No such file or directory
slave1: /opt/hadoop/hadoop-2.8.5/bin/hdfs: line 305: exec: /usr/local/jdk1.8.0_181/bin/java: cannot execute: No such file or directory
The Hadoop configuration on the slave1 node needs to be changed; in my case the problem was that the JDK was installed in a different directory there. Change the JDK path in hadoop-env.sh, mapred-env.sh, and yarn-env.sh:
export JAVA_HOME=/usr/local/java/jdk1.8.0_131
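The fix can be verified from the master without an interactive login, assuming the passwordless SSH configured earlier:
ssh hadoop@slave1 '/usr/local/java/jdk1.8.0_131/bin/java -version'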
7. Error uploading data to HDFS after cluster installation
[hadoop@localhost ~]$ hadoop fs -put /opt/hadoop/input/*.txt /user/hadoop/inputs
18/10/18 16:14:24 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/inputs/LICENSE.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1726)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2567)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:829)
    .
    .
    .
put: File /user/hadoop/inputs/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
Cause: as the exception itself says, there were 0 DataNodes running, so HDFS had nowhere to replicate the block. (Reference)
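Two standard checks that narrow this down:
jps                      # on slave1: is a DataNode process running at all?
hdfs dfsadmin -report    # on master: how many live DataNodes does the NameNode see?
If the process is up but the report shows 0 live DataNodes, the DataNode log usually names the reason (for instance a clusterID mismatch after re-running the format command).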