Table of Contents
1 Environment Preparation
1.1 Hardware Configuration
1.2 Software
1.5 Virtual Machine Configuration
1.6 Passwordless SSH Login
1.7 JDK Installation
6.2.1 Creating folders and copying local files into HDFS
7.1 hadoop2.2.0 unable to connect to the ResourceManager
Dell 960: Intel Core 2 Quad Q8300 CPU @ 2.50GHz
Memory: 4GB
Disk: 320GB
The host runs Windows 7, with the virtualization software installed on top.
- CentOS 6.5
- VMware Workstation 10.0.2
- Secure CRT 7.0
- WinSCP 5.5.3
- JDK 1.6.0_43
- Hadoop 1.2.1
- Eclipse 3.6
- Windows 7
192.168.1.53 namenode53
192.168.1.54 datanode54
192.168.1.55 datanode55
192.168.1.56 datanode56
/dev/sda6 4.0G 380M 3.4G 10% /
tmpfs 495M 72K 495M 1% /dev/shm
/dev/sda2 7.9G 419M 7.1G 6% /app
/dev/sda3 7.9G 146M 7.4G 2% /applog
/dev/sda1 194M 30M 155M 16% /boot
/dev/sda5 5.8G 140M 5.3G 3% /data
/dev/sda8 2.0G 129M 1.8G 7% /home
/dev/sda9 2.0G 68M 1.9G 4% /opt
/dev/sda12 2.0G 36M 1.9G 2% /tmp
/dev/sda7 4.0G 3.3G 509M 87% /usr
/dev/sda10 2.0G 397M 1.5G 21% /var
Four virtual machines, each configured with 1 CPU, 1 GB RAM, and a 40 GB disk, running 64-bit CentOS 6.5.
(Figure 1.5-1)
Create two users: root and hadoop.
root is the administrator account.
hadoop is the account that Hadoop runs under.
1. Create the .ssh directory under the user's home directory
mkdir .ssh
Hidden directories and files can be listed with:
ls -a
2. Generate the key pair
The key type can be dsa or rsa.
ssh-keygen -t rsa -P '' -b 1024
ssh-keygen: the key-generation command
-t: key type, here rsa
-P: passphrase, here empty ('')
-b: key size in bits, here 1024
Running it produces a private key file and a public key file:
ll
-rwx------ 1 apch apache 883 May 20 15:13 id_rsa
-rwx------ 1 apch apache 224 May 20 15:13 id_rsa.pub
3. Append the public key to the authorized_keys file
cat id_rsa.pub >> authorized_keys
(This writes the generated public key into authorized_keys.)
4. Set file and directory permissions
Set the permissions of authorized_keys:
$ chmod 600 authorized_keys
Set the permissions of the .ssh directory:
$ chmod 700 -R .ssh
5. Edit /etc/ssh/sshd_config (must be done as root)
vi /etc/ssh/sshd_config
Protocol 2 (use SSH protocol version 2 only)
PermitRootLogin yes (allow root to log in over SSH; set this according to the account you log in with)
ServerKeyBits 1024 (set the server key strength to 1024 bits)
PasswordAuthentication no (disallow password login)
PermitEmptyPasswords no (disallow login with an empty password)
RSAAuthentication yes (enable RSA authentication)
PubkeyAuthentication yes (enable public key authentication)
AuthorizedKeysFile .ssh/authorized_keys
6. Restart the sshd service (as root)
service sshd restart
7. Verify locally
ssh -v localhost (-v enables verbose/debug output for the login)
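The steps above only cover the local machine. For the cluster, the public key generated on namenode53 also has to be appended to ~/.ssh/authorized_keys on each datanode so the start scripts can log in without a password; a minimal sketch, assuming the hadoop user and the host names listed earlier:
# Run as the hadoop user on namenode53; each datanode's password is
# asked once while its key is being copied.
for host in datanode54 datanode55 datanode56; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@$host
done
# Afterwards this should log in without prompting for a password:
ssh hadoop@datanode54 hostname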
Download jdk-6u43-linux-x64.bin from the Oracle website.
Install it into the /usr/java directory.
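The installation commands themselves are not shown in the original; a rough sketch, assuming the installer was uploaded to /home/hadoop and is run as root:
mkdir -p /usr/java
cp /home/hadoop/jdk-6u43-linux-x64.bin /usr/java/
cd /usr/java
chmod +x jdk-6u43-linux-x64.bin
./jdk-6u43-linux-x64.bin    # self-extracts into /usr/java/jdk1.6.0_43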
Configure JAVA_HOME (in /etc/profile):
export JAVA_HOME=/usr/java/jdk1.6.0_43
export JAVA_BIN=/usr/java/jdk1.6.0_43/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
Reload the configuration file:
source /etc/profile
Verify the installation: java -version
Download from: http://mirrors.hust.edu.cn/apache/hadoop/common/
Upload it to the Linux system with rz (the rz/sz packages must be installed first).
Install Hadoop under the /app/hadoop/ directory:
tar -zxvf /home/hadoop/hadoop-1.2.1.tar.gz
The owner of /app/hadoop/ should be hadoop:hadoop (user:group).
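If the directory is still owned by root, the ownership can be fixed as root, for example:
chown -R hadoop:hadoop /app/hadoop/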
The following configuration files need to be modified:
- hadoop-env.sh
- core-site.xml
- hdfs-site.xml
- mapred-site.xml
- masters
- slaves
vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_43
vi core-site.xml
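The contents of core-site.xml are not shown in the original; a minimal sketch for this cluster, assuming the NameNode listens on 192.168.1.53:9000 and a temporary directory under /app/hadoop/tmp:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.1.53:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
</configuration>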
vi hdfs-site.xml
Add the following:
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
vi mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>http://192.168.1.53:9001</value>
</property>
</configuration>
vi masters
192.168.1.53
vi slaves
192.168.1.54
192.168.1.55
192.168.1.56
With the configuration from section 2.3 complete, package the configured Hadoop directory on 192.168.1.53:
tar -zcvf hadoop.tar.gz ./hadoop/
Then copy hadoop.tar.gz to 192.168.1.54-56:
scp hadoop.tar.gz hadoop@192.168.1.54:/app/hadoop/
Log in to each of 192.168.1.54~56 and extract it:
cd /app/hadoop/
tar -zxvf hadoop.tar.gz
On 192.168.1.53-56, configure HADOOP_HOME:
vi /etc/profile
# set hadoop path
export HADOOP_HOME=/app/hadoop/hadoop
export HADOOP_HOME_WARN_SUPPRESS=1
export PATH=$PATH:$HADOOP_HOME/bin
Reload the configuration file:
source /etc/profile
Note: HADOOP_HOME_WARN_SUPPRESS=1 suppresses the "Warning: $HADOOP_HOME is deprecated" message printed when Hadoop commands start.
Verify the configuration:
hadoop version
Format the HDFS file system:
hadoop namenode -format
Log in to 192.168.1.53 with SecureCRT 7.0.
Commands:
cd /app/hadoop/hadoop/bin/
./start-all.sh
Check the Hadoop NameNode processes (192.168.1.53):
This shows the NameNode, JobTracker, and SecondaryNameNode are running.
Check the Hadoop DataNode processes (192.168.1.54):
This shows the DataNode and TaskTracker are running.
http://192.168.1.53:50070/dfshealth.jsp
http://192.168.1.53:50030/jobtracker.jsp
hadoop jar hadoop-examples-1.2.1.jar wordcount /tmp/input /tmp/output
The job can be monitored through the JobTracker console:
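Once the job finishes, the result can also be inspected from the command line (assuming the default output file name of the example job):
hadoop fs -ls /tmp/output
hadoop fs -cat /tmp/output/part-r-00000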
Clone CentOS-57 through CentOS-60 from CentOS-53.
Set the virtual machine name, host name, and IP address for each:
CentOS-57 192.168.1.57 namenode57
CentOS-58 192.168.1.58 datanode58
CentOS-59 192.168.1.59 datanode59
CentOS-60 192.168.1.60 datanode60
(VMware's clone feature quickly duplicates an installed system. After cloning, however, the eth0 network interface will be missing; switch to the root user to fix it, otherwise you will not have sufficient permissions.)
Configure the network interface:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Update the NIC configuration (MAC address); a sketch of the typical fix follows below:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
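The original shows this step only as screenshots; on CentOS 6 a typical fix looks roughly like the following (run as root; the IP and gateway below are examples for CentOS-57):
# The clone has a new MAC address, so udev maps the NIC to eth1.
# Remove the stale rule (it is regenerated on reboot):
rm -f /etc/udev/rules.d/70-persistent-net.rules
# In ifcfg-eth0, update or remove HWADDR and set this host's address, e.g.:
# DEVICE=eth0
# ONBOOT=yes
# BOOTPROTO=static
# IPADDR=192.168.1.57
# NETMASK=255.255.255.0
# GATEWAY=192.168.1.1
vi /etc/sysconfig/network-scripts/ifcfg-eth0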
Change the host name:
vi /etc/sysconfig/network
Reboot the system: reboot
On 192.168.1.57-60, modify the hosts file.
su -
(switch to the root user)
vi /etc/hosts
Add the following entries:
192.168.1.57 namenode57
192.168.1.58 datanode58
192.168.1.59 datanode59
192.168.1.60 datanode60
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
jdk-7u55-linux-x64.rpm
Install the JDK (on 192.168.1.57-60).
Upload the JDK 7 installer "jdk-7u55-linux-x64.rpm" to 192.168.1.57:
rz
rpm -i jdk-7u55-linux-x64.rpm
Check that the installation succeeded:
cd /usr/java
The installation directory is /usr/java/jdk1.7.0_55.
Configure JAVA_HOME:
vi /etc/profile
Add the following:
export JAVA_HOME=/usr/java/jdk1.7.0_55
export JAVA_BIN=/usr/java/jdk1.7.0_55/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
Reload the configuration file:
source /etc/profile
Check the installed version:
java -version
If the correct version is reported, the installation succeeded.
http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.4.0/
Upload it to the Linux system with the rz command (the rz/sz packages must be installed first).
Log in to 192.168.1.57.
The installation directory will be /app/hadoop/hadoop-2.4.0.
Log in to each of 192.168.1.57-60 as the hadoop user and prepare the installation directory under /app/hadoop/:
cd /app/hadoop/
tar -zxvf /home/hadoop/hadoop-2.4.0.tar.gz
Note: Hadoop 2.4.0 has changed a great deal; even the configuration directory has moved (to etc/hadoop).
Check the bundled native library; the distribution ships a 32-bit build:
cd /app/hadoop/hadoop-2.4.0/lib/native
file libhadoop.so.1.0.0
libhadoop.so.1.0.0: ELF 32-bit LSB shared object,
Environment variable configuration:
su - (switch to root)
vi /etc/profile
Add the following:
export HADOOP_HOME=/app/hadoop/hadoop-2.4.0
export HADOOP_HOME_WARN_SUPPRESS=1
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Reload the configuration file:
source /etc/profile
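A quick sanity check after reloading the profile (optional):
hadoop version    # should report Hadoop 2.4.0
which hadoop      # should resolve to /app/hadoop/hadoop-2.4.0/bin/hadoop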
The build is done on 64-bit CentOS 6.5.
The main tools involved are: hadoop-2.4.0-src.tar.gz, Ant, Maven, the JDK, GCC, CMake, and openssl.
Step 1: install/upgrade the packages needed for compilation (latest versions):
yum install lzo-devel zlib-devel gcc autoconf automake libtool ncurses-devel openssl-devel
wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.4.0/hadoop-2.4.0-src.tar.gz (source package)
tar -zxvf hadoop-2.4.0-src.tar.gz
wget http://apache.fayea.com/apache-mirror//ant/binaries/apache-ant-1.9.4-bin.tar.gz
tar -xvf apache-ant-1.9.4-bin.tar.gz
wget http://apache.fayea.com/apache-mirror/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz
tar -xvf apache-maven-3.0.5-bin.tar.gz
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_55
export JAVA_BIN=/usr/java/jdk1.7.0_55/bin
export ANT_HOME=/home/hadoop/ant    # assumes apache-ant-1.9.4 was renamed or symlinked to ant
export MVN_HOME=/home/hadoop/maven  # assumes apache-maven-3.0.5 was renamed or symlinked to maven
export FINDBUGS_HOME=/home/hadoop/findbugs-2.0.3
export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin:$MVN_HOME/bin:$FINDBUGS_HOME/bin
Reload the configuration file:
source /etc/profile
Verify that the tools are configured correctly:
ant -version
mvn -version
findbugs -version
Each of the three commands should print its version.
Install protobuf (log in as root):
wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
tar zxf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure
make
make install
protoc --version
Install cmake (log in as root):
wget http://www.cmake.org/files/v2.8/cmake-2.8.12.2.tar.gz
tar -zxvf cmake-2.8.12.2.tar.gz
cd cmake-2.8.12.2
./bootstrap
make
make install
cmake -version
To speed up the build, point the Maven mirror at the OSChina repository:
cd maven/conf
vi settings.xml
Add the following (the <mirror> entries go inside the existing <mirrors> element, and the <profile> inside <profiles>):
<mirror>
<id>nexus-osc</id>
<mirrorOf>*</mirrorOf>
<name>Nexus osc</name>
<url>http://maven.oschina.net/content/groups/public/</url>
</mirror>
<mirror>
<id>nexus-osc-thirdparty</id>
<mirrorOf>thirdparty</mirrorOf>
<name>Nexus osc thirdparty</name>
<url>http://maven.oschina.net/content/repositories/thirdparty/</url>
</mirror>
<profile>
<id>jdk-1.4</id>
<activation>
<jdk>1.4</jdk>
</activation>
<repositories>
<repository>
<id>nexus</id>
<name>local private nexus</name>
<url>http://maven.oschina.net/content/groups/public/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>nexus</id>
<name>local private nexus</name>
<url>http://maven.oschina.net/content/groups/public/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>
For details see:
http://maven.oschina.net/help.html
mvn package -DskipTests -Pdist,native -Dtar
Maven now downloads all of the dependency packages and plugins.
Be patient... (it took about six hours before the first build error showed up).
Once the build succeeds, check whether the native libraries were built:
cd hadoop-dist/target/hadoop-2.4.0/lib/native
file libhadoop.so.1.0.0
libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
This indicates the build succeeded.
Error 1
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 46.796s
[INFO] Finished at: Wed Jun 04 13:28:37 CST 2014
[INFO] Final Memory: 36M/88M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project hadoop-common: Could not resolve dependencies for project org.apache.hadoop:hadoop-common:jar:2.4.0: Failure to find org.apache.commons:commons-compress:jar:1.4.1 in https://repository.apache.org/content/repositories/snapshots was cached in the local repository, resolution will not be reattempted until the update interval of apache.snapshots.https has elapsed or updates are forced -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-common
Solution:
The log above says it cannot find "org.apache.commons:commons-compress:jar:1.4.1".
Copying the jar from the local (Windows) Maven repository over to the Linux system resolved it.
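The exact copy commands are not given; assuming the jar was transferred to /home/hadoop, placing it into the local Maven repository would look roughly like:
mkdir -p ~/.m2/repository/org/apache/commons/commons-compress/1.4.1/
cp /home/hadoop/commons-compress-1.4.1.jar ~/.m2/repository/org/apache/commons/commons-compress/1.4.1/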
Error 2
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:16.693s
[INFO] Finished at: Wed Jun 04 13:56:31 CST 2014
[INFO] Final Memory: 48M/239M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-common: An Ant BuildException has occured: Execute failed: java.io.IOException: Cannot run program "cmake" (in directory "/home/hadoop/hadoop-2.4.0-src/hadoop-common-project/hadoop-common/target/native"): error=2, 沒有那個文件或目錄
[ERROR] around Ant part ...<exec dir="/home/hadoop/hadoop-2.4.0-src/hadoop-common-project/hadoop-common/target/native" executable="cmake" failonerror="true">... @ 4:133 in /home/hadoop/hadoop-2.4.0-src/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-common
Solution:
This is caused by cmake not being installed; install cmake as described in section 5.3.1, "Preparing the build environment".
Error 3
The error said certain files could not be found and directories could not be created, and no matching reports turned up online. Based on experience, change the directory permissions to 775 so files and subdirectories can be created there, and make sure the Hadoop build directory has roughly 2.5-4 GB of free space.
chmod -Rf 775 ./hadoop-2.4.0-src
main:
[mkdir] Created dir: /data/hadoop/hadoop-2.4.0-src/hadoop-tools/hadoop-pipes/target/test-dir
[INFO] Executed tasks
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-pipes ---
[INFO] Executing tasks
Error 4
main:
[mkdir] Created dir: /data/hadoop/hadoop-2.4.0-src/hadoop-tools/hadoop-pipes/target/native
[exec] -- The C compiler identification is GNU 4.4.7
[exec] -- The CXX compiler identification is GNU 4.4.7
[exec] -- Check for working C compiler: /usr/bin/cc
[exec] -- Check for working C compiler: /usr/bin/cc -- works
[exec] -- Detecting C compiler ABI info
[exec] -- Detecting C compiler ABI info - done
[exec] -- Check for working CXX compiler: /usr/bin/c++
[exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
[exec] -- Detecting CXX compiler ABI info
[exec] -- Detecting CXX compiler ABI info - done
[exec] CMake Error at /usr/local/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
[exec] Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the
[exec] system variable OPENSSL_ROOT_DIR (missing: OPENSSL_LIBRARIES
[exec] OPENSSL_INCLUDE_DIR)
[exec] Call Stack (most recent call first):
[exec] /usr/local/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE)
[exec] /usr/local/share/cmake-2.8/Modules/FindOpenSSL.cmake:313 (find_package_handle_standard_args)
[exec] CMakeLists.txt:20 (find_package)
[exec]
[exec]
[exec] -- Configuring incomplete, errors occurred!
[exec] See also "/data/hadoop/hadoop-2.4.0-src/hadoop-tools/hadoop-pipes/target/native/CMakeFiles/CMakeOutput.log".
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [13.745s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [5.538s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [7.296s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.568s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [5.858s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [8.541s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [8.337s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [7.348s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [4.926s]
[INFO] Apache Hadoop Common .............................. SUCCESS [2:35.956s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [18.680s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.059s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [5:03.525s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [38.335s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [23.780s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [8.769s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.159s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.134s]
[INFO] hadoop-yarn-api ................................... SUCCESS [2:07.657s]
[INFO] hadoop-yarn-common ................................ SUCCESS [1:10.680s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.165s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [24.174s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [27.293s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [5.177s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [11.399s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [28.384s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [1.346s]
[INFO] hadoop-yarn-client ................................ SUCCESS [12.937s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.108s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [5.303s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [3.212s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.050s]
[INFO] hadoop-yarn-project ............................... SUCCESS [8.638s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.135s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [43.622s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [36.329s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [6.058s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [20.058s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [16.493s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [11.685s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [3.222s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [12.656s]
[INFO] hadoop-mapreduce .................................. SUCCESS [8.060s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [8.994s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [15.886s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [6.659s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [15.722s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [11.778s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [5.953s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [6.414s]
[INFO] Apache Hadoop Pipes ............................... FAILURE [3.746s]
[INFO] Apache Hadoop OpenStack support ................... SKIPPED
[INFO] Apache Hadoop Client .............................. SKIPPED
[INFO] Apache Hadoop Mini-Cluster ........................ SKIPPED
[INFO] Apache Hadoop Scheduler Load Simulator ............ SKIPPED
[INFO] Apache Hadoop Tools Dist .......................... SKIPPED
[INFO] Apache Hadoop Tools ............................... SKIPPED
[INFO] Apache Hadoop Distribution ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 19:43.155s
[INFO] Finished at: Wed Jun 04 17:40:17 CST 2014
[INFO] Final Memory: 79M/239M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-pipes: An Ant BuildException has occured: exec returned: 1
[ERROR] around Ant part ...<exec dir="/data/hadoop/hadoop-2.4.0-src/hadoop-tools/hadoop-pipes/target/native" executable="cmake" failonerror="true">... @ 5:123 in /data/hadoop/hadoop-2.4.0-src/hadoop-tools/hadoop-pipes/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
Following a tip found online: openssl-devel needs to be installed as well (yum install openssl-devel); if this step is skipped, the following error is reported:
[exec] CMake Error at /usr/share/cmake/Modules/FindOpenSSL.cmake:66 (MESSAGE):
[exec] Could NOT find OpenSSL
[exec] Call Stack (most recent call first):
[exec] CMakeLists.txt:20 (find_package)
[exec]
[exec]
[exec] -- Configuring incomplete, errors occurred!
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (make) on project hadoop-pipes: An Ant BuildException has occured: exec returned: 1 -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluen ... oExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-pipes
Related discussion: http://f.dataguru.cn/thread-189176-1-1.html
Cause: when openssl-devel was installed earlier, an "l" was missing from the package name, so it was never actually installed.
Solution: reinstall openssl-devel:
yum install openssl-devel
1. You must install the base packages (yum install lzo-devel zlib-devel gcc autoconf automake libtool ncurses-devel openssl-devel).
2. You must install the protobuf and CMake build tools.
3. You must configure ANT, MAVEN, and FindBugs.
4. Point the Maven repository at the OSChina mirror; this speeds up the build by downloading the dependency jars faster.
5. When the build fails, read the error log carefully, work out the cause from it, and then combine that with Baidu/Google searches to resolve the error.
cd hadoop-2.4.0
cd etc/hadoop
core-site.xml
yarn-site.xml
hdfs-site.xml
mapred-site.xml
hadoop-env.sh
vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode57:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/current/tmp</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/app/hadoop/current/data</value>
</property>
</configuration>
vi yarn-site.xml
<property>
<name>yarn.resourcemanager.address</name>
<value>namenode57:18040</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>namenode57:18030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>namenode57:18088</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>namenode57:18025</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>namenode57:18141</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
vi hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.rpc-address</name>
<value>namenode57:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/app/hadoop/current/dfs</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/app/hadoop/current/data</value>
</property>
</configuration>
vi mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_55
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
scp hadoop-2.4.0.tar.gz hadoop@192.168.1.58:/app/hadoop/
Copy it to each of 192.168.1.58-60.
cd /app/hadoop/hadoop-2.4.0/bin/
./hdfs namenode -format
cd /app/hadoop/hadoop-2.4.0/sbin/
./start-all.sh
Check the Hadoop processes:
jps
ps -ef | grep java
14/06/04 07:48:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on
This warning kept appearing at startup, so the 64-bit native Hadoop library was built (section 5.3, Building the Hadoop native library) to replace the bundled one.
Replace the 32-bit native library.
Delete the original 32-bit native library:
cd /app/hadoop/hadoop-2.4.0/lib
rm -rf native/
Copy the 64-bit native libraries built in section 5.3 to /app/hadoop/hadoop-2.4.0/lib:
cd /data/hadoop/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0/lib
cp -r ./native /app/hadoop/hadoop-2.4.0/lib/
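To confirm the replacement, check the library again; if the hadoop checknative subcommand is available in this build, it also lists which native libraries can be loaded:
cd /app/hadoop/hadoop-2.4.0/lib/native
file libhadoop.so.1.0.0     # should now report a 64-bit ELF object
hadoop checknative -a       # shows whether the native hadoop/zlib/etc. libraries load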
Error 1:
2014-06-04 18:30:57,450 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
java.lang.IllegalArgumentException: The ServiceName: mapreduce.shuffle set in yarn.nodemanager.aux-services is invalid.The valid service name should only contain a-zA-Z0-9_ and can not start with numbers
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:98)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:220)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:186)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:357)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:404)
2014-06-04 18:30:57,458 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG:
Solution:
vi /app/hadoop/hadoop-2.4.0/etc/hadoop/yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>
改成
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
Summary:
Since Hadoop 2.0, start-all.sh and stop-all.sh are deprecated and being phased out; use start-dfs.sh and start-yarn.sh to start Hadoop instead. See the official documentation for details.
Check on 192.168.1.57:
jps
Check each of 192.168.1.58~60:
jps
HDFS web UI:
http://192.168.1.57:50070/dfshealth.html#tab-overview
YARN ResourceManager web UI:
http://namenode57:8088/cluster/
1. Create the /tmp/input folder:
hadoop fs -mkdir /tmp
hadoop fs -mkdir /tmp/input
2. Copy local files into HDFS:
hadoop fs -put /usr/hadoop/file* /tmp/input
3. Check whether the files were uploaded to HDFS successfully:
hadoop fs -ls /tmp/input
hadoop jar hadoop-mapreduce-examples-2.4.0.jar wordcount /tmp/input /tmp/output
Error log:
2014-06-05 07:01:07,154 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-06-05 07:01:08,156 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-06-05 07:01:09,159 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-06-05 07:01:10,161 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-06-05 07:01:11,164 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-06-05 07:01:12,166 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-06-05 07:01:13,169 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-06-05 07:01:14,171 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Solution: configure the following in yarn-site.xml on every machine:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>namenode57</value>
</property>
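After changing yarn-site.xml on all machines, YARN needs to be restarted for the setting to take effect, for example:
cd /app/hadoop/hadoop-2.4.0/sbin/
./stop-yarn.sh
./start-yarn.sh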