Compiling Hadoop-2.4.0 on Linux

Compiling Hadoop-2.4.0 on CentOS release 6.3 (64-bit)

  1. Preface
  2. Install dependencies
  3. Compile the Hadoop source
  4. Create the group and user
  5. Appendix 1: build environment
  6. Appendix 2: version information
  7. Appendix 3: common errors

1. Preface

The Hadoop-2.4.0 source tree contains a BUILDING.txt file that describes how to compile the source on Linux and on Windows. This article essentially follows the instructions in BUILDING.txt and distills them into a short walkthrough.
The first build requires internet access: Hadoop's build pulls in a large number of dependencies, and without network access it is hard to resolve every build problem one by one. Builds after the first do not need to download anything again.

2. Install dependencies

Before compiling the Hadoop 2.4.0 source, install the following dependencies:
1) JDK 1.6 or later (this article uses JDK 1.7; do not install JDK 1.8, which is incompatible with Hadoop 2.4.0 and produces many errors when compiling its source)
2) Maven 3.0 or later
3) ProtocolBuffer 2.5.0
4) CMake 2.6 or later
5) Findbugs 1.3.9, optional (not installed for this build)
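
Before installing anything, it can save time to check which of these tools are already on the machine. A minimal check script (a sketch; it assumes the standard binary names javac, mvn, protoc, cmake and findbugs):

```shell
# check_tool prints "<name>: found" or "<name>: MISSING" depending on
# whether the named binary is on the PATH.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: MISSING"
    fi
}

# The five build dependencies listed above.
for tool in javac mvn protoc cmake findbugs; do
    check_tool "$tool"
done
```

Anything reported MISSING is covered by one of the install steps below.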

1. Install ProtocolBuffer

wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
tar xzvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
#./configure --prefix=/usr/local/protobuf-2.5.0
./configure
make && make install
# If configure fails with: configure: error: C++ preprocessor "/lib/cpp" fails sanity check
# install the compilers first:
# yum install gcc gcc-c++

2. Install the JDK

# Download the JDK
wget http://pan.baidu.com/s/1c0vhsWC
# Install
cd /tmp
tar xzvf jdk-7u55-linux-x64.tar.gz
ln -s jdk1.7.0_55 jdk
3. Install Maven

wget http://apache.fayea.com/apache-mirror/maven/maven-3/3.2.1/binaries/apache-maven-3.2.1-bin.tar.gz
cd /tmp
tar xzvf apache-maven-3.2.1-bin.tar.gz
ln -s apache-maven-3.2.1 maven

4. Install CMake

tar -zxvf cmake-2.8.12.1.tar.gz
cd cmake-2.8.12.1
./bootstrap
make && make install
# Verify the installation
cmake --version
5. Install Ant
tar -zxvf apache-ant-1.8.1-bin.tar.gz
cp -r apache-ant-1.8.1 /usr/local/
vi /etc/profile
ANT_HOME=/usr/local/apache-ant-1.8.1
PATH=$JAVA_HOME/bin:$ANT_HOME/bin:$PATH
export ANT_HOME PATH
source /etc/profile
# Verify the version
ant -version

6. Configure conf/settings.xml and add a local (Chinese) mirror.

(1) Add the following to the <mirrors> section:

<mirror>
		<id>nexus-osc</id>
		<mirrorOf>*</mirrorOf>
		<name>Nexusosc</name>
		<url>http://maven.oschina.net/content/groups/public/</url>
	</mirror>

(2) Add the following to the <profiles> section:

<profile>
	  <id>jdk-1.7</id>
	  <activation>
		<jdk>1.7</jdk>
	  </activation>
	  <repositories>
		<repository>
		  <id>nexus</id>
		  <name>local private nexus</name>
		  <url>http://maven.oschina.net/content/groups/public/</url>
		  <releases>
			<enabled>true</enabled>
		  </releases>
		  <snapshots>
			<enabled>false</enabled>
		  </snapshots>
		</repository>
	  </repositories>
	  <pluginRepositories>
		<pluginRepository>
		  <id>nexus</id>
		  <name>local private nexus</name>
		  <url>http://maven.oschina.net/content/groups/public/</url>
		  <releases>
			<enabled>true</enabled>
		  </releases>
		  <snapshots>
			<enabled>false</enabled>
		  </snapshots>
		</pluginRepository>
	  </pluginRepositories>
	</profile>

The resulting complete settings.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

<!--
 | This is the configuration file for Maven. It can be specified at two levels:
 |
 |  1. User Level. This settings.xml file provides configuration for a single user, 
 |                 and is normally provided in ${user.home}/.m2/settings.xml.
 |
 |                 NOTE: This location can be overridden with the CLI option:
 |
 |                 -s /path/to/user/settings.xml
 |
 |  2. Global Level. This settings.xml file provides configuration for all Maven
 |                 users on a machine (assuming they're all using the same Maven
 |                 installation). It's normally provided in 
 |                 ${maven.home}/conf/settings.xml.
 |
 |                 NOTE: This location can be overridden with the CLI option:
 |
 |                 -gs /path/to/global/settings.xml
 |
 | The sections in this sample file are intended to give you a running start at
 | getting the most out of your Maven installation. Where appropriate, the default
 | values (values used when the setting is not specified) are provided.
 |
 |-->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" 
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <!-- localRepository
   | The path to the local repository maven will use to store artifacts.
   |
   | Default: ${user.home}/.m2/repository
  <localRepository>/path/to/local/repo</localRepository>
  -->

  <!-- interactiveMode
   | This will determine whether maven prompts you when it needs input. If set to false,
   | maven will use a sensible default value, perhaps based on some other setting, for
   | the parameter in question.
   |
   | Default: true
  <interactiveMode>true</interactiveMode>
  -->

  <!-- offline
   | Determines whether maven should attempt to connect to the network when executing a build.
   | This will have an effect on artifact downloads, artifact deployment, and others.
   |
   | Default: false
  <offline>false</offline>
  -->

  <!-- pluginGroups
   | This is a list of additional group identifiers that will be searched when resolving plugins by their prefix, i.e.
   | when invoking a command line like "mvn prefix:goal". Maven will automatically add the group identifiers
   | "org.apache.maven.plugins" and "org.codehaus.mojo" if these are not already contained in the list.
   |-->
  <pluginGroups>
    <!-- pluginGroup
     | Specifies a further group identifier to use for plugin lookup.
    <pluginGroup>com.your.plugins</pluginGroup>
    -->
  </pluginGroups>

  <!-- proxies
   | This is a list of proxies which can be used on this machine to connect to the network.
   | Unless otherwise specified (by system property or command-line switch), the first proxy
   | specification in this list marked as active will be used.
   |-->
  <proxies>
    <!-- proxy
     | Specification for one proxy, to be used in connecting to the network.
     |
    <proxy>
      <id>optional</id>
      <active>true</active>
      <protocol>http</protocol>
      <username>proxyuser</username>
      <password>proxypass</password>
      <host>proxy.host.net</host>
      <port>80</port>
      <nonProxyHosts>local.net|some.host.com</nonProxyHosts>
    </proxy>
    -->
  </proxies>

  <!-- servers
   | This is a list of authentication profiles, keyed by the server-id used within the system.
   | Authentication profiles can be used whenever maven must make a connection to a remote server.
   |-->
  <servers>
    <!-- server
     | Specifies the authentication information to use when connecting to a particular server, identified by
     | a unique name within the system (referred to by the 'id' attribute below).
     | 
     | NOTE: You should either specify username/password OR privateKey/passphrase, since these pairings are 
     |       used together.
     |
    <server>
      <id>deploymentRepo</id>
      <username>repouser</username>
      <password>repopwd</password>
    </server>
    -->
    
    <!-- Another sample, using keys to authenticate.
    <server>
      <id>siteServer</id>
      <privateKey>/path/to/private/key</privateKey>
      <passphrase>optional; leave empty if not used.</passphrase>
    </server>
    -->
  </servers>

  <!-- mirrors
   | This is a list of mirrors to be used in downloading artifacts from remote repositories.
   | 
   | It works like this: a POM may declare a repository to use in resolving certain artifacts.
   | However, this repository may have problems with heavy traffic at times, so people have mirrored
   | it to several places.
   |
   | That repository definition will have a unique id, so we can create a mirror reference for that
   | repository, to be used as an alternate download site. The mirror site will be the preferred 
   | server for that repository.
   |-->
  <mirrors>
    <!-- mirror
     | Specifies a repository mirror site to use instead of a given repository. The repository that
     | this mirror serves has an ID that matches the mirrorOf element of this mirror. IDs are used
     | for inheritance and direct lookup purposes, and must be unique across the set of mirrors.
     |
    <mirror>
      <id>mirrorId</id>
      <mirrorOf>repositoryId</mirrorOf>
      <name>Human Readable Name for this Mirror.</name>
      <url>http://my.repository.com/repo/path</url>
    </mirror>
     -->
	<mirror>
		<id>nexus-osc</id>
		<mirrorOf>*</mirrorOf>
		<name>Nexusosc</name>
		<url>http://maven.oschina.net/content/groups/public/</url>
	</mirror>
  </mirrors>
  
  <!-- profiles
   | This is a list of profiles which can be activated in a variety of ways, and which can modify
   | the build process. Profiles provided in the settings.xml are intended to provide local machine-
   | specific paths and repository locations which allow the build to work in the local environment.
   |
   | For example, if you have an integration testing plugin - like cactus - that needs to know where
   | your Tomcat instance is installed, you can provide a variable here such that the variable is 
   | dereferenced during the build process to configure the cactus plugin.
   |
   | As noted above, profiles can be activated in a variety of ways. One way - the activeProfiles
   | section of this document (settings.xml) - will be discussed later. Another way essentially
   | relies on the detection of a system property, either matching a particular value for the property,
   | or merely testing its existence. Profiles can also be activated by JDK version prefix, where a 
   | value of '1.4' might activate a profile when the build is executed on a JDK version of '1.4.2_07'.
   | Finally, the list of active profiles can be specified directly from the command line.
   |
   | NOTE: For profiles defined in the settings.xml, you are restricted to specifying only artifact
   |       repositories, plugin repositories, and free-form properties to be used as configuration
   |       variables for plugins in the POM.
   |
   |-->
  <profiles>
    <!-- profile
     | Specifies a set of introductions to the build process, to be activated using one or more of the
     | mechanisms described above. For inheritance purposes, and to activate profiles via <activatedProfiles/>
     | or the command line, profiles have to have an ID that is unique.
     |
     | An encouraged best practice for profile identification is to use a consistent naming convention
     | for profiles, such as 'env-dev', 'env-test', 'env-production', 'user-jdcasey', 'user-brett', etc.
     | This will make it more intuitive to understand what the set of introduced profiles is attempting
     | to accomplish, particularly when you only have a list of profile id's for debug.
     |
     | This profile example uses the JDK version to trigger activation, and provides a JDK-specific repo.
    <profile>
      <id>jdk-1.4</id>

      <activation>
        <jdk>1.4</jdk>
      </activation>

      <repositories>
        <repository>
          <id>jdk14</id>
          <name>Repository for JDK 1.4 builds</name>
          <url>http://www.myhost.com/maven/jdk14</url>
          <layout>default</layout>
          <snapshotPolicy>always</snapshotPolicy>
        </repository>
      </repositories>
    </profile>
    -->
	<profile>
	  <id>jdk-1.7</id>
	  <activation>
		<jdk>1.7</jdk>
	  </activation>
	  <repositories>
		<repository>
		  <id>nexus</id>
		  <name>local private nexus</name>
		  <url>http://maven.oschina.net/content/groups/public/</url>
		  <releases>
			<enabled>true</enabled>
		  </releases>
		  <snapshots>
			<enabled>false</enabled>
		  </snapshots>
		</repository>
	  </repositories>
	  <pluginRepositories>
		<pluginRepository>
		  <id>nexus</id>
		  <name>local private nexus</name>
		  <url>http://maven.oschina.net/content/groups/public/</url>
		  <releases>
			<enabled>true</enabled>
		  </releases>
		  <snapshots>
			<enabled>false</enabled>
		  </snapshots>
		</pluginRepository>
	  </pluginRepositories>
	</profile>
    <!--
     | Here is another profile, activated by the system property 'target-env' with a value of 'dev',
     | which provides a specific path to the Tomcat instance. To use this, your plugin configuration
     | might hypothetically look like:
     |
     | ...
     | <plugin>
     |   <groupId>org.myco.myplugins</groupId>
     |   <artifactId>myplugin</artifactId>
     |   
     |   <configuration>
     |     <tomcatLocation>${tomcatPath}</tomcatLocation>
     |   </configuration>
     | </plugin>
     | ...
     |
     | NOTE: If you just wanted to inject this configuration whenever someone set 'target-env' to
     |       anything, you could just leave off the <value/> inside the activation-property.
     |
    <profile>
      <id>env-dev</id>

      <activation>
        <property>
          <name>target-env</name>
          <value>dev</value>
        </property>
      </activation>

      <properties>
        <tomcatPath>/path/to/tomcat/instance</tomcatPath>
      </properties>
    </profile>
    -->
  </profiles>

  <!-- activeProfiles
   | List of profiles that are active for all builds.
   |
  <activeProfiles>
    <activeProfile>alwaysActiveProfile</activeProfile>
    <activeProfile>anotherAlwaysActiveProfile</activeProfile>
  </activeProfiles>
  -->
</settings>
Edit the system environment variables:
vi /etc/profile
#==================================
JAVA_HOME=/usr/local/java/jdk1.7
MAVEN_HOME=/usr/local/apache-maven
JRE_HOME=/usr/local/java/jdk1.7/jre
HADOOP_HOME=/usr/local/hadoop-2.4.0
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$MAVEN_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME MAVEN_HOME HADOOP_HOME PATH CLASSPATH
#==================================
source /etc/profile
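
After sourcing /etc/profile, it is worth confirming in the same shell that the variables actually took effect. A small sanity check (a sketch):

```shell
# print_var reports whether the named environment variable is set and non-empty.
print_var() {
    eval "val=\"\$$1\""
    if [ -n "$val" ]; then
        echo "$1 is set: $val"
    else
        echo "$1 is NOT set"
    fi
}

for v in JAVA_HOME JRE_HOME MAVEN_HOME HADOOP_HOME; do
    print_var "$v"
done
```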

3. Compile the Hadoop source

1. Install the build environment

yum install lzo-devel zlib-devel gcc autoconf automake libtool ncurses-devel openssl-devel cmake
# Check whether each package is installed
yum list installed | grep lzo-devel
yum list installed | grep zlib-devel
yum list installed | grep gcc
yum list installed | grep autoconf
yum list installed | grep automake
yum list installed | grep libtool
yum list installed | grep ncurses-devel
yum list installed | grep openssl-devel
yum list installed | grep cmake
# Build dependencies
yum install cmake
yum install openssl-devel
yum install ncurses-devel

2. Download Hadoop

wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.4.0/hadoop-2.4.0-src.tar.gz

3. Compile

cd /tmp
tar -xzvf hadoop-2.4.0-src.tar.gz 
cd hadoop-2.4.0-src
# Run the following command to start compiling the Hadoop source. Do NOT use JDK 1.8.
mvn package -Pdist,native -DskipTests -Dtar
After a successful build, the Hadoop binary package hadoop-2.4.0.tar.gz is placed in the hadoop-dist/target subdirectory of the source tree:
[INFO] Executing tasks

main:
     [exec] $ tar cf hadoop-2.4.0.tar hadoop-2.4.0
     [exec] $ gzip -f hadoop-2.4.0.tar
     [exec] 
     [exec] Hadoop dist tar available at: /tmp/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0.tar.gz
     [exec] 
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /tmp/hadoop-2.4.0-src/hadoop-dist/target/hadoop-dist-2.4.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main ................................ SUCCESS [03:01 min]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [ 55.551 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [ 32.127 s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [  0.360 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [ 41.013 s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [ 37.242 s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [ 54.081 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [ 12.409 s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [  5.654 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [02:23 min]
[INFO] Apache Hadoop NFS ................................. SUCCESS [ 21.001 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [  0.035 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [03:24 min]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [ 34.233 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [ 21.465 s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [  5.849 s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.039 s]
[INFO] hadoop-yarn ....................................... SUCCESS [  0.130 s]
[INFO] hadoop-yarn-api ................................... SUCCESS [01:05 min]
[INFO] hadoop-yarn-common ................................ SUCCESS [ 42.397 s]
[INFO] hadoop-yarn-server ................................ SUCCESS [  0.133 s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [ 10.662 s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [01:29 min]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [  6.342 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [ 37.569 s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [ 32.210 s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [  6.565 s]
[INFO] hadoop-yarn-client ................................ SUCCESS [  8.046 s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [  0.302 s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [  5.085 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [  2.989 s]
[INFO] hadoop-yarn-site .................................. SUCCESS [  0.159 s]
[INFO] hadoop-yarn-project ............................... SUCCESS [ 26.604 s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [  0.204 s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [01:08 min]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [ 43.039 s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [  4.956 s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [ 15.028 s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [ 11.245 s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [ 36.403 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [ 11.345 s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [ 16.516 s]
[INFO] hadoop-mapreduce .................................. SUCCESS [ 16.566 s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [ 17.663 s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [ 46.422 s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [  6.415 s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [  8.993 s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [  6.431 s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [  3.124 s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [  4.719 s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [  0.034 s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [  8.374 s]
[INFO] Apache Hadoop Client .............................. SUCCESS [ 12.531 s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [  0.314 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [ 15.639 s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [  7.511 s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [  0.022 s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [02:01 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 28:53 min
[INFO] Finished at: 2014-05-12T12:16:38+08:00
[INFO] Final Memory: 94M/239M
[INFO] ------------------------------------------------------------------------
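
A quick way to confirm that the build really produced the distribution package, without scrolling through the Maven log (a sketch; the path assumes the source was unpacked under /tmp/hadoop-2.4.0-src):

```shell
# check_tarball reports whether the expected distribution tarball exists.
check_tarball() {
    if [ -f "$1" ]; then
        echo "OK: $1"
    else
        echo "NOT FOUND: $1 (the build did not finish; check the Maven output)"
    fi
}

check_tarball /tmp/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0.tar.gz
```

After unpacking the tarball, `bin/hadoop version` should print 2.4.0, and `bin/hadoop checknative` should report whether the native libraries were compiled in (both commands exist in Hadoop 2.x, though the exact checknative output varies by release).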

Appendix 1: build environment

uname -a
#Linux lq227 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
#==============================================
cat /etc/redhat-release
#CentOS release 6.5 (Final)

4. Create the group and user

1. Create the group hadoop, then create the hadoop user in that group.

groupadd hadoop
# "hadoop" is the user name to create; -d sets the hadoop user's home directory to /home/hadoop
useradd -g hadoop -d /home/hadoop hadoop
# Set the hadoop user's password
passwd hadoop
2. Node layout

Host     Memory   IP address
Master   1G       192.168.55.222
Slave1   1G       192.168.55.227

3. Set the hostnames

vi /etc/sysconfig/network
#---------------add-------------------
NETWORKING=yes
HOSTNAME=master
#=====================
vi /etc/hosts
#--------------add--------------------
192.168.55.222 master
192.168.55.227 slave1

4. Configure Hadoop

cd /usr/local/hadoop-2.4.0/etc/hadoop
vi hadoop-env.sh
# Change export JAVA_HOME=${JAVA_HOME} to:
export JAVA_HOME=/usr/local/java/jdk1.7/

Add the following inside the configuration element:
vi core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.55.222:9000/</value>
  </property>
</configuration>
Add the following inside the configuration element:
vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hdfs/data</value>
  </property>
</configuration>
Add the following inside the configuration element:
vi yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/hadoop/yarn/log</value>
  </property>
</configuration>
vi slaves
# Add the following two hosts, the ones configured in /etc/hosts
master
slave1
Set up passwordless SSH login
ssh
# If bash reports: ssh: command not found
yum install openssh-clients
# Generate a DSA key pair
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/id_dsa.pub slave1:~/.ssh/authorized_keys
# Change the permissions of authorized_keys to 600
chmod 600 /home/hadoop/.ssh/authorized_keys
ssh localhost
ssh slave1
# If login still fails, inspect the login log: tail /var/log/secure -n 20

#Permission denied, please try again.
# If you see this error, delete the .ssh directory and recreate it with: ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
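
One frequent cause of "Permission denied" is that sshd refuses keys when ~/.ssh or authorized_keys is group- or world-readable. A small permission check (a sketch, assuming the usual 700/600 convention and GNU stat as found on CentOS):

```shell
# check_perms compares a path's octal permission bits against an expected value.
check_perms() {
    path=$1
    want=$2
    if [ ! -e "$path" ]; then
        echo "$path: missing"
        return
    fi
    mode=$(stat -c '%a' "$path")
    if [ "$mode" = "$want" ]; then
        echo "$path: $mode ok"
    else
        echo "$path: $mode (expected $want)"
    fi
}

check_perms ~/.ssh 700
check_perms ~/.ssh/authorized_keys 600
```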
Start Hadoop
# Format HDFS
hadoop namenode -format
# On the master machine
sh /usr/local/hadoop-2.4.0/sbin/start-all.sh
# Check the daemons with jps
#27305 SecondaryNameNode
#27123 DataNode
#27029 NameNode
#27848 Jps
#27439 ResourceManager
Open the firewall ports
iptables -I INPUT 5 -p tcp -m state --state NEW -m tcp --dport 50070 -j ACCEPT
iptables -I INPUT 5 -p tcp -m state --state NEW -m tcp --dport 8088 -j ACCEPT
/etc/init.d/iptables save
/etc/init.d/iptables restart

Test in a browser

http://localhost:50070
http://localhost:8088
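
The two web UIs can also be probed from the shell instead of a browser (a sketch; it assumes curl is installed and the default NameNode/ResourceManager web ports):

```shell
# probe_url reports whether an HTTP endpoint answers within two seconds.
probe_url() {
    if curl -s -m 2 -o /dev/null "$1"; then
        echo "$1: reachable"
    else
        echo "$1: unreachable"
    fi
}

probe_url http://localhost:50070   # NameNode web UI
probe_url http://localhost:8088    # ResourceManager web UI
```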