Apache Hadoop Cluster Installation

3 machines:
xupan001               xupan002               xupan003
HDFS:
NameNode (nn1)         NameNode (nn2)
DataNode               DataNode               DataNode
JournalNode            JournalNode            JournalNode
(with HA via QJM there is no SecondaryNameNode; the standby NameNode takes over checkpointing)
YARN:
                       ResourceManager
NodeManager            NodeManager            NodeManager
ZooKeeper:
QuorumPeer             QuorumPeer             QuorumPeer

MapReduce:
JobHistoryServer

Files to configure:
HDFS:
hadoop-env.sh
core-site.xml
hdfs-site.xml
slaves (lists the DataNode hosts)

YARN:
yarn-site.xml
yarn-env.sh
slaves (lists the NodeManager hosts)

MapReduce:
mapred-env.sh
mapred-site.xml





1. Set JAVA_HOME in hadoop-env.sh, yarn-env.sh, and mapred-env.sh:
export JAVA_HOME=/usr/local/devtools/jdk/jdk1.7.0_45
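
All three scripts live in etc/hadoop/. One quick way to apply the same change to all of them (a sketch assuming each file still has its stock "export JAVA_HOME=..." line; if the line is commented out, uncomment or append it instead):

for f in etc/hadoop/hadoop-env.sh etc/hadoop/yarn-env.sh etc/hadoop/mapred-env.sh; do
  sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/devtools/jdk/jdk1.7.0_45|' "$f"
done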

2. Edit core-site.xml:
<configuration>

     <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>

     <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/devtools/hadoop/hadoop-2.5.0/data/tmp</value>
    </property>

     <property>
        <name>fs.trash.interval</name>
        <value>420</value>
    </property>

   <property>
        <name>ha.zookeeper.quorum</name>
        <value>xupan001:2181,xupan002:2181,xupan003:2181</value>
    </property>

</configuration>
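
Note that fs.trash.interval is in minutes, so 420 keeps deleted files in the trash for 7 hours. A quick sanity check after editing (hdfs getconf reads the local configuration):

bin/hdfs getconf -confKey fs.defaultFS       # should print hdfs://ns1
bin/hdfs getconf -confKey fs.trash.interval  # should print 420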


3. Edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>

<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>xupan001:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>xupan002:8020</value>
</property>

<property>
  <name>dfs.namenode.http-address.ns1.nn1</name>
  <value>xupan001:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1.nn2</name>
  <value>xupan002:50070</value>
</property>


<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://xupan001:8485;xupan002:8485;xupan003:8485/ns1</value>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/usr/local/devtools/hadoop/hadoop-2.5.0/data/dfs/jn</value>
</property>


<property>
  <name>dfs.client.failover.proxy.provider.ns1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>

<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_rsa</value>
</property>

 <property>
   <name>dfs.ha.automatic-failover.enabled</name>
   <value>true</value>
 </property>

</configuration>
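
The sshfence method requires each NameNode to SSH to the other without a password, using the private key configured above. A minimal sketch, assuming the daemons run as root:

ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa   # on xupan001 and xupan002, if no key exists yet
ssh-copy-id root@xupan002                      # run on xupan001
ssh-copy-id root@xupan001                      # run on xupan002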



4. Edit mapred-site.xml:
<configuration>
    <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
    </property>

    <property>
         <name>mapreduce.jobhistory.address</name>
         <value>xupan001:10020</value>
    </property>

    <property>
         <name>mapreduce.jobhistory.webapp.address</name>
         <value>xupan001:19888</value>
    </property>
</configuration>
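
With these addresses the JobHistory server runs on xupan001. It is not started by the HDFS/YARN scripts; start it separately once the cluster is up:

sbin/mr-jobhistory-daemon.sh start historyserver   # run on xupan001
# web UI: http://xupan001:19888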



5. Edit yarn-site.xml:
<configuration>
  <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
  </property>

  <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>xupan002</value>
  </property>
</configuration>
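
Since yarn.resourcemanager.hostname is xupan002, start YARN from that machine; start-yarn.sh launches the ResourceManager locally and a NodeManager on every host listed in slaves:

sbin/start-yarn.sh   # run on xupan002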

6. Edit slaves:
xupan001
xupan002
xupan003
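
The configuration must be identical on all three machines. A minimal way to propagate it, assuming the same install path everywhere:

scp -r etc/hadoop root@xupan002:/usr/local/devtools/hadoop/hadoop-2.5.0/etc/
scp -r etc/hadoop root@xupan003:/usr/local/devtools/hadoop/hadoop-2.5.0/etc/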





ZK: on each machine, create a myid file under /usr/local/devtools/zookeeper/zookeeper-3.4.5/data/zkData, containing 1, 2, and 3 respectively.

7. Edit zoo.cfg:
dataDir=/usr/local/devtools/zookeeper/zookeeper-3.4.5/data/zkData
server.1=xupan001:2888:3888
server.2=xupan002:2888:3888
server.3=xupan003:2888:3888
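
For example, on xupan001 (write 2 and 3 into myid on xupan002 and xupan003; zkServer.sh ships with ZooKeeper):

echo 1 > /usr/local/devtools/zookeeper/zookeeper-3.4.5/data/zkData/myid
bin/zkServer.sh start
bin/zkServer.sh status   # one node reports "leader", the other two "follower"

ZooKeeper must be running on all three nodes before the HA steps below.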






Start a JournalNode on each node:
./sbin/hadoop-daemon.sh start journalnode
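
To confirm they are up (jps ships with the JDK):

jps   # each of the three machines should now show a JournalNode process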


On xupan001, format the NameNode and then start it:
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode

On xupan002, bootstrap the standby NameNode (copies nn1's metadata) and then start it:
bin/hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode

On xupan001, switch nn1 to active:
bin/hdfs haadmin -transitionToActive nn1
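
Note: because dfs.ha.automatic-failover.enabled is true, haadmin will refuse a manual transition unless --forcemanual is added. The usual automatic-failover flow is to let the ZKFCs elect the active NameNode instead (with the ZooKeeper quorum running):

bin/hdfs zkfc -formatZK                  # once, on one NameNode: creates the HA znode in ZooKeeper
sbin/hadoop-daemon.sh start zkfc         # on xupan001 and on xupan002
bin/hdfs haadmin -getServiceState nn1    # verify: one NameNode reports active,
bin/hdfs haadmin -getServiceState nn2    # the other standby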