Fully Distributed Installation of a Hadoop Cluster

1、Installation Preparation

1. Installation packages:

Hadoop-2.8.1.tar.gz download: http://hadoop.apache.org/releases.html#Download

JDK 1.8 download: http://www.oracle.com/technetwork/java/javase/downloads/index.html

2、Environment Configuration

1. Static IP configuration

https://my.oschina.net/u/1765168/blog/1571584
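Since the rest of this article refers to the three machines by the hostnames Hadoop1, Hadoop2, and Hadoop3, every node must be able to resolve those names. A minimal sketch of /etc/hosts, assuming placeholder 192.168.1.x addresses (substitute the static IPs you configured above):

```
# /etc/hosts -- identical on all three nodes
# the 192.168.1.x addresses below are placeholders, not from the original article
192.168.1.101   Hadoop1
192.168.1.102   Hadoop2
192.168.1.103   Hadoop3
```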

2. Passwordless SSH login

http://www.javashuo.com/article/p-hrnvzsgv-dd.html
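The linked article covers the details; as a quick sketch, assuming OpenSSH and the root account on each node, the usual steps on the master are:

```shell
# On Hadoop1: generate an RSA key pair with no passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to every node, including Hadoop1 itself
# (start-all.sh also SSHes into the local machine)
ssh-copy-id root@Hadoop1
ssh-copy-id root@Hadoop2
ssh-copy-id root@Hadoop3

# Verify: this should print the remote hostname without a password prompt
ssh root@Hadoop2 hostname
```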

3. JDK installation

Extract the archive: tar -zxvf jdk1.8.0.tar.gz

Configure the environment variables:

export JAVA_HOME=/opt/soft/jdk1.8
export PATH=$JAVA_HOME/bin:$PATH

3、Hadoop Installation

1. Extract the downloaded package

tar -zxvf hadoop-2.8.1.tar.gz

2. Configure environment variables

export HADOOP_HOME=/opt/soft/hadoop-2.8.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
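To make the JDK and Hadoop settings survive a reboot, the export lines are typically appended to /etc/profile (or ~/.bashrc) and then verified; a sketch, assuming the install paths used in this article:

```shell
# Append to /etc/profile so the settings persist across sessions
export JAVA_HOME=/opt/soft/jdk1.8
export HADOOP_HOME=/opt/soft/hadoop-2.8.1
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

# Reload the profile and verify both installs are on the PATH
source /etc/profile
java -version        # should report a 1.8.x JDK
hadoop version       # should report Hadoop 2.8.1
```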

3. Edit the configuration files

Location: your_hadoop_dir/etc/hadoop

(1) hadoop-env.sh

export JAVA_HOME=/opt/soft/jdk1.8

That is, keep it consistent with the system's JAVA_HOME environment variable.

(2) core-site.xml

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/soft/hadoop-2.8.1/tmp</value>
        <final>true</final>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://Hadoop1:9000</value>
        <final>true</final>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
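Note that the snippets here and in the files below are fragments: each configuration file must contain exactly one top-level <configuration> element wrapping all of its <property> entries. A skeleton for core-site.xml (the same shape applies to the other XML files):

```xml
<?xml version="1.0"?>
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://Hadoop1:9000</value>
    </property>
    <!-- ...remaining properties from the snippet above... -->
</configuration>
```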

(3) hdfs-site.xml

    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/local/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/local/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>Hadoop1:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>

(4) mapred-site.xml

    <property>    
        <name>mapreduce.framework.name</name>    
        <value>yarn</value>    
    </property>
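A fresh Hadoop 2.8.1 unpack ships only mapred-site.xml.template, so the file must be created from the template before editing (path shown assumes the install directory used above):

```shell
# mapred-site.xml does not exist out of the box in Hadoop 2.x;
# create it from the bundled template, then add the property above
cd /opt/soft/hadoop-2.8.1/etc/hadoop
cp mapred-site.xml.template mapred-site.xml
```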

(5) yarn-site.xml

    <property>    
        <name>yarn.resourcemanager.address</name>    
        <value>Hadoop1:18040</value>    
    </property>    
    <property>    
        <name>yarn.resourcemanager.scheduler.address</name>    
        <value>Hadoop1:18030</value>    
    </property>    
    <property>    
        <name>yarn.resourcemanager.webapp.address</name>    
        <value>Hadoop1:18088</value>    
    </property>    
    <property>    
        <name>yarn.resourcemanager.resource-tracker.address</name>    
        <value>Hadoop1:18025</value>    
    </property>    
    <property>    
        <name>yarn.resourcemanager.admin.address</name>    
        <value>Hadoop1:18141</value>    
    </property>    
    <property>    
        <name>yarn.nodemanager.aux-services</name>    
        <value>mapreduce_shuffle</value>    
    </property>    
    <property>    
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>    
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>    
    </property> 

(6) slaves

Hadoop2
Hadoop3

(7) Copy the configured Hadoop directory to the other two machines

scp -r /opt/soft/hadoop-2.8.1 root@Hadoop2:/opt/soft/
scp -r /opt/soft/hadoop-2.8.1 root@Hadoop3:/opt/soft/

4、Startup

1. Format the NameNode

hdfs namenode -format

2. Start the cluster

./sbin/start-all.sh
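Once start-all.sh finishes, jps on each node shows which daemons came up; a quick check, assuming the hostnames and passwordless SSH configured above:

```shell
# On the master (Hadoop1) -- expect NameNode, SecondaryNameNode,
# and ResourceManager (plus Jps itself)
jps

# On each worker -- expect DataNode and NodeManager
ssh root@Hadoop2 jps
ssh root@Hadoop3 jps
```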

3. Check node status

hdfs dfsadmin -report

4. Web UI: http://your_ip:50070/
