Setting up a Hadoop Distributed Environment

Hadoop Distributed Installation

Notes

  1. This walkthrough uses three machines, all running CentOS 6.
  2. Except for the final start/stop steps, every operation must be performed on all three machines.
  3. Some configuration files can be completed on one machine and then pushed to the others with scp, to save work.

1. Set the hostname and host mappings

[root@hadoop1 ~]# vim /etc/sysconfig/network

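The screenshot of the file contents is not preserved; on CentOS 6 the hostname is set in /etc/sysconfig/network, typically like this (hostname hadoop1 assumed for the first node; use hadoop2 and hadoop3 on the other machines):

```
NETWORKING=yes
HOSTNAME=hadoop1
```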

[root@hadoop1 ~]# vim /etc/hosts

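The screenshot is not preserved; /etc/hosts needs one entry per node on every machine. The IP addresses below are placeholders, so substitute your own:

```
192.168.1.101 hadoop1
192.168.1.102 hadoop2
192.168.1.103 hadoop3
```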

2. Create the hadoop user

[root@hadoop3 ~]# useradd hadoop   

[root@hadoop2 ~]# passwd hadoop

[root@hadoop3 ~]# vim /etc/sudoers

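The screenshot is not preserved; the usual edit to /etc/sudoers is to grant the hadoop user sudo rights by adding a line below the root entry:

```
hadoop  ALL=(ALL)       ALL
```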

3. Passwordless login

Switch to the hadoop user and generate a key pair
[hadoop@hadoop1 ~]$ ssh-keygen
Copy the public key to each node
[hadoop@hadoop1 ~]$ ssh-copy-id hadoop1
[hadoop@hadoop1 ~]$ ssh-copy-id hadoop2
[hadoop@hadoop1 ~]$ ssh-copy-id hadoop3
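ssh-keygen above walks through interactive prompts. If you are scripting the setup, a non-interactive sketch looks like this (the key path /tmp/demo_rsa is hypothetical; the walkthrough itself uses the default ~/.ssh/id_rsa):

```shell
# Generate an RSA key pair without prompts: -N "" sets an empty passphrase,
# -f picks the output file, -q silences progress output.
ssh-keygen -t rsa -N "" -f /tmp/demo_rsa -q
# Two files appear: the private key, and the .pub public key that
# ssh-copy-id appends to the remote node's authorized_keys.
ls /tmp/demo_rsa /tmp/demo_rsa.pub
```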


Verify that ssh works without a password
[hadoop@hadoop1 ~]$ ssh hadoop1
[hadoop@hadoop1 ~]$ ssh hadoop2
[hadoop@hadoop1 ~]$ ssh hadoop3

4. Install the JDK

[hadoop@hadoop1 ~]$ tar -zxvf jdk-8u161-linux-x64.tar.gz 
[hadoop@hadoop1 ~]$ mv jdk1.8.0_161 jdk8
Switch to root
[root@hadoop1 ~]# vim /etc/profile

Append to the end of the file:
#Java
export JAVA_HOME=/home/hadoop/jdk8
export PATH=$PATH:$JAVA_HOME/bin

Switch back to the hadoop user
[hadoop@hadoop1 ~]$ source /etc/profile

Verify:
[hadoop@hadoop1 ~]$ java -version

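The screenshot is not preserved; for this JDK the first line of the output should read:

```
java version "1.8.0_161"
```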

5. Install Hadoop

1. Cluster layout

Node      HDFS                          YARN
hadoop1   namenode, datanode            nodemanager
hadoop2   datanode, secondarynamenode   nodemanager
hadoop3   datanode                      resourcemanager, nodemanager

[hadoop@hadoop1 ~]$ wget https://www-us.apache.org/dist/hadoop/common/hadoop-2.7.6/hadoop-2.7.6.tar.gz 
[hadoop@hadoop1 ~]$ tar -zxvf hadoop-2.7.6.tar.gz
Switch to root and append the Hadoop environment variables (listed in the next section) to /etc/profile, then reload it as the hadoop user:
[root@hadoop1 ~]# vim /etc/profile
[hadoop@hadoop1 ~]$ source /etc/profile

Verify:
[hadoop@hadoop1 ~]$ hadoop version

2. Configuration files

1. Hadoop environment variables

#Java
export JAVA_HOME=/home/hadoop/jdk8
export HADOOP_HOME=/home/hadoop/hadoop-2.7.6
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

2. Edit the configuration files

The configuration files live in:

hadoop-2.7.6/etc/hadoop/
[hadoop@hadoop3 ~]$ cd hadoop-2.7.6/etc/hadoop/
1.hadoop-env.sh

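The screenshot is not preserved; the usual change in hadoop-env.sh is to hard-code JAVA_HOME, because the ${JAVA_HOME} default is not reliably picked up when daemons are launched over ssh:

```
export JAVA_HOME=/home/hadoop/jdk8
```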

2.core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoopdata/tmp</value>
    </property>
</configuration>
3.hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop2:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/hadoopdata/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/hadoopdata/data</value>
    </property>
</configuration>
4.yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop3</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
5.mapred-site.xml (Hadoop 2.x ships only mapred-site.xml.template, so copy it first: cp mapred-site.xml.template mapred-site.xml)
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop1:19888</value>
    </property>
</configuration>
6.slaves
One worker hostname per line; these hosts run the datanode and nodemanager daemons:
hadoop1
hadoop2
hadoop3
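core-site.xml and hdfs-site.xml above point at local directories under /home/hadoop/hadoopdata. Hadoop can create them itself, but pre-creating them on every node avoids permission surprises; a sketch (using $HOME, which resolves to /home/hadoop when run as the hadoop user):

```shell
# Create the tmp/name/data directories referenced in core-site.xml
# (hadoop.tmp.dir) and hdfs-site.xml (dfs.namenode.name.dir,
# dfs.datanode.data.dir).
mkdir -p "$HOME/hadoopdata/tmp" "$HOME/hadoopdata/name" "$HOME/hadoopdata/data"
ls "$HOME/hadoopdata"
```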

3. Format the NameNode

Run this once, on the namenode host only (hadoop namenode -format is deprecated in 2.x in favour of hdfs namenode -format, but both still work):
[hadoop@hadoop1 hadoop-2.7.6]$ hadoop namenode -format

4. Start and stop

[hadoop@hadoop1 hadoop-2.7.6]$ start-dfs.sh

Note: YARN is best started on the resourcemanager node (hadoop3):

[hadoop@hadoop3 hadoop-2.7.6]$ start-yarn.sh
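The matching stop scripts mirror the start commands (stop-dfs.sh and stop-yarn.sh ship alongside them in sbin):

```
[hadoop@hadoop1 hadoop-2.7.6]$ stop-dfs.sh
[hadoop@hadoop3 hadoop-2.7.6]$ stop-yarn.sh
```

After start-up, running jps on each node should list the daemons planned in the cluster layout above.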

6. Time synchronization

[root@hadoop1 ~]# date
Thu Jan 17 00:10:16 EST 2019  // overseas VPS, timezone is wrong
[root@hadoop1 ~]# rm -rf /etc/localtime
[root@hadoop1 ~]# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@hadoop1 ~]# yum -y install ntpdate ntp
[root@hadoop1 ~]# ntpdate time.google.com
17 Jan 13:11:43 ntpdate[1691]: step time server 216.239.35.0 offset 2.120207 sec
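A quick way to confirm what the Asia/Shanghai zone resolves to, without touching /etc/localtime (China Standard Time, UTC+8, with no daylight saving):

```shell
# Print the timezone abbreviation and UTC offset for Asia/Shanghai;
# after the symlink change above, a bare `date` reports the same zone.
TZ=Asia/Shanghai date "+%Z %z"
# → CST +0800
```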