1) Hadoop Cluster Setup

Operating system

    CentOS 7.2

Network environment

hostname    ip               role
hadoop001   192.168.252.164  HDFS: NameNode, DataNode, SecondaryNameNode
                             YARN: ResourceManager, NodeManager
hadoop002   192.168.252.165  HDFS: DataNode
                             YARN: NodeManager
hadoop003   192.168.252.166  HDFS: DataNode
                             YARN: NodeManager

Software packages:

    jdk-7u55-linux-x64.tar.gz

    hadoop-2.6.4.tar.gz

 

1. Preparation

1.1 Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

1.2 Disable SELinux

vi /etc/selinux/config

   SELINUX=disabled
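The edit above can also be scripted. A minimal sketch, shown against a scratch copy of the file (on a real node, point `sed` at /etc/selinux/config and run as root; the file change takes effect after a reboot, while `setenforce 0` stops enforcement immediately):

```shell
# Demonstrated on a scratch copy; on a real node the target would be
# /etc/selinux/config.
cfg=/tmp/selinux-config-demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Rewrite the SELINUX= line in place
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"

grep '^SELINUX=' "$cfg"    # SELINUX=disabled
```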

1.3 Configure the network

vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
NAME=eno16777736
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.252.164
NETMASK=255.255.255.0
GATEWAY=192.168.252.1
systemctl restart network

1.4 Set the hostname

hostnamectl set-hostname hadoop001

(On CentOS 7, editing the HOSTNAME= line in /etc/sysconfig/network no longer takes effect; hostnamectl is the supported method.)

1.5 Configure hosts

vi /etc/hosts
192.168.252.164 hadoop001
192.168.252.165 hadoop002
192.168.252.166 hadoop003
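The same entries must go into /etc/hosts on every node; a sketch of appending them with a heredoc, shown against a scratch file standing in for /etc/hosts:

```shell
# /tmp/hosts-demo stands in for /etc/hosts here.
hosts=/tmp/hosts-demo
: > "$hosts"
cat >> "$hosts" <<'EOF'
192.168.252.164 hadoop001
192.168.252.165 hadoop002
192.168.252.166 hadoop003
EOF
grep -c hadoop "$hosts"    # 3
```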

1.6 Configure passwordless SSH between the nodes

Generate a key pair (creates id_rsa and id_rsa.pub under ~/.ssh):

ssh-keygen -t rsa

Copy the public key (in the ~/.ssh directory):

cp id_rsa.pub authorized_keys

After running this on every node, merge the authorized_keys files from all nodes and overwrite each node's authorized_keys with the merged file.
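The merge step can be sketched as follows; the node*.pub files are illustrative placeholders for the id_rsa.pub collected from each host (running ssh-copy-id from every node to every other node achieves the same result):

```shell
# Placeholder keys standing in for the id_rsa.pub gathered from each node.
mkdir -p /tmp/keydemo && cd /tmp/keydemo
printf 'ssh-rsa AAAAB3...key1 root@hadoop001\n' > node1.pub
printf 'ssh-rsa AAAAB3...key2 root@hadoop002\n' > node2.pub
printf 'ssh-rsa AAAAB3...key3 root@hadoop003\n' > node3.pub

# Merge into one authorized_keys, then copy it back to every node's ~/.ssh
cat node1.pub node2.pub node3.pub > authorized_keys
chmod 600 authorized_keys    # sshd ignores key files with loose permissions
wc -l < authorized_keys      # 3
```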

1.7 Install the JDK

tar zxvf jdk-7u55-linux-x64.tar.gz

Configure the Java environment variables:

vi ~/.bashrc
export JAVA_HOME=/usr/jdk1.7.0_55
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source ~/.bashrc

 

2. Setting up node one

2.1 Extract Hadoop (into /opt)

tar zxvf hadoop-2.6.4.tar.gz
mv hadoop-2.6.4 hadoop

2.2 Configure environment variables

vi /etc/profile
export JAVA_HOME=/usr/jdk1.7.0_55
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile

2.3 Edit the configuration files (in /opt/hadoop/etc/hadoop)

core-site.xml

<property>
  <!-- fs.defaultFS replaces the deprecated fs.default.name -->
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop001:9000</value>
</property>

hdfs-site.xml

<property>
  <!-- dfs.namenode.name.dir replaces the deprecated dfs.name.dir -->
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/data/namenode</value>
</property>

<property>
  <!-- dfs.datanode.data.dir replaces the deprecated dfs.data.dir -->
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/data/datanode</value>
</property>

<property>
  <!-- there is no standard dfs.tmp.dir property; the scratch directory is
       normally configured as hadoop.tmp.dir (usually in core-site.xml) -->
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/data/tmp</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
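One detail the step above assumes: the Hadoop 2.6 tarball ships only mapred-site.xml.template, so the file is usually created by copying the template first. A sketch, using a scratch directory standing in for /opt/hadoop/etc/hadoop:

```shell
# Scratch directory standing in for /opt/hadoop/etc/hadoop.
conf=/tmp/hadoop-conf-demo
mkdir -p "$conf"
printf '<configuration>\n</configuration>\n' > "$conf/mapred-site.xml.template"

# Create mapred-site.xml from the shipped template, then edit it
cp "$conf/mapred-site.xml.template" "$conf/mapred-site.xml"
ls "$conf"
```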

yarn-site.xml

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop001</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

slaves

hadoop001
hadoop002
hadoop003

 

 

3. Setting up nodes two and three

3.1 Copy the hadoop directory to nodes two and three

scp -r /opt/hadoop 192.168.252.165:/opt
scp -r /opt/hadoop 192.168.252.166:/opt

3.2 Copy the environment variable file

scp /etc/profile 192.168.252.165:/etc
scp /etc/profile 192.168.252.166:/etc

3.3 Create the data directory (needed on every node, including hadoop001)

mkdir /usr/local/data
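hdfs-site.xml above points the NameNode and DataNode at subdirectories of /usr/local/data; a sketch of creating that layout, demonstrated under /tmp (on a real node the base would be /usr/local/data):

```shell
# /tmp/data-demo stands in for /usr/local/data on a real node.
base=/tmp/data-demo
mkdir -p "$base"/namenode "$base"/datanode "$base"/tmp
ls "$base"    # datanode, namenode, tmp
```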

 

4. Startup

4.1 Format HDFS

    hdfs namenode -format

4.2 Start the HDFS cluster

    start-dfs.sh

4.3 Verify

    Use jps, or the web UI on port 50070:

        hadoop001: NameNode, DataNode, SecondaryNameNode

        hadoop002: DataNode

        hadoop003: DataNode

4.4 Start YARN

    start-yarn.sh

4.5 Verify:

    Use jps, or the web UI on port 8088:

    hadoop001: ResourceManager, NodeManager

    hadoop002: NodeManager

    hadoop003: NodeManager
