[Hadoop][Notes] Building a 4-Node Hadoop 2.x HA Test Cluster

Setting up Hadoop 2.x HA

1. Machine Preparation

4 virtual machines:

10.211.55.22 node1
10.211.55.23 node2
10.211.55.24 node3
10.211.55.25 node4

2. Role Assignment Across the Four Hosts

node  | namenode | datanode | zk | zkfc | jn | rm | applimanager
node1 |    1     |          | 1  |  1   |    |    |
node2 |    1     |    1     | 1  |  1   | 1  |    |      1
node3 |          |    1     | 1  |      | 1  | 1  |      1
node4 |          |    1     |    |      | 1  | 1  |      1

Summary:

node  | processes listed by jps (including Jps itself)
node1 | 4
node2 | 7
node3 | 6
node4 | 5

3. Preparation on All Machines

3.1 Hostname and hosts/DNS configuration on every machine

Rename the virtual machines.

On the Mac host, update the DNS/hosts entries for node1 node2 node3 node4.

Set the hostname on each machine to node1, node2, node3, node4 respectively:

vi /etc/sysconfig/network
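For reference, on a CentOS 6-style system (consistent with the iptables/chkconfig commands used below) the file would look roughly like the sketch below, with HOSTNAME adjusted per machine:

NETWORKING=yes
HOSTNAME=node1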

On the host machine and on node1 node2 node3 node4:
vi /etc/hosts

10.211.55.22   node1
10.211.55.23   node2
10.211.55.24   node3
10.211.55.25   node4

Reboot.

3.2 Disable the firewall

service iptables stop && chkconfig iptables off

Check:

service iptables status

3.3 Configure passwordless SSH

DSA keys are used here.

On node1 node2 node3 node4, generate a key and enable passwordless login to the machine itself:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

From node1, copy the public key to node2 node3 node4:

scp ~/.ssh/id_dsa.pub root@node2:~
scp ~/.ssh/id_dsa.pub root@node3:~
scp ~/.ssh/id_dsa.pub root@node4:~

On node2 node3 node4, append it:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys

From node2, copy the public key to node1 node3 node4:

scp ~/.ssh/id_dsa.pub root@node1:~
scp ~/.ssh/id_dsa.pub root@node3:~
scp ~/.ssh/id_dsa.pub root@node4:~

On node1 node3 node4, append it:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys

From node3, copy the public key to node1 node2 node4:

scp ~/.ssh/id_dsa.pub root@node1:~
scp ~/.ssh/id_dsa.pub root@node2:~
scp ~/.ssh/id_dsa.pub root@node4:~

On node1 node2 node4, append it:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys

From node4, copy the public key to node1 node2 node3:

scp ~/.ssh/id_dsa.pub root@node1:~
scp ~/.ssh/id_dsa.pub root@node2:~
scp ~/.ssh/id_dsa.pub root@node3:~

On node1 node2 node3, append it:
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys
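As a quick check from node1 (and likewise from each of the other nodes), every command below should print the remote hostname without asking for a password:

for h in node2 node3 node4; do ssh $h hostname; done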

3.4 Time synchronization (ntp)

All machines:

yum install ntp
ntpdate -u s2m.time.edu.cn

Sync once when the machines come up, to be on the safe side; ideally set up time synchronization within the LAN so the nodes stay in sync (a hedged cron sketch follows).
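A minimal sketch of periodic sync via cron, assuming ntpdate is installed at /usr/sbin/ntpdate and the NTP server is reachable from every node:

crontab -e
# re-sync the clock every 10 minutes
*/10 * * * * /usr/sbin/ntpdate -u s2m.time.edu.cn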

Check: date

3.5 Install the Java JDK

Install the JDK and configure the environment variables.

All machines.

Uninstall OpenJDK:

java -version
rpm -qa | grep jdk
rpm -e --nodeps java-1.6.0-openjdk-javadoc-1.6.0.0-1.41.1.10.4.el6.x86_64
...
rpm -qa | grep jdk

Install the JDK:
rpm -ivh jdk-7u67-linux-x64.rpm 
vi ~/.bash_profile 

export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
source ~/.bash_profile

Check:

java -version

3.6 Upload and extract the software

Upload hadoop-2.5.1_x64.tar.gz:

scp /Users/mac/Documents/happyup/study/files/hadoop/hadoop-2.5.1_x64.tar.gz root@node1:/home
Repeat for node2, node3, node4.

Upload ZooKeeper:

scp /Users/mac/Documents/happyup/study/files/hadoop/ha/zookeeper-3.4.6.tar.gz root@node1:/home
Repeat for node2, node3.

Extract:

On node1 node2 node3 node4:
tar -xzvf /home/hadoop-2.5.1_x64.tar.gz

On node1 node2 node3:
tar -xzvf /home/zookeeper-3.4.6.tar.gz

3.7 Snapshot

Recap of the full Hadoop HA preparation work:

3.1 Hostname and hosts/DNS configuration on every machine

3.2 Disable the firewall

3.3 Passwordless SSH between all machines

3.4 Time synchronization (ntp)

3.5 Install the Java JDK

3.6 Upload and extract Hadoop and ZooKeeper

Take a VM snapshot at this point; it can also be reused for other machines.

4. ZooKeeper Installation and Configuration

4.1 Edit the configuration file zoo.cfg

ssh root@node1 
cp /home/zookeeper-3.4.6/conf/zoo_sample.cfg /home/zookeeper-3.4.6/conf/zoo.cfg

vi zoo.cfg

Set dataDir=/opt/zookeeper
and append at the end:
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
:wq
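For reference, the resulting zoo.cfg should look roughly like the sketch below; tickTime, initLimit, syncLimit and clientPort are the defaults carried over from zoo_sample.cfg:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888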

4.2 Create the data directory

In the dataDir directory:
mkdir /opt/zookeeper
cd /opt/zookeeper
ls 
vi myid, write 1, then :wq
Copy the directory to node2 and node3:
scp -r /opt/zookeeper/ root@node2:/opt   (change myid to 2)
scp -r /opt/zookeeper/ root@node3:/opt   (change myid to 3)
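Equivalent sketch using echo instead of editing myid by hand (run from node1 after the scp above):

echo 1 > /opt/zookeeper/myid
ssh root@node2 "echo 2 > /opt/zookeeper/myid"
ssh root@node3 "echo 3 > /opt/zookeeper/myid"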

4.3 Sync the configuration

Copy the ZooKeeper config to node2 and node3:
scp -r /home/zookeeper-3.4.6/conf root@node2:/home/zookeeper-3.4.6/conf

scp -r /home/zookeeper-3.4.6/conf root@node3:/home/zookeeper-3.4.6/conf

4.4 Add environment variables

node1 node2 node3

Add to PATH:
vi ~/.bash_profile

export ZOOKEEPER_HOME=/home/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

source ~/.bash_profile

4.5 Start

Start:
cd to ZooKeeper's bin directory:
zkServer.sh start
jps:
3214 QuorumPeerMain

Start node1, node2, node3 in turn.
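A quick sanity check on each of the three nodes; one should report Mode: leader and the other two Mode: follower:

zkServer.sh status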

5. Hadoop Installation and Configuration

5.1 hadoop-env.sh

cd /home/hadoop-2.5.1/etc/hadoop/
vi hadoop-env.sh 
    Change: export JAVA_HOME=/usr/java/jdk1.7.0_67

5.2 slaves

vi slaves   
node2
node3
node4

5.3 hdfs-site.xml

vi hdfs-site.xml

<property>
    <name>dfs.nameservices</name>
    <value>cluster1</value>
 </property>


<property>
 <name>dfs.ha.namenodes.cluster1</name>
  <value>nn1,nn2</value>
</property>


<property>
 <name>dfs.namenode.rpc-address.cluster1.nn1</name>
 <value>node1:8020</value>
</property>
<property>
 <name>dfs.namenode.rpc-address.cluster1.nn2</name>
 <value>node2:8020</value>
</property>


<property>
 <name>dfs.namenode.http-address.cluster1.nn1</name>
 <value>node1:50070</value>
</property>
<property>
 <name>dfs.namenode.http-address.cluster1.nn2</name>
 <value>node2:50070</value>
</property>



<property>
 <name>dfs.namenode.shared.edits.dir</name>
 <value>qjournal://node2:8485;node3:8485;node4:8485/cluster1</value>
</property>


<property>
 <name>dfs.client.failover.proxy.provider.cluster1</name>
 <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>


<property>
 <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
 
<property>
 <name>dfs.ha.fencing.ssh.private-key-files</name>
 <value>/root/.ssh/id_dsa</value>
</property>


<property>
 <name>dfs.journalnode.edits.dir</name>
  <value>/opt/journal/data</value>
</property>

<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

5.4 core-site.xml

vi core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cluster1</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>

5.5 mapred-site.xml

cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

5.6 yarn-site.xml

vi yarn-site.xml (no separate applimanager/NodeManager list is needed, since they run on the same nodes as the datanodes)

 <property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
 </property>
 <property>
   <name>yarn.resourcemanager.ha.enabled</name>
   <value>true</value>
 </property>
 <property>
   <name>yarn.resourcemanager.cluster-id</name>
   <value>rm</value>
 </property>
 <property>
   <name>yarn.resourcemanager.ha.rm-ids</name>
   <value>rm1,rm2</value>
 </property>
 <property>
   <name>yarn.resourcemanager.hostname.rm1</name>
   <value>node3</value>
 </property>
 <property>
   <name>yarn.resourcemanager.hostname.rm2</name>
   <value>node4</value>
 </property>
 <property>
   <name>yarn.resourcemanager.zk-address</name>
   <value>node1:2181,node2:2181,node3:2181</value>
 </property>

5.7 Sync the configuration files

Sync to node2 node3 node4:

scp /home/hadoop-2.5.1/etc/hadoop/* root@node2:/home/hadoop-2.5.1/etc/hadoop
scp /home/hadoop-2.5.1/etc/hadoop/* root@node3:/home/hadoop-2.5.1/etc/hadoop
scp /home/hadoop-2.5.1/etc/hadoop/* root@node4:/home/hadoop-2.5.1/etc/hadoop

5.8 Update environment variables

node1 node2 node3 node4

vi ~/.bash_profile
export HADOOP_HOME=/home/hadoop-2.5.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source ~/.bash_profile
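A quick check on each node that the variables took effect:

echo $HADOOP_HOME
hadoop version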

5.9 Startup

1. Start ZooKeeper on node1 node2 node3

Start:
cd to ZooKeeper's bin directory:
zkServer.sh start
jps:
3214 QuorumPeerMain

Start node1, node2, node3 in turn.

2. Start the JournalNodes (needed before formatting the NameNode). If this is a second reconfiguration, first delete /opt/hadoop and /opt/journal/data on node1 node2 node3 node4.

Run on node2, node3, node4:

./hadoop-daemon.sh start journalnode

Verify with jps that a JournalNode process is running.

3. Format one NameNode (node1)

cd bin
./hdfs namenode -format
Check the console output and verify that files were created in the working directory.

4. Sync this NameNode's edits/metadata to the other NameNode (node2); the NameNode being copied from (node1) must be running first.

cd sbin
./hadoop-daemon.sh start namenode
Check the log: cd ../logs, then tail -n50 the hadoop-root-namenode log file.

5. Sync the edits files

Run on the NameNode that was NOT formatted (node2):
cd bin
./hdfs namenode -bootstrapStandby
Check on node2 that files were created.

6. On node1, stop all services

cd sbin
./stop-dfs.sh

7. Initialize ZKFC (ZooKeeper must be running); run on either NameNode

cd bin
./hdfs zkfc -formatZK
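An optional check that the HA znode was created, using the ZooKeeper CLI:

cd /home/zookeeper-3.4.6/bin
./zkCli.sh -server node1:2181
ls /hadoop-ha
# should list the nameservice, e.g. [cluster1]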

8. Start the cluster

cd sbin:
./start-dfs.sh

sbin/start-yarn.sh
jps: check for ResourceManager and NodeManager processes
RM web UI: node3:8088 / node4:8088 (see step 9)

Or simply use start-all.sh.
In 2.x the ResourceManagers must be started manually on node3 and node4:
yarn-daemon.sh start resourcemanager
yarn-daemon.sh stop resourcemanager

9. Verify startup and test

jps
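Based on the role table in section 2, jps on each node should show roughly the following processes (plus Jps itself):

node1: NameNode, QuorumPeerMain, DFSZKFailoverController
node2: NameNode, DataNode, QuorumPeerMain, DFSZKFailoverController, JournalNode, NodeManager
node3: DataNode, QuorumPeerMain, JournalNode, ResourceManager, NodeManager
node4: DataNode, JournalNode, ResourceManager, NodeManager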

HDFS web UI:
http://node1:50070
http://node2:50070 (standby)

RM web UI:
http://node3:8088
http://node4:8088

Upload a file:
cd bin
./hdfs dfs -mkdir -p /usr/file
./hdfs dfs -put /usr/local/jdk /usr/file
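To confirm the file landed, list the target directory:

./hdfs dfs -ls /usr/file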

Stop one RM and observe the effect; stop one NameNode and observe the failover (a hedged test sketch follows).
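A sketch of the failover test, assuming nn1 (node1) is currently active and the commands are run from the hadoop-2.5.1 directory:

bin/hdfs haadmin -getServiceState nn1      # expect: active
bin/hdfs haadmin -getServiceState nn2      # expect: standby
sbin/hadoop-daemon.sh stop namenode        # run on node1 to stop the active NameNode
bin/hdfs haadmin -getServiceState nn2      # should report active after ZKFC fails over
# likewise for YARN:
bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2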

10. Troubleshooting

1. Check the console output
2. Check jps
3. Check the logs on the corresponding node

4. Before reformatting, delete the Hadoop working directory (/opt/hadoop) and the JournalNode working directory (/opt/journal/data).