Linux -- Implementing Automatic-Failover HA for HDFS (on a Fresh HDFS)

JDK Planning

                1.7 or later  https://blog.csdn.net/meiLin_Ya/article/details/80650945

Firewall Planning

    Disable the system firewall

Passwordless SSH Planning

            hadoop01(nn1) --> hadoop01(nn1)  passwordless login required
            hadoop01(nn1) --> hadoop02(nn2)  passwordless login required
            hadoop01(nn1) --> hadoop03(dn)   passwordless login required
            hadoop02(nn2) --> hadoop01(nn1)  passwordless login required
            hadoop02(nn2) --> hadoop02(nn2)  passwordless login required
            hadoop02(nn2) --> hadoop03(dn)   passwordless login required

      Passwordless login between all nodes is even more convenient (not recommended for production); this walkthrough assumes that default. A setup sketch follows below.
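
A minimal setup sketch for the logins above, assuming the root user and the default key path (adjust the user and hostnames to your environment):

# Run on hadoop01, then again on hadoop02:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
ssh-copy-id root@hadoop01
ssh-copy-id root@hadoop02
ssh-copy-id root@hadoop03

# Verify that no password prompt appears:
ssh hadoop02 hostname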

 

ZooKeeper Cluster Planning

    An existing, working ZooKeeper cluster  https://blog.csdn.net/meiLin_Ya/article/details/80654268

Getting Started with the Configuration

    First, remove every trace of any previous Hadoop install, e.g. /tmp data, /hadoopdata, and so on, then copy in the Hadoop tarball. Do this on every node of your cluster.

After the cleanup, extract Hadoop:

tar zxvf hadoop-2.6.0.tar.gz
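
If the cluster has several nodes, the cleanup itself can be scripted. A sketch, assuming passwordless SSH from the first node and the data paths used later in this guide (/home/hadoopdata, /home/hdfs); double-check the paths before running anything with rm -rf:

for host in hadoop01 hadoop02 hadoop03; do
  ssh $host "rm -rf /tmp/hadoop* /home/hadoopdata /home/hdfs"
done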

Edit core-site.xml

<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://beiwang</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
 <value>/home/hadoopdata/tmp</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>
</configuration>
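
Note that fs.defaultFS points at the logical nameservice (beiwang) rather than a single host, so clients address the cluster by that name and HDFS routes them to whichever NameNode is currently active. Once the cluster is up:

hdfs dfs -ls hdfs://beiwang/
# equivalent, since beiwang is the default filesystem:
hdfs dfs -ls /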

Edit hdfs-site.xml

Note: the comments below are explanatory only; do not leave non-ASCII comments in the actual file.

<configuration>

<!-- Logical nameservice name for HDFS (beiwang); this is the name clients use, while the ZK cluster is asked which NameNode behind it is alive -->
<property>
  <name>dfs.nameservices</name>
  <value>beiwang</value>
</property>

<!-- IDs of the two NameNodes in this nameservice (their host mappings follow below) -->
<property>
  <name>dfs.ha.namenodes.beiwang</name>
  <value>nn1,nn2</value>
</property>

<!-- RPC address of NameNode1 -->
<property>
  <name>dfs.namenode.rpc-address.beiwang.nn1</name>
  <value>hadoop01:8020</value>
</property>

<!-- RPC address of NameNode2 -->
<property>
  <name>dfs.namenode.rpc-address.beiwang.nn2</name>
  <value>hadoop02:8020</value>
</property>

<!-- Web UI address of NameNode1 -->
<property>
  <name>dfs.namenode.http-address.beiwang.nn1</name>
  <value>hadoop01:50070</value>
</property>

<!-- Web UI address of NameNode2 -->
<property>
  <name>dfs.namenode.http-address.beiwang.nn2</name>
  <value>hadoop02:50070</value>
</property>

<!-- ##### If you are adding HA to an HDFS that already holds data, leave the two paths below unchanged ##### -->

<!-- Local Linux path where the NameNode keeps its metadata; we do not need to create this directory ourselves -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/hdfs/name</value>
</property>

<!-- Local Linux path where the DataNode stores the blocks of user files; also created automatically -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/hdfs/data</value>
</property>

<!-- ######################################################### -->

<!-- Where the QJM (JournalNode quorum) keeps the shared edit log -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/beiwang</value>
</property>

<!-- Local Linux path where each individual JournalNode process stores its edits files -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/bigdata/hadoop/journal1</value>
</property>

<!-- Enable automatic failover after the active NameNode dies -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

<!-- ZooKeeper quorum that coordinates failover between the two NameNodes -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>

<!-- Proxy class clients use to find the currently active NameNode -->
<property>
  <name>dfs.client.failover.proxy.provider.beiwang</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<!-- Fencing method that prevents two simultaneously active NameNodes (split-brain) -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>

<!-- Path to this machine's SSH private key, used by sshfence -->
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_rsa</value>
</property>

<!-- SSH connect timeout for fencing, in milliseconds -->
<property>
  <name>dfs.ha.fencing.ssh.connect-timeout</name>
  <value>30000</value>
</property>

</configuration>
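
After saving the file, the effective values can be spot-checked with hdfs getconf:

hdfs getconf -confKey dfs.nameservices   # should print beiwang
hdfs getconf -namenodes                  # should print hadoop01 hadoop02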

hadoop-env.sh

export JAVA_HOME="/home/bigdata/jdk1.8.0_161"

Create a file named master in hadoop-2.6.0/etc/hadoop/

Note

   In the new master file, list all NameNode hosts:

hu-hadoop1
hu-hadoop2
hu-hadoop3

slaves:

hu-hadoop1
hu-hadoop2
hu-hadoop3
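
The configuration must be identical on every node. A sketch of pushing it out with scp, assuming Hadoop is installed at the same path everywhere (/home/bigdata/hadoop-2.6.0 is an assumption here, matching the /home/bigdata paths above):

for host in hu-hadoop2 hu-hadoop3; do
  scp /home/bigdata/hadoop-2.6.0/etc/hadoop/* $host:/home/bigdata/hadoop-2.6.0/etc/hadoop/
done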

Start the JournalNodes:

hadoop-daemons.sh start journalnode                                                                              
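
hadoop-daemons.sh (note the plural s) starts a JournalNode on every host listed in slaves. Verify with jps on each node:

jps
# a JournalNode process should appear on all three machines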


Start ZooKeeper (on every ZooKeeper node):

  zkServer.sh start 
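
The quorum can then be confirmed on each node:

zkServer.sh status
# one node reports Mode: leader, the others Mode: follower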



Then format HDFS (on the first NameNode only):

hadoop namenode -format


Start the NameNode on the master:

hadoop-daemon.sh start namenode

On the standby NameNode (the slave machine), synchronize the metadata:

hdfs namenode -bootstrapStandby

Format ZK (run on the master only):

# hdfs zkfc -formatZK


After formatting, you can inspect what ZooKeeper now stores:
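
A sketch using ZooKeeper's own shell, zkCli.sh; after hdfs zkfc -formatZK, a znode named after the nameservice should exist under /hadoop-ha:

zkCli.sh -server hadoop01:2181
ls /hadoop-ha
# prints the nameservice, e.g. [beiwang]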


Start DFS, then check ZooKeeper again:

start-dfs.sh 
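
Once HDFS is up, each NameNode's role can be queried directly (nn1/nn2 are the IDs set in dfs.ha.namenodes above):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# one prints active, the other standby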


Open the web UI (port 50070 on each NameNode); one shows active, the other standby:



Now let's test it: kill the NameNode on hu-hadoop2 and watch whether hu-hadoop1 switches from standby => active.
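
One way to simulate the failure, sketched with jps and kill (the PID is whatever jps reports for NameNode; huhu1 is the ID of hu-hadoop1 in the final configuration recorded below):

# on hu-hadoop2:
jps | grep NameNode
kill -9 <NameNode-pid>

# from any node, confirm the takeover:
hdfs haadmin -getServiceState huhu1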


Then restart the NameNode we just killed on hu-hadoop2 and check its state:
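
The restart is the same single-daemon command used earlier, and the state query shows what the node comes back as:

# on hu-hadoop2:
hadoop-daemon.sh start namenode
hdfs haadmin -getServiceState huhu2
# standby -- a recovered NameNode does not take active back automatically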


The result is clear: the restarted NameNode comes back as standby.

Then take a look at ZooKeeper:


Now kill the NameNode on hu-hadoop1 and check ZooKeeper again:

Sure enough, the hu-hadoop2 web page now flips from standby ==> active.



    Why does this work?

Remember the earlier step hdfs zkfc -formatZK: it records the HDFS HA information in ZooKeeper, together with the settings from core-site.xml and hdfs-site.xml.

That is the power of ZooKeeper. You can think of it as a database, though not exactly one: it stores data in a tree, where each node (znode) can hold data. A znode is capped at 1 MB, but don't let that small limit fool you; it is far more capable than you might expect.

For the record, here are the configuration files from my successful run.

hadoop-env.sh

export JAVA_HOME="/home/bigdata/jdk1.8.0_161"

core-site.xml

<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://huhu</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
 <value>/home/hadoopdata/tmp</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>
</configuration>

hdfs-site.xml

<configuration>
<property>
	<name>dfs.nameservices</name>
	<value>huhu</value>
</property>
<property>
  <name>dfs.ha.namenodes.huhu</name>
  <value>huhu1,huhu2</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.huhu.huhu1</name>
  <value>hu-hadoop1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.huhu.huhu2</name>
  <value>hu-hadoop2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.huhu.huhu1</name>
  <value>hu-hadoop1:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.huhu.huhu2</name>
  <value>hu-hadoop2:50070</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/hdfs/data</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hu-hadoop1:8485;hu-hadoop2:8485;hu-hadoop3:8485/huhu</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/bigdata/hadoop/journal1</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
        <name>ha.zookeeper.quorum</name>
        <value>hu-hadoop1:2181,hu-hadoop2:2181,hu-hadoop3:2181</value>
</property>
<property>
	<name>dfs.client.failover.proxy.provider.huhu</name>
	<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>
<property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
</property>
<property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
</property>
</configuration>

mapred-site.xml

<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <final>true</final>
    </property>
</configuration>

yarn-site.xml

<configuration>
 <property>
   <name>yarn.resourcemanager.connect.retry-interval.ms</name>
   <value>2000</value>
 </property>
 <property>
   <name>yarn.resourcemanager.ha.enabled</name>
   <value>true</value>
 </property>
 <property>
   <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
   <value>true</value>
 </property>
 <property>
   <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
   <value>true</value>
 </property>
 <property>
   <name>yarn.resourcemanager.cluster-id</name>
   <value>beiwangyarn</value>
 </property>
 <property>
   <name>yarn.resourcemanager.ha.rm-ids</name>
   <value>rm1,rm2</value>
 </property>
 <property>
   <name>yarn.resourcemanager.hostname.rm1</name>
   <value>hu-hadoop1</value>  
 </property>
  <property>
   <name>yarn.resourcemanager.hostname.rm2</name>
   <value>hu-hadoop2</value>  
 </property>
 <property>
   <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
 </property>
 <property>
   <name>yarn.resourcemanager.recovery.enabled</name>
   <value>true</value>
 </property>
  <property>
   <name>yarn.resourcemanager.store.class</name>
   <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
 </property>
 <property>
   <name>yarn.resourcemanager.zk.state-store.address</name>
   <value>hu-hadoop1:2181,hu-hadoop2:2181,hu-hadoop3:2181</value>
 </property>
 <property>
   <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
   <value>5000</value>
 </property>
 <property>
   <name>yarn.resourcemanager.address.rm1</name>
   <value>hu-hadoop1:8032</value>
 </property>
 <property>
   <name>yarn.resourcemanager.scheduler.address.rm1</name>
   <value>hu-hadoop1:8030</value>
 </property>
 <property>
   <name>yarn.resourcemanager.webapp.https.address.rm1</name>
   <value>hu-hadoop1:23189</value>
 </property>
 <property>
   <name>yarn.resourcemanager.webapp.address.rm1</name>
   <value>hu-hadoop1:8088</value>
 </property>
 <property>
   <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
   <value>hu-hadoop1:8031</value>
 </property>
 <property>
   <name>yarn.resourcemanager.admin.address.rm1</name>
   <value>hu-hadoop1:8033</value>
 </property>
 <property>
   <name>yarn.resourcemanager.address.rm2</name>
   <value>hu-hadoop2:8032</value>
 </property>
 <property>
   <name>yarn.resourcemanager.scheduler.address.rm2</name>
   <value>hu-hadoop2:8030</value>
 </property>
 <property>
   <name>yarn.resourcemanager.webapp.https.address.rm2</name>
   <value>hu-hadoop2:23189</value>
 </property>
 <property>
   <name>yarn.resourcemanager.webapp.address.rm2</name>
   <value>hu-hadoop2:8088</value>
 </property>
 <property>
   <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
   <value>hu-hadoop2:8031</value>
 </property>
 <property>
   <name>yarn.resourcemanager.admin.address.rm2</name>
   <value>hu-hadoop2:8033</value>
 </property>
 <property>
   <description>Address where the localizer IPC is.</description>
   <name>yarn.nodemanager.localizer.address</name>
   <value>0.0.0.0:23344</value>
 </property>
 <property>
   <description>NM Webapp address.</description>
   <name>yarn.nodemanager.webapp.address</name>
   <value>0.0.0.0:23999</value>
 </property>
 <property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
 </property>
 <property>
   <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
 </property>
 <property>
   <name>yarn.nodemanager.local-dirs</name>
   <value>/tmp/pseudo-dist/yarn/local</value>
 </property>
 <property>
   <name>yarn.nodemanager.log-dirs</name>
   <value>/tmp/pseudo-dist/yarn/log</value>
 </property>
 <property>
   <name>mapreduce.shuffle.port</name>
   <value>23080</value>
 </property>
 <property>
   <name>yarn.resourcemanager.zk-address</name>
   <value>hu-hadoop1:2181,hu-hadoop2:2181,hu-hadoop3:2181</value>
 </property>
</configuration>
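
Since ResourceManager HA is enabled above, the same kind of state check works for YARN once it is started (rm1/rm2 are the IDs from yarn.resourcemanager.ha.rm-ids):

start-yarn.sh
# start-yarn.sh only launches the local ResourceManager; if needed, start the
# second one on hu-hadoop2 with: yarn-daemon.sh start resourcemanager
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2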

master

hu-hadoop1
hu-hadoop2
hu-hadoop3

slaves

hu-hadoop1
hu-hadoop2
hu-hadoop3