Hadoop 2.7.3 HA (High Availability) Cluster Installation



    1. Environment preparation

    2. Hadoop installation and configuration

    3. Modifying the configuration


    Environment preparation steps:

    1. Install JDK 1.8 on all nodes, configure the hosts file, set up passwordless SSH between centos680 and centos681 (in both directions) and from centos680 to every other machine, and disable the firewall (a shell sketch of this step follows the list below).

    2. Install ZooKeeper 3.4.9 on centos682, centos683 and centos684.

    3. Copy hadoop-2.7.3.tar.gz to the /opt directory on centos680; all subsequent operations are performed on centos680.

    4. Extract hadoop-2.7.3.tar.gz into /opt/bigdata: tar -zxvf hadoop-2.7.3.tar.gz -C /opt/bigdata/
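
    A minimal sketch of step 1, assuming CentOS 6 hosts and the root account (the commands below are assumptions, not taken from the article):

      # Run as root on centos680; /etc/hosts on every node must already map
      # centos680..centos684 to their IP addresses.
      ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
      for host in centos680 centos681 centos682 centos683 centos684; do
          ssh-copy-id root@"$host"
      done
      # Repeat the key generation and ssh-copy-id on centos681 so that centos680
      # and centos681 can log in to each other without a password.

      # Disable the firewall on each node (CentOS 6 style).
      service iptables stop
      chkconfig iptables off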




    1. HDFS high availability (NameNode)

    1. The NameNodes must share their metadata to allow a seamless failover. The shared metadata can be kept on an NFS share provided by Linux, or on the JournalNodes provided by Hadoop; the JournalNodes use a quorum (majority) scheme, so a write is considered successful once more than half of the nodes have accepted it (for example, with three JournalNodes a write succeeds once two of them acknowledge it).

    2. For the NameNodes to fail over automatically (with no manual switching), the state of each node must be monitored in real time. This is done by the DFSZKFailoverController (ZKFC): if one NameNode fails, the other NameNode is notified through ZooKeeper and takes over its duties.

    2. RM high availability (ResourceManager)

    1. The ResourceManager relies on ZooKeeper for hot standby: when one node fails, the other node is notified and takes over its tasks.

    3. Hadoop 2.7.3 HA setup steps



    1. hadoop-env.sh



    (Screenshot omitted.) Modify JAVA_HOME to point at the JDK 1.8 installed during environment preparation, as sketched below.
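
    A minimal sketch of the change; the JDK path is an assumption, use your actual JDK 1.8 install location:

      # etc/hadoop/hadoop-env.sh
      # Replace the default "export JAVA_HOME=${JAVA_HOME}" line with an explicit path.
      export JAVA_HOME=/usr/java/jdk1.8.0_121   # assumed path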



    1. core-site.xml



    (Screenshot of core-site.xml omitted; a hedged sketch is shown below.)
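
    Since the screenshot is not reproduced, the following is a minimal sketch of what core-site.xml needs for this setup, not a copy of the original. fs.defaultFS must match the dfs.nameservices name ("ns") defined in hdfs-site.xml, hadoop.tmp.dir corresponds to the tmp directory referred to in the startup steps, and the ha.zookeeper.quorum hosts are assumed to be the three ZooKeeper machines:

    <configuration>
        <!-- Default filesystem: the logical HA nameservice "ns" from hdfs-site.xml -->
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://ns</value>
        </property>
        <!-- Working directory holding the NameNode metadata (the "tmp" directory used in the startup steps) -->
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/opt/bigdata/hadoop-2.7.3/tmp</value>
        </property>
        <!-- ZooKeeper quorum used for automatic failover (assumed hosts) -->
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>centos682:2181,centos683:2181,centos684:2181</value>
        </property>
    </configuration>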



    1. hdfs-site.xml



    <!-- Logical name of the HDFS nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
    </property>

    <!-- The two NameNodes in nameservice "ns" -->
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2</value>
    </property>

    <!-- RPC and HTTP addresses of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>centos680:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>centos680:50070</value>
    </property>

    <!-- RPC and HTTP addresses of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>centos681:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>centos681:50070</value>
    </property>

    <!-- Where the NameNodes read/write the shared edit log (the JournalNode quorum) -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://zk1:8485;zk2:8485;zk3:8485/ns</value>
    </property>

    <!-- Local directory where each JournalNode stores its edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/big/hadoop-2.7.3/journaldata</value>
    </property>

    <!-- Enable automatic failover via the ZKFC -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <!-- Proxy provider HDFS clients use to find the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <!-- Fencing: try sshfence first, then fall back to shell(/bin/true) -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>



    1. mapred-site.xml



    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>



    1. yarn-site.xml

    <configuration>
        <!-- Enable RM high availability -->
        <property>
            <name>yarn.resourcemanager.ha.enabled</name>
            <value>true</value>
        </property>

        <!-- RM cluster id -->
        <property>
            <name>yarn.resourcemanager.cluster-id</name>
            <value>yrc</value>
        </property>

        <!-- Logical names of the ResourceManagers -->
        <property>
            <name>yarn.resourcemanager.ha.rm-ids</name>
            <value>rm1,rm2</value>
        </property>

        <!-- Host of each ResourceManager -->
        <property>
            <name>yarn.resourcemanager.hostname.rm1</name>
            <value>h0</value>
        </property>
        <property>
            <name>yarn.resourcemanager.hostname.rm2</name>
            <value>h1</value>
        </property>

        <!-- ZooKeeper cluster address -->
        <property>
            <name>yarn.resourcemanager.zk-address</name>
            <value>h2:2181,h3:2181,h4:2181</value>
        </property>

        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    </configuration>





    1. slaves



    (Screenshot of the slaves file omitted; a hedged example follows.)
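
    The slaves file lists the hosts on which start-dfs.sh and start-yarn.sh start DataNodes and NodeManagers. The screenshot is not reproduced, so the hosts below are an assumption (the three ZooKeeper machines); list whichever nodes actually run the worker daemons:

      # etc/hadoop/slaves — one worker hostname per line (assumed hosts)
      centos682
      centos683
      centos684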



    1. Distribute to the other nodes (run on centos680)

      scp -r /opt/bigdata/hadoop-2.7.3/ h1:/opt/bigdata/
      scp -r /opt/bigdata/hadoop-2.7.3/ h2:/opt/bigdata/
      scp -r /opt/bigdata/hadoop-2.7.3/ h3:/opt/bigdata/
      scp -r /opt/bigdata/hadoop-2.7.3/ h4:/opt/bigdata/



    1. Start all the ZooKeeper instances on h2, h3 and h4 (see the sketch after this list).

    2. Start the JournalNodes on h2, h3 and h4: hadoop-daemon.sh start journalnode

    3. On centos680, format the NameNode: hdfs namenode -format, then copy the resulting metadata to the other NameNode host (h1): scp -r tmp/ h1:/opt/bigdata/hadoop-2.7.3/ (tmp is the directory configured in core-site.xml as the location of the NameNode metadata).

    4. Format ZooKeeper: bin/hdfs zkfc -formatZK. This creates the HA data node (znode) in ZooKeeper; the sketch after this list shows how to check it.

    5. Initialization and startup


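
    A minimal sketch of step 1 and of checking the znode created in step 4, assuming ZooKeeper 3.4.9 is installed under /opt/zookeeper-3.4.9 on h2, h3 and h4 (the install path is an assumption):

      # Step 1: run on each of h2, h3 and h4
      /opt/zookeeper-3.4.9/bin/zkServer.sh start
      /opt/zookeeper-3.4.9/bin/zkServer.sh status    # expect one "leader" and two "follower"

      # After step 4: hdfs zkfc -formatZK normally creates /hadoop-ha/<nameservice>
      echo "ls /hadoop-ha" | /opt/zookeeper-3.4.9/bin/zkCli.sh -server h2:2181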



    1. Start DFS and YARN



      sbin/start-dfs.sh
      sbin/start-yarn.sh
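
    Note: in Hadoop 2.x, start-yarn.sh normally starts a ResourceManager only on the machine where it is run, so the standby ResourceManager usually has to be started by hand. A sketch, assuming rm2 is the host h1 from yarn-site.xml:

      # Run on h1 (rm2)
      sbin/yarn-daemon.sh start resourcemanager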



    1. Verification


      Verify the failover by killing the active NameNode process or by shutting down the machine hosting the active NameNode (a sketch of the check follows).
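
    A minimal sketch of the check, assuming the nameservice "ns" and NameNode ids nn1/nn2 from hdfs-site.xml:

      # Which NameNode is currently active?
      bin/hdfs haadmin -getServiceState nn1
      bin/hdfs haadmin -getServiceState nn2

      # On the machine of the active NameNode, kill its process
      jps | grep NameNode      # note the PID
      kill -9 <namenode-pid>   # <namenode-pid> is a placeholder

      # After a few seconds the standby should report "active"
      bin/hdfs haadmin -getServiceState nn2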
