Hadoop Distributed Environment High-Availability Configuration

An earlier article covered the basic distributed Hadoop configuration, but it did not touch on high availability; this time we use ZooKeeper to make the Hadoop cluster highly available.

1. Environment Preparation

1) Set static IP addresses
2) Set hostnames and the hostname-to-IP mappings
3) Disable the firewall
4) Configure passwordless SSH login
5) Create the hadoop user and group
6) Update the package sources, install the JDK, and configure environment variables (a sketch of these steps follows below)
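A minimal sketch of the preparation steps on one node, assuming CentOS 7, the hostnames node1-3 used throughout this article, and example addresses in the 192.168.1.x range (adjust everything to your actual environment):

# hostname-to-IP mapping on every node (example addresses)
cat >> /etc/hosts <<EOF
192.168.1.101 node1
192.168.1.102 node2
192.168.1.103 node3
EOF

# disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# create the hadoop user and group
groupadd hadoop && useradd -g hadoop hadoop

# passwordless SSH for the hadoop user; repeat across all three nodes
su - hadoop -c "ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa"
su - hadoop -c "ssh-copy-id node1 && ssh-copy-id node2 && ssh-copy-id node3"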

2. Server Planning

Node1             Node2             Node3
NameNode          NameNode
JournalNode       JournalNode       JournalNode
DataNode          DataNode          DataNode
ZK                ZK                ZK
ResourceManager                     ResourceManager
NodeManager       NodeManager       NodeManager

3. Configure the ZooKeeper Cluster

See my earlier article, "Zookeeper Installation and Configuration", for details; a minimal configuration sketch follows below.
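For reference, a minimal zoo.cfg for the three-node ensemble assumed here (the dataDir path is an example; the ports are the usual defaults):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data_disk/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

Each node also needs a myid file under dataDir containing its own server number (1, 2, or 3).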

4. Install Hadoop

1) Official download address: http://hadoop.apache.org/
2) Extract hadoop-2.7.2 to /usr/local/hadoop2.7
3) Change the owner and group of hadoop2.7 to hadoop:

chown -R hadoop:hadoop /usr/local/hadoop2.7
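Putting steps 2) and 3) together, the commands might look like this (a sketch; the tarball location /opt is an assumption):

# extract the release and rename it to the target directory
tar -zxvf /opt/hadoop-2.7.2.tar.gz -C /usr/local/
mv /usr/local/hadoop-2.7.2 /usr/local/hadoop2.7
chown -R hadoop:hadoop /usr/local/hadoop2.7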

4) Configure HADOOP_HOME

vim /etc/profile
#HADOOP_HOME
export HADOOP_HOME=/usr/local/hadoop2.7
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
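After saving, reload the profile and verify that the hadoop command resolves:

source /etc/profile
hadoop version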

5. Configure Hadoop Cluster High Availability

5.1 Configure the HDFS Cluster

hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_221

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Logical name of the fully distributed cluster (the nameservice) -->
    <property>
        <name>dfs.nameservices</name>
        <value>hadoopCluster</value>
    </property>
    <!-- Which NameNodes the cluster contains -->
    <property>
        <name>dfs.ha.namenodes.hadoopCluster</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.hadoopCluster.nn1</name>
        <value>node1:9000</value>
    </property>

    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.hadoopCluster.nn2</name>
        <value>node2:9000</value>
    </property>

    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.hadoopCluster.nn1</name>
        <value>node1:50070</value>
    </property>

    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.hadoopCluster.nn2</name>
        <value>node2:50070</value>
    </property>

    <!-- Where the NameNode shared edits (metadata) are stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node1:8485;node2:8485;node3:8485/hadoopCluster</value>
    </property>

    <!-- Fencing method: guarantees that only one NameNode serves clients at any moment -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>

    <!-- sshfence requires passwordless SSH login -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>

    <!-- Storage directory of the JournalNode edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data_disk/hadoop/jn</value>
    </property>

    <!-- Disable permission checking -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>

    <!-- Client failover proxy provider: how clients locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.hadoopCluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <property>
         <name>dfs.namenode.name.dir</name>
         <value>file:///data_disk/hadoop/name</value>
         <description>To keep the metadata safe, several separate directories are usually configured</description>
    </property>

    <property>
         <name>dfs.datanode.data.dir</name>
         <value>file:///data_disk/hadoop/data</value>
         <description>DataNode data storage directory</description>
     </property>
     <property>
         <name>dfs.replication</name>
         <value>3</value>
         <description>Number of replicas for each HDFS block; the default is 3</description>
     </property>

</configuration>

core-site.xml

<configuration>
    <!-- Default filesystem: the HDFS nameservice defined above -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoopCluster</value>
    </property>

    <!-- Base directory for files Hadoop generates at runtime (a plain local path, not a file:// URI) -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data_disk/hadoop/tmp</value>
    </property>
</configuration>
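The local directories referenced in these files must exist before any daemon starts. A sketch, run as root on each of node1-3 (creating all of the directories on every node is harmless and keeps it simple):

mkdir -p /data_disk/hadoop/{name,data,jn,tmp}
chown -R hadoop:hadoop /data_disk/hadoop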

Starting the Hadoop cluster

(1) On each JournalNode host, start the journalnode service:
sbin/hadoop-daemon.sh start journalnode
(2) On [nn1], format the NameNode and start it:
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
(3) On [nn2], synchronize the metadata from nn1:
bin/hdfs namenode -bootstrapStandby
(4) Start [nn2]:
sbin/hadoop-daemon.sh start namenode
(5) On [nn1], start all DataNodes:
sbin/hadoop-daemons.sh start datanode
(6) Switch [nn1] to Active:
bin/hdfs haadmin -transitionToActive nn1
(7) Check whether it is Active:
bin/hdfs haadmin -getServiceState nn1
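At this point jps on each node should list the daemons from the planning table (illustrative; PIDs and ordering will differ):

jps
# node1 / node2: NameNode, JournalNode, DataNode (plus QuorumPeerMain if ZooKeeper is already running)
# node3:         JournalNode, DataNode (plus QuorumPeerMain)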

Open a browser and check the NameNode state at http://node1:50070 and http://node2:50070; one should report active, the other standby.

5.2 Configure HDFS Automatic Failover

在hdfs-site.xml中增長

<property>
	<name>dfs.ha.automatic-failover.enabled</name>
	<value>true</value>
</property>

在core-site.xml文件中增長

<property>
	<name>ha.zookeeper.quorum</name>
	<value>node1:2181,node2:2181,node3:2181</value>
</property>

5.2.1 Startup

(1) Stop all HDFS services:
sbin/stop-dfs.sh
(2) Start the ZooKeeper cluster (on each node):
bin/zkServer.sh start
(3) Initialize the HA state in ZooKeeper:
bin/hdfs zkfc -formatZK
(4) Start the HDFS services:
sbin/start-dfs.sh
(5) On each NameNode host, start the DFSZKFailoverController; the NameNode on whichever machine starts it first becomes the Active NameNode:
sbin/hadoop-daemon.sh start zkfc

5.2.2 Verification

(1) Kill the Active NameNode process:
kill -9 <NameNode process id>
(2) Or disconnect the Active NameNode machine from the network:
service network stop
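After either test the standby should take over within seconds; a quick check (assuming nn2 is the surviving NameNode):

bin/hdfs haadmin -getServiceState nn2
# should now print: active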

If nn2 does not become active after nn1 is killed, possible causes include:

(1) Passwordless SSH login is not configured correctly.

(2) The fuser program was not found, so sshfence could not fence the failed node (see the referenced blog post; a fix is sketched below).
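On CentOS-style systems fuser is provided by the psmisc package, so installing it on both NameNode hosts usually resolves cause (2):

yum install -y psmisc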

5.3 YARN HA Configuration

yarn-site.xml

<?xml version="1.0"?>
<configuration>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>

    <!-- Declare the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster-yarn1</value>
    </property>

    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node1</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node3</value>
    </property>

    <!-- Address of the ZooKeeper ensemble -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>

    <!-- Enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>

    <!-- Store the ResourceManager state in the ZooKeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
</configuration>
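All of the configuration files above must be identical across the cluster. One way to distribute them from node1 (a sketch, assuming the same install path on every node):

scp /usr/local/hadoop2.7/etc/hadoop/{core-site.xml,hdfs-site.xml,yarn-site.xml,hadoop-env.sh} node2:/usr/local/hadoop2.7/etc/hadoop/
scp /usr/local/hadoop2.7/etc/hadoop/{core-site.xml,hdfs-site.xml,yarn-site.xml,hadoop-env.sh} node3:/usr/local/hadoop2.7/etc/hadoop/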

5.3.1 Start HDFS

(1) On each JournalNode host, start the journalnode service:
sbin/hadoop-daemon.sh start journalnode
(2) On [nn1], format the NameNode and start it:
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
(3) On [nn2], synchronize the metadata from nn1:
bin/hdfs namenode -bootstrapStandby
(4) Start [nn2]:
sbin/hadoop-daemon.sh start namenode
(5) Start all DataNodes:
sbin/hadoop-daemons.sh start datanode
(6) Switch [nn1] to Active:
bin/hdfs haadmin -transitionToActive nn1

5.3.2 Start YARN

(1) On node1, run:
sbin/start-yarn.sh
(2) On node3, run:
sbin/yarn-daemon.sh start resourcemanager
(3) Check the service state:
bin/yarn rmadmin -getServiceState rm1
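The standby ResourceManager can be checked the same way, and the YARN web UI (port 8088 by default) is only served by the active RM, to which the standby redirects:

bin/yarn rmadmin -getServiceState rm2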