High Availability With QJM

 

Node and instance planning:
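The role layout used throughout this walkthrough, as implied by the configuration below (NNs on hd1/hd2, DNs and JNs on hd2, hd3, hd4, ZK on hd1, hd2, hd3, ZKFCs on the NN hosts):

Host   NameNode   DataNode   JournalNode   ZooKeeper   ZKFC
hd1    yes        -          -             yes         yes
hd2    yes        yes        yes           yes         yes
hd3    -          yes        yes           yes         -
hd4    -          yes        yes           -           -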

 

For deployment essentials and caveats for High Availability with QJM, see the HA deployment section at https://my.oschina.net/u/3862440/blog/2208568.

 

Edit "hdfs-site.xml"

dfs.nameservices 

-- Configures the nameservice. One cluster gets one service name; under that name sit multiple services and nodes, and the cluster provides service to the outside world under this single name.

<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
</property>

dfs.ha.namenodes.[nameservice ID] 

-- Configures the NameNode IDs under this service. A nameservice contains multiple NN nodes; for HA the cluster must know each node's ID in order to tell them apart. Two NNs are planned here, so there are two NN IDs.

<property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
</property>

dfs.namenode.rpc-address.[nameservice ID].[name node ID] 

-- Configures each NN's RPC address (used for data transfer between the NNs and the DataNodes or clients). This layout has 2 NNs and 3 DNs, so an RPC address must be configured for each of the two NNs.

<property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>hd1:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>hd2:8020</value>
</property>

dfs.namenode.http-address.[nameservice ID].[name node ID] 

-- Configures each NN's HTTP address, i.e., the protocol behind the web management UI. Clients connect to each NN individually, so both NNs need an HTTP address configured.

<property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>hd1:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>hd2:50070</value>
</property>

dfs.namenode.shared.edits.dir 

-- The URI which identifies the group of JNs where the NameNodes will write/read edits.

<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hd2:8485;hd3:8485;hd4:8485/mycluster</value>
</property>

dfs.journalnode.edits.dir (create this directory if it does not exist)

-- The path where the JournalNode daemon will store its local state.

It helps to understand what the JNs are for here. The official docs describe HA as "Using the Quorum Journal Manager or Conventional Shared Storage", which makes clear that the JNs implement the shared edits storage. Three JNs are configured; they hold the edits log that the active NameNode writes and the standby NameNode reads, which is how the two NNs stay in sync.

<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/journal/node/local/data</value>
</property>

dfs.client.failover.proxy.provider.[nameservice ID] 

-- Boilerplate setting for client connections. Clients use this class to locate the active NameNode and connect to the cluster; it must be configured.

<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

dfs.ha.fencing.methods

-- Configures sshfence. This uses the same key mechanism as ordinary passwordless SSH, but for a different purpose: during a failover the controller must SSH into the host of the previously active NameNode and fence (kill) it. Passwordless login was set up between the NNs and DNs, but fencing additionally requires the NN hosts to reach each other over SSH with the private key configured below.

<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>

<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
</property>
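Before relying on sshfence, it is worth confirming that key-based SSH between the two NN hosts actually works in both directions; a minimal check (run on hd1, and the mirror of it on hd2):

ssh -i /home/hadoop/.ssh/id_rsa hadoop@hd2 hostname

If this prompts for a password, sshfence will not be able to fence the old active NN during a failover.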

Configuring automatic failover 1:

-- Configures automatic failover. Automatic failover requires ZooKeeper, which coordinates the NameNodes: the failover controllers use ZK to elect and track the active NN in the cluster, and when the active NN fails a standby is promoted automatically so service continues.

<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
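Once the cluster is up (see the startup steps later in this post), the active/standby state of each NN can be checked with hdfs haadmin; one should report active and the other standby:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2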

Edit core-site.xml

fs.defaultFS

-- Configures the default file system for DFS; note that it points at the nameservice name, not a single NameNode host.

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
</property>
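With fs.defaultFS set to the nameservice, clients address the cluster by its logical name rather than a specific NameNode host; both of the following resolve through the failover proxy provider to whichever NN is currently active:

hdfs dfs -ls /
hdfs dfs -ls hdfs://mycluster/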

Configuring automatic failover 2: -- the ZooKeeper quorum used for automatic failover

<property>
    <name>ha.zookeeper.quorum</name>
    <value>hd1:2181,hd2:2181,hd3:2181</value>
</property>

Configure the Hadoop temporary directory (create it if it does not exist):

<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/hadoop-2.7.1</value>
</property>

Configure the DataNodes

[hadoop@hd1 hadoop]$ more slaves 
hd2
hd3
hd4

Note: masters (the SNN), YARN, and MR need no configuration here; in an HA setup the standby NameNode takes over checkpointing, so there is no SecondaryNameNode.

Copy the configuration files to the hd2, hd3, and hd4 nodes, as sketched below.
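A sketch of the copy, assuming the stock etc/hadoop configuration directory of this install:

for h in hd2 hd3 hd4; do
  scp /usr/hadoop/hadoop-2.7.1/etc/hadoop/{core-site.xml,hdfs-site.xml,slaves} \
      hadoop@$h:/usr/hadoop/hadoop-2.7.1/etc/hadoop/
done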

Configure ZooKeeper:

Unpack zookeeper-3.4.6.tar.gz

Configure zoo.cfg:

dataDir=/usr/hadoop/zookeeper-3.4.6/tmp
server.1=hd1:2888:3888
server.2=hd2:2888:3888
server.3=hd3:2888:3888

server.1, server.2, and server.3 are the ZooKeeper server IDs (2888 is the peer quorum port, 3888 the leader-election port).
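For completeness, a full zoo.cfg consistent with this setup might look like the following (tickTime, initLimit, and syncLimit are the stock sample values; clientPort 2181 matches ha.zookeeper.quorum in core-site.xml):

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/hadoop/zookeeper-3.4.6/tmp
server.1=hd1:2888:3888
server.2=hd2:2888:3888
server.3=hd3:2888:3888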

mkdir -p /usr/hadoop/zookeeper-3.4.6/tmp

mkdir -p /home/hadoop/journal/node/local/data

vi /usr/hadoop/zookeeper-3.4.6/tmp/myid 

[hadoop@hd1 tmp]$ more myid 
1

Write 1 into /usr/hadoop/zookeeper-3.4.6/tmp/myid on hd1, 2 on hd2, 3 on hd3, and so on; each value must match that host's server.N entry.
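Equivalently, written out per host:

echo 1 > /usr/hadoop/zookeeper-3.4.6/tmp/myid    # on hd1
echo 2 > /usr/hadoop/zookeeper-3.4.6/tmp/myid    # on hd2
echo 3 > /usr/hadoop/zookeeper-3.4.6/tmp/myid    # on hd3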

Copy ZooKeeper to node 2 and node 3, e.g.: scp -r zookeeper-3.4.6/ hadoop@hd3:/usr/hadoop/ (and likewise for hd2).

Configure the PATH for the JN and the other Hadoop/ZooKeeper scripts, as sketched below.
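A minimal sketch, assuming the entries go into the hadoop user's ~/.bash_profile on each node (the variable names here are illustrative):

export HADOOP_HOME=/usr/hadoop/hadoop-2.7.1
export ZOOKEEPER_HOME=/usr/hadoop/zookeeper-3.4.6
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin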

Start ZK (it must be started on the hd1, hd2, and hd3 nodes):

sh zkServer.sh start 

JMX enabled by default
Using config: /usr/hadoop/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hd1 bin]$ jps
1777 QuorumPeerMain
1795 Jps

Start the JNs (hd2, hd3, hd4):

[hadoop@hd4 ~]$ hadoop-daemon.sh  start journalnode
starting journalnode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-journalnode-hd4.out
[hadoop@hd4 ~]$ jps
1843 JournalNode
1879 Jps

Format the NameNode (run on either NN, hd1 or hd2):

[hadoop@hd1 sbin]$ hdfs namenode -format 
18/10/07 05:54:30 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hd1/192.168.83.11
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
........
18/10/07 05:54:34 INFO namenode.FSImage: Allocated new BlockPoolId: BP-841723191-192.168.83.11-1538862874971
18/10/07 05:54:34 INFO common.Storage: Storage directory /usr/hadoop/hadoop-2.7.1/dfs/name has been successfully formatted.
18/10/07 05:54:35 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/10/07 05:54:35 INFO util.ExitUtil: Exiting with status 0
18/10/07 05:54:35 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hd1/192.168.83.11
************************************************************/

The metadata files have been generated. For now only the node where the format was executed has metadata, so it needs to be copied to the other NN node.

ls /usr/hadoop/hadoop-2.7.1/dfs/name/current

total 16
-rw-rw-r-- 1 hadoop hadoop 353 Oct  7 05:54 fsimage_0000000000000000000
-rw-rw-r-- 1 hadoop hadoop  62 Oct  7 05:54 fsimage_0000000000000000000.md5
-rw-rw-r-- 1 hadoop hadoop   2 Oct  7 05:54 seen_txid
-rw-rw-r-- 1 hadoop hadoop 205 Oct  7 05:54 VERSION

Copy the NN metadata (note: before copying the metadata, start the NN that was formatted; start only that one node)

  • Start the formatted NN

[hadoop@hd1 current]$ hadoop-daemon.sh start namenode 
starting namenode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-namenode-hd1.out
[hadoop@hd1 current]$ jps
1777 QuorumPeerMain
2177 Jps
  • On the NN node that was not formatted (hd2), run the metadata copy command

[hadoop@hd2 ~]$ hdfs namenode -bootstrapStandby
18/10/07 06:07:15 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.7.1
........
************************************************************/
18/10/07 06:07:15 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/10/07 06:07:15 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
18/10/07 06:07:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: mycluster
        Other Namenode ID: nn1
  Other NN's HTTP address: http://hd1:50070
  Other NN's IPC  address: hd1/192.168.83.11:8020
             Namespace ID: 1626081692
            Block pool ID: BP-841723191-192.168.83.11-1538862874971
               Cluster ID: CID-230e9e54-e6d1-4baf-a66a-39cc69368ed8
           Layout version: -63
       isUpgradeFinalized: true
=====================================================
18/10/07 06:07:17 INFO common.Storage: Storage directory /usr/hadoop/hadoop-2.7.1/dfs/name has been successfully formatted.
18/10/07 06:07:18 INFO namenode.TransferFsImage: Opening connection to http://hd1:50070/imagetransfer?getimage=1&txid=0&storageInfo=-63:1626081692:0:CID-230e9e54-e6d1-4baf-a66a-39cc69368ed8
18/10/07 06:07:18 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
18/10/07 06:07:18 INFO namenode.TransferFsImage: Transfer took 0.01s at 0.00 KB/s
18/10/07 06:07:18 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 353 bytes.
18/10/07 06:07:18 INFO util.ExitUtil: Exiting with status 0
18/10/07 06:07:18 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/

Check that the metadata now exists on the hd2 node:

[hadoop@hd2 current]$ ls -l /usr/hadoop/hadoop-2.7.1/dfs/name/current
total 16
-rw-rw-r-- 1 hadoop hadoop 353 Oct  7 06:17 fsimage_0000000000000000000
-rw-rw-r-- 1 hadoop hadoop  62 Oct  7 06:17 fsimage_0000000000000000000.md5
-rw-rw-r-- 1 hadoop hadoop   2 Oct  7 06:17 seen_txid
-rw-rw-r-- 1 hadoop hadoop 205 Oct  7 06:17 VERSION

Start all services:

[hadoop@hd1 current]$ start-dfs.sh 
18/10/07 06:30:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hd1 hd2]
hd2: starting namenode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-namenode-hd2.out
hd1: starting namenode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-namenode-hd1.out
hd3: starting datanode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hd3.out
hd4: starting datanode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hd4.out
hd2: starting datanode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hd2.out
Starting journal nodes [hd2 hd3 hd4]
hd4: starting journalnode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-journalnode-hd4.out
hd2: starting journalnode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-journalnode-hd2.out
hd3: starting journalnode, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-journalnode-hd3.out
18/10/07 06:30:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [hd1 hd2]
hd2: starting zkfc, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-zkfc-hd2.out
hd1: starting zkfc, logging to /usr/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-zkfc-hd1.out
[hadoop@hd1 current]$ jps
1777 QuorumPeerMain
3202 NameNode
3511 DFSZKFailoverController
3548 Jps

hd2:

[hadoop@hd2 ~]$ jps
3216 NameNode
3617 Jps
3364 JournalNode
3285 DataNode
1915 QuorumPeerMain

On the hd2 node zkfc did not start; check the log:

2018-10-07 06:30:38,441 ERROR org.apache.hadoop.ha.ActiveStandbyElector: Connection timed out: couldn't connect to ZooKeeper in 5000 milliseconds
2018-10-07 06:30:39,318 INFO org.apache.zookeeper.ZooKeeper: Session: 0x0 closed
2018-10-07 06:30:39,320 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2018-10-07 06:30:39,330 FATAL org.apache.hadoop.ha.ZKFailoverController: Unable to start failover controller. Unable to connect to ZooKeeper quorum at hd1:2181,hd2:2181,hd3:2181. Please check the configured value for ha.zookeeper.quorum and ensure that ZooKeeper is running.

This was caused by ZK not having been formatted for HA. Format ZK from one of the NNs and restart.

Go back to the hd1 node and run stop-dfs.sh, then:

hdfs zkfc -formatZK 
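After the format succeeds, bring everything back up and verify that both failover controllers start; a minimal verification sequence:

start-dfs.sh
jps                                  # DFSZKFailoverController should now appear on both NN hosts
hdfs haadmin -getServiceState nn1    # one NN should report active
hdfs haadmin -getServiceState nn2    # the other standby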
