Big Data Tutorial (11.4): Hadoop 2.9.1 Cluster HA Federation High-Availability Setup

    The previous article covered setting up a Hadoop HA (high-availability) cluster, so you should already have that knowledge in hand. In this post I'll continue by walking through the setup of an HA federation. Many companies never reach a cluster size that actually needs federation, but large game companies and BAT-scale companies do use it, so to boost your confidence in interviews I'll briefly share the federation setup process.

    1. Overview

           Since federation is essentially the earlier HA setup with a few more NameNode pairs, this article builds directly on the previously configured HA cluster and only changes its configuration. If you're not yet familiar with the previous material, start with --> Big Data Tutorial (11.3): Hadoop Cluster HA High-Availability Setup.

    2. Cluster plan

             Hostname                      IP                     Installed software              Running processes
           centos-aaron-ha-01    192.168.29.149    jdk, hadoop                       NameNode, DFSZKFailoverController (zkfc)
           centos-aaron-ha-02    192.168.29.150    jdk, hadoop                       NameNode, DFSZKFailoverController (zkfc)
           centos-aaron-ha-03    192.168.29.151    jdk, hadoop                       ResourceManager, NameNode
           centos-aaron-ha-04    192.168.29.152    jdk, hadoop                       ResourceManager, NameNode
           centos-aaron-ha-05    192.168.29.153    jdk, hadoop, zookeeper        DataNode, NodeManager, JournalNode, QuorumPeerMain
           centos-aaron-ha-06    192.168.29.154    jdk, hadoop, zookeeper        DataNode, NodeManager, JournalNode, QuorumPeerMain
           centos-aaron-ha-07    192.168.29.155    jdk, hadoop, zookeeper        DataNode, NodeManager, JournalNode, QuorumPeerMain

    3. Cluster setup

          (1) Modify core-site.xml

<configuration>
<!-- Set the default filesystem to viewfs:/// (the client-side mount table) -->
<property>
<name>fs.defaultFS</name>
<value>viewfs:///</value>
</property>
<property>
<name>fs.viewfs.mounttable.default.link./bi</name>
<value>hdfs://bi/</value>
</property>
<property>
<name>fs.viewfs.mounttable.default.link./dt</name>
<value>hdfs://dt/</value>
</property>
<!-- Map /tmp into the bi namespace; many components that depend on HDFS use this directory -->
<property>
<name>fs.viewfs.mounttable.default.link./tmp</name>
<value>hdfs://bi/tmp</value>
</property>
<!-- Hadoop temporary directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hdpdata</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>centos-aaron-ha-05:2181,centos-aaron-ha-06:2181,centos-aaron-ha-07:2181</value>
</property>
</configuration>
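
With fs.defaultFS set to viewfs:///, client paths are resolved through the mount table above: /bi maps to the bi nameservice, /dt to the dt nameservice, and /tmp to hdfs://bi/tmp. A quick sketch of what this looks like from the shell once the cluster is running:

#Only the mount points defined above are visible at the federated root
hadoop fs -ls viewfs:///
#/bi resolves to hdfs://bi/, i.e. the bi nameservice
hadoop fs -ls /bi
#/dt resolves to hdfs://dt/, i.e. the dt nameservice
hadoop fs -ls /dt
#Paths outside a mount point cannot be created, because the viewfs root itself is read-only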

          (2) Modify hdfs-site.xml

<configuration>
<!-- Define the nameservices (bi and dt); these must match the mount targets in core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>bi,dt</value>
</property>
<!-- bi has two NameNodes, nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.bi</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.ha.namenodes.dt</name>
<value>nn3,nn4</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.bi.nn1</name>
<value>centos-aaron-ha-01:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.bi.nn1</name>
<value>centos-aaron-ha-01:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.bi.nn2</name>
<value>centos-aaron-ha-02:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.bi.nn2</name>
<value>centos-aaron-ha-02:50070</value>
</property>

<!-- RPC address of nn3 -->
<property>
<name>dfs.namenode.rpc-address.dt.nn3</name>
<value>centos-aaron-ha-03:9000</value>
</property>
<!-- HTTP address of nn3 -->
<property>
<name>dfs.namenode.http-address.dt.nn3</name>
<value>centos-aaron-ha-03:50070</value>
</property>
<!-- RPC address of nn4 -->
<property>
<name>dfs.namenode.rpc-address.dt.nn4</name>
<value>centos-aaron-ha-04:9000</value>
</property>
<!-- HTTP address of nn4 -->
<property>
<name>dfs.namenode.http-address.dt.nn4</name>
<value>centos-aaron-ha-04:50070</value>
</property>

<!-- Where the NameNodes of the bi namespace store edits metadata on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://centos-aaron-ha-05:8485;centos-aaron-ha-06:8485;centos-aaron-ha-07:8485/bi</value>
</property>

<!-- On the two NameNodes of the dt namespace, use this configuration instead -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://centos-aaron-ha-05:8485;centos-aaron-ha-06:8485;centos-aaron-ha-07:8485/dt</value>
</property>

<!-- Where the JournalNodes store data on local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/hadoop/journaldata</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Client-side failover proxy providers (one per nameservice) -->
<property>
<name>dfs.client.failover.proxy.provider.bi</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.dt</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; list multiple methods on separate lines, one method per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<!-- sshfence connection timeout -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
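
After editing, it is worth sanity-checking that the merged configuration actually exposes both nameservices; hdfs getconf reads the effective configuration on the local node:

#Both nameservices should be listed
hdfs getconf -confKey dfs.nameservices        #expected: bi,dt
#NameNode IDs per nameservice
hdfs getconf -confKey dfs.ha.namenodes.bi     #expected: nn1,nn2
hdfs getconf -confKey dfs.ha.namenodes.dt     #expected: nn3,nn4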

          (3) Distribute hdfs-site.xml and core-site.xml to every machine in the cluster

sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml  hadoop@centos-aaron-ha-02:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml
sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml  hadoop@centos-aaron-ha-02:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml

sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml  hadoop@centos-aaron-ha-03:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml
sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml  hadoop@centos-aaron-ha-03:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml

sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml  hadoop@centos-aaron-ha-04:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml
sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml  hadoop@centos-aaron-ha-04:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml

sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml  hadoop@centos-aaron-ha-05:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml
sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml  hadoop@centos-aaron-ha-05:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml

sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml  hadoop@centos-aaron-ha-06:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml
sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml  hadoop@centos-aaron-ha-06:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml

sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml  hadoop@centos-aaron-ha-07:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/hdfs-site.xml
sudo scp -r /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml  hadoop@centos-aaron-ha-07:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/core-site.xml
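
The repeated scp commands above can also be written as a short loop; a sketch that assumes the same installation path and the hadoop user on every host:

for host in centos-aaron-ha-02 centos-aaron-ha-03 centos-aaron-ha-04 \
            centos-aaron-ha-05 centos-aaron-ha-06 centos-aaron-ha-07; do
  scp /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/{core-site.xml,hdfs-site.xml} \
      hadoop@${host}:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/
done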

          (4) On the NameNode hosts centos-aaron-ha-01 and centos-aaron-ha-02, delete the following block from hdfs-site.xml (the bi pair keeps only the bi journal URI)

<!-- On the two NameNodes of the dt namespace, use this configuration instead -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://centos-aaron-ha-05:8485;centos-aaron-ha-06:8485;centos-aaron-ha-07:8485/dt</value>
</property>

          (5) On the NameNode hosts centos-aaron-ha-03 and centos-aaron-ha-04, delete the following block from hdfs-site.xml (the dt pair keeps only the dt journal URI)

<!-- Where the NameNodes of the bi namespace store edits metadata on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://centos-aaron-ha-05:8485;centos-aaron-ha-06:8485;centos-aaron-ha-07:8485/bi</value>
</property>

          (6) Clean up data left over from the previous HA cluster (NameNode, DataNode, JournalNode)

#Clean the working directories of the NameNodes, DataNodes, and JournalNodes
rm -rf /home/hadoop/hdpdata/
rm -rf /home/hadoop/journaldata/
#Clean the cluster state previously stored by ZooKeeper (local data directory and pid file)
rm -rf /home/hadoop/apps/zookeeper-3.4.13/data/version-2/
rm -rf /home/hadoop/apps/zookeeper-3.4.13/data/zookeeper_server.pid
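
If passwordless SSH from the admin node to the other hosts is in place, the cleanup can be driven from a single shell; this is only a sketch, so double-check the paths before deleting:

#NameNode/DataNode/JournalNode working directories exist on all seven hosts
for host in centos-aaron-ha-0{1..7}; do
  ssh hadoop@${host} 'rm -rf /home/hadoop/hdpdata/ /home/hadoop/journaldata/'
done
#The zookeeper data only exists on the three zookeeper hosts
for host in centos-aaron-ha-0{5..7}; do
  ssh hadoop@${host} 'rm -rf /home/hadoop/apps/zookeeper-3.4.13/data/version-2/ /home/hadoop/apps/zookeeper-3.4.13/data/zookeeper_server.pid'
done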

          (7) Note: each pair of NameNodes needs passwordless SSH between them; don't forget to configure passwordless login from centos-aaron-ha-04 to centos-aaron-ha-03

On centos-aaron-ha-04, generate a key pair and copy the public key to centos-aaron-ha-03
ssh-keygen -t rsa
ssh-copy-id centos-aaron-ha-03
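
A quick way to confirm the new passwordless login before relying on it for fencing; sshfence needs it in both directions between the two NameNodes of a pair:

#Run on centos-aaron-ha-04; it should print the peer's hostname without asking for a password
ssh centos-aaron-ha-03 hostname
#If the reverse direction (centos-aaron-ha-03 to centos-aaron-ha-04) is not set up yet, repeat ssh-keygen/ssh-copy-id there as well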

    4. Cluster initialization and startup

          (1) Start the ZooKeeper cluster (start zk on centos-aaron-ha-05, centos-aaron-ha-06, and centos-aaron-ha-07)

cd /home/hadoop/apps/zookeeper-3.4.13/bin/
./zkServer.sh start
#Check the status: one leader, two followers
./zkServer.sh status
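
To check all three nodes from one shell instead of logging in to each (assuming passwordless SSH to the zookeeper hosts and that JAVA_HOME is visible to non-interactive shells):

for host in centos-aaron-ha-0{5..7}; do
  echo "== ${host} =="
  ssh hadoop@${host} '/home/hadoop/apps/zookeeper-3.4.13/bin/zkServer.sh status'
done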

          (2) Start the JournalNodes (run on centos-aaron-ha-05, centos-aaron-ha-06, and centos-aaron-ha-07)

cd /home/hadoop/apps/hadoop-2.9.1/
hadoop-daemon.sh start journalnode
#Verify with jps: a JournalNode process should now be running on centos-aaron-ha-05, centos-aaron-ha-06, and centos-aaron-ha-07

          (3) Initialize the NameNode for nn1 in the bi namespace

#Run on centos-aaron-ha-01:
hdfs namenode -format -clusterid hdp2019
#Formatting generates files under the directory set by hadoop.tmp.dir in core-site.xml (here /home/hadoop/hdpdata/); then copy /home/hadoop/hdpdata/ to /home/hadoop/ on centos-aaron-ha-02.
scp -r hdpdata/ centos-aaron-ha-02:/home/hadoop/
##Alternatively (recommended), run hdfs namenode -bootstrapStandby on centos-aaron-ha-02 [note: this requires the namenode on centos-aaron-ha-01 to be started first: hadoop-daemon.sh start namenode]
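
A sketch of the recommended -bootstrapStandby route for the bi pair, instead of copying hdpdata by hand:

#On centos-aaron-ha-01: format, then start the namenode so the standby can pull metadata from it
hdfs namenode -format -clusterid hdp2019
hadoop-daemon.sh start namenode
#On centos-aaron-ha-02: initialize the standby from the active NameNode
hdfs namenode -bootstrapStandby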

          (4) Initialize the NameNode for nn3 in the dt namespace

#Run on centos-aaron-ha-03:
hdfs namenode -format -clusterid hdp2019
#Formatting generates files under the directory set by hadoop.tmp.dir in core-site.xml (here /home/hadoop/hdpdata/); then copy /home/hadoop/hdpdata/ to /home/hadoop/ on centos-aaron-ha-04.
scp -r hdpdata/ centos-aaron-ha-04:/home/hadoop/
##Alternatively (recommended), run hdfs namenode -bootstrapStandby on centos-aaron-ha-04 [note: this requires the namenode on centos-aaron-ha-03 to be started first: hadoop-daemon.sh start namenode]
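
The same -bootstrapStandby route works for the dt pair; note that both nameservices must be formatted with the same cluster id (hdp2019) so they join one federated cluster:

#On centos-aaron-ha-03
hdfs namenode -format -clusterid hdp2019
hadoop-daemon.sh start namenode
#On centos-aaron-ha-04
hdfs namenode -bootstrapStandby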

          (5) Format ZKFC (run once on centos-aaron-ha-01 and once on centos-aaron-ha-03)

hdfs zkfc -formatZK

          (6) Start HDFS from nn1 in the bi namespace (centos-aaron-ha-01)

start-dfs.sh
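
After start-dfs.sh, a quick jps pass should show the processes from the cluster plan; a sketch assuming passwordless SSH and that jps is on the PATH of non-interactive shells:

for host in centos-aaron-ha-0{1..7}; do
  echo "== ${host} =="
  ssh hadoop@${host} jps
done
#Expected at this point: NameNode (plus DFSZKFailoverController where zkfc runs) on centos-aaron-ha-01..04,
#DataNode, JournalNode and QuorumPeerMain on centos-aaron-ha-05..07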

          (7) Start YARN on the host configured for ResourceManager (centos-aaron-ha-03)

start-yarn.sh

          (8) Start the second ResourceManager on centos-aaron-ha-04

yarn-daemon.sh start resourcemanager
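
If ResourceManager HA was configured in the previous article with the usual rm IDs rm1 and rm2 (an assumption here; check yarn.resourcemanager.ha.rm-ids in yarn-site.xml), the RM states can be confirmed with yarn rmadmin:

#rm1/rm2 are the assumed ResourceManager IDs
yarn rmadmin -getServiceState rm1   #expected: active
yarn rmadmin -getServiceState rm2   #expected: standby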

    5. Cluster verification

          (1) Verify the NameNodes in a browser

http://centos-aaron-ha-01:50070
NameNode 'centos-aaron-ha-01:9000' (active)
http://centos-aaron-ha-02:50070
NameNode 'centos-aaron-ha-02:9000' (standby)
http://centos-aaron-ha-03:50070
NameNode 'centos-aaron-ha-03:9000' (active)
http://centos-aaron-ha-04:50070
NameNode 'centos-aaron-ha-04:9000' (standby)

          (2) Verify HDFS HA federation

#List the root of the federated namespace; you should see the two mount points /bi and /dt
hadoop fs -ls /
#Create a directory
hadoop fs -mkdir -p /bi/wd
#Upload a file into it
hadoop fs -put ad.txt /bi/wd
hadoop fs -ls /bi/wd
Then kill the active NameNode (here the one on centos-aaron-ha-01)
kill -9 <pid of NN>
Open http://centos-aaron-ha-02:50070 in a browser:
NameNode 'centos-aaron-ha-02:9000' (active)
The NameNode on centos-aaron-ha-02 has now become active.
Run the command again:
hadoop fs -ls /bi/wd
Found 1 items
-rw-r--r--   3 hadoop supergroup         44 2019-01-11 18:20 /bi/wd/ad.txt
The file uploaded earlier is still there!
Manually restart the NameNode that was killed:
hadoop-daemon.sh start namenode
Open http://centos-aaron-ha-01:50070 in a browser:
NameNode 'centos-aaron-ha-01:9000' (standby)

**The other NameNode pair (dt) is tested the same way

          (3) Verify YARN: run the WordCount program from the demos that ship with Hadoop:

hadoop jar /home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar wordcount viewfs:///bi/wd viewfs:///bi/wdout

    6. Run output

[hadoop@centos-aaron-ha-01 ~]$ hadoop jar /home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar wordcount viewfs:///bi/wd viewfs:///bi/wdout
19/01/11 19:02:37 INFO input.FileInputFormat: Total input files to process : 1
19/01/11 19:02:37 INFO mapreduce.JobSubmitter: number of splits:1
19/01/11 19:02:37 INFO Configuration.deprecation: yarn.resourcemanager.zk-address is deprecated. Instead, use hadoop.zk.address
19/01/11 19:02:37 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
19/01/11 19:02:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1547204450523_0001
19/01/11 19:02:38 INFO impl.YarnClientImpl: Submitted application application_1547204450523_0001
19/01/11 19:02:38 INFO mapreduce.Job: The url to track the job: http://centos-aaron-ha-03:8088/proxy/application_1547204450523_0001/
19/01/11 19:02:38 INFO mapreduce.Job: Running job: job_1547204450523_0001
19/01/11 19:02:51 INFO mapreduce.Job: Job job_1547204450523_0001 running in uber mode : false
19/01/11 19:02:51 INFO mapreduce.Job:  map 0% reduce 0%
19/01/11 19:03:03 INFO mapreduce.Job:  map 100% reduce 0%
19/01/11 19:03:16 INFO mapreduce.Job:  map 100% reduce 100%
19/01/11 19:03:16 INFO mapreduce.Job: Job job_1547204450523_0001 completed successfully
19/01/11 19:03:16 INFO mapreduce.Job: Counters: 54
        File System Counters
                FILE: Number of bytes read=98
                FILE: Number of bytes written=405013
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=129
                HDFS: Number of bytes written=60
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
                VIEWFS: Number of bytes read=0
                VIEWFS: Number of bytes written=0
                VIEWFS: Number of read operations=0
                VIEWFS: Number of large read operations=0
                VIEWFS: Number of write operations=0
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=9134
                Total time spent by all reduces in occupied slots (ms)=8261
                Total time spent by all map tasks (ms)=9134
                Total time spent by all reduce tasks (ms)=8261
                Total vcore-milliseconds taken by all map tasks=9134
                Total vcore-milliseconds taken by all reduce tasks=8261
                Total megabyte-milliseconds taken by all map tasks=9353216
                Total megabyte-milliseconds taken by all reduce tasks=8459264
        Map-Reduce Framework
                Map input records=5
                Map output records=8
                Map output bytes=76
                Map output materialized bytes=98
                Input split bytes=85
                Combine input records=8
                Combine output records=8
                Reduce input groups=8
                Reduce shuffle bytes=98
                Reduce input records=8
                Reduce output records=8
                Spilled Records=16
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=595
                CPU time spent (ms)=3070
                Physical memory (bytes) snapshot=351039488
                Virtual memory (bytes) snapshot=4140134400
                Total committed heap usage (bytes)=139972608
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=0
        File Output Format Counters 
                Bytes Written=0
[hadoop@centos-aaron-ha-01 ~]$ 
[hadoop@centos-aaron-ha-01 ~]$ hdfs dfs -ls  viewfs:///bi/wdout
Found 2 items
-rw-r--r--   3 hadoop supergroup          0 2019-01-11 19:03 viewfs:///bi/wdout/_SUCCESS
-rw-r--r--   3 hadoop supergroup         60 2019-01-11 19:03 viewfs:///bi/wdout/part-r-00000
[hadoop@centos-aaron-ha-01 ~]$ hdfs dfs -cat  viewfs:///bi/wdout/part-r-00000
ddfsZZ  1
df      1
dsfsd   1
hello   1
sdfdsf  1
sdfsd   1
sdss    1
xxx     1
[hadoop@centos-aaron-ha-01 ~]$

    7. Summary

           A couple of problems came up while building this federated cluster. First, a MapReduce job failed with an error about a missing temp directory; that was because the /tmp mount-table entry was missing from core-site.xml. Second, after initializing HDFS, the cluster IDs shown in the web UI did not match; that was caused by misspelling the clusterid option when running namenode -format. In general, reading the logs when something goes wrong is enough to track these issues down.

           That's all for this post. If you found it useful, please give it a like; if you're interested in the author's other big-data articles, follow the blog and feel free to get in touch any time.
