Rack Awareness in Practice
                                        Author: Yin Zhengjie

Copyright notice: this is an original work; reproduction is prohibited, and violations will be pursued legally.

 

 

 

I. Network Topology and Rack Awareness

1>. Network topology overview

  You might ask: in a local network, what does it mean for two nodes to be "close to each other"? In large-scale data processing, the main limiting factor is the rate at which data can be transferred between nodes; bandwidth is scarce. The idea here is to use the bandwidth between two nodes as a measure of the distance between them.

  Measuring the bandwidth between nodes directly is hard to do in practice (it requires a stable cluster, and the number of node pairs grows with the square of the number of nodes), so Hadoop takes a simpler approach: the network is represented as a tree, and the distance between two nodes is the sum of their distances to their closest common ancestor. Levels in the tree are not predefined, but it is common to define levels for the data center, the rack, and the node. The idea is that the available bandwidth decreases in turn for each of the following scenarios:

    (1) Processes on the same node
    (2) Different nodes on the same rack
    (3) Nodes on different racks in the same data center
    (4) Nodes in different data centers

  For example, imagine a node n1 on rack r1 in data center d1, which can be represented as "/d1/r1/n1". Using this notation, here are the distances for the four scenarios:

    (1) distance(/d1/r1/n1, /d1/r1/n1) = 0 (processes on the same node)
    (2) distance(/d1/r1/n1, /d1/r1/n2) = 2 (different nodes on the same rack)
    (3) distance(/d1/r1/n1, /d1/r2/n3) = 4 (nodes on different racks in the same data center)
    (4) distance(/d1/r1/n1, /d2/r3/n4) = 6 (nodes in different data centers)
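The four distances above can be reproduced with a short sketch. This illustrates the tree-distance idea only; it is not Hadoop's actual NetworkTopology implementation:

```python
def distance(a, b):
    """Distance between two nodes given topology paths like '/d1/r1/n1'."""
    pa = a.strip("/").split("/")
    pb = b.strip("/").split("/")
    # Length of the common prefix = depth of the closest common ancestor.
    common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        common += 1
    # Sum of hops from each node up to the common ancestor.
    return (len(pa) - common) + (len(pb) - common)

print(distance("/d1/r1/n1", "/d1/r1/n1"))  # 0: same node
print(distance("/d1/r1/n1", "/d1/r1/n2"))  # 2: same rack
print(distance("/d1/r1/n1", "/d1/r2/n3"))  # 4: same data center, different racks
print(distance("/d1/r1/n1", "/d2/r3/n4"))  # 6: different data centers
```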
  Finally, we must be aware that Hadoop cannot discover your network topology automatically; it needs some help (certain Java-defined interfaces have to be implemented). By default, however, it assumes the network is flat, with a single level, or in other words that all nodes are on the same rack in the same data center. For a small cluster (say, fewer than 20 nodes, all in one rack) this may actually be the case, and no further configuration is needed.

2>. Rack awareness overview (replica placement policy)

  Both HDFS and YARN support rack awareness (which is really awareness of switches): the notion that the nodes in a cluster have positions relative to one another. HDFS uses rack awareness to replicate each block across different racks for fault tolerance, so that data remains accessible even if a network switch fails or an entire rack is taken offline.

  The ResourceManager uses rack awareness to optimize resource allocation so that clients access the nearest copy of the data whenever possible. The NameNode and ResourceManager daemons obtain rack information by calling an API that maps DNS names to rack IDs.

  With the default replication factor of three, blocks are generally stored on two racks rather than three, which reduces the network bandwidth consumed when data is read.

  The figure below shows how Hadoop uses rack awareness across racks to provide cluster redundancy. Configuring multiple racks is beneficial because network traffic between nodes on the same rack is cheaper than traffic between nodes on different racks.

  If multiple racks are configured, the NameNode will try to replicate data across several racks, providing greater fault tolerance.
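As a rough illustration of the two-racks-for-three-replicas behavior, here is a simplified sketch of the default placement idea: first replica on the writer's node, second on a node in a different rack, third on a different node in the second replica's rack. This is not HDFS's actual BlockPlacementPolicyDefault code; it ignores load and free-space checks and assumes at least two racks, each with at least two nodes:

```python
import random

def place_replicas(writer, nodes_by_rack):
    """Pick three replica locations following the default-policy sketch.

    writer: (rack, node) of the writing client, or None if it is not a DataNode.
    nodes_by_rack: dict mapping rack name -> list of node names.
    """
    # 1st replica: on the writer's node if possible, else a random node.
    if writer is not None:
        first_rack, first = writer
    else:
        first_rack = random.choice(list(nodes_by_rack))
        first = random.choice(nodes_by_rack[first_rack])
    # 2nd replica: on a node in a different rack (off-rack).
    second_rack = random.choice([r for r in nodes_by_rack if r != first_rack])
    second = random.choice(nodes_by_rack[second_rack])
    # 3rd replica: a different node on the same rack as the second replica.
    third = random.choice([n for n in nodes_by_rack[second_rack] if n != second])
    return [(first_rack, first), (second_rack, second), (second_rack, third)]

racks = {"/rack001": ["n101", "n102"], "/rack002": ["n103", "n104"]}
print(place_replicas(("/rack001", "n101"), racks))
```

Note that the three replicas always land on exactly two racks, matching the bandwidth-saving behavior described above.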

 

3>. By default, all nodes in a Hadoop cluster are on the same rack, "/default-rack"

It is common to arrange the nodes of a Hadoop cluster into multiple racks. By default, however, Hadoop treats every node as belonging to a single rack, even when the nodes actually span several racks.

As shown below on my freshly built test cluster, all nodes belong to the same rack by default, namely "/default-rack".

[root@hadoop101.yinzhengjie.com ~]# hdfs dfsadmin -printTopology
Rack: /default-rack
   172.200.6.102:50010 (hadoop102.yinzhengjie.com)
   172.200.6.103:50010 (hadoop103.yinzhengjie.com)
   172.200.6.104:50010 (hadoop104.yinzhengjie.com)

[root@hadoop101.yinzhengjie.com ~]# 

 

II. Configuring Rack Awareness with a Script

1>. How to configure rack awareness in the cluster

  We must be aware that Hadoop cannot discover your network topology automatically. It needs some help, either by implementing certain Java-defined interfaces or by defining the topology with a script.

  Hadoop supports a script-based way of configuring the cluster's rack awareness. The cluster uses this script to determine each node's position in a rack. The script relies on a file-based control file, which you edit to add the nodes (IP addresses) in the cluster.

  When the script executes, Hadoop obtains a list of rack names from the IP addresses listed in the mapping file. For rack awareness to take effect, you must also modify Hadoop's core configuration file (core-site.xml).

2>. Configure Hadoop's core file on the NameNode (note the "net.topology.script.file.name" parameter)

[root@hadoop101.yinzhengjie.com ~]# vim ${HADOOP_HOME}/etc/hadoop/core-site.xml
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# cat ${HADOOP_HOME}/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
       Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

    <!-- core-site.xml contains the values of core Hadoop properties; you can use this file to override the default values in core-default.xml. -->

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop101.yinzhengjie.com:9000</value>
        <description>Specifies the name of the default filesystem (HDFS here) along with the host and port of the NameNode service; this property specifies the URI of the cluster's NameNode. DataNodes use this URI to register with the NameNode, which lets applications access data stored on the DataNodes. Clients also use this URI to retrieve the locations of blocks in HDFS. Port 9000 is typical, but a different port can be used if you prefer (in Hadoop 1.x this property was named "fs.default.name", which has officially been deprecated).</description>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>yinzhengjie</value>
        <description>Specifies the default username for the HDFS web UI; the default is "dr.who" (in Hadoop 1.x this property was named "dfs.web.ugi", which has officially been deprecated).</description>
    </property>

    <!-- The two parameters above are enough to run the cluster, but including the two below is a better choice at this stage. -->

    <property>
        <name>fs.trash.interval</name>
        <value>4320</value>
        <description>Specifies the number of minutes after which a trash checkpoint is deleted; if 0 (the official default), the trash feature is disabled. I set 4320 minutes (3 days) here: 72 hours after a file is deleted, Hadoop permanently removes it from HDFS storage.</description>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/yinzhengjie/data/hadoop/fully-mode/hdfs</value>
        <description>Specifies the base temporary directory on the local filesystem and in HDFS; other configuration files reference this parameter (for example, hdfs-site.xml references the "hadoop.tmp.dir" variable). The default is "/tmp/hadoop-${user.name}"; setting this to a directory outside "/tmp" is recommended, because some environments periodically run scripts that clean out everything under "/tmp".</description>
    </property>

    <!-- The following parameter configures rack awareness; in production, enabling it is recommended if the cluster spans more than two racks. -->

    <property>
        <name>net.topology.script.file.name</name>
        <value>/yinzhengjie/softwares/hadoop/etc/hadoop/conf/toplogy.py</value>
        <description>Path to the rack awareness topology script</description>
    </property>
</configuration>
[root@hadoop101.yinzhengjie.com ~]# 

3>. Edit the host-to-rack mapping

[root@hadoop101.yinzhengjie.com ~]# vim /yinzhengjie/softwares/hadoop/etc/hadoop/conf/host-rack.txt
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# cat /yinzhengjie/softwares/hadoop/etc/hadoop/conf/host-rack.txt      # In production, name racks after your actual servers and rack numbers; these names are just for test convenience.
172.200.6.101,/rack001
172.200.6.102,/rack001
172.200.6.103,/rack002
172.200.6.104,/rack002
172.200.6.105,/rack003
[root@hadoop101.yinzhengjie.com ~]# 
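Because the topology script falls back to a default rack on any lookup miss, a malformed line in host-rack.txt can quietly place a node in the wrong rack. Here is a small hypothetical helper (not part of Hadoop) for sanity-checking the mapping file's "IP,/rackNNN" format before wiring it into the cluster:

```python
def check_mapping(path):
    """Return a list of (line_number, problem) for a host-rack mapping file."""
    problems = []
    with open(path) as f:
        for lineno, raw in enumerate(f, start=1):
            line = raw.strip()
            if not line:
                continue  # blank lines are harmless
            parts = line.split(",")
            if len(parts) != 2:
                problems.append((lineno, "expected exactly one comma"))
            elif not parts[1].startswith("/"):
                problems.append((lineno, "rack name must start with '/'"))
    return problems
```

Running `check_mapping("/yinzhengjie/softwares/hadoop/etc/hadoop/conf/host-rack.txt")` on a well-formed file returns an empty list.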

4>. Write the Python script and make it executable

[root@hadoop101.yinzhengjie.com ~]# mkdir -v ${HADOOP_HOME}/etc/hadoop/conf              # Create a directory for the rack awareness configuration files
mkdir: created directory ‘/yinzhengjie/softwares/hadoop/etc/hadoop/conf’
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# vim ${HADOOP_HOME}/etc/hadoop/conf/toplogy.py          # Edit the script
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# cat ${HADOOP_HOME}/etc/hadoop/conf/toplogy.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie

import sys

DEFAULT_RACK = "/prod/default-rack"

HOST_RACK_FILE = "/yinzhengjie/softwares/hadoop/etc/hadoop/conf/host-rack.txt"

host_rack = {}

# Build the host -> rack lookup table from the mapping file.
for line in open(HOST_RACK_FILE):
    line = line.strip()
    if not line:
        continue  # skip blank lines
    (host, rack) = line.split(",")
    host_rack[host] = rack

# Hadoop passes one or more IPs/hostnames as arguments; print one rack per host.
for host in sys.argv[1:]:
    if host in host_rack:
        print(host_rack[host])
    else:
        print(DEFAULT_RACK)
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# ll /yinzhengjie/softwares/hadoop/etc/hadoop/conf/toplogy.py
-rw-r--r-- 1 root root 463 Aug 13 18:33 /yinzhengjie/softwares/hadoop/etc/hadoop/conf/toplogy.py
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# chmod +x /yinzhengjie/softwares/hadoop/etc/hadoop/conf/toplogy.py      # Be sure to add execute permission, or starting the Hadoop cluster will fail with a permission denied error.
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# ll /yinzhengjie/softwares/hadoop/etc/hadoop/conf/toplogy.py
-rwxr-xr-x 1 root root 463 Aug 13 18:33 /yinzhengjie/softwares/hadoop/etc/hadoop/conf/toplogy.py
[root@hadoop101.yinzhengjie.com ~]# 
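Before restarting HDFS, it can be worth exercising the lookup logic by hand. The sketch below inlines the same mapping as host-rack.txt so it runs standalone, mimicking what the script prints when Hadoop passes it a list of addresses:

```python
DEFAULT_RACK = "/prod/default-rack"

# Inlined copy of host-rack.txt for a standalone check.
host_rack = {
    "172.200.6.101": "/rack001",
    "172.200.6.102": "/rack001",
    "172.200.6.103": "/rack002",
    "172.200.6.104": "/rack002",
    "172.200.6.105": "/rack003",
}

def lookup(hosts):
    """Mimic the topology script: one rack name per requested host."""
    return [host_rack.get(h, DEFAULT_RACK) for h in hosts]

print(lookup(["172.200.6.102", "172.200.6.104", "10.0.0.9"]))
# The unknown IP falls back to DEFAULT_RACK.
```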

5>. Restart the HDFS cluster (no need to distribute the files to the DataNodes) and watch the log output

[root@hadoop101.yinzhengjie.com ~]# manage-hdfs.sh restart
hadoop101.yinzhengjie.com | CHANGED | rc=0 >>
stopping namenode
hadoop105.yinzhengjie.com | CHANGED | rc=0 >>
stopping secondarynamenode
hadoop104.yinzhengjie.com | CHANGED | rc=0 >>
stopping datanode
hadoop103.yinzhengjie.com | CHANGED | rc=0 >>
stopping datanode
hadoop102.yinzhengjie.com | CHANGED | rc=0 >>
stopping datanode
Stoping HDFS:                                              [  OK  ]
hadoop101.yinzhengjie.com | CHANGED | rc=0 >>
starting namenode, logging to /yinzhengjie/softwares/hadoop-2.10.0-fully-mode/logs/hadoop-root-namenode-hadoop101.yinzhengjie.com.out
hadoop105.yinzhengjie.com | CHANGED | rc=0 >>
starting secondarynamenode, logging to /yinzhengjie/softwares/hadoop/logs/hadoop-root-secondarynamenode-hadoop105.yinzhengjie.com.out
hadoop102.yinzhengjie.com | CHANGED | rc=0 >>
starting datanode, logging to /yinzhengjie/softwares/hadoop/logs/hadoop-root-datanode-hadoop102.yinzhengjie.com.out
hadoop104.yinzhengjie.com | CHANGED | rc=0 >>
starting datanode, logging to /yinzhengjie/softwares/hadoop/logs/hadoop-root-datanode-hadoop104.yinzhengjie.com.out
hadoop103.yinzhengjie.com | CHANGED | rc=0 >>
starting datanode, logging to /yinzhengjie/softwares/hadoop/logs/hadoop-root-datanode-hadoop103.yinzhengjie.com.out
Starting HDFS:                                             [  OK  ]
[root@hadoop101.yinzhengjie.com ~]# 

6>. View the cluster's rack information

[root@hadoop101.yinzhengjie.com ~]# hdfs dfsadmin -printTopology
Rack: /rack001
   172.200.6.102:50010 (hadoop102.yinzhengjie.com)

Rack: /rack002
   172.200.6.103:50010 (hadoop103.yinzhengjie.com)
   172.200.6.104:50010 (hadoop104.yinzhengjie.com)

[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# hdfs dfsadmin -report
Configured Capacity: 24740939366400 (22.50 TB)
Present Capacity: 24740939366400 (22.50 TB)
DFS Remaining: 24740939202560 (22.50 TB)
DFS Used: 163840 (160 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (3):

Name: 172.200.6.102:50010 (hadoop102.yinzhengjie.com)
Hostname: hadoop102.yinzhengjie.com
Rack: /rack001
Decommission Status : Normal
Configured Capacity: 8246979788800 (7.50 TB)
DFS Used: 40960 (40 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 8246979747840 (7.50 TB)
DFS Used%: 0.00%
DFS Remaining%: 100.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Aug 13 18:44:52 CST 2020
Last Block Report: Thu Aug 13 18:41:01 CST 2020


Name: 172.200.6.103:50010 (hadoop103.yinzhengjie.com)
Hostname: hadoop103.yinzhengjie.com
Rack: /rack002
Decommission Status : Normal
Configured Capacity: 8246979788800 (7.50 TB)
DFS Used: 61440 (60 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 8246979727360 (7.50 TB)
DFS Used%: 0.00%
DFS Remaining%: 100.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Aug 13 18:44:52 CST 2020
Last Block Report: Thu Aug 13 18:41:01 CST 2020


Name: 172.200.6.104:50010 (hadoop104.yinzhengjie.com)
Hostname: hadoop104.yinzhengjie.com
Rack: /rack002
Decommission Status : Normal
Configured Capacity: 8246979788800 (7.50 TB)
DFS Used: 61440 (60 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 8246979727360 (7.50 TB)
DFS Used%: 0.00%
DFS Remaining%: 100.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Aug 13 18:44:52 CST 2020
Last Block Report: Thu Aug 13 18:41:01 CST 2020


[root@hadoop101.yinzhengjie.com ~]#  

 

III. Configuring Rack Awareness with Custom Code

  Recommended reading:
    https://www.cnblogs.com/yinzhengjie/p/9142230.html