An Analysis of How the NameNode and SecondaryNameNode Work


                                     Author: Yin Zhengjie

Copyright notice: this is an original work. Reproduction is prohibited; violators will be held legally responsible.

 

 

I. Where is the NameNode's metadata stored?

1>. First, suppose the metadata were stored on the NameNode's disk. Because it is constantly accessed at random and must serve client requests, that would inevitably be far too slow, so the metadata has to live in memory. But if it lived only in memory, a power failure would lose it and the whole cluster would stop working. Hence the FsImage, a backup of the metadata kept on disk.

2>. That in turn raises a new problem: if the FsImage were updated every time the in-memory metadata changes, efficiency would again be too low; but if it is not updated, consistency problems arise, and a NameNode power failure would lose data. Therefore the Edits file is introduced (append-only, hence very efficient). Whenever metadata is updated or added, the change is applied to the in-memory metadata and appended to Edits. Then, if the NameNode loses power, the metadata can be reconstructed by merging FsImage and Edits.

3>. However, if operations keep being appended to Edits for a long time, the file grows too large, efficiency drops, and recovering the metadata after a power failure takes too long. So FsImage and Edits need to be merged periodically, and if the NameNode itself did this merging, it would again be too slow. Hence a new node, the SecondaryNameNode, is introduced and dedicated to merging FsImage and Edits. Both kinds of files can be seen directly on disk, as sketched below.
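A minimal sketch of how to look at the persisted metadata files, assuming the dfs.namenode.name.dir used later in this article (/data/hadoop/hdfs/dfs/name):

# List the persisted metadata files (sketch; the path is an assumption taken from this article's config)
ls /data/hadoop/hdfs/dfs/name/current/
# fsimage_N           <- checkpoint of the metadata up to transaction N
# edits_M-N           <- append-only log of the operations with transaction IDs M..N
# edits_inprogress_N  <- the edit log currently being written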

 

II. How the NameNode and SecondaryNameNode work

1>. Overview of how the NameNode and SecondaryNameNode work

Phase 1: NameNode startup
  1>. On the very first start after formatting, the NameNode creates the Fsimage and Edits files. On any later start, it simply loads the edit log and the image file into memory.
  2>. A client sends the NameNode a request that adds, deletes, or modifies metadata.
  3>. The NameNode records the operation in the edit log and rolls the log.
  4>. The NameNode applies the add/delete/modify to the metadata in memory.


Phase 2: SecondaryNameNode at work
  1>. The SecondaryNameNode asks the NameNode whether a checkpoint is needed, and directly brings back the NameNode's answer.
  2>. The SecondaryNameNode requests that a checkpoint be performed.
  3>. The NameNode rolls the Edits log it is currently writing.
  4>. The pre-roll edit logs and the image file are copied to the SecondaryNameNode.
  5>. The SecondaryNameNode loads the edit logs and the image file into memory and merges them.
  6>. A new image file, fsimage.chkpoint, is generated.
  7>. fsimage.chkpoint is copied to the NameNode.
  8>. The NameNode renames fsimage.chkpoint to fsimage.
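If you do not want to wait for either trigger, a checkpoint can also be forced by hand. A hedged sketch (assumes it is run on the SecondaryNameNode host while the regular 2NN daemon is not holding the directory lock; the force flag checkpoints even when neither trigger condition is met):

# Force an immediate checkpoint (sketch)
hdfs secondarynamenode -checkpoint force
# Afterwards a new fsimage_N shows up in both the 2NN's and the NameNode's current/ directory.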

2>. The NameNode and SecondaryNameNode mechanism in detail

Fsimage:
    The file formed by serializing the metadata held in the NameNode's memory.
Edits:
    A record of every step a client takes to update the metadata (the metadata can be recomputed by replaying Edits).


If you understood the figure above, you can skip this passage; if not, read the following carefully:
  1>. When the NameNode starts, it first rolls Edits and creates an empty edits.inprogress, then loads Edits and Fsimage into memory; at that point the NameNode's memory holds the up-to-date metadata.
  2>. The client starts sending the NameNode requests that add, delete, or modify metadata. These operations are first recorded in edits.inprogress (metadata queries are not recorded in Edits, because a query does not change the metadata); if the NameNode crashes at this point, the metadata is read back from Edits after a restart. The NameNode then applies the add/delete/modify operations to the metadata in memory.
  3>. Because Edits records more and more operations, the file keeps growing, which makes the NameNode slow to load it at startup, so Edits and Fsimage have to be merged (merging means loading Edits and Fsimage into memory, replaying the operations in Edits one by one, and finally producing a new Fsimage).
  4>. The SecondaryNameNode's job is to help the NameNode with this merging of Edits and Fsimage.
  5>. The SecondaryNameNode first asks the NameNode whether a checkpoint is needed (a checkpoint fires when either of two conditions is met: the scheduled time has arrived, or Edits has filled up with data), and directly brings back the NameNode's answer.
  6>. The SecondaryNameNode performs the checkpoint: it first has the NameNode roll Edits and create an empty edits.inprogress. Rolling Edits marks a cut-off point, so that from then on every new operation is written to edits.inprogress. The not-yet-merged Edits files and the Fsimage are copied to the SecondaryNameNode's local disk, loaded into memory, and merged into fsimage.chkpoint, which is then copied back to the NameNode and renamed to Fsimage, replacing the old Fsimage.
  7>. At startup the NameNode then only needs to load the not-yet-merged Edits plus the Fsimage, because the metadata from the already-merged Edits is recorded in the Fsimage.

   For a fully distributed Hadoop deployment, see: Apache Hadoop 2.9.2 Fully Distributed Deployment (HDFS)

3>. Checkpoint timing parameters

[hdfs-default.xml]

<configuration>
            .....
    <property>
          <name>dfs.namenode.checkpoint.period</name>
          <value>3600</value>
    </property>
            .....
</configuration>
Normally, the SecondaryNameNode performs a checkpoint once an hour.
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value>
<description>Number of operations (transactions)</description>
</property>

<property>
  <name>dfs.namenode.checkpoint.check.period</name>
  <value>60</value>
<description>Check the operation count once a minute</description>
</property>
By default the operation count is checked once a minute; when it reaches one million, the SecondaryNameNode performs a checkpoint.
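To double-check which values are actually in effect (the defaults above or your own overrides in hdfs-site.xml), they can be read back with hdfs getconf; a small sketch:

# Read the effective checkpoint settings from the configuration (sketch)
hdfs getconf -confKey dfs.namenode.checkpoint.period        #seconds between checkpoints, default 3600
hdfs getconf -confKey dfs.namenode.checkpoint.txns          #transaction-count trigger, default 1000000
hdfs getconf -confKey dfs.namenode.checkpoint.check.period  #how often the count is polled, default 60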

 

III. A closer look at Fsimage and Edits

1>. Fsimage and Edits concepts

   After the NameNode is formatted, the following files are produced in the data directory we configured ("${hadoop.tmp.dir}/dfs/name/current/"):

1. The Fsimage file
    A permanent checkpoint of the HDFS filesystem metadata, containing the serialized inode information for every directory and file in HDFS.
  
2. The Edits file
    Stores the trail of every update operation applied to HDFS; every write performed by a filesystem client is first recorded in the Edits file.

3. The seen_txid file
    Holds a single number: the transaction ID of the last edits_ file.

4. VERSION
    Records the cluster's version information, including the namespace ID, cluster ID, the cTime property, the block pool ID, the storage type, and so on. Don't worry, the NameNode version section below explains this file's contents in detail!


Friendly reminder:
    Every time the NameNode starts, it reads the Fsimage file into memory and applies the updates in Edits, which keeps the in-memory metadata up to date and in sync; you can think of NameNode startup as a merge of the Fsimage and Edits files.
    However, once you format the NameNode, any previously existing data faces being lost. The first startup after a format does not load the edit log; we can see the corresponding records in the NameNode web UI!

2>. Viewing the image (fsimage) file with the oiv command

[root@node101.yinzhengjie.org.cn ~]# hdfs oiv
Usage: bin/hdfs oiv [OPTIONS] -i INPUTFILE -o OUTPUTFILE
Offline Image Viewer
View a Hadoop fsimage INPUTFILE using the specified PROCESSOR,
saving the results in OUTPUTFILE.

The oiv utility will attempt to parse correctly formed image files
and will abort fail with mal-formed image files.

The tool works offline and does not require a running cluster in
order to process an image file.

The following image processors are available:
  * XML: This processor creates an XML document with all elements of
    the fsimage enumerated, suitable for further analysis by XML
    tools.
  * ReverseXML: This processor takes an XML file and creates a
    binary fsimage containing the same elements.
  * FileDistribution: This processor analyzes the file size
    distribution in the image.
    -maxSize specifies the range [0, maxSize] of file sizes to be
     analyzed (128GB by default).
    -step defines the granularity of the distribution. (2MB by default)
    -format formats the output result in a human-readable fashion
     rather than a number of bytes. (false by default)
  * Web: Run a viewer to expose read-only WebHDFS API.
    -addr specifies the address to listen. (localhost:5978 by default)
  * Delimited (experimental): Generate a text file with all of the elements common
    to both inodes and inodes-under-construction, separated by a
    delimiter. The default delimiter is \t, though this may be
    changed via the -delimiter argument.

Required command line arguments:
-i,--inputFile <arg>   FSImage or XML file to process.

Optional command line arguments:
-o,--outputFile <arg>  Name of output file. If the specified
                       file exists, it will be overwritten.
                       (output to stdout by default)
                       If the input file was an XML file, we
                       will also create an <outputFile>.md5 file.
-p,--processor <arg>   Select which type of processor to apply
                       against image file. (XML|FileDistribution|
                       ReverseXML|Web|Delimited)
                       The default is Web.
-delimiter <arg>       Delimiting string to use with Delimited processor.  
-t,--temp <arg>        Use temporary dir to cache intermediate result to generate
                       Delimited outputs. If not set, Delimited processor constructs
                       the namespace in memory before outputting text.
-h,--help              Display usage information and exit

[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs oiv                          #View the help information for the oiv command
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/
total 1120
-rw-r--r--. 1 root root      42 Apr 11 18:54 edits_0000000000000000001-0000000000000000002
-rw-r--r--. 1 root root      42 Apr 11 19:54 edits_0000000000000000003-0000000000000000004
-rw-r--r--. 1 root root      42 Apr 11 20:54 edits_0000000000000000005-0000000000000000006
-rw-r--r--. 1 root root      42 Apr 11 21:54 edits_0000000000000000007-0000000000000000008
-rw-r--r--. 1 root root      42 Apr 11 22:54 edits_0000000000000000009-0000000000000000010
-rw-r--r--. 1 root root      42 Apr 11 23:54 edits_0000000000000000011-0000000000000000012
-rw-r--r--. 1 root root      42 Apr 12 00:54 edits_0000000000000000013-0000000000000000014
-rw-r--r--. 1 root root      42 Apr 12 01:54 edits_0000000000000000015-0000000000000000016
-rw-r--r--. 1 root root      42 Apr 12 02:54 edits_0000000000000000017-0000000000000000018
-rw-r--r--. 1 root root      42 Apr 12 03:54 edits_0000000000000000019-0000000000000000020
-rw-r--r--. 1 root root      42 Apr 12 04:54 edits_0000000000000000021-0000000000000000022
-rw-r--r--. 1 root root      42 Apr 12 05:54 edits_0000000000000000023-0000000000000000024
-rw-r--r--. 1 root root      42 Apr 12 06:54 edits_0000000000000000025-0000000000000000026
-rw-r--r--. 1 root root      42 Apr 12 07:54 edits_0000000000000000027-0000000000000000028
-rw-r--r--. 1 root root      42 Apr 12 08:54 edits_0000000000000000029-0000000000000000030
-rw-r--r--. 1 root root      42 Apr 12 09:54 edits_0000000000000000031-0000000000000000032
-rw-r--r--. 1 root root      42 Apr 12 10:54 edits_0000000000000000033-0000000000000000034
-rw-r--r--. 1 root root      42 Apr 12 11:54 edits_0000000000000000035-0000000000000000036
-rw-r--r--. 1 root root 1048576 Apr 12 11:54 edits_inprogress_0000000000000000037
-rw-r--r--. 1 root root     323 Apr 12 10:54 fsimage_0000000000000000034
-rw-r--r--. 1 root root      62 Apr 12 10:54 fsimage_0000000000000000034.md5
-rw-r--r--. 1 root root     323 Apr 12 11:54 fsimage_0000000000000000036
-rw-r--r--. 1 root root      62 Apr 12 11:54 fsimage_0000000000000000036.md5
-rw-r--r--. 1 root root       3 Apr 12 11:54 seen_txid
-rw-r--r--. 1 root root     215 Apr 11 18:07 VERSION
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/
[root@node101.yinzhengjie.org.cn ~]# ll
total 0
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs oiv -p XML -i /data/hadoop/hdfs/dfs/name/current/fsimage_0000000000000000036 -o ./fsimage.xml
19/04/12 12:49:09 INFO offlineImageViewer.FSImageHandler: Loading 2 strings
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll
total 4
-rw-r--r--. 1 root root 1264 Apr 12 12:49 fsimage.xml
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs oiv -p XML -i /data/hadoop/hdfs/dfs/name/current/fsimage_0000000000000000036 -o ./fsimage.xml
[root@node101.yinzhengjie.org.cn ~]# cat fsimage.xml               
<?xml version="1.0"?>
<fsimage><version><layoutVersion>-63</layoutVersion><onDiskVersion>1</onDiskVersion><oivRevision>826afbeae31ca687bc2f8471dc841b66ed2c6704</oivRevision></version>
<NameSection><namespaceId>429640720</namespaceId><genstampV1>1000</genstampV1><genstampV2>1000</genstampV2><genstampV1Limit>0</genstampV1Limit><lastAllocatedBlockId>1073741824</lastAllocatedBlockId><txid>36</txid></NameSection>
<INodeSection><lastInodeId>16385</lastInodeId><numInodes>1</numInodes><inode><id>16385</id><type>DIRECTORY</type><name></name><mtime>0</mtime><permission>root:supergroup:0755</permission><nsquota>9223372036854775807</nsquota><dsquota>-1</dsquota></inode>
</INodeSection>
<INodeReferenceSection></INodeReferenceSection><SnapshotSection><snapshotCounter>0</snapshotCounter><numSnapshots>0</numSnapshots></SnapshotSection>
<INodeDirectorySection></INodeDirectorySection>
<FileUnderConstructionSection></FileUnderConstructionSection>
<SecretManagerSection><currentId>0</currentId><tokenSequenceNumber>0</tokenSequenceNumber><numDelegationKeys>0</numDelegationKeys><numTokens>0</numTokens></SecretManagerSection><CacheManagerSection><nextDirectiveId>1</nextDirectiveId><numDirectives>0</numDirectives><numPools>0</numPools></CacheManagerSection>
</fsimage>
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# sz fsimage.xml     #Viewed directly on Linux the readability is poor, so we can download the file, open it in a development tool, and format it with Eclipse or IDEA
rz Starting zmodem transfer. Press Ctrl+C to cancel. Transferring fsimage.xml... 100% 1 KB 1 KB/sec 00:00:01 0 Errors 
[root@node101.yinzhengjie.org.cn ~]#
[root@node101.yinzhengjie.org.cn ~]# sz fsimage.xml     #Viewed directly on Linux the readability is poor, so we can download the file, open it in a development tool, and format it with Eclipse or IDEA
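Besides the XML processor, the FileDistribution processor listed in the help output above can summarize file sizes straight from an fsimage. A hedged sketch against the same image file (the flag values are just illustrative):

# Summarize the file size distribution recorded in the image (sketch)
hdfs oiv -p FileDistribution -maxSize 134217728 -step 1048576 -i /data/hadoop/hdfs/dfs/name/current/fsimage_0000000000000000036 -o ./fsimage.dist
# fsimage.dist will contain one size bucket per -step bytes with the number of files in each bucket.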

3>. Viewing an edits file with oev

[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/
total 1124
-rw-r--r--. 1 root root      42 Apr 11 18:54 edits_0000000000000000001-0000000000000000002
-rw-r--r--. 1 root root      42 Apr 11 19:54 edits_0000000000000000003-0000000000000000004
-rw-r--r--. 1 root root      42 Apr 11 20:54 edits_0000000000000000005-0000000000000000006
-rw-r--r--. 1 root root      42 Apr 11 21:54 edits_0000000000000000007-0000000000000000008
-rw-r--r--. 1 root root      42 Apr 11 22:54 edits_0000000000000000009-0000000000000000010
-rw-r--r--. 1 root root      42 Apr 11 23:54 edits_0000000000000000011-0000000000000000012
-rw-r--r--. 1 root root      42 Apr 12 00:54 edits_0000000000000000013-0000000000000000014
-rw-r--r--. 1 root root      42 Apr 12 01:54 edits_0000000000000000015-0000000000000000016
-rw-r--r--. 1 root root      42 Apr 12 02:54 edits_0000000000000000017-0000000000000000018
-rw-r--r--. 1 root root      42 Apr 12 03:54 edits_0000000000000000019-0000000000000000020
-rw-r--r--. 1 root root      42 Apr 12 04:54 edits_0000000000000000021-0000000000000000022
-rw-r--r--. 1 root root      42 Apr 12 05:54 edits_0000000000000000023-0000000000000000024
-rw-r--r--. 1 root root      42 Apr 12 06:54 edits_0000000000000000025-0000000000000000026
-rw-r--r--. 1 root root      42 Apr 12 07:54 edits_0000000000000000027-0000000000000000028
-rw-r--r--. 1 root root      42 Apr 12 08:54 edits_0000000000000000029-0000000000000000030
-rw-r--r--. 1 root root      42 Apr 12 09:54 edits_0000000000000000031-0000000000000000032
-rw-r--r--. 1 root root      42 Apr 12 10:54 edits_0000000000000000033-0000000000000000034
-rw-r--r--. 1 root root      42 Apr 12 11:54 edits_0000000000000000035-0000000000000000036
-rw-r--r--. 1 root root      42 Apr 12 12:54 edits_0000000000000000037-0000000000000000038
-rw-r--r--. 1 root root 1048576 Apr 12 12:54 edits_inprogress_0000000000000000039
-rw-r--r--. 1 root root     323 Apr 12 11:54 fsimage_0000000000000000036
-rw-r--r--. 1 root root      62 Apr 12 11:54 fsimage_0000000000000000036.md5
-rw-r--r--. 1 root root     323 Apr 12 12:54 fsimage_0000000000000000038
-rw-r--r--. 1 root root      62 Apr 12 12:54 fsimage_0000000000000000038.md5
-rw-r--r--. 1 root root       3 Apr 12 12:54 seen_txid
-rw-r--r--. 1 root root     215 Apr 11 18:07 VERSION
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/
[root@node101.yinzhengjie.org.cn ~]# hdfs oev
Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
Offline edits viewer
Parse a Hadoop edits log file INPUT_FILE and save results
in OUTPUT_FILE.
Required command line arguments:
-i,--inputFile <arg>   edits file to process, xml (case
                       insensitive) extension means XML format,
                       any other filename means binary format.
                       XML/Binary format input file is not allowed
                       to be processed by the same type processor.
-o,--outputFile <arg>  Name of output file. If the specified
                       file exists, it will be overwritten,
                       format of the file is determined
                       by -p option

Optional command line arguments:
-p,--processor <arg>   Select which type of processor to apply
                       against image file, currently supported
                       processors are: binary (native binary format
                       that Hadoop uses), xml (default, XML
                       format), stats (prints statistics about
                       edits file)
-h,--help              Display usage information and exit
-f,--fix-txids         Renumber the transaction IDs in the input,
                       so that there are no gaps or invalid
                       transaction IDs.
-r,--recover           When reading binary edit logs, use recovery 
                       mode.  This will give you the chance to skip 
                       corrupt parts of the edit log.
-v,--verbose           More verbose output, prints the input and
                       output filenames, for processors that write
                       to a file, also output to screen. On large
                       image files this will dramatically increase
                       processing time (default is false).


Generic options supported are:
-conf <configuration file>        specify an application configuration file
-D <property=value>               define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>  specify a ResourceManager
-files <file1,...>                specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>               specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>          specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs oev                                                              #View how to use oev
[root@node101.yinzhengjie.org.cn ~]# ll
total 4
-rw-r--r--. 1 root root 1264 Apr 12 12:49 fsimage.xml
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs oev -p XML -i /data/hadoop/hdfs/dfs/name/current/edits_inprogress_0000000000000000039 -o ./edits.xml                                             
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll
total 8
-rw-r--r--. 1 root root 3124 Apr 12 13:31 edits.xml
-rw-r--r--. 1 root root 1264 Apr 12 12:49 fsimage.xml
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs oev -p XML -i /data/hadoop/hdfs/dfs/name/current/edits_inprogress_0000000000000000039 -o ./edits.xml     #Dump the edit log that is currently being written
[root@node101.yinzhengjie.org.cn ~]# cat edits.xml 
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-63</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>39</TXID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_MKDIR</OPCODE>
    <DATA>
      <TXID>40</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>16386</INODEID>
      <PATH>/yinzhengjie</PATH>
      <TIMESTAMP>1555046944497</TIMESTAMP>
      <PERMISSION_STATUS>
        <USERNAME>root</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>493</MODE>
      </PERMISSION_STATUS>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD</OPCODE>
    <DATA>
      <TXID>41</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>16387</INODEID>
      <PATH>/yinzhengjie/fsimage.xml._COPYING_</PATH>
      <REPLICATION>2</REPLICATION>
      <MTIME>1555046953857</MTIME>
      <ATIME>1555046953857</ATIME>
      <BLOCKSIZE>134217728</BLOCKSIZE>
      <CLIENT_NAME>DFSClient_NONMAPREDUCE_-152402097_1</CLIENT_NAME>
      <CLIENT_MACHINE>172.30.1.101</CLIENT_MACHINE>
      <OVERWRITE>true</OVERWRITE>
      <PERMISSION_STATUS>
        <USERNAME>root</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
      <RPC_CLIENTID>3020556e-7e1b-4883-bfad-6c3cea06e2b4</RPC_CLIENTID>
      <RPC_CALLID>3</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ALLOCATE_BLOCK_ID</OPCODE>
    <DATA>
      <TXID>42</TXID>
      <BLOCK_ID>1073741825</BLOCK_ID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
    <DATA>
      <TXID>43</TXID>
      <GENSTAMPV2>1001</GENSTAMPV2>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_ADD_BLOCK</OPCODE>
    <DATA>
      <TXID>44</TXID>
      <PATH>/yinzhengjie/fsimage.xml._COPYING_</PATH>
      <BLOCK>
        <BLOCK_ID>1073741825</BLOCK_ID>
        <NUM_BYTES>0</NUM_BYTES>
        <GENSTAMP>1001</GENSTAMP>
      </BLOCK>
      <RPC_CLIENTID></RPC_CLIENTID>
      <RPC_CALLID>-2</RPC_CALLID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_CLOSE</OPCODE>
    <DATA>
      <TXID>45</TXID>
      <LENGTH>0</LENGTH>
      <INODEID>0</INODEID>
      <PATH>/yinzhengjie/fsimage.xml._COPYING_</PATH>
      <REPLICATION>2</REPLICATION>
      <MTIME>1555046954585</MTIME>
      <ATIME>1555046953857</ATIME>
      <BLOCKSIZE>134217728</BLOCKSIZE>
      <CLIENT_NAME></CLIENT_NAME>
      <CLIENT_MACHINE></CLIENT_MACHINE>
      <OVERWRITE>false</OVERWRITE>
      <BLOCK>
        <BLOCK_ID>1073741825</BLOCK_ID>
        <NUM_BYTES>1264</NUM_BYTES>
        <GENSTAMP>1001</GENSTAMP>
      </BLOCK>
      <PERMISSION_STATUS>
        <USERNAME>root</USERNAME>
        <GROUPNAME>supergroup</GROUPNAME>
        <MODE>420</MODE>
      </PERMISSION_STATUS>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_RENAME_OLD</OPCODE>
    <DATA>
      <TXID>46</TXID>
      <LENGTH>0</LENGTH>
      <SRC>/yinzhengjie/fsimage.xml._COPYING_</SRC>
      <DST>/yinzhengjie/fsimage.xml</DST>
      <TIMESTAMP>1555046954593</TIMESTAMP>
      <RPC_CLIENTID>3020556e-7e1b-4883-bfad-6c3cea06e2b4</RPC_CLIENTID>
      <RPC_CALLID>9</RPC_CALLID>
    </DATA>
  </RECORD>
</EDITS>
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# cat edits.xml
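If you only need a summary rather than the full XML, the stats processor mentioned in the oev help above counts how many times each opcode occurs. A small sketch against the same file:

# Count the opcodes in the edit log instead of dumping it (sketch)
hdfs oev -p stats -i /data/hadoop/hdfs/dfs/name/current/edits_inprogress_0000000000000000039 -o ./edits.stats
# edits.stats lists each OPCODE (OP_MKDIR, OP_ADD, ...) together with its occurrence count.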

 

4>. Rolling the edit log (edits)

  Only two things actively trigger an edit-log roll: restarting the HDFS cluster, or rolling the edit log manually. Rolling it manually is simple; a single command does it:

[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/
total 3144
-rw-r--r--. 1 root root      42 Apr 12 18:53 edits_0000000000000000001-0000000000000000002
-rw-r--r--. 1 root root      42 Apr 12 18:53 edits_0000000000000000003-0000000000000000004
-rw-r--r--. 1 root root 1048576 Apr 12 18:53 edits_0000000000000000005-0000000000000000005
-rw-r--r--. 1 root root      42 Apr 12 18:56 edits_0000000000000000006-0000000000000000007
-rw-r--r--. 1 root root      42 Apr 12 18:56 edits_0000000000000000008-0000000000000000009
-rw-r--r--. 1 root root      42 Apr 12 18:57 edits_0000000000000000010-0000000000000000011
-rw-r--r--. 1 root root      42 Apr 12 18:57 edits_0000000000000000012-0000000000000000013
-rw-r--r--. 1 root root      42 Apr 12 18:57 edits_0000000000000000014-0000000000000000015
-rw-r--r--. 1 root root      42 Apr 12 18:58 edits_0000000000000000016-0000000000000000017
-rw-r--r--. 1 root root      42 Apr 12 18:58 edits_0000000000000000018-0000000000000000019
-rw-r--r--. 1 root root      42 Apr 12 18:59 edits_0000000000000000020-0000000000000000021
-rw-r--r--. 1 root root      42 Apr 12 18:59 edits_0000000000000000022-0000000000000000023
-rw-r--r--. 1 root root      42 Apr 12 19:00 edits_0000000000000000024-0000000000000000025
-rw-r--r--. 1 root root 1048576 Apr 12 19:00 edits_0000000000000000026-0000000000000000026
-rw-r--r--. 1 root root 1048576 Apr 12 19:00 edits_inprogress_0000000000000000027              ------->This is the edit log currently being written.
-rw-r--r--. 1 root root     323 Apr 12 19:00 fsimage_0000000000000000025
-rw-r--r--. 1 root root      62 Apr 12 19:00 fsimage_0000000000000000025.md5
-rw-r--r--. 1 root root     323 Apr 12 19:00 fsimage_0000000000000000026
-rw-r--r--. 1 root root      62 Apr 12 19:00 fsimage_0000000000000000026.md5
-rw-r--r--. 1 root root       3 Apr 12 19:00 seen_txid
-rw-r--r--. 1 root root     217 Apr 12 19:00 VERSION
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -rollEdits
Successfully rolled edit logs.
New segment starts at txid 31          -------->Note: this is telling us the ID of the newly generated edit log!
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/
total 3152
-rw-r--r--. 1 root root      42 Apr 12 18:53 edits_0000000000000000001-0000000000000000002
-rw-r--r--. 1 root root      42 Apr 12 18:53 edits_0000000000000000003-0000000000000000004
-rw-r--r--. 1 root root 1048576 Apr 12 18:53 edits_0000000000000000005-0000000000000000005
-rw-r--r--. 1 root root      42 Apr 12 18:56 edits_0000000000000000006-0000000000000000007
-rw-r--r--. 1 root root      42 Apr 12 18:56 edits_0000000000000000008-0000000000000000009
-rw-r--r--. 1 root root      42 Apr 12 18:57 edits_0000000000000000010-0000000000000000011
-rw-r--r--. 1 root root      42 Apr 12 18:57 edits_0000000000000000012-0000000000000000013
-rw-r--r--. 1 root root      42 Apr 12 18:57 edits_0000000000000000014-0000000000000000015
-rw-r--r--. 1 root root      42 Apr 12 18:58 edits_0000000000000000016-0000000000000000017
-rw-r--r--. 1 root root      42 Apr 12 18:58 edits_0000000000000000018-0000000000000000019
-rw-r--r--. 1 root root      42 Apr 12 18:59 edits_0000000000000000020-0000000000000000021
-rw-r--r--. 1 root root      42 Apr 12 18:59 edits_0000000000000000022-0000000000000000023
-rw-r--r--. 1 root root      42 Apr 12 19:00 edits_0000000000000000024-0000000000000000025
-rw-r--r--. 1 root root 1048576 Apr 12 19:00 edits_0000000000000000026-0000000000000000026
-rw-r--r--. 1 root root      42 Apr 12 19:01 edits_0000000000000000027-0000000000000000028
-rw-r--r--. 1 root root      42 Apr 12 19:01 edits_0000000000000000029-0000000000000000030
-rw-r--r--. 1 root root 1048576 Apr 12 19:01 edits_inprogress_0000000000000000031             ------>After the roll, remember the newly generated edit log's ID?
-rw-r--r--. 1 root root     323 Apr 12 19:00 fsimage_0000000000000000026
-rw-r--r--. 1 root root      62 Apr 12 19:00 fsimage_0000000000000000026.md5
-rw-r--r--. 1 root root     323 Apr 12 19:01 fsimage_0000000000000000028
-rw-r--r--. 1 root root      62 Apr 12 19:01 fsimage_0000000000000000028.md5
-rw-r--r--. 1 root root       3 Apr 12 19:01 seen_txid
-rw-r--r--. 1 root root     217 Apr 12 19:00 VERSION
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -rollEdits            #We can roll the edit log manually

5>. Rolling the image file (fsimage)

[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get
Safe mode is OFF
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get
Safe mode is OFF
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode enter
Safe mode is ON
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get  
Safe mode is ON
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode enter        #Enter safe mode; the concepts around cluster safe mode are explained in detail below
[root@node101.yinzhengjie.org.cn ~]# ll -h /data/hadoop/hdfs/dfs/name/current/ | grep fsimage
-rw-r--r--. 1 root root  323 Apr 12 19:08 fsimage_0000000000000000056
-rw-r--r--. 1 root root   62 Apr 12 19:08 fsimage_0000000000000000056.md5
-rw-r--r--. 1 root root  323 Apr 12 19:08 fsimage_0000000000000000058        ------->The newest image file before the roll
-rw-r--r--. 1 root root   62 Apr 12 19:08 fsimage_0000000000000000058.md5
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -saveNamespace        #Roll the image file
Save namespace successful
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll -h /data/hadoop/hdfs/dfs/name/current/ | grep fsimage
-rw-r--r--. 1 root root  323 Apr 12 19:08 fsimage_0000000000000000058
-rw-r--r--. 1 root root   62 Apr 12 19:08 fsimage_0000000000000000058.md5
-rw-r--r--. 1 root root  323 Apr 12 19:11 fsimage_0000000000000000060        ------->The newest image file after the roll
-rw-r--r--. 1 root root   62 Apr 12 19:11 fsimage_0000000000000000060.md5
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -saveNamespace        #Roll the image file; you could also say we are saving the namespace!
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get                             
Safe mode is ON
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode leave        #Leave safe mode
Safe mode is OFF
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get  
Safe mode is OFF
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode leave        #Leave safe mode
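Putting the section together, rolling the image file is always the same three-step sequence; a compact sketch:

# Roll the fsimage by hand (sketch): block writes, save the namespace, resume service
hdfs dfsadmin -safemode enter      #writes are rejected while the image is being saved
hdfs dfsadmin -saveNamespace       #merges the edits into a brand-new fsimage_N
hdfs dfsadmin -safemode leave      #back to normal operation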

  Recommended reading: Hadoop's default web UI access ports

 

IV. The NameNode version number

1>. Viewing the namenode version number

[root@node101.yinzhengjie.org.cn ~]# cat /data/hadoop/hdfs/dfs/name/current/VERSION   
#Thu Apr 11 18:07:23 CST 2019
namespaceID=429640720
clusterID=CID-5e6a5eca-6d94-4087-9ff8-7decc325338c
cTime=1554977243283
storageType=NAME_NODE
blockpoolID=BP-681013498-172.30.1.101-1554977243283
layoutVersion=-63
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 

2>. The namenode version fields explained

1. namespaceID
    HDFS can have multiple NameNodes, so different NameNodes have different namespaceIDs, each managing its own block pool (blockpoolID).

2. clusterID
    The cluster ID, globally unique.

3. cTime
    This property marks the creation time of the namenode's storage. For freshly formatted storage the value is 0, but after a filesystem upgrade it is updated to the new timestamp.

4. storageType
    This property states that the storage directory contains the namenode's data structures.

5. blockpoolID
    A block pool ID identifies one block pool and is globally unique across clusters. When a new namespace is created (as part of the format process), a unique ID is created and persisted. Building a globally unique BlockPoolID at creation time is more reliable than configuring one by hand. The NN persists the BlockPoolID to disk and, on subsequent startups, loads and reuses it.

6. layoutVersion
    The layout version, a negative integer. This version number is normally bumped only when HDFS adds a new feature.
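Since VERSION is a plain key=value properties file, any of these fields can be pulled out with ordinary text tools; a tiny sketch:

# Extract single fields from the VERSION file (sketch; path from the listing above)
awk -F= '/^clusterID/{print $2}'   /data/hadoop/hdfs/dfs/name/current/VERSION
awk -F= '/^blockpoolID/{print $2}' /data/hadoop/hdfs/dfs/name/current/VERSION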

 

V. SecondaryNameNode directory structure

   The SecondaryNameNode is an auxiliary daemon that monitors the state of HDFS and grabs a snapshot of the HDFS metadata at regular intervals. In the data directory we configured ("${hadoop.tmp.dir}/dfs/namesecondary/current/") you can inspect the corresponding directory layout, shown below:

[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/namesecondary/current/
total 1128
-rw-r--r--. 1 root root      42 Apr 11 18:54 edits_0000000000000000001-0000000000000000002
-rw-r--r--. 1 root root      42 Apr 11 19:54 edits_0000000000000000003-0000000000000000004
-rw-r--r--. 1 root root      42 Apr 11 20:54 edits_0000000000000000005-0000000000000000006
-rw-r--r--. 1 root root      42 Apr 11 21:54 edits_0000000000000000007-0000000000000000008
-rw-r--r--. 1 root root      42 Apr 11 22:54 edits_0000000000000000009-0000000000000000010
-rw-r--r--. 1 root root      42 Apr 11 23:54 edits_0000000000000000011-0000000000000000012
-rw-r--r--. 1 root root      42 Apr 12 00:54 edits_0000000000000000013-0000000000000000014
-rw-r--r--. 1 root root      42 Apr 12 01:54 edits_0000000000000000015-0000000000000000016
-rw-r--r--. 1 root root      42 Apr 12 02:54 edits_0000000000000000017-0000000000000000018
-rw-r--r--. 1 root root      42 Apr 12 03:54 edits_0000000000000000019-0000000000000000020
-rw-r--r--. 1 root root      42 Apr 12 04:54 edits_0000000000000000021-0000000000000000022
-rw-r--r--. 1 root root      42 Apr 12 05:54 edits_0000000000000000023-0000000000000000024
-rw-r--r--. 1 root root      42 Apr 12 06:54 edits_0000000000000000025-0000000000000000026
-rw-r--r--. 1 root root      42 Apr 12 07:54 edits_0000000000000000027-0000000000000000028
-rw-r--r--. 1 root root      42 Apr 12 08:54 edits_0000000000000000029-0000000000000000030
-rw-r--r--. 1 root root      42 Apr 12 09:54 edits_0000000000000000031-0000000000000000032
-rw-r--r--. 1 root root      42 Apr 12 10:54 edits_0000000000000000033-0000000000000000034
-rw-r--r--. 1 root root      42 Apr 12 11:54 edits_0000000000000000035-0000000000000000036
-rw-r--r--. 1 root root      42 Apr 12 12:54 edits_0000000000000000037-0000000000000000038
-rw-r--r--. 1 root root 1048576 Apr 12 13:48 edits_0000000000000000039-0000000000000000046
-rw-r--r--. 1 root root      42 Apr 12 13:48 edits_0000000000000000047-0000000000000000048
-rw-r--r--. 1 root root      42 Apr 12 14:48 edits_0000000000000000049-0000000000000000050
-rw-r--r--. 1 root root     489 Apr 12 13:48 fsimage_0000000000000000048
-rw-r--r--. 1 root root      62 Apr 12 13:48 fsimage_0000000000000000048.md5
-rw-r--r--. 1 root root     489 Apr 12 14:48 fsimage_0000000000000000050
-rw-r--r--. 1 root root      62 Apr 12 14:48 fsimage_0000000000000000050.md5
-rw-r--r--. 1 root root     215 Apr 12 14:48 VERSION
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/namesecondary/current/        
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/
total 2156
-rw-r--r--. 1 root root      42 Apr 11 18:54 edits_0000000000000000001-0000000000000000002
-rw-r--r--. 1 root root      42 Apr 11 19:54 edits_0000000000000000003-0000000000000000004
-rw-r--r--. 1 root root      42 Apr 11 20:54 edits_0000000000000000005-0000000000000000006
-rw-r--r--. 1 root root      42 Apr 11 21:54 edits_0000000000000000007-0000000000000000008
-rw-r--r--. 1 root root      42 Apr 11 22:54 edits_0000000000000000009-0000000000000000010
-rw-r--r--. 1 root root      42 Apr 11 23:54 edits_0000000000000000011-0000000000000000012
-rw-r--r--. 1 root root      42 Apr 12 00:54 edits_0000000000000000013-0000000000000000014
-rw-r--r--. 1 root root      42 Apr 12 01:54 edits_0000000000000000015-0000000000000000016
-rw-r--r--. 1 root root      42 Apr 12 02:54 edits_0000000000000000017-0000000000000000018
-rw-r--r--. 1 root root      42 Apr 12 03:54 edits_0000000000000000019-0000000000000000020
-rw-r--r--. 1 root root      42 Apr 12 04:54 edits_0000000000000000021-0000000000000000022
-rw-r--r--. 1 root root      42 Apr 12 05:54 edits_0000000000000000023-0000000000000000024
-rw-r--r--. 1 root root      42 Apr 12 06:54 edits_0000000000000000025-0000000000000000026
-rw-r--r--. 1 root root      42 Apr 12 07:54 edits_0000000000000000027-0000000000000000028
-rw-r--r--. 1 root root      42 Apr 12 08:54 edits_0000000000000000029-0000000000000000030
-rw-r--r--. 1 root root      42 Apr 12 09:54 edits_0000000000000000031-0000000000000000032
-rw-r--r--. 1 root root      42 Apr 12 10:54 edits_0000000000000000033-0000000000000000034
-rw-r--r--. 1 root root      42 Apr 12 11:54 edits_0000000000000000035-0000000000000000036
-rw-r--r--. 1 root root      42 Apr 12 12:54 edits_0000000000000000037-0000000000000000038
-rw-r--r--. 1 root root 1048576 Apr 12 13:29 edits_0000000000000000039-0000000000000000046
-rw-r--r--. 1 root root      42 Apr 12 13:48 edits_0000000000000000047-0000000000000000048
-rw-r--r--. 1 root root      42 Apr 12 14:48 edits_0000000000000000049-0000000000000000050
-rw-r--r--. 1 root root 1048576 Apr 12 14:48 edits_inprogress_0000000000000000051
-rw-r--r--. 1 root root     489 Apr 12 13:48 fsimage_0000000000000000048
-rw-r--r--. 1 root root      62 Apr 12 13:48 fsimage_0000000000000000048.md5
-rw-r--r--. 1 root root     489 Apr 12 14:48 fsimage_0000000000000000050
-rw-r--r--. 1 root root      62 Apr 12 14:48 fsimage_0000000000000000050.md5
-rw-r--r--. 1 root root       3 Apr 12 14:48 seen_txid
-rw-r--r--. 1 root root     215 Apr 11 18:07 VERSION
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/

  The SecondaryNameNode's data directory has the same layout as the NameNode's, except that the SecondaryNameNode has no seen_txid file. The benefit of the SecondaryNameNode is that when the NameNode fails, data can be recovered from the SecondaryNameNode's data directory. There are two ways to recover the data:

    Method 1: copy the data in the SecondaryNameNode's directory into the directory where the NameNode stores its data.

    Method 2: start the namenode daemon with the -importCheckpoint option, which copies the SecondaryNameNode's data into the NameNode directory.

 1>. Simulate a NameNode failure and recover the NameNode data with Method 1. (Recommended)

[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# jps
19750 Jps
1978 NameNode
2333 SecondaryNameNode
2141 DataNode
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# kill -9 1978
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# jps
19819 Jps
2333 SecondaryNameNode
2141 DataNode
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# kill -9 1978                                          #Kill the running namenode process
[root@node101.yinzhengjie.org.cn ~]# rm -rf /data/hadoop/hdfs/dfs/name/*
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name
total 0
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]#  
[root@node101.yinzhengjie.org.cn ~]# rm -rf /data/hadoop/hdfs/dfs/name/*                            #After killing the process, delete all of the namenode's stored data! Ruthless enough? Hahaha~
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name
total 0
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]#  
[root@node101.yinzhengjie.org.cn ~]# cp -r  /data/hadoop/hdfs/dfs/namesecondary/* /data/hadoop/hdfs/dfs/name/
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name
total 8
drwxr-xr-x. 2 root root 4096 Apr 12 15:19 current
-rw-r--r--. 1 root root   31 Apr 12 15:19 in_use.lock
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# cp -r /data/hadoop/hdfs/dfs/namesecondary/* /data/hadoop/hdfs/dfs/name/      #Copy the data under namesecondary into the namenode's data directory
[root@node101.yinzhengjie.org.cn ~]# jps
21137 Jps
2333 SecondaryNameNode
2141 DataNode
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hadoop-daemon.sh start namenode
starting namenode, logging to /yinzhengjie/softwares/hadoop-2.9.2/logs/hadoop-root-namenode-node101.yinzhengjie.org.cn.out
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# jps
21217 NameNode
21316 Jps
2333 SecondaryNameNode
2141 DataNode
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hadoop-daemon.sh start namenode                              #Start the namenode service

Friendly reminder:

  Although we said above that the data has been recovered, we know perfectly well that the SecondaryNameNode's data contains no seen_txid file. Yet when we copied the data over to the NameNode and started it, we found that it generated that file automatically! Isn't that neat?

[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/
total 2156
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000001-0000000000000000002
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000003-0000000000000000004
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000005-0000000000000000006
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000007-0000000000000000008
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000009-0000000000000000010
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000011-0000000000000000012
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000013-0000000000000000014
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000015-0000000000000000016
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000017-0000000000000000018
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000019-0000000000000000020
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000021-0000000000000000022
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000023-0000000000000000024
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000025-0000000000000000026
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000027-0000000000000000028
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000029-0000000000000000030
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000031-0000000000000000032
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000033-0000000000000000034
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000035-0000000000000000036
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000037-0000000000000000038
-rw-r--r--. 1 root root 1048576 Apr 12 15:19 edits_0000000000000000039-0000000000000000046
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000047-0000000000000000048
-rw-r--r--. 1 root root      42 Apr 12 15:19 edits_0000000000000000049-0000000000000000050
-rw-r--r--. 1 root root 1048576 Apr 12 15:21 edits_inprogress_0000000000000000051
-rw-r--r--. 1 root root     489 Apr 12 15:19 fsimage_0000000000000000048
-rw-r--r--. 1 root root      62 Apr 12 15:19 fsimage_0000000000000000048.md5
-rw-r--r--. 1 root root     489 Apr 12 15:19 fsimage_0000000000000000050
-rw-r--r--. 1 root root      62 Apr 12 15:19 fsimage_0000000000000000050.md5
-rw-r--r--. 1 root root       3 Apr 12 15:20 seen_txid
-rw-r--r--. 1 root root     215 Apr 12 15:19 VERSION
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# cat /data/hadoop/hdfs/dfs/name/current/seen_txid   
51
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/

2>. Simulate a NameNode failure and recover the NameNode data with Method 2

[root@node101.yinzhengjie.org.cn ~]# cat /yinzhengjie/softwares/hadoop-2.9.2/etc/hadoop/hdfs-site.xml   
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
        <property>
                <name>dfs.namenode.checkpoint.period</name>
                <value>30</value>
        </property>

        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/data/hadoop/hdfs/dfs/name</value>
        </property>

        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
</configuration>

<!--
What the hdfs-site.xml configuration file does:
        #HDFS-related settings such as the number of file replicas, the block size, and whether permissions are enforced; parameters defined here override the defaults in hdfs-default.xml.


What the dfs.namenode.checkpoint.period parameter does:
        #The number of seconds between two periodic checkpoints; the default is 3600, i.e. one hour.

What the dfs.namenode.name.dir parameter does:
        #Specifies the namenode's working directory; the default is file://${hadoop.tmp.dir}/dfs/name

What the dfs.replication parameter does:
        #For data availability and redundancy, HDFS keeps multiple replicas of the same block on multiple nodes; the default is 3. In a pseudo-distributed environment with only one node, a single replica is enough, and this can be set through the dfs.replication property. It is a software-level backup.

-->

[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# cat /yinzhengjie/softwares/hadoop-2.9.2/etc/hadoop/hdfs-site.xml
[root@node101.yinzhengjie.org.cn ~]# jps
21217 NameNode
26332 Jps
2333 SecondaryNameNode
2141 DataNode
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# kill -9 21217
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# jps
26377 Jps
2333 SecondaryNameNode
2141 DataNode
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# kill -9 21217
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name
total 8
drwxr-xr-x. 2 root root 4096 Apr 12 15:20 current
-rw-r--r--. 1 root root   32 Apr 12 15:20 in_use.lock
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# rm -rf /data/hadoop/hdfs/dfs/name/*
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name      
total 0
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# rm -rf /data/hadoop/hdfs/dfs/name/*
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name      
total 0
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/
total 12
drwx------. 3 root root 4096 Apr 12 13:47 data
drwxr-xr-x. 2 root root 4096 Apr 12 15:51 name
drwxr-xr-x. 3 root root 4096 Apr 12 13:47 namesecondary
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/namesecondary/
total 8
drwxr-xr-x. 2 root root 4096 Apr 12 14:48 current
-rw-r--r--. 1 root root   31 Apr 12 13:47 in_use.lock
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# rm -f /data/hadoop/hdfs/dfs/namesecondary/in_use.lock 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/namesecondary/                
total 4
drwxr-xr-x. 2 root root 4096 Apr 12 14:48 current
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# rm -f /data/hadoop/hdfs/dfs/namesecondary/in_use.lock   #If your secondary directory is not on the same node as the namenode, just scp it to the corresponding directory on the namenode; and don't forget to kill this lock file, otherwise the next step will complain that the directory is locked!
[root@node101.yinzhengjie.org.cn ~]# hdfs namenode -importCheckpoint    
19/04/12 15:53:16 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = node101.yinzhengjie.org.cn/172.30.1.101
STARTUP_MSG:   args = [-importCheckpoint]
STARTUP_MSG:   version = 2.9.2
STARTUP_MSG:   classpath = /yinzhengjie/softwares/hadoop-2.9.2/etc/hadoop:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/woodstox-core-5.0.3.jar: ...(lengthy classpath output truncated)...
e/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/avro-1.7.7.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.9.2-tests.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 826afbeae31ca687bc2f8471dc841b66ed2c6704; compiled by 'ajisaka' on 2018-11-13T12:42Z
STARTUP_MSG:   java = 1.8.0_201
************************************************************/
19/04/12 15:53:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/04/12 15:53:16 INFO namenode.NameNode: createNameNode [-importCheckpoint]
19/04/12 15:53:16 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
19/04/12 15:53:16 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
19/04/12 15:53:16 INFO impl.MetricsSystemImpl: NameNode metrics system started
19/04/12 15:53:16 INFO namenode.NameNode: fs.defaultFS is hdfs://node101.yinzhengjie.org.cn:8020
19/04/12 15:53:16 INFO namenode.NameNode: Clients are to use node101.yinzhengjie.org.cn:8020 to access this namenode/service.
19/04/12 15:53:17 INFO util.JvmPauseMonitor: Starting JVM pause monitor
19/04/12 15:53:17 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://account.jetbrains.com:50070
19/04/12 15:53:17 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
19/04/12 15:53:17 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
19/04/12 15:53:17 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
19/04/12 15:53:17 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
19/04/12 15:53:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
19/04/12 15:53:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
19/04/12 15:53:17 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
19/04/12 15:53:17 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
19/04/12 15:53:17 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
19/04/12 15:53:17 INFO http.HttpServer2: Jetty bound to port 50070
19/04/12 15:53:17 INFO mortbay.log: jetty-6.1.26
19/04/12 15:53:17 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@account.jetbrains.com:50070
19/04/12 15:53:17 WARN common.Util: Path /data/hadoop/hdfs/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 15:53:17 WARN common.Util: Path /data/hadoop/hdfs/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 15:53:17 WARN namenode.FSNamesystem: !!! WARNING !!!
        The NameNode currently runs without persistent storage.
        Any changes to the file system meta-data may be lost.
        Recommended actions:
                - shutdown and restart NameNode with configured "dfs.namenode.edits.dir.required" in hdfs-site.xml;
                - use Backup Node as a persistent and up-to-date storage of the file system meta-data.
19/04/12 15:53:17 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
19/04/12 15:53:17 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
19/04/12 15:53:17 WARN common.Util: Path /data/hadoop/hdfs/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 15:53:17 WARN common.Util: Path /data/hadoop/hdfs/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 15:53:17 INFO namenode.FSEditLog: Edit logging is async:true
19/04/12 15:53:17 INFO namenode.FSNamesystem: KeyProvider: null
19/04/12 15:53:17 INFO namenode.FSNamesystem: fsLock is fair: true
19/04/12 15:53:17 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
19/04/12 15:53:17 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
19/04/12 15:53:17 INFO namenode.FSNamesystem: supergroup          = supergroup
19/04/12 15:53:17 INFO namenode.FSNamesystem: isPermissionEnabled = true
19/04/12 15:53:17 INFO namenode.FSNamesystem: HA Enabled: false
19/04/12 15:53:17 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19/04/12 15:53:17 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
19/04/12 15:53:17 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
19/04/12 15:53:17 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19/04/12 15:53:17 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Apr 12 15:53:17
19/04/12 15:53:17 INFO util.GSet: Computing capacity for map BlocksMap
19/04/12 15:53:17 INFO util.GSet: VM type       = 64-bit
19/04/12 15:53:17 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
19/04/12 15:53:17 INFO util.GSet: capacity      = 2^21 = 2097152 entries
19/04/12 15:53:17 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
19/04/12 15:53:17 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
19/04/12 15:53:17 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
19/04/12 15:53:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
19/04/12 15:53:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
19/04/12 15:53:17 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
19/04/12 15:53:17 INFO blockmanagement.BlockManager: defaultReplication         = 2
19/04/12 15:53:17 INFO blockmanagement.BlockManager: maxReplication             = 512
19/04/12 15:53:17 INFO blockmanagement.BlockManager: minReplication             = 1
19/04/12 15:53:17 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
19/04/12 15:53:17 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/04/12 15:53:17 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
19/04/12 15:53:17 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
19/04/12 15:53:17 INFO namenode.FSNamesystem: Append Enabled: true
19/04/12 15:53:17 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
19/04/12 15:53:17 INFO util.GSet: Computing capacity for map INodeMap
19/04/12 15:53:17 INFO util.GSet: VM type       = 64-bit
19/04/12 15:53:17 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
19/04/12 15:53:17 INFO util.GSet: capacity      = 2^20 = 1048576 entries
19/04/12 15:53:17 INFO namenode.FSDirectory: ACLs enabled? false
19/04/12 15:53:17 INFO namenode.FSDirectory: XAttrs enabled? true
19/04/12 15:53:17 INFO namenode.NameNode: Caching file names occurring more than 10 times
19/04/12 15:53:17 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
19/04/12 15:53:17 INFO util.GSet: Computing capacity for map cachedBlocks
19/04/12 15:53:17 INFO util.GSet: VM type       = 64-bit
19/04/12 15:53:17 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
19/04/12 15:53:17 INFO util.GSet: capacity      = 2^18 = 262144 entries
19/04/12 15:53:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
19/04/12 15:53:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
19/04/12 15:53:17 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19/04/12 15:53:17 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/04/12 15:53:17 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/04/12 15:53:17 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/04/12 15:53:17 INFO util.GSet: VM type       = 64-bit
19/04/12 15:53:17 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
19/04/12 15:53:17 INFO util.GSet: capacity      = 2^15 = 32768 entries
19/04/12 15:53:17 INFO common.Storage: Lock on /data/hadoop/hdfs/dfs/name/in_use.lock acquired by nodename 28133@node101.yinzhengjie.org.cn
19/04/12 15:53:17 INFO namenode.FSImage: Storage directory /data/hadoop/hdfs/dfs/name is not formatted.
19/04/12 15:53:17 INFO namenode.FSImage: Formatting ...
19/04/12 15:53:17 INFO namenode.FSEditLog: Edit logging is async:true
19/04/12 15:53:17 INFO common.Storage: Lock on /data/hadoop/hdfs/dfs/namesecondary/in_use.lock acquired by nodename 28133@node101.yinzhengjie.org.cn
19/04/12 15:53:17 WARN namenode.FSNamesystem: !!! WARNING !!!
        The NameNode currently runs without persistent storage.
        Any changes to the file system meta-data may be lost.
        Recommended actions:
                - shutdown and restart NameNode with configured "dfs.namenode.edits.dir.required" in hdfs-site.xml;
                - use Backup Node as a persistent and up-to-date storage of the file system meta-data.
19/04/12 15:53:17 INFO namenode.FileJournalManager: Recovering unfinalized segments in /data/hadoop/hdfs/dfs/namesecondary/current
19/04/12 15:53:17 INFO namenode.FSImage: No edit log streams selected.
19/04/12 15:53:17 INFO namenode.FSImage: Planning to load image: FSImageFile(file=/data/hadoop/hdfs/dfs/namesecondary/current/fsimage_0000000000000000050, cpktTxId=0000000000000000050)
19/04/12 15:53:18 INFO namenode.FSImageFormatPBINode: Loading 3 INodes.
19/04/12 15:53:18 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
19/04/12 15:53:18 INFO namenode.FSImage: Loaded image for txid 50 from /data/hadoop/hdfs/dfs/namesecondary/current/fsimage_0000000000000000050
19/04/12 15:53:18 WARN namenode.FSNamesystem: !!! WARNING !!!
        The NameNode currently runs without persistent storage.
        Any changes to the file system meta-data may be lost.
        Recommended actions:
                - shutdown and restart NameNode with configured "dfs.namenode.edits.dir.required" in hdfs-site.xml;
                - use Backup Node as a persistent and up-to-date storage of the file system meta-data.
19/04/12 15:53:18 INFO namenode.FileJournalManager: Recovering unfinalized segments in /data/hadoop/hdfs/dfs/name/current
19/04/12 15:53:18 INFO namenode.FSImage: Save namespace ...
19/04/12 15:53:18 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/hdfs/dfs/name/current/fsimage.ckpt_0000000000000000050 using no compression
19/04/12 15:53:18 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/hdfs/dfs/name/current/fsimage.ckpt_0000000000000000050 of size 489 bytes saved in 0 seconds .
19/04/12 15:53:18 INFO namenode.FSImageTransactionalStorageInspector: No version file in /data/hadoop/hdfs/dfs/name
19/04/12 15:53:18 INFO namenode.FSImageTransactionalStorageInspector: No version file in /data/hadoop/hdfs/dfs/name
19/04/12 15:53:18 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
19/04/12 15:53:18 INFO namenode.FSEditLog: Starting log segment at 51
19/04/12 15:53:18 INFO namenode.NameCache: initialized with 0 entries 0 lookups
19/04/12 15:53:18 INFO namenode.FSNamesystem: Finished loading FSImage in 335 msecs
19/04/12 15:53:18 INFO namenode.NameNode: RPC server is binding to node101.yinzhengjie.org.cn:8020
19/04/12 15:53:18 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
19/04/12 15:53:18 INFO ipc.Server: Starting Socket Reader #1 for port 8020
19/04/12 15:53:18 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
19/04/12 15:53:18 WARN common.Util: Path /data/hadoop/hdfs/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 15:53:18 WARN namenode.FSNamesystem: !!! WARNING !!!
        The NameNode currently runs without persistent storage.
        Any changes to the file system meta-data may be lost.
        Recommended actions:
                - shutdown and restart NameNode with configured "dfs.namenode.edits.dir.required" in hdfs-site.xml;
                - use Backup Node as a persistent and up-to-date storage of the file system meta-data.
19/04/12 15:53:18 INFO namenode.LeaseManager: Number of blocks under construction: 0
19/04/12 15:53:18 INFO blockmanagement.BlockManager: initializing replication queues
19/04/12 15:53:18 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs
19/04/12 15:53:18 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
19/04/12 15:53:18 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
19/04/12 15:53:18 INFO blockmanagement.BlockManager: Total number of blocks            = 1
19/04/12 15:53:18 INFO blockmanagement.BlockManager: Number of invalid blocks          = 0
19/04/12 15:53:18 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 1
19/04/12 15:53:18 INFO blockmanagement.BlockManager: Number of  over-replicated blocks = 0
19/04/12 15:53:18 INFO blockmanagement.BlockManager: Number of blocks being written    = 0
19/04/12 15:53:18 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 6 msec
19/04/12 15:53:18 INFO ipc.Server: IPC Server Responder: starting
19/04/12 15:53:18 INFO ipc.Server: IPC Server listener on 8020: starting
19/04/12 15:53:18 INFO namenode.NameNode: NameNode RPC up at: node101.yinzhengjie.org.cn/172.30.1.101:8020
19/04/12 15:53:18 INFO namenode.FSNamesystem: Starting services required for active state
19/04/12 15:53:18 INFO namenode.FSDirectory: Initializing quota with 4 thread(s)
19/04/12 15:53:18 INFO namenode.FSDirectory: Quota initialization completed in 4 milliseconds
name space=3
storage space=2528
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
19/04/12 15:53:18 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
19/04/12 15:53:19 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.30.1.101:50010, datanodeUuid=07a8ce7e-9ee2-4f39-aa4c-06fc06175ac7, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-5e6a5eca-6d94-4087-9ff8-7decc325338c;nsid=429640720;c=1554977243283) storage 07a8ce7e-9ee2-4f39-aa4c-06fc06175ac7
19/04/12 15:53:19 INFO net.NetworkTopology: Adding a new node: /default-rack/172.30.1.101:50010
19/04/12 15:53:19 INFO blockmanagement.BlockReportLeaseManager: Registered DN 07a8ce7e-9ee2-4f39-aa4c-06fc06175ac7 (172.30.1.101:50010).
19/04/12 15:53:19 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-2be780fb-d455-426c-ae0a-f04329f7f6af for DN 172.30.1.101:50010
19/04/12 15:53:19 INFO BlockStateChange: BLOCK* processReport 0x8fbe58a5c4ff1ffc: Processing first storage report for DS-2be780fb-d455-426c-ae0a-f04329f7f6af from datanode 07a8ce7e-9ee2-4f39-aa4c-06fc06175ac7
19/04/12 15:53:19 INFO BlockStateChange: BLOCK* processReport 0x8fbe58a5c4ff1ffc: from storage DS-2be780fb-d455-426c-ae0a-f04329f7f6af node DatanodeRegistration(172.30.1.101:50010, datanodeUuid=07a8ce7e-9ee2-4f39-aa4c-06fc06175ac7, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-5e6a5eca-6d94-4087-9ff8-7decc325338c;nsid=429640720;c=1554977243283), blocks: 1, hasStaleStorage: false, processing time: 3 msecs, invalidatedBlocks: 0
19/04/12 15:53:19 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.30.1.102:50010, datanodeUuid=8810056a-5a58-4d85-8a00-e0ceb5d1ac8b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-5e6a5eca-6d94-4087-9ff8-7decc325338c;nsid=429640720;c=1554977243283) storage 8810056a-5a58-4d85-8a00-e0ceb5d1ac8b
19/04/12 15:53:19 INFO net.NetworkTopology: Adding a new node: /default-rack/172.30.1.102:50010
19/04/12 15:53:19 INFO blockmanagement.BlockReportLeaseManager: Registered DN 8810056a-5a58-4d85-8a00-e0ceb5d1ac8b (172.30.1.102:50010).
19/04/12 15:53:19 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.30.1.103:50010, datanodeUuid=6625a3aa-8e60-4614-922c-4e3f2821cb9d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-5e6a5eca-6d94-4087-9ff8-7decc325338c;nsid=429640720;c=1554977243283) storage 6625a3aa-8e60-4614-922c-4e3f2821cb9d
19/04/12 15:53:19 INFO net.NetworkTopology: Adding a new node: /default-rack/172.30.1.103:50010
19/04/12 15:53:19 INFO blockmanagement.BlockReportLeaseManager: Registered DN 6625a3aa-8e60-4614-922c-4e3f2821cb9d (172.30.1.103:50010).
19/04/12 15:53:19 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-58b626df-edca-4f58-baae-051a10700a6c for DN 172.30.1.102:50010
19/04/12 15:53:19 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-b1a5ab0c-365e-4c54-8947-5686377fc317 for DN 172.30.1.103:50010
19/04/12 15:53:19 INFO BlockStateChange: BLOCK* processReport 0x894eeec5f232acb8: Processing first storage report for DS-58b626df-edca-4f58-baae-051a10700a6c from datanode 8810056a-5a58-4d85-8a00-e0ceb5d1ac8b
19/04/12 15:53:19 INFO BlockStateChange: BLOCK* processReport 0x894eeec5f232acb8: from storage DS-58b626df-edca-4f58-baae-051a10700a6c node DatanodeRegistration(172.30.1.102:50010, datanodeUuid=8810056a-5a58-4d85-8a00-e0ceb5d1ac8b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-5e6a5eca-6d94-4087-9ff8-7decc325338c;nsid=429640720;c=1554977243283), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
19/04/12 15:53:19 INFO BlockStateChange: BLOCK* processReport 0xe5052da591a1fb0a: Processing first storage report for DS-b1a5ab0c-365e-4c54-8947-5686377fc317 from datanode 6625a3aa-8e60-4614-922c-4e3f2821cb9d
19/04/12 15:53:19 INFO BlockStateChange: BLOCK* processReport 0xe5052da591a1fb0a: from storage DS-b1a5ab0c-365e-4c54-8947-5686377fc317 node DatanodeRegistration(172.30.1.103:50010, datanodeUuid=6625a3aa-8e60-4614-922c-4e3f2821cb9d, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-5e6a5eca-6d94-4087-9ff8-7decc325338c;nsid=429640720;c=1554977243283), blocks: 1, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
19/04/12 15:54:05 INFO namenode.FSNamesystem: Roll Edit Log from 172.30.1.101
19/04/12 15:54:05 INFO namenode.FSEditLog: Rolling edit logs
19/04/12 15:54:05 INFO namenode.FSEditLog: Ending log segment 51, 51
19/04/12 15:54:05 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 50 Number of syncs: 3 SyncTimes(ms): 5 
19/04/12 15:54:05 INFO namenode.FileJournalManager: Finalizing edits file /data/hadoop/hdfs/dfs/name/current/edits_inprogress_0000000000000000051 -> /data/hadoop/hdfs/dfs/name/current/edits_0000000000000000051-0000000000000000052
19/04/12 15:54:05 INFO namenode.FSEditLog: Starting log segment at 53
19/04/12 15:54:05 INFO namenode.TransferFsImage: Sending fileName: /data/hadoop/hdfs/dfs/name/current/edits_0000000000000000051-0000000000000000052, fileSize: 42. Sent total: 42 bytes. Size of last segment intended to send: -1 bytes.
19/04/12 15:54:05 INFO namenode.TransferFsImage: Combined time for fsimage download and fsync to all disks took 0.00s. The fsimage download took 0.00s at 0.00 KB/s. Synchronous (fsync) write to disk of /data/hadoop/hdfs/dfs/name/current/fsimage.ckpt_0000000000000000052 took 0.00s.
19/04/12 15:54:05 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000052 size 489 bytes.
19/04/12 15:54:05 INFO namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 50
^C19/04/12 16:03:13 ERROR namenode.NameNode: RECEIVED SIGNAL 2: SIGINT
19/04/12 16:03:13 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node101.yinzhengjie.org.cn/172.30.1.101
************************************************************/
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs namenode -importCheckpoint                #Let it run for about a minute and then just press Ctrl + C. The point of this step is to copy the data over from the namesecondary directory, but the command keeps hold of the terminal!
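
What -importCheckpoint did can be read straight off the log above: the NameNode formatted the empty dfs.namenode.name.dir, loaded fsimage_0000000000000000050 from the namesecondary directory, and saved it into the name directory. A quick way to confirm the copy landed is to look for the image file (the txid 50 is specific to this walkthrough):

[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/name/current/ | grep fsimage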
[root@node101.yinzhengjie.org.cn ~]# jps
31621 Jps
2333 SecondaryNameNode
2141 DataNode
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hadoop-daemon.sh start namenode
starting namenode, logging to /yinzhengjie/softwares/hadoop-2.9.2/logs/hadoop-root-namenode-node101.yinzhengjie.org.cn.out
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# jps
31690 NameNode
31804 Jps
2333 SecondaryNameNode
2141 DataNode
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hadoop-daemon.sh start namenode
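
Once the NameNode is back up, it is worth confirming that the namespace restored from the checkpoint is intact; a minimal check is simply to list the root of HDFS and compare it with what the cluster held before the failure (the output depends on your data, so none is shown here):

[root@node101.yinzhengjie.org.cn ~]# hdfs dfs -ls /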

 

 六.集羣的安全模式操做
1>.HDFS集羣安全模式概述
  安全模式是hadoop的一種保護機制,用於保證集羣中的數據塊的安全性。
 
  When the NameNode starts, it first loads the image file (fsimage) into memory and replays each operation in the edit log (edits). Once an image of the file system metadata has been successfully built in memory, it creates a new fsimage file and an empty edit log. At that point the namenode starts listening for datanode requests; however, it is still running in safe mode, meaning the namenode's file system is read-only to clients.

  The locations of the data blocks in the system are not persisted by the namenode; they are stored on the datanodes in the form of block lists. During normal operation the namenode keeps a map of all block locations in memory. In safe mode, each datanode sends its latest block list to the namenode; once the namenode has learned enough block locations, it can run the file system efficiently.

  If the "minimum replica condition" is satisfied, the namenode exits safe mode 30 seconds later. The minimum replica condition means that 99.9% of the blocks in the entire file system meet the minimum replication level (default: dfs.replication.min=1). When starting a freshly formatted HDFS cluster, the namenode does not enter safe mode, because there are no blocks in the system yet.

  As shown in the figure below, the cluster is in safe mode:

  Of course, some of the DataNodes have not yet joined the cluster, so it remains in safe mode; this can be seen on the NameNode's WebUI, as shown below:
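
Besides the WebUI, the two safe-mode knobs mentioned above can be read from the configuration with hdfs getconf; a quick sketch (the values shown are the 2.9.2 defaults, which also appear in the NameNode startup logs earlier in this post):

[root@node101.yinzhengjie.org.cn ~]# hdfs getconf -confKey dfs.namenode.safemode.threshold-pct
0.999f
[root@node101.yinzhengjie.org.cn ~]# hdfs getconf -confKey dfs.namenode.safemode.extension
30000
[root@node101.yinzhengjie.org.cn ~]# 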

 

2>.Check the current safe mode status of the hdfs cluster
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get 
Safe mode is OFF
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get
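
The same state is also exposed over the NameNode's HTTP port as JMX; a sketch assuming the WebUI runs on the default port 50070 as in the logs above (the Safemode field of the NameNodeInfo bean is an empty string when safe mode is off):

[root@node101.yinzhengjie.org.cn ~]# curl -s 'http://node101.yinzhengjie.org.cn:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo' | grep Safemode
  "Safemode" : "",
[root@node101.yinzhengjie.org.cn ~]# 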
3>.Put the cluster into safe mode
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get
Safe mode is OFF
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode enter
Safe mode is ON
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get  
Safe mode is ON
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode enter

4>.Take the cluster out of safe mode
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get  
Safe mode is ON
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode leave
Safe mode is OFF
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode get  
Safe mode is OFF
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs dfsadmin -safemode leave

5>.Make a client wait on the safe mode state
   We first put the cluster into safe mode; the concrete steps are shown below:

  Then we leave safe mode again (see the scripted sketch below):
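
Screenshots aside, the point of the wait sub-command is easiest to see in a script: hdfs dfsadmin -safemode wait blocks until safe mode is turned off, and only then does the script continue. A minimal sketch (the script name and the uploaded file are illustrative):

[root@node101.yinzhengjie.org.cn ~]# cat safemode-wait-demo.sh
#!/bin/bash
hdfs dfsadmin -safemode wait                 # blocks here for as long as safe mode is ON
hdfs dfs -put /tmp/demo.txt /                # this write only runs once safe mode is OFF
[root@node101.yinzhengjie.org.cn ~]# bash safemode-wait-demo.sh
Safe mode is OFF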

 6>.Safe mode parameter settings

  As we know, the cluster first enters safe mode when it starts up, and while the system is in safe mode it checks the integrity of the data blocks. Suppose the configured replica count (the dfs.replication parameter) is 5; then 5 replicas of each block should exist on the datanodes. If only 3 replicas exist, the ratio is 3/5 = 0.6. The configuration file hdfs-default.xml defines a minimum replica ratio of 0.999, as shown in the figure below:


  Posts online say this is controlled by the "dfs.safemode.threshold.pct" property; however, in Hadoop 2.9.2 that property is deprecated, and "dfs.namenode.safemode.threshold-pct" is recommended instead (see the Deprecated Properties list), as shown below:

1. Parameter meanings

dfs.replication: the number of copies each data block should have;

dfs.replication.min: the required minimum number of replicas for a data block;

dfs.safemode.threshold.pct: the fraction of data blocks that must satisfy the minimum replica requirement.

    (1) When the ratio falls below this threshold, the system switches into safe mode and re-replicates the blocks;

    (2) When the ratio is above it, the system leaves safe mode, since there are enough block replicas to serve clients.

    (3) A value of 0 or less means safe mode is never entered; a value greater than 1 means the system stays in safe mode forever.
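
If a different threshold is ever needed, the recommended property can be overridden in hdfs-site.xml just like the other parameters in this post; a sketch (0.95 is purely an illustrative value):

<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.95</value>
</property>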

2. Why dfs.replication.min exists

  Replicas are created according to dfs.replication. If failed nodes cause a block's replica count to drop below dfs.replication.min, the system creates new replicas of it on other nodes. If a block loses replicas so often that it ends up replicated on enough nodes to exceed dfs.replication.max, no further copies are made.

3. Understanding hadoop safe mode

Hadoop's safe mode is a read-only mode. It indicates that the system currently has relatively few block replicas; during this phase blocks are re-replicated, and the outside world is not allowed to modify or delete blocks. The NameNode enters safe mode first when it starts. If the fraction of blocks missing from the datanodes reaches a certain ratio (1 - dfs.safemode.threshold.pct), the system stays in safe mode, i.e. read-only. dfs.safemode.threshold.pct (default 0.999f) means that when HDFS starts, the NameNode may leave safe mode only once the number of blocks reported by the DataNodes reaches 0.999 of the block count recorded in the metadata; otherwise it remains in this read-only mode. If it is set to 1, HDFS stays in SafeMode forever.


Source: https://blog.csdn.net/zcc_0015/article/details/18779599
(on the relationship between dfs.replication, dfs.replication.min and dfs.safemode.threshold.pct in hadoop)

 

 VII. Configuring multiple NameNode directories
  The NameNode's local directory can be configured as multiple directories, each holding identical content.
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/      
total 12
drwx------. 3 root root 4096 Apr 12 18:09 data
drwxr-xr-x. 3 root root 4096 Apr 12 18:09 name
drwxr-xr-x. 3 root root 4096 Apr 12 18:20 namesecondary
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/
[root@node101.yinzhengjie.org.cn ~]# cat /yinzhengjie/softwares/hadoop-2.9.2/etc/hadoop/hdfs-site.xml   
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
        <property>
                <name>dfs.namenode.checkpoint.period</name>
                <value>30</value>
        </property>

        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/data/hadoop/hdfs/dfs/name1,/data/hadoop/hdfs/dfs/name2,/data/hadoop/hdfs/dfs/name3</value>
        </property>

        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>

</configuration>

<!--
Purpose of the hdfs-site.xml configuration file:
        #HDFS-related settings, such as the number of file replicas, the block size, and whether to enforce permissions; parameters defined here override the defaults in hdfs-default.xml.


Purpose of the dfs.namenode.checkpoint.period parameter:
        #The number of seconds between two periodic checkpoints; the default is 3600, i.e. 1 hour.

Purpose of the dfs.namenode.name.dir parameter:
        #Specifies the namenode's working directory; the default is file://${hadoop.tmp.dir}/dfs/name. The namenode's local directory can be configured as multiple directories, each holding identical content, which improves reliability. It is recommended to mount the configured directories on different disks, which also improves IO performance!

Purpose of the dfs.replication parameter:
        #For data availability and redundancy, HDFS stores several replicas of the same data block on multiple nodes; the default is 3. In a pseudo-distributed environment with only one node, a single replica is enough, and this can be set through the dfs.replication property. It is a software-level backup.

-->

[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# cat /yinzhengjie/softwares/hadoop-2.9.2/etc/hadoop/hdfs-site.xml
[root@node101.yinzhengjie.org.cn ~]# hdfs namenode -format
19/04/12 18:20:43 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = node101.yinzhengjie.org.cn/172.30.1.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.9.2
STARTUP_MSG:   classpath = /yinzhengjie/softwares/hadoop-2.9.2/etc/hadoop:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-collections-3.2.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/paranamer-2.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/asm-3.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/zookeeper-3.4.6.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/xz-1.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jettison-1.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/junit-4.11.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/snappy-java-1.0.5.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-configuration-1.6.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/httpclient-4.5.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/hadoop-annotations-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-lang-2.6.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/xmlenc-0.52.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-compress-1.4.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/avro-1.7.7.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/curator-framework-2.7.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-codec-1.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/json-smart-1.3.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jsch-0.1.54.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/stax2-api-3.1.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/mockito-all-1.8.5.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jersey-server-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-cli-1.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-net-3.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/guava-11.0.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jsr305-3.0.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/hadoop-auth-2.9.2.jar:/yinzhengjie/softwar
es/hadoop-2.9.2/share/hadoop/common/lib/stax-api-1.0-2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/servlet-api-2.5.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jetty-6.1.26.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jets3t-0.9.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-digester-1.8.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/httpcore-4.4.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jsp-api-2.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/log4j-1.2.17.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jersey-core-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-io-2.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/curator-client-2.7.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jersey-json-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-lang3-3.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/hamcrest-core-1.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-logging-1.1.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/gson-2.2.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/activation-1.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/netty-3.6.2.Final.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/jetty-util-6.1.26.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/lib/commons-math3-3.1.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/hadoop-nfs-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2-tests.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/asm-3.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/okio-1.6.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/guava-11.0.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/s
hare/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/commons-io-2.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/hadoop-hdfs-native-client-2.9.2-tests.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/hadoop-hdfs-nfs-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/hadoop-hdfs-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/hadoop-hdfs-client-2.9.2-tests.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/hadoop-hdfs-native-client-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/hadoop-hdfs-rbf-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/hadoop-hdfs-rbf-2.9.2-tests.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/hadoop-hdfs-client-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/hdfs/hadoop-hdfs-2.9.2-tests.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/woodstox-core-5.0.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/guice-3.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/paranamer-2.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/asm-3.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/api-util-1.0.0-M20.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/xz-1.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jettison-1.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jcip-annotations-1.0-1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/snappy-java-1.0.5.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-configuration-1.6.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-beanutils-1.7.0.jar:/yinzhengjie/softwares/hadoop-2.9.
2/share/hadoop/yarn/lib/curator-recipes-2.7.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/httpclient-4.5.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-lang-2.6.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/xmlenc-0.52.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/java-xmlbuilder-0.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/javax.inject-1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/avro-1.7.7.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/curator-framework-2.7.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-codec-1.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jetty-sslengine-6.1.26.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/json-smart-1.3.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jsch-0.1.54.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/nimbus-jose-jwt-4.41.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/java-util-1.9.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/stax2-api-3.1.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jersey-server-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-cli-1.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/apacheds-i18n-2.0.0-M15.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/metrics-core-3.0.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/htrace-core4-4.1.0-incubating.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-net-3.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/guava-11.0.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/api-asn1-api-1.0.0-M20.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/servlet-api-2.5.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jetty-6.1.26.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jets3t-0.9.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-digester-1.8.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/httpcore-4.4.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jsp-api-2.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/log4j-1.2.17.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jersey-client-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jersey-core-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-io-2.4.jar
:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jersey-json-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/aopalliance-1.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-lang3-3.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/gson-2.2.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-beanutils-core-1.8.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/activation-1.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/fst-2.50.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/lib/json-io-2.5.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-api-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-server-router-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-server-tests-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-client-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-registry-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-common-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-server-common-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/guice-3.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/asm-3.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/xz-1.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/junit-4.11.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/snappy-java-1.0.5.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/hadoop-annotations-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/javax.inject-1.jar:/yinzhengji
e/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/avro-1.7.7.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.9.2-tests.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.9.2.jar:/yinzhengjie/softwares/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 826afbeae31ca687bc2f8471dc841b66ed2c6704; compiled by 'ajisaka' on 2018-11-13T12:42Z
STARTUP_MSG:   java = 1.8.0_201
************************************************************/
19/04/12 18:20:43 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
19/04/12 18:20:43 INFO namenode.NameNode: createNameNode [-format]
19/04/12 18:20:43 WARN common.Util: Path /data/hadoop/hdfs/dfs/name1 should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 18:20:43 WARN common.Util: Path /data/hadoop/hdfs/dfs/name2 should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 18:20:43 WARN common.Util: Path /data/hadoop/hdfs/dfs/name3 should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 18:20:43 WARN common.Util: Path /data/hadoop/hdfs/dfs/name1 should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 18:20:43 WARN common.Util: Path /data/hadoop/hdfs/dfs/name2 should be specified as a URI in configuration files. Please update hdfs configuration.
19/04/12 18:20:43 WARN common.Util: Path /data/hadoop/hdfs/dfs/name3 should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-e7603940-eaba-4ce6-9ecd-3a449027b432
19/04/12 18:20:43 INFO namenode.FSEditLog: Edit logging is async:true
19/04/12 18:20:43 INFO namenode.FSNamesystem: KeyProvider: null
19/04/12 18:20:43 INFO namenode.FSNamesystem: fsLock is fair: true
19/04/12 18:20:43 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
19/04/12 18:20:43 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
19/04/12 18:20:43 INFO namenode.FSNamesystem: supergroup          = supergroup
19/04/12 18:20:43 INFO namenode.FSNamesystem: isPermissionEnabled = true
19/04/12 18:20:43 INFO namenode.FSNamesystem: HA Enabled: false
19/04/12 18:20:43 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19/04/12 18:20:43 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
19/04/12 18:20:43 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
19/04/12 18:20:43 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19/04/12 18:20:43 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Apr 12 18:20:43
19/04/12 18:20:43 INFO util.GSet: Computing capacity for map BlocksMap
19/04/12 18:20:43 INFO util.GSet: VM type       = 64-bit
19/04/12 18:20:43 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
19/04/12 18:20:43 INFO util.GSet: capacity      = 2^21 = 2097152 entries
19/04/12 18:20:43 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
19/04/12 18:20:43 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
19/04/12 18:20:43 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
19/04/12 18:20:43 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
19/04/12 18:20:43 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
19/04/12 18:20:43 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
19/04/12 18:20:43 INFO blockmanagement.BlockManager: defaultReplication         = 2
19/04/12 18:20:43 INFO blockmanagement.BlockManager: maxReplication             = 512
19/04/12 18:20:43 INFO blockmanagement.BlockManager: minReplication             = 1
19/04/12 18:20:43 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
19/04/12 18:20:43 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/04/12 18:20:43 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
19/04/12 18:20:43 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
19/04/12 18:20:43 INFO namenode.FSNamesystem: Append Enabled: true
19/04/12 18:20:43 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
19/04/12 18:20:43 INFO util.GSet: Computing capacity for map INodeMap
19/04/12 18:20:43 INFO util.GSet: VM type       = 64-bit
19/04/12 18:20:43 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
19/04/12 18:20:43 INFO util.GSet: capacity      = 2^20 = 1048576 entries
19/04/12 18:20:43 INFO namenode.FSDirectory: ACLs enabled? false
19/04/12 18:20:43 INFO namenode.FSDirectory: XAttrs enabled? true
19/04/12 18:20:43 INFO namenode.NameNode: Caching file names occurring more than 10 times
19/04/12 18:20:43 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
19/04/12 18:20:43 INFO util.GSet: Computing capacity for map cachedBlocks
19/04/12 18:20:43 INFO util.GSet: VM type       = 64-bit
19/04/12 18:20:43 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
19/04/12 18:20:43 INFO util.GSet: capacity      = 2^18 = 262144 entries
19/04/12 18:20:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
19/04/12 18:20:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
19/04/12 18:20:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19/04/12 18:20:43 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/04/12 18:20:43 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/04/12 18:20:43 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/04/12 18:20:43 INFO util.GSet: VM type       = 64-bit
19/04/12 18:20:43 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
19/04/12 18:20:43 INFO util.GSet: capacity      = 2^15 = 32768 entries
19/04/12 18:20:43 INFO namenode.FSImage: Allocated new BlockPoolId: BP-883662044-172.30.1.101-1555064443805
19/04/12 18:20:43 INFO common.Storage: Storage directory /data/hadoop/hdfs/dfs/name1 has been successfully formatted.
19/04/12 18:20:43 INFO common.Storage: Storage directory /data/hadoop/hdfs/dfs/name2 has been successfully formatted.
19/04/12 18:20:43 INFO common.Storage: Storage directory /data/hadoop/hdfs/dfs/name3 has been successfully formatted.
19/04/12 18:20:43 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/hdfs/dfs/name3/current/fsimage.ckpt_0000000000000000000 using no compression
19/04/12 18:20:43 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/hdfs/dfs/name1/current/fsimage.ckpt_0000000000000000000 using no compression
19/04/12 18:20:43 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/hdfs/dfs/name2/current/fsimage.ckpt_0000000000000000000 using no compression
19/04/12 18:20:43 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/hdfs/dfs/name1/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds .
19/04/12 18:20:43 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/hdfs/dfs/name2/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds .
19/04/12 18:20:43 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/hdfs/dfs/name3/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds .
19/04/12 18:20:43 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/04/12 18:20:43 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node101.yinzhengjie.org.cn/172.30.1.101
************************************************************/
[root@node101.yinzhengjie.org.cn ~]# 
[root@node101.yinzhengjie.org.cn ~]# hdfs namenode -format            # the NameNode needs to be formatted
[root@node101.yinzhengjie.org.cn ~]# ll /data/hadoop/hdfs/dfs/
total 24
drwx------. 3 root root 4096 Apr 12 18:09 data
drwxr-xr-x. 3 root root 4096 Apr 12 18:09 name
drwxr-xr-x. 3 root root 4096 Apr 12 18:20 name1
drwxr-xr-x. 3 root root 4096 Apr 12 18:20 name2
drwxr-xr-x. 3 root root 4096 Apr 12 18:20 name3
drwxr-xr-x. 3 root root 4096 Apr 12 18:20 namesecondary
[root@node101.yinzhengjie.org.cn ~]#  
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# ll name1/current/
total 1040
-rw-r--r--. 1 root root 1048576 Apr 12 18:25 edits_inprogress_0000000000000000001
-rw-r--r--. 1 root root     323 Apr 12 18:25 fsimage_0000000000000000000
-rw-r--r--. 1 root root      62 Apr 12 18:25 fsimage_0000000000000000000.md5
-rw-r--r--. 1 root root       2 Apr 12 18:25 seen_txid
-rw-r--r--. 1 root root     216 Apr 12 18:25 VERSION
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# cat name1/current/VERSION
#Fri Apr 12 18:25:06 CST 2019
namespaceID=1161472027
clusterID=CID-e7603940-eaba-4ce6-9ecd-3a449027b432
cTime=1555064443805
storageType=NAME_NODE
blockpoolID=BP-883662044-172.30.1.101-1555064443805
layoutVersion=-63
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# ll name2/current/        
total 1040
-rw-r--r--. 1 root root 1048576 Apr 12 18:25 edits_inprogress_0000000000000000001
-rw-r--r--. 1 root root     323 Apr 12 18:25 fsimage_0000000000000000000
-rw-r--r--. 1 root root      62 Apr 12 18:25 fsimage_0000000000000000000.md5
-rw-r--r--. 1 root root       2 Apr 12 18:25 seen_txid
-rw-r--r--. 1 root root     216 Apr 12 18:25 VERSION
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# cat name2/current/VERSION
#Fri Apr 12 18:25:06 CST 2019
namespaceID=1161472027
clusterID=CID-e7603940-eaba-4ce6-9ecd-3a449027b432
cTime=1555064443805
storageType=NAME_NODE
blockpoolID=BP-883662044-172.30.1.101-1555064443805
layoutVersion=-63
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# ll name3/current/        
total 1040
-rw-r--r--. 1 root root 1048576 Apr 12 18:25 edits_inprogress_0000000000000000001
-rw-r--r--. 1 root root     323 Apr 12 18:25 fsimage_0000000000000000000
-rw-r--r--. 1 root root      62 Apr 12 18:25 fsimage_0000000000000000000.md5
-rw-r--r--. 1 root root       2 Apr 12 18:25 seen_txid
-rw-r--r--. 1 root root     216 Apr 12 18:25 VERSION
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# cat name3/current/VERSION
#Fri Apr 12 18:25:06 CST 2019
namespaceID=1161472027
clusterID=CID-e7603940-eaba-4ce6-9ecd-3a449027b432
cTime=1555064443805
storageType=NAME_NODE
blockpoolID=BP-883662044-172.30.1.101-1555064443805
layoutVersion=-63
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
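   As the three VERSION files above show, name1, name2 and name3 carry the same namespaceID, clusterID and blockpoolID, i.e. they are interchangeable copies of the same namespace. A quick way to confirm this is the sketch below (a hypothetical check using the paths from this example; it does not appear in the transcript above):

grep -h clusterID name{1,2,3}/current/VERSION | sort -u        # a single line of output means all three directories belong to the same cluster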

   The approach above is fine when you are installing the NameNode for the first time, but what if your cluster has already been running for a while? (In other words, the running cluster already holds data, which cannot simply be deleted!) The solution is actually simple: copy the live metadata into the directories specified in the configuration file, as the following example shows:

[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# ll
total 24
drwx------. 3 root root 4096 Apr 12 18:09 data
drwxr-xr-x. 3 root root 4096 Apr 12 18:09 name
drwxr-xr-x. 3 root root 4096 Apr 12 18:26 name1
drwxr-xr-x. 3 root root 4096 Apr 12 18:26 name2
drwxr-xr-x. 3 root root 4096 Apr 12 18:26 name3
drwxr-xr-x. 3 root root 4096 Apr 12 18:20 namesecondary
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
[root@node101.yinzhengjie.org.cn ~]# cat /yinzhengjie/softwares/hadoop-2.9.2/etc/hadoop/hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
        <property>
                <name>dfs.namenode.checkpoint.period</name>
                <value>30</value>
        </property>

        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/data/hadoop/hdfs/dfs/name4,/data/hadoop/hdfs/dfs/name5,/data/hadoop/hdfs/dfs/name6</value>
        </property>

        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>

</configuration>

<!--
Purpose of the hdfs-site.xml configuration file:
        #HDFS-related settings, such as the number of file replicas, the block size, and whether permissions are enforced. Parameters defined here override the defaults in hdfs-default.xml.


Purpose of the dfs.namenode.checkpoint.period parameter:
        #The number of seconds between two periodic checkpoints. The default is 3600, i.e. one hour.

Purpose of the dfs.namenode.name.dir parameter:
        #Specifies the NameNode's working directory; the default is file://${hadoop.tmp.dir}/dfs/name. The NameNode's local directory can be configured as multiple paths, each holding identical content, which improves reliability. It is advisable to mount the configured directories on different disks, which also improves I/O performance!

Purpose of the dfs.replication parameter:
        #For data availability and redundancy, HDFS stores multiple replicas of the same block on multiple nodes; the default is 3 replicas. In a pseudo-distributed environment with only a single node, one replica is enough, and this can be set via the dfs.replication property. It is a software-level backup.

-->

[root@node101.yinzhengjie.org.cn ~]# 
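   Note that the NameNode has to be restarted before the new dfs.namenode.name.dir value takes effect. To double-check which directories the effective configuration resolves to, the stock Hadoop CLI offers hdfs getconf; the line below is a hedged sketch using the key from the file above:

hdfs getconf -confKey dfs.namenode.name.dir        # should print /data/hadoop/hdfs/dfs/name4,/data/hadoop/hdfs/dfs/name5,/data/hadoop/hdfs/dfs/name6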
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# cp -r name1 name4
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# cp -r name1 name5
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# cp -r name1 name6
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# ll
total 36
drwx------. 3 root root 4096 Apr 12 18:09 data
drwxr-xr-x. 3 root root 4096 Apr 12 18:09 name
drwxr-xr-x. 3 root root 4096 Apr 12 18:26 name1
drwxr-xr-x. 3 root root 4096 Apr 12 18:26 name2
drwxr-xr-x. 3 root root 4096 Apr 12 18:26 name3
drwxr-xr-x. 3 root root 4096 Apr 12 18:32 name4
drwxr-xr-x. 3 root root 4096 Apr 12 18:32 name5
drwxr-xr-x. 3 root root 4096 Apr 12 18:32 name6
drwxr-xr-x. 3 root root 4096 Apr 12 18:20 namesecondary
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
[root@node101.yinzhengjie.org.cn /data/hadoop/hdfs/dfs]# 
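   Before restarting the NameNode on the copied directories, it is worth verifying that the copies are byte-identical to the source. A minimal sanity check might look like the sketch below (it uses the paths and the fsimage filename from this example; diff and md5sum are standard tools, and this verification step is not shown in the transcript above):

diff -r name1 name4 && echo "name4 is identical to name1"              # diff prints nothing when the trees match
md5sum name{1,4,5,6}/current/fsimage_0000000000000000000               # all four checksums should be the same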