Hadoop Basics: The Image File (fsimage) and the Edit Log (edits)
Author: Yin Zhengjie
Copyright notice: this is an original work; reproduction without permission is prohibited and will be pursued legally.
I. Viewing the contents of an image file (e.g. fsimage_0000000000000000767)
1>. What the image file is for
Looking at the XML dump of an fsimage, you can see clearly that the image file stores the directory structure (you can think of it as a tree) together with file attributes and other metadata. This is also a good place to mention the image file's md5 companion file: that checksum file exists so that modification or corruption of the image can be detected. The fsimage file is the NameNode's on-disk image of the metadata, usually called a checkpoint; it is a snapshot of the entire file system taken when the NameNode starts.
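To see how such a checksum catches modification, here is a minimal local simulation using `md5sum`; a scratch file stands in for the real fsimage, which lives under `dfs/name/current/`. (Hadoop's `.md5` files use an md5sum-compatible layout, though the exact format may vary by version.)

```shell
# Simulate the fsimage + .md5 pair with a scratch file (illustrative only).
f=$(mktemp)
printf 'pretend this is an fsimage' > "$f"
md5sum "$f" > "$f.md5"              # record the checksum, like fsimage_*.md5
md5sum -c "$f.md5" && echo "image intact"
printf 'tampering' >> "$f"          # simulate corruption or modification
md5sum -c "$f.md5" 2>/dev/null || echo "image was modified"
rm -f "$f" "$f.md5"
```

When the NameNode loads an fsimage it performs the equivalent verification before trusting the file.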
2>. Dump the image file to XML with the "hdfs oiv" command:
[yinzhengjie@s101 ~]$ ll
total 0
drwxrwxr-x. 4 yinzhengjie yinzhengjie 35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie 96 May 25 22:05 shell
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep fsimage
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.2K May 27 06:02 fsimage_0000000000000000767
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 06:02 fsimage_0000000000000000767.md5
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.4K May 27 07:58 fsimage_0000000000000000932
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 07:58 fsimage_0000000000000000932.md5
[yinzhengjie@s101 ~]$ hdfs oiv -i ./hadoop/dfs/name/current/fsimage_0000000000000000767 -o yinzhengjie.xml -p XML
[yinzhengjie@s101 ~]$ ll
total 8
drwxrwxr-x. 4 yinzhengjie yinzhengjie   35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie   96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 4934 May 27 08:10 yinzhengjie.xml
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ sz yinzhengjie.xml
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ cat yinzhengjie.xml
<?xml version="1.0"?>
<fsimage><NameSection>
<genstampV1>1000</genstampV1><genstampV2>1019</genstampV2><genstampV1Limit>0</genstampV1Limit><lastAllocatedBlockId>1073741839</lastAllocatedBlockId><txid>767</txid></NameSection>
<INodeSection><lastInodeId>16414</lastInodeId><inode><id>16385</id><type>DIRECTORY</type><name></name><mtime>1527331031268</mtime><permission>yinzhengjie:supergroup:rwxr-xr-x</permission><nsquota>9223372036854775807</nsquota><dsquota>-1</dsquota></inode>
<inode><id>16387</id><type>FILE</type><name>xrsync.sh</name><replication>3</replication><mtime>1527308253459</mtime><atime>1527330550802</atime><perferredBlockSize>134217728</perferredBlockSize><permission>yinzhengjie:supergroup:rw-r--r--</permission><blocks><block><id>1073741826</id><genstamp>1002</genstamp><numBytes>700</numBytes></block>
</blocks>
</inode>
<inode><id>16389</id><type>FILE</type><name>hadoop-2.7.3.tar.gz</name><replication>3</replication><mtime>1527310784699</mtime><atime>1527310775186</atime><perferredBlockSize>134217728</perferredBlockSize><permission>yinzhengjie:supergroup:rw-r--r--</permission><blocks><block><id>1073741827</id><genstamp>1003</genstamp><numBytes>134217728</numBytes></block>
<block><id>1073741828</id><genstamp>1004</genstamp><numBytes>79874467</numBytes></block>
</blocks>
</inode>
<inode><id>16402</id><type>DIRECTORY</type><name>shell</name><mtime>1527331084147</mtime><permission>yinzhengjie:supergroup:rwxr-xr-x</permission><nsquota>-1</nsquota><dsquota>-1</dsquota></inode>
<inode><id>16403</id><type>DIRECTORY</type><name>awk</name><mtime>1527332686407</mtime><permission>yinzhengjie:supergroup:rwxr-xr-x</permission><nsquota>-1</nsquota><dsquota>-1</dsquota></inode>
<inode><id>16404</id><type>DIRECTORY</type><name>sed</name><mtime>1527332624472</mtime><permission>yinzhengjie:supergroup:rwxr-xr-x</permission><nsquota>-1</nsquota><dsquota>-1</dsquota></inode>
<inode><id>16405</id><type>DIRECTORY</type><name>grep</name><mtime>1527332592029</mtime><permission>yinzhengjie:supergroup:rwxr-xr-x</permission><nsquota>-1</nsquota><dsquota>-1</dsquota></inode>
<inode><id>16406</id><type>FILE</type><name>yinzhengjie.sh</name><replication>3</replication><mtime>1527331084161</mtime><atime>1527331084147</atime><perferredBlockSize>134217728</perferredBlockSize><permission>yinzhengjie:supergroup:rw-r--r--</permission></inode>
<inode><id>16409</id><type>FILE</type><name>1.txt</name><replication>3</replication><mtime>1527332587208</mtime><atime>1527332587194</atime><perferredBlockSize>134217728</perferredBlockSize><permission>yinzhengjie:supergroup:rw-r--r--</permission></inode>
<inode><id>16410</id><type>FILE</type><name>2.txt</name><replication>3</replication><mtime>1527332592042</mtime><atime>1527332592029</atime><perferredBlockSize>134217728</perferredBlockSize><permission>yinzhengjie:supergroup:rw-r--r--</permission></inode>
<inode><id>16411</id><type>FILE</type><name>zabbix.sql</name><replication>3</replication><mtime>1527332604168</mtime><atime>1527332604154</atime><perferredBlockSize>134217728</perferredBlockSize><permission>yinzhengjie:supergroup:rw-r--r--</permission></inode>
<inode><id>16412</id><type>FILE</type><name>nagios.sh</name><replication>3</replication><mtime>1527332624486</mtime><atime>1527332624472</atime><perferredBlockSize>134217728</perferredBlockSize><permission>yinzhengjie:supergroup:rw-r--r--</permission></inode>
<inode><id>16413</id><type>FILE</type><name>keepalive.sh</name><replication>3</replication><mtime>1527332677350</mtime><atime>1527332677335</atime><perferredBlockSize>134217728</perferredBlockSize><permission>yinzhengjie:supergroup:rw-r--r--</permission></inode>
<inode><id>16414</id><type>FILE</type><name>nginx.conf</name><replication>3</replication><mtime>1527332686421</mtime><atime>1527332686407</atime><perferredBlockSize>134217728</perferredBlockSize><permission>yinzhengjie:supergroup:rw-r--r--</permission></inode>
</INodeSection>
<INodeReferenceSection></INodeReferenceSection><SnapshotSection><snapshotCounter>0</snapshotCounter></SnapshotSection>
<INodeDirectorySection><directory><parent>16385</parent><inode>16389</inode><inode>16402</inode><inode>16387</inode></directory>
<directory><parent>16402</parent><inode>16403</inode><inode>16405</inode><inode>16404</inode><inode>16406</inode></directory>
<directory><parent>16403</parent><inode>16413</inode><inode>16414</inode></directory>
<directory><parent>16404</parent><inode>16412</inode><inode>16411</inode></directory>
<directory><parent>16405</parent><inode>16409</inode><inode>16410</inode></directory>
</INodeDirectorySection>
<FileUnderConstructionSection></FileUnderConstructionSection>
<SnapshotDiffSection><diff><inodeid>16385</inodeid></diff></SnapshotDiffSection>
<SecretManagerSection><currentId>0</currentId><tokenSequenceNumber>0</tokenSequenceNumber></SecretManagerSection><CacheManagerSection><nextDirectiveId>1</nextDirectiveId></CacheManagerSection>
</fsimage>
[yinzhengjie@s101 ~]$
II. Viewing the contents of an edit log file
1>. What the edit log is for
As the name implies, the edit log records modifications to files and directories: deleting a directory, modifying a file, and so on are all recorded here. Edit log files are generally named "edits_*" (you will see such files below). After the NameNode starts, the edit log records the sequence of changes made to the file system.
2>. Inspect a Hadoop edit log file with the "hdfs oev" command:
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep edits | tail -5
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:33 edits_0000000000000001001-0000000000000001002
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:34 edits_0000000000000001003-0000000000000001004
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:35 edits_0000000000000001005-0000000000000001006
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:36 edits_0000000000000001007-0000000000000001008
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.0M May 27 08:36 edits_inprogress_0000000000000001009
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ ll
total 8
drwxrwxr-x. 4 yinzhengjie yinzhengjie   35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie   96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 4934 May 27 08:10 yinzhengjie.xml
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs oev -i ./hadoop/dfs/name/current/edits_0000000000000001007-0000000000000001008 -o edits.xml -p XML
[yinzhengjie@s101 ~]$ ll
total 12
-rw-rw-r--. 1 yinzhengjie yinzhengjie  315 May 27 08:39 edits.xml
drwxrwxr-x. 4 yinzhengjie yinzhengjie   35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie   96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 4934 May 27 08:10 yinzhengjie.xml
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ cat edits.xml
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-63</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>1007</TXID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_END_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>1008</TXID>
    </DATA>
  </RECORD>
</EDITS>
[yinzhengjie@s101 ~]$
3>. Viewing the edit log file currently in use
Which file is the edit log currently being written? As you have probably noticed, it is the one whose name starts with "edits_inprogress_".
[yinzhengjie@s101 ~]$ ll
total 0
drwxrwxr-x. 4 yinzhengjie yinzhengjie 35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie 96 May 25 22:05 shell
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep edits | tail -5
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:59 edits_0000000000000001053-0000000000000001054
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:00 edits_0000000000000001055-0000000000000001056
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:01 edits_0000000000000001057-0000000000000001058
-rw-rw-r--. 1 yinzhengjie yinzhengjie  912 May 27 09:02 edits_0000000000000001059-0000000000000001071
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.0M May 27 09:02 edits_inprogress_0000000000000001072
[yinzhengjie@s101 ~]$ hdfs oev -i ./hadoop/dfs/name/current/edits_inprogress_0000000000000001072 -o yinzhengjie.xml -p XML
[yinzhengjie@s101 ~]$ ll
total 4
drwxrwxr-x. 4 yinzhengjie yinzhengjie  35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie  96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 205 May 27 09:03 yinzhengjie.xml
[yinzhengjie@s101 ~]$ more yinzhengjie.xml
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-63</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>1072</TXID>
    </DATA>
  </RECORD>
</EDITS>
[yinzhengjie@s101 ~]$
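Since the starting transaction id of the in-progress segment is encoded in the file name, a small shell sketch can recover it without touching HDFS at all. The directory and file names below are made up for the demo:

```shell
# Recreate a fake current/ directory, then pull the start txid
# out of the in-progress segment's file name.
dir=$(mktemp -d)
touch "$dir/edits_0000000000000001055-0000000000000001056" \
      "$dir/edits_inprogress_0000000000000001072"
inprog=$(basename "$dir"/edits_inprogress_*)
txid=$((10#${inprog#edits_inprogress_}))   # 10# strips the zero padding
echo "in-progress segment starts at txid $txid"
rm -rf "$dir"
```

The same name-parsing trick works for the finalized `edits_<start>-<end>` segments.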
III. Rolling the edit log manually
1>. Roll the log with the "hdfs dfsadmin" command
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep edits | tail -5
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:12 edits_0000000000000001090-0000000000000001091
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:13 edits_0000000000000001092-0000000000000001093
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:14 edits_0000000000000001094-0000000000000001095
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:14 edits_0000000000000001096-0000000000000001097
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.0M May 27 09:14 edits_inprogress_0000000000000001098
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfsadmin -rollEdits
Successfully rolled edit logs.
New segment starts at txid 1100
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep edits | tail -5
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:13 edits_0000000000000001092-0000000000000001093
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:14 edits_0000000000000001094-0000000000000001095
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:14 edits_0000000000000001096-0000000000000001097
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:15 edits_0000000000000001098-0000000000000001099
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.0M May 27 09:15 edits_inprogress_0000000000000001100
[yinzhengjie@s101 ~]$
2>. The edit log also rolls when HDFS starts
When the HDFS service is restarted, the image file and the edit logs are merged, and the edit log is rolled automatically. You can actually watch this process through the web UI.
IV. Saving the namespace (i.e. manually saving the image file)
1>. Check HDFS's current mode; safe mode is off by default
[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF
[yinzhengjie@s101 ~]$
2>. Entering safe mode
To enter safe mode, simply run "hdfs dfsadmin -safemode enter":
[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF
[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode enter
Safe mode is ON
[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode get
Safe mode is ON
[yinzhengjie@s101 ~]$
Once the NameNode is in safe mode, write operations are rejected, but reads still work normally, as shown below:
[yinzhengjie@s101 ~]$ ll
total 4
drwxrwxr-x. 4 yinzhengjie yinzhengjie  35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie  96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 205 May 27 09:03 yinzhengjie.xml
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup  185540433 2018-05-27 09:02 /jdk-8u131-linux-x64.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -put yinzhengjie.xml /
put: Cannot create file/yinzhengjie.xml._COPYING_. Name node is in safe mode.
[yinzhengjie@s101 ~]$
3>. Leaving safe mode
As the test above shows, writes fail while the NameNode is in safe mode; as soon as we leave safe mode, uploading files works again.
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup  185540433 2018-05-27 09:02 /jdk-8u131-linux-x64.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ ll
total 4
drwxrwxr-x. 4 yinzhengjie yinzhengjie  35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie  96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 205 May 27 09:03 yinzhengjie.xml
[yinzhengjie@s101 ~]$ hdfs dfs -put yinzhengjie.xml /
put: Cannot create file/yinzhengjie.xml._COPYING_. Name node is in safe mode.
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode leave
Safe mode is OFF
[yinzhengjie@s101 ~]$ hdfs dfs -put yinzhengjie.xml /
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup  185540433 2018-05-27 09:02 /jdk-8u131-linux-x64.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
-rw-r--r--   3 yinzhengjie supergroup        205 2018-05-27 09:40 /yinzhengjie.xml
[yinzhengjie@s101 ~]$
4>. Waiting on safe mode
First, we put the cluster into safe mode from one client.
Then we leave safe mode again:
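The waiting command, `hdfs dfsadmin -safemode wait`, simply blocks until the NameNode reports that safe mode is off, which makes it handy in startup scripts. The behavior can be illustrated with a purely local stand-in (no cluster involved; a flag file plays the role of safe mode):

```shell
# Flag file present = "safe mode is ON". The background job plays the role
# of a script running `hdfs dfsadmin -safemode wait`: it blocks until the
# flag goes away, then continues.
flag=$(mktemp)
( while [ -e "$flag" ]; do sleep 0.1; done
  echo "safe mode is off, continuing" ) &
waiter=$!
sleep 0.3      # meanwhile, an operator runs: hdfs dfsadmin -safemode leave
rm -f "$flag"
wait "$waiter"
```

On a real cluster you would run the `wait` command in one terminal and `-safemode leave` in another, as in the demonstration above.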
5>. Saving the namespace
[yinzhengjie@s101 shell]$ hdfs dfsadmin -safemode get
Safe mode is OFF
[yinzhengjie@s101 shell]$ hdfs dfsadmin -safemode enter
Safe mode is ON
[yinzhengjie@s101 shell]$ hdfs dfsadmin -safemode get
Safe mode is ON
[yinzhengjie@s101 shell]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep fsimage
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.4K May 27 07:58 fsimage_0000000000000000932
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 07:58 fsimage_0000000000000000932.md5
-rw-rw-r--. 1 yinzhengjie yinzhengjie  650 May 27 09:20 fsimage_0000000000000001110
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 09:20 fsimage_0000000000000001110.md5
[yinzhengjie@s101 shell]$
[yinzhengjie@s101 shell]$ hdfs dfsadmin -saveNamespace
Save namespace successful
[yinzhengjie@s101 shell]$
[yinzhengjie@s101 shell]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep fsimage
-rw-rw-r--. 1 yinzhengjie yinzhengjie  650 May 27 09:20 fsimage_0000000000000001110
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 09:20 fsimage_0000000000000001110.md5
-rw-rw-r--. 1 yinzhengjie yinzhengjie  546 May 27 10:11 fsimage_0000000000000001200
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 10:11 fsimage_0000000000000001200.md5
[yinzhengjie@s101 shell]$
[yinzhengjie@s101 shell]$ hdfs dfsadmin -safemode leave
Safe mode is OFF
[yinzhengjie@s101 shell]$ hdfs dfsadmin -safemode get
Safe mode is OFF
[yinzhengjie@s101 shell]$
V. The HDFS startup process
HDFS startup proceeds in roughly three steps:
First, the current "edits_inprogress_*" file is finalized into a new edit log file;
Second, the image file and the new edit log files are loaded into memory and the logged edits are replayed, producing a new checkpoint image file (a .ckpt file);
Third, the .ckpt suffix is stripped from the checkpoint file, which becomes the new image file.
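The replay in step two can be pictured with a toy simulation (pure shell, not HDFS code; the paths are illustrative): the "image" is a list of paths, and each edit is an operation applied on top of it.

```shell
# image: the namespace as of the last checkpoint; edits: operations since then.
image="/hadoop-2.7.3.tar.gz
/xrsync.sh"
edits="ADD /yinzhengjie.xml
DEL /xrsync.sh"
new="$image"
while read -r op path; do
  case $op in
    ADD) new=$(printf '%s\n%s' "$new" "$path") ;;           # file created
    DEL) new=$(printf '%s\n' "$new" | grep -vx "$path") ;;  # file deleted
  esac
done <<EOF
$edits
EOF
printf '%s\n' "$new"   # the merged namespace = the new checkpoint
```

The real merge of course replays typed transactions (mkdir, setPermission, addBlock, ...) against an inode tree, but the principle is the same: checkpoint plus replayed edits yields the up-to-date namespace.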
To make this easier to follow, here is a detailed flow chart I found in Hadoop: The Definitive Guide:
If the figure above is not detailed enough, you can also refer to the sketch I drew below, which clearly lays out how the NameNode and the SecondaryNameNode work together:
VI. Log rolling (SecondaryNameNode, 2NN for short)
Above I showed how to roll the log and save the namespace by hand, but doing so requires entering safe mode, and once the NameNode is in safe mode users can only read; all writes are blocked. Never mind how tedious frequent manual operations would be for the operations team; the user experience would certainly suffer. This is where the SecondaryNameNode comes in. Before describing what the SecondaryNameNode does, let's first talk about the NameNode and the DataNode.
1>. NameNode
The NameNode stores metadata: directory structure, file types, permissions, and so on. You can think of it as holding only a map to where the real data lives. Remember the Stephen Chow movies we watched when we were young? He is my favorite Chinese comedy actor. In his film The Duke of Mount Deer, when Wei Xiaobao takes Chen Jinnan as his master, Chen hands him a thick manual. Wei says, "So thick? This will take me a month to master," and Chen replies, "This volume is merely the table of contents of the secret art."
2>. DataNode
DataNodes are the servers that store the actual data. Everything a user uploads ends up on DataNodes, and in a real environment each block is usually stored in more than one copy!
3>. SecondaryNameNode
We know the NameNode records metadata changes in real time: changes are held in memory and appended to the edit log (e.g. edits_0000000000000001007-0000000000000001008). The edit log is only merged into the image file when the NameNode restarts, which produces an up-to-date snapshot of the file system. But in a production cluster the NameNode is rarely restarted, which means that after it has run for a long time the edit logs grow very large. This causes several problems: the edit log files become hard to manage; a NameNode restart takes a long time, because many edits have to be replayed onto the fsimage; and if the NameNode crashes, we stand to lose many changes, because the on-disk fsimage is very stale.
To overcome this, we need a manageable mechanism that keeps the edit logs small and produces an up-to-date fsimage (similar to the snapshot feature of a virtual machine), which also reduces the load on the NameNode. The SecondaryNameNode solves exactly this problem. By default it creates a checkpoint every 3600s (1h); the interval can be customized through the "dfs.namenode.checkpoint.period" property in the configuration file. A checkpoint is created as follows:
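For reference, the interval is set in hdfs-site.xml; a sketch with the 1-hour default shown above (newer releases can also trigger a checkpoint by transaction count via "dfs.namenode.checkpoint.txns" — check your version's documentation):

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value> <!-- seconds between SecondaryNameNode checkpoints -->
</property>
```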
First, the NameNode rolls its edit log;
Second, the image file and the new edit log files are fetched to the SecondaryNameNode and merged, producing a new checkpoint file (*.ckpt);
Third, the checkpoint file (.ckpt) is sent back to the NameNode;
Fourth, the NameNode renames it, making it the new image file.
A question arises: the SecondaryNameNode rolls the log for us automatically and places the files it produces in the directory where the NameNode keeps its images. So when the NameNode restarts, does it regenerate a fresh fsimage, or simply use the latest one the SecondaryNameNode provided? The answer: it regenerates one. The two work independently without interfering with each other; the NameNode simply starts its replay from the point where the SecondaryNameNode's most recent merge left off!
VII. Hadoop's default web UI ports
1>. The NameNode's default web UI port is 50070
2>. The SecondaryNameNode's default web UI port is 50090
3>. The NameNode's default client connection (RPC) port is 8020
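These defaults can be overridden in the configuration files. A hypothetical sketch for a Hadoop 2.x cluster (the host name s101 is taken from the examples above; property names are the 2.x ones, so verify against your version):

```xml
<!-- core-site.xml: the client connection port is part of fs.defaultFS -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://s101:8020</value>
</property>

<!-- hdfs-site.xml: the web UI addresses -->
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:50070</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>0.0.0.0:50090</value>
</property>
```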