HDFS Architecture
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project. HDFS is part of the Apache Hadoop Core project. The project URL is http://hadoop.apache.org/.
Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system’s data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.
Applications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing rather than interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications that are targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.
Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.
HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed except for appends and truncates. Appending content to the end of a file is supported, but a file cannot be updated at an arbitrary point. This assumption simplifies data coherency issues and enables high throughput data access. A MapReduce application or a web crawler application fits perfectly with this model.
A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.
HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.
HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks and these blocks are stored in a set of DataNodes. The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system’s clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.
The NameNode and DataNode are pieces of software designed to run on commodity machines. These machines typically run a GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Usage of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software. Each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine but in a real deployment that is rarely the case.
The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.
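To make the client side of this split concrete, here is a minimal sketch using the FileSystem Java API. The NameNode address below is a hypothetical placeholder; the listing call is a pure metadata operation answered by the NameNode, while file contents would be streamed directly from DataNodes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListRoot {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; clients send metadata requests here,
        // but user data never flows through the NameNode.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }
}
```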
HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems; one can create and remove files, move a file from one directory to another, or rename a file. HDFS supports user quotas and access permissions. HDFS does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features.
The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.
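As a sketch of namespace operations through the same Java API (paths are illustrative, and a cluster configuration is assumed to be on the classpath), both calls below mutate only metadata, and each mutation is recorded by the NameNode:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NamespaceOps {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(new Configuration())) {
            fs.mkdirs(new Path("/user/alice/reports"));   // create a directory
            fs.rename(new Path("/user/alice/reports"),    // move it within the namespace
                      new Path("/user/alice/archive"));
            // Both operations are served by the NameNode; no user data moves.
        }
    }
}
```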
HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file.
All blocks in a file except the last block are the same size. Since support for variable-length blocks was added to append and hsync, users can start a new block without filling out the last block to the configured block size.
An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once (except for appends and truncates) and have strictly one writer at any time.
The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
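A hedged sketch of per-file tuning (the path is hypothetical and a cluster configuration is assumed): the block size and replication factor can be passed to FileSystem.create, and the replication factor can be changed later with setReplication:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(new Configuration())) {
            Path file = new Path("/user/alice/data.bin");  // hypothetical path
            // Create with replication factor 3 and a 128 MB block size.
            try (FSDataOutputStream out =
                     fs.create(file, true, 4096, (short) 3, 128L * 1024 * 1024)) {
                out.writeUTF("hello hdfs");
            }
            // The replication factor can be raised (or lowered) afterwards;
            // the NameNode schedules the copy or delete work asynchronously.
            fs.setReplication(file, (short) 4);
        }
    }
}
```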
The placement of replicas is critical to HDFS reliability and performance. Optimizing replica placement distinguishes HDFS from most other distributed file systems. This is a feature that needs lots of tuning and experience. The purpose of a rack-aware replica placement policy is to improve data reliability, availability, and network bandwidth utilization. The current implementation for the replica placement policy is a first effort in this direction. The short-term goals of implementing this policy are to validate it on production systems, learn more about its behavior, and build a foundation to test and research more sophisticated policies.
Large HDFS instances run on a cluster of computers that commonly spread across many racks. Communication between two nodes in different racks has to go through switches. In most cases, network bandwidth between machines in the same rack is greater than network bandwidth between machines in different racks.
The NameNode determines the rack id each DataNode belongs to via the process outlined in Hadoop Rack Awareness. A simple but non-optimal policy is to place replicas on unique racks. This prevents losing data when an entire rack fails and allows use of bandwidth from multiple racks when reading data. This policy evenly distributes replicas in the cluster which makes it easy to balance load on component failure. However, this policy increases the cost of writes because a write needs to transfer blocks to multiple racks.
For the common case, when the replication factor is three, HDFS’s placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. This policy cuts the inter-rack write traffic which generally improves write performance. The chance of rack failure is far less than that of node failure; this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance.
The current, default replica placement policy described here is a work in progress.
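The default rule can be paraphrased as a toy model. This is only an illustration of the policy described above, not the actual placement code inside HDFS:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the default 3-replica placement: the writer's node, a second
// node on the same rack, and a third node on a different rack.
public class PlacementSketch {
    record Node(String name, String rack) {}

    static List<Node> placeReplicas(Node writer, List<Node> cluster) {
        List<Node> chosen = new ArrayList<>();
        chosen.add(writer);                       // replica 1: the local node
        for (Node n : cluster) {                  // replica 2: same rack, different node
            if (!n.equals(writer) && n.rack().equals(writer.rack())) {
                chosen.add(n);
                break;
            }
        }
        for (Node n : cluster) {                  // replica 3: any node on a remote rack
            if (!n.rack().equals(writer.rack())) {
                chosen.add(n);
                break;
            }
        }
        return chosen;
    }
}
```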
To minimize global bandwidth consumption and read latency, HDFS tries to satisfy a read request from a replica that is closest to the reader. If there exists a replica on the same rack as the reader node, then that replica is preferred to satisfy the read request. If the HDFS cluster spans multiple data centers, then a replica that is resident in the local data center is preferred over any remote replica.
On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state. The NameNode receives Heartbeat and Blockreport messages from the DataNodes. A Blockreport contains the list of data blocks that a DataNode is hosting. Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode. After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state. It then determines the list of data blocks (if any) that still have fewer than the specified number of replicas. The NameNode then replicates these blocks to other DataNodes.
The HDFS namespace is stored by the NameNode. The NameNode uses a transaction log called the EditLog to persistently record every change that occurs to file system metadata. For example, creating a new file in HDFS causes the NameNode to insert a record into the EditLog indicating this. Similarly, changing the replication factor of a file causes a new record to be inserted into the EditLog. The NameNode uses a file in its local host OS file system to store the EditLog. The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode’s local file system too.
The NameNode keeps an image of the entire file system namespace and file Blockmap in memory. This key metadata item is designed to be compact, such that a NameNode with 4 GB of RAM is plenty to support a huge number of files and directories. When the NameNode starts up, it reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes out this new version into a new FsImage on disk. It can then truncate the old EditLog because its transactions have been applied to the persistent FsImage. This process is called a checkpoint. In the current implementation, a checkpoint only occurs when the NameNode starts up. Work is in progress to support periodic checkpointing in the near future.
The DataNode stores HDFS data in files in its local file system. The DataNode has no knowledge about HDFS files. It stores each block of HDFS data in a separate file in its local file system. The DataNode does not create all files in the same directory. Instead, it uses a heuristic to determine the optimal number of files per directory and creates subdirectories appropriately. It is not optimal to create all local files in the same directory because the local file system might not be able to efficiently support a huge number of files in a single directory. When a DataNode starts up, it scans through its local file system, generates a list of all HDFS data blocks that correspond to each of these local files and sends this report to the NameNode: this is the Blockreport.
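Conceptually, a checkpoint folds the EditLog into the FsImage. The toy sketch below uses a hypothetical edit record and an in-memory map; the real on-disk formats are different:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a checkpoint: apply logged namespace edits to the in-memory
// image; the merged result would then be flushed to disk as the new FsImage
// and the EditLog truncated.
public class CheckpointSketch {
    record Edit(String op, String path, String value) {}  // hypothetical record type

    static Map<String, String> checkpoint(Map<String, String> fsImage, List<Edit> editLog) {
        Map<String, String> merged = new HashMap<>(fsImage);
        for (Edit e : editLog) {
            switch (e.op()) {
                case "create", "setReplication" -> merged.put(e.path(), e.value());
                case "delete" -> merged.remove(e.path());
            }
        }
        return merged;
    }
}
```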
All HDFS communication protocols are layered on top of the TCP/IP protocol. A client establishes a connection to a configurable TCP port on the NameNode machine. It talks the ClientProtocol with the NameNode. The DataNodes talk to the NameNode using the DataNode Protocol. A Remote Procedure Call (RPC) abstraction wraps both the Client Protocol and the DataNode Protocol. By design, the NameNode never initiates any RPCs. Instead, it only responds to RPC requests issued by DataNodes or clients.
The primary objective of HDFS is to store data reliably even in the presence of failures. The three common types of failures are NameNode failures, DataNode failures, and network partitions (a condition in which all network connections between two groups of nodes in the system fail at the same time).
Each DataNode sends a Heartbeat message to the NameNode periodically. A network partition can cause a subset of DataNodes to lose connectivity with the NameNode. The NameNode detects this condition by the absence of a Heartbeat message. The NameNode marks DataNodes without recent Heartbeats as dead and does not forward any new IO requests to them. Any data that was registered to a dead DataNode is not available to HDFS any more. DataNode death may cause the replication factor of some blocks to fall below their specified value. The NameNode constantly tracks which blocks need to be replicated and initiates replication whenever necessary. The necessity for re-replication may arise due to many reasons: a DataNode may become unavailable, a replica may become corrupted, a hard disk on a DataNode may fail, or the replication factor of a file may be increased.
The time-out to mark DataNodes dead is conservatively long (over 10 minutes by default) in order to avoid replication storms caused by state flapping of DataNodes. For performance-sensitive workloads, users can configure a shorter interval to mark DataNodes as stale and avoid stale nodes on reads and/or writes.
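The liveness logic can be sketched as a toy monitor. The dead timeout matches the 10-minute default mentioned above; the stale threshold is an illustrative value, not a quoted default:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of heartbeat-based liveness: a DataNode becomes "stale" after a
// short silence and "dead" only after a conservatively long one.
public class HeartbeatMonitor {
    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();
    private static final long STALE_AFTER_MS = 30_000L;       // illustrative
    private static final long DEAD_AFTER_MS = 10L * 60_000L;  // ~10 minutes

    void onHeartbeat(String dataNodeId) {
        lastHeartbeat.put(dataNodeId, System.currentTimeMillis());
    }

    boolean isStale(String id) { return silence(id) > STALE_AFTER_MS; }

    boolean isDead(String id) { return silence(id) > DEAD_AFTER_MS; }

    private long silence(String id) {
        return System.currentTimeMillis() - lastHeartbeat.getOrDefault(id, 0L);
    }
}
```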
The HDFS architecture is compatible with data rebalancing schemes. A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold. In the event of a sudden high demand for a particular file, a scheme might dynamically create additional replicas and rebalance other data in the cluster. These types of data rebalancing schemes are not yet implemented.
It is possible that a block of data fetched from a DataNode arrives corrupted. This corruption can occur because of faults in a storage device, network faults, or buggy software. The HDFS client software implements checksum checking on the contents of HDFS files. When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace. When a client retrieves file contents it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file. If not, then the client can opt to retrieve that block from another DataNode that has a replica of that block.
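The idea can be sketched with plain CRC32 as a stand-in; HDFS's actual checksum algorithm, chunking, and sidecar file layout differ:

```java
import java.util.zip.CRC32;

// Toy illustration of client-side integrity checking: checksum at write time,
// recompute at read time, and fall back to another replica on a mismatch.
public class ChecksumCheck {
    static long checksum(byte[] block) {
        CRC32 crc = new CRC32();
        crc.update(block);
        return crc.getValue();
    }

    static boolean verify(byte[] blockFromDataNode, long storedChecksum) {
        // On false, the client would retry the read from a different replica.
        return checksum(blockFromDataNode) == storedChecksum;
    }
}
```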
The FsImage and the EditLog are central data structures of HDFS. A corruption of these files can cause the HDFS instance to be non-functional. For this reason, the NameNode can be configured to support maintaining multiple copies of the FsImage and EditLog. Any update to either the FsImage or EditLog causes each of the FsImages and EditLogs to get updated synchronously. This synchronous updating of multiple copies of the FsImage and EditLog may degrade the rate of namespace transactions per second that a NameNode can support. However, this degradation is acceptable because even though HDFS applications are very data intensive in nature, they are not metadata intensive. When a NameNode restarts, it selects the latest consistent FsImage and EditLog to use.
Another option to increase resilience against failures is to enable High Availability using multiple NameNodes either with a shared storage on NFS or using a distributed edit log (called Journal). The latter is the recommended approach.
Snapshots support storing a copy of data at a particular instant of time. One usage of the snapshot feature may be to roll back a corrupted HDFS instance to a previously known good point in time.
HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files. A typical block size used by HDFS is 128 MB. Thus, an HDFS file is chopped up into 128 MB chunks, and if possible, each chunk will reside on a different DataNode.
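The block arithmetic is straightforward; for example, under the default 128 MB block size:

```java
// Back-of-the-envelope block math for a 1 GB file at a 128 MB block size.
public class BlockMath {
    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;        // 128 MB
        long fileSize = 1024L * 1024 * 1024;        // 1 GB, for example
        long fullBlocks = fileSize / blockSize;     // 8 blocks
        long lastBlockBytes = fileSize % blockSize; // only the last block may be smaller
        System.out.println(fullBlocks + " blocks, trailing bytes: " + lastBlockBytes);
        // With a replication factor of 3, the cluster stores 24 block replicas.
    }
}
```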
A client request to create a file does not reach the NameNode immediately. In fact, initially the HDFS client caches the file data into a local buffer. Application writes are transparently redirected to this local buffer. When the local file accumulates data worth over one chunk size, the client contacts the NameNode. The NameNode inserts the file name into the file system hierarchy and allocates a data block for it. The NameNode responds to the client request with the identity of the DataNode and the destination data block. Then the client flushes the chunk of data from the local buffer to the specified DataNode. When a file is closed, the remaining un-flushed data in the local buffer is transferred to the DataNode. The client then tells the NameNode that the file is closed. At this point, the NameNode commits the file creation operation into a persistent store. If the NameNode dies before the file is closed, the file is lost.
The above approach has been adopted after careful consideration of target applications that run on HDFS. These applications need streaming writes to files. If a client writes to a remote file directly without any client side buffering, the network speed and the congestion in the network impacts throughput considerably. This approach is not without precedent. Earlier distributed file systems, e.g. AFS, have used client side caching to improve performance. A POSIX requirement has been relaxed to achieve higher performance of data uploads.
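A toy model of the staging behavior follows; flushBlock stands in for the real client-NameNode block allocation and client-DataNode transfer:

```java
import java.io.ByteArrayOutputStream;

// Toy model of client-side staging: writes accumulate in a local buffer, and
// a DataNode transfer happens only when a block's worth of data is collected.
public class StagingSketch {
    private static final int BLOCK_SIZE = 128 * 1024 * 1024;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    void write(byte[] data) {
        buffer.write(data, 0, data.length);
        if (buffer.size() >= BLOCK_SIZE) {
            flushBlock();  // contact the NameNode, then ship the chunk
        }
    }

    void close() {
        if (buffer.size() > 0) {
            flushBlock();  // remaining bytes are flushed when the file closes
        }
        // The client then tells the NameNode that the file is closed.
    }

    private void flushBlock() {
        // Placeholder for: ask the NameNode for a block + target DataNode,
        // then stream the buffered chunk to that DataNode.
        buffer.reset();
    }
}
```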
When a client is writing data to an HDFS file, its data is first written to a local buffer as explained in the previous section. Suppose the HDFS file has a replication factor of three. When the local buffer accumulates a chunk of user data, the client retrieves a list of DataNodes from the NameNode. This list contains the DataNodes that will host a replica of that block. The client then flushes the data chunk to the first DataNode. The first DataNode starts receiving the data in small portions, writes each portion to its local repository and transfers that portion to the second DataNode in the list. The second DataNode, in turn starts receiving each portion of the data block, writes that portion to its repository and then flushes that portion to the third DataNode. Finally, the third DataNode writes the data to its local repository. Thus, a DataNode can be receiving data from the previous one in the pipeline and at the same time forwarding data to the next one in the pipeline. Thus, the data is pipelined from one DataNode to the next.
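A simplified sketch of the pipelining idea (this toy version hands each portion to the nodes in order; the real pipeline overlaps receiving and forwarding):

```java
import java.util.List;

// Toy model of replication pipelining: the block is cut into small portions,
// and each portion visits DataNode 1 -> 2 -> 3 before the next is sent, so all
// replicas are written in a single pass over the data.
public class PipelineSketch {
    interface DataNodeStub {
        void writeLocally(byte[] portion);
    }

    static void pipeline(byte[] block, List<DataNodeStub> nodes, int portionSize) {
        for (int off = 0; off < block.length; off += portionSize) {
            int len = Math.min(portionSize, block.length - off);
            byte[] portion = new byte[len];
            System.arraycopy(block, off, portion, 0, len);
            for (DataNodeStub node : nodes) {
                node.writeLocally(portion);  // store locally, then forward downstream
            }
        }
    }
}
```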
HDFS can be accessed from applications in many different ways. Natively, HDFS provides a FileSystem Java API for applications to use. A C language wrapper for this Java API and a REST API are also available. In addition, an HTTP browser can also be used to browse the files of an HDFS instance. By using the NFS gateway, HDFS can be mounted as part of the client’s local file system.
HDFS allows user data to be organized in the form of files and directories. It provides a commandline interface called FS shell that lets a user interact with the data in HDFS. The syntax of this command set is similar to other shells (e.g. bash, csh) that users are already familiar with. Here are some sample action/command pairs:
| Action | Command |
| ------ | ------- |
| Create a directory named /foodir | bin/hadoop dfs -mkdir /foodir |
| Remove a directory named /foodir | bin/hadoop fs -rm -R /foodir |
| View the contents of a file named /foodir/myfile.txt | bin/hadoop dfs -cat /foodir/myfile.txt |
FS shell is targeted for applications that need a scripting language to interact with the stored data.
The DFSAdmin command set is used for administering an HDFS cluster. These are commands that are used only by an HDFS administrator. Here are some sample action/command pairs:
| Action | Command |
| ------ | ------- |
| Put the cluster in Safemode | bin/hdfs dfsadmin -safemode enter |
| Generate a list of DataNodes | bin/hdfs dfsadmin -report |
| Recommission or decommission DataNode(s) | bin/hdfs dfsadmin -refreshNodes |
A typical HDFS install configures a web server to expose the HDFS namespace through a configurable TCP port. This allows a user to navigate the HDFS namespace and view the contents of its files using a web browser.
If trash configuration is enabled, files removed by FS Shell are not immediately removed from HDFS. Instead, HDFS moves them to a trash directory (each user has its own trash directory under /user/<username>/.Trash). A file can be restored quickly as long as it remains in trash.
Most recently deleted files are moved to the current trash directory (/user/<username>/.Trash/Current), and at a configurable interval, HDFS creates checkpoints (under /user/<username>/.Trash/<date>) of the files in the current trash directory and deletes old checkpoints when they expire. See the expunge command of the FS shell for details on trash checkpointing.
After the expiry of its life in trash, the NameNode deletes the file from the HDFS namespace. The deletion of a file causes the blocks associated with the file to be freed. Note that there could be an appreciable time delay between the time a file is deleted by a user and the time of the corresponding increase in free space in HDFS.
The following example shows how files are deleted from HDFS by the FS Shell. We create two files (test1 and test2) under the directory delete:
```
$ hadoop fs -mkdir -p delete/test1
$ hadoop fs -mkdir -p delete/test2
$ hadoop fs -ls delete/
Found 2 items
drwxr-xr-x - hadoop hadoop 0 2015-05-08 12:39 delete/test1
drwxr-xr-x - hadoop hadoop 0 2015-05-08 12:40 delete/test2
```
We are going to remove the file test1. The output below shows that the file has been moved to the Trash directory.
```
$ hadoop fs -rm -r delete/test1
Moved: hdfs://localhost:8020/user/hadoop/delete/test1 to trash at: hdfs://localhost:8020/user/hadoop/.Trash/Current
```
Now we are going to remove the file with the skipTrash option, which will not send the file to Trash. It will be completely removed from HDFS.
```
$ hadoop fs -rm -r -skipTrash delete/test2
Deleted delete/test2
```
We can see now that the Trash directory contains only file test1.
```
$ hadoop fs -ls .Trash/Current/user/hadoop/delete/
Found 1 items
drwxr-xr-x - hadoop hadoop 0 2015-05-08 12:39 .Trash/Current/user/hadoop/delete/test1
```
So file test1 goes to Trash and file test2 is deleted permanently.
When the replication factor of a file is reduced, the NameNode selects excess replicas that can be deleted. The next Heartbeat transfers this information to the DataNode. The DataNode then removes the corresponding blocks and the corresponding free space appears in the cluster. Once again, there might be a time delay between the completion of the setReplication API call and the appearance of free space in the cluster.
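In Java API terms this is a setReplication call (hypothetical path; cluster configuration assumed). The call returns once the NameNode records the change, while the excess replicas are removed lazily via Heartbeats:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DecreaseReplication {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(new Configuration())) {
            // Free space appears in the cluster only after the affected
            // DataNodes learn of the change and delete their extra blocks.
            fs.setReplication(new Path("/user/alice/data.bin"), (short) 2);
        }
    }
}
```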
Hadoop JavaDoc API.
HDFS source code: http://hadoop.apache.org/version_control.html