[Translation] Apache HBase New Feature: MOB Support (Part 1)

Original post: http://blog.cloudera.com/blog/2015/06/inside-apache-hbases-new-support-for-mobs/

Design Background of the HBase MOB Feature

Apache HBase is a distributed, scalable, performant, consistent key-value database that can store a variety of binary data types. It excels at storing many relatively small values (<10KB) while providing low-latency reads and writes.
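As a quick illustration of this key-value model, a minimal hbase shell session might look like the following sketch (the table name `t1`, column family `f`, and row key are hypothetical, invented here for illustration):

```shell
# hbase shell -- table and column names below are hypothetical
create 't1', 'f'

# store and fetch a small value; single-row writes and reads like
# these are the low-latency path HBase is optimized for
put 't1', 'row1', 'f:doc', 'small-value'
get 't1', 'row1'
```

Running these commands requires a live HBase cluster; they are shown only to fix the mental model of rows, column families, and qualifiers used throughout this post.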

However, there is a growing demand for storing documents, images, and other moderate objects (MOBs) in HBase while maintaining low latency for reads and writes. One such use case is a bank that stores signed and scanned customer documents. As another example, transport agencies may want to store snapshots of traffic and moving cars. These MOBs are generally write-once.


Unfortunately, performance can degrade in situations where many moderately sized values (100KB to 10MB) are stored, due to the ever-increasing I/O pressure created by compactions. Consider the case where 1TB of photos from traffic cameras, each 1MB in size, are stored into HBase daily. Parts of the stored files are compacted multiple times via minor compactions, and eventually the data is rewritten by major compactions. As these MOBs accumulate, the I/O created by compactions slows the compactions themselves, further blocks memstore flushing, and eventually blocks updates. A big MOB store will also trigger frequent region splits, reducing the availability of the affected regions.

In order to address these drawbacks, Cloudera and Intel engineers have implemented MOB support in an HBase branch (hbase-11339: HBase MOB). This branch will be merged to the master in HBase 1.1 or 1.2, and is already present and supported in CDH 5.4.x.


(Translator's note: this feature did not actually ship in HBase 1.1 or 1.2; it was merged into HBase 2.0.0. It is available in CDH 5.4.x.)

Operations on MOBs are usually write-intensive, with rare updates or deletes and relatively infrequent reads. MOBs are usually stored together with their metadata. Metadata relating to MOBs may include, for instance, car number, speed, and color. Metadata are very small relative to the MOBs. Metadata are usually accessed for analysis, while MOBs are usually randomly accessed only when they are explicitly requested with row keys.

Users want to read and write MOBs in HBase with low latency through the same APIs, and want strong consistency, security, snapshots, HBase replication between clusters, and so on. To meet these goals, MOBs were moved out of the main I/O path of HBase and into a new I/O path.

In this post, you will learn about this design approach, and why it was selected.


Analysis of Possible Approaches

There were a few possible approaches to this problem. The first approach we considered was to store MOBs in HBase with tuned split and compaction policies: a bigger desired MaxFileSize decreases the frequency of region splits, and fewer or no compactions can avoid the write amplification penalty. That approach would improve write latency and throughput considerably. However, along with the increasing number of stored files, there would be too many open readers in a single store, even more than what is allowed by the OS. As a result, a lot of memory would be consumed and read performance would degrade.
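The tuning described above could be expressed through ordinary table attributes. A sketch in hbase shell, with the table name and threshold values chosen only for illustration:

```shell
# hbase shell -- table name and values are hypothetical
# raise the split threshold to ~100GB so regions split far less often
alter 't1', MAX_FILESIZE => '107374182400'

# disable compactions entirely on this table to avoid write amplification
alter 't1', COMPACTION_ENABLED => false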


Another approach was to use an HBase + HDFS model to store the metadata and MOBs separately. In this model, each file is linked from an entry in HBase. This is a client-side solution, and the transaction is controlled by the client; no HBase-side memory is consumed by MOBs. This approach would work for objects larger than 50MB, but for MOBs, many small files lead to inefficient HDFS usage, since the default block size in HDFS is 128MB.

For example, let's say a NameNode has 48GB of memory and each file is 100KB with three replicas. Each file takes more than 300 bytes of NameNode memory, so a NameNode with 48GB of memory can hold about 160 million files, which would limit us to storing only about 16TB of MOB files in total.


As an improvement, we could have assembled the small MOB files into bigger ones (that is, a file could have multiple MOB entries) and stored the offset and length in the HBase table for fast reading. However, maintaining data consistency and managing deleted MOBs and small MOB files in compactions are difficult. Furthermore, if we were to use this approach, we'd have to consider new security policies, lose the atomicity properties of writes, and potentially lose the backup and disaster recovery provided by replication and snapshots.
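Under that assembled-file scheme, the HBase table would hold only pointers into the packed HDFS files. A hypothetical row layout (the table `photos`, family `meta`, row key, and blob path are all invented for illustration):

```shell
# hbase shell -- every name and value below is hypothetical
# one HBase row per MOB, pointing into a large packed blob file on HDFS
put 'photos', 'cam42#20150601-1200', 'meta:file',   '/mob/blobs/blob-00042'
put 'photos', 'cam42#20150601-1200', 'meta:offset', '1048576'
put 'photos', 'cam42#20150601-1200', 'meta:length', '102400'
```

The read path would fetch the row, then seek to `offset` in `file` and read `length` bytes from HDFS, which is exactly where the consistency and compaction bookkeeping described above becomes the client's problem.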


HBase MOB Architecture and Design

In the end, because most of the concerns around storing MOBs in HBase involve the I/O created by compactions, the key was to move MOBs out of management by normal regions to avoid region splits and compactions there.

The HBase MOB design is similar to the HBase + HDFS approach in that we store the metadata and MOBs separately. The difference lies in the server-side design: the memstore caches MOBs before they are flushed to disk, the MOBs are written into an HFile called a "MOB file" on each flush, and each MOB file holds multiple entries rather than one HDFS file per MOB. MOB files are stored in a special region. All reads and writes go through the current HBase APIs.
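For concreteness, this is roughly how a MOB-enabled column family is declared in the versions where the feature eventually shipped (HBase 2.0 / CDH 5.4). This post predates the merge, so treat the exact attribute names as an assumption; the table name, row key, and threshold are hypothetical:

```shell
# hbase shell -- syntax as shipped with the MOB feature (assumption);
# values above MOB_THRESHOLD bytes are routed to MOB files
create 'traffic', {NAME => 'mob', IS_MOB => true, MOB_THRESHOLD => 102400}

# writes and reads use the unchanged client API
put 'traffic', 'cam42#20150601-1200', 'mob:image', '<binary jpeg bytes>'
get 'traffic', 'cam42#20150601-1200'
```

Note that nothing about the put/get changes for the client; the MOB routing happens entirely server-side, which is the central point of the design.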


 

To be continued; see Part 2: http://www.javashuo.com/article/p-riumtcyq-hz.html
