Notes on "HBase: The Definitive Guide"

  • Why use HBase
    - HBase was born because existing relational databases could no longer keep up with wildly growing data on the available hardware, and Memcached could not satisfy the need for real-time data access either
    - HBase suits unstructured or semi-structured data, and situations where the schema keeps changing
    - HBase is naturally suited to queries along a time axis
  • Werner Vogels (Amazon's CTO); his blog is worth following
  • The CAP theorem for distributed systems
    In theoretical computer science, the CAP theorem, also known as Brewer's theorem, states that it is impossible for a distributed system to provide all three of the following guarantees at once:

    - Consistency (all nodes see the same data at the same time)

    - Availability (every request receives a response, whether it succeeded or failed)

    - Partition tolerance (the system keeps operating despite arbitrary loss of messages or failure of parts of the system)

  • Eventually consistent http://www.allthingsdistributed.com/2007/12/eventually_consistent.html
  • NoSQL and relational databases are not an either/or choice. Look at your data and workload along the following dimensions to decide whether a NoSQL store such as HBase fits:
    - Data model: how is the data accessed? Is it unstructured, semi-structured, column-oriented, or document-oriented? How will the schema evolve over time?
    - Storage model: in-memory or persistent? An RDBMS is almost always persistent, but you may need an in-memory store; again, this comes down to your access patterns.
    - Consistency model: strict or eventual consistency? How much consistency are you willing to trade for availability, storage layout, or network transfer speed? See "Lessons from Giant-Scale Services".
    - Physical model: distributed or single-node?
    - Read/write performance: you must clearly understand the access pattern. Is it few writes and many reads, write-once read-many, or frequent updates?
    - Secondary indexes: do you know which secondary indexes your use cases need now, or will need later?
    - Failure handling: fault tolerance and disaster recovery.
    - Compression: compression can shrink physical storage by ratios of 10:1 or better, which matters greatly at big-data scale.
    - Load balancing
    - Atomic read-modify-write: having compare-and-swap (CAS) or check-and-set operations available can reduce client-side complexity. This is the choice between optimistic locking (non-blocking) and pessimistic locking; see the Java sketch after this list.
      Further reading: 《一種高效無鎖內存隊列的實現》, 《無鎖隊列的實現》
    - Locking, waits and deadlocks: how your design deals with locks, waits, and deadlocks.
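
    A minimal Java sketch of the check-and-set idea, using the older HTable-style HBase client API; the table name "counters" and the column/value names are made up for illustration:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Put;
      import org.apache.hadoop.hbase.util.Bytes;

      public class CasExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "counters");   // hypothetical table
          Put put = new Put(Bytes.toBytes("row1"));
          put.add(Bytes.toBytes("cf"), Bytes.toBytes("state"), Bytes.toBytes("new"));
          // Applied only if cf:state currently equals "old" -- optimistic
          // locking: no lock is held, the server compares and swaps atomically
          boolean applied = table.checkAndPut(Bytes.toBytes("row1"),
              Bytes.toBytes("cf"), Bytes.toBytes("state"),
              Bytes.toBytes("old"), put);
          System.out.println("CAS applied: " + applied);
          table.close();
        }
      }
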
  • Vertical scaling (scale-up): adding more memory, CPUs/cores, and so on. It scales poorly and demands heavy capital investment, so it no longer suits big data.
  • Denormalization, Duplication, and Intelligent Keys (DDI)

  • Some further reading around the HBase URL Shortener example:
    https://github.com/michiard/CLOUDS-LAB/tree/master/hbase-lab

  • The support for sparse, wide tables and column-oriented design often eliminates the need to normalize data and, in the process, the costly JOIN operations needed to aggregate the data at query time. Use of intelligent keys gives you fine-grained control over how—and where—data is stored. Partial key lookups are possible, and when combined with compound keys, they have the same properties as leading, left-edge indexes. Designing the schemas properly enables you to grow the data from 10 entries to 10 million entries, while still retaining the same write and read performance.

    Sparse, wide tables: the database is denormalized by design.

  • This piece is worth reading

  • Bigtable: abandons traditional RDBMS guarantees in pursuit of a more efficient distributed database that adapts to horizontal scaling and supports range scans as well as full-table scans

  • HBase elements: ROW, ROWKEY, COLUMN, COLUMN FAMILY, CELL

  • HBase's natural ordering: row keys are sorted lexicographically, i.e., by byte order
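
    A self-contained JDK snippet (no HBase required) showing the classic pitfall of byte-lexicographic row-key ordering and the usual zero-padding fix:

      import java.util.Arrays;
      import java.util.List;

      public class RowKeyOrder {
        public static void main(String[] args) {
          // String comparison mirrors byte order for ASCII keys
          List<String> keys = Arrays.asList("row-1", "row-10", "row-2", "row-9");
          keys.sort(String::compareTo);
          System.out.println(keys);   // [row-1, row-10, row-2, row-9]: 10 before 2!

          // Zero-padding the numeric part restores the intended order
          List<String> padded = Arrays.asList("row-01", "row-10", "row-02", "row-09");
          padded.sort(String::compareTo);
          System.out.println(padded); // [row-01, row-02, row-09, row-10]
        }
      }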

  • HBase supports secondary indexes, but Bigtable does not!

  • HBase column families must be defined up front and are best not changed often; keep their number small, ideally under ten (a thought: is that because tables are sparse and wide, so too many families would mean too many empty cells?). A single column family, however, can hold millions of columns, because they cost rows rather than columns. Timestamps can be assigned by the system or supplied by the user.
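
    A sketch of declaring column families up front and writing a cell with a user-supplied timestamp, again in the older client API; the table name "users", the family "info", and the timestamp value are illustrative:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.HColumnDescriptor;
      import org.apache.hadoop.hbase.HTableDescriptor;
      import org.apache.hadoop.hbase.client.HBaseAdmin;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Put;
      import org.apache.hadoop.hbase.util.Bytes;

      public class SchemaExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HBaseAdmin admin = new HBaseAdmin(conf);
          HTableDescriptor desc = new HTableDescriptor("users");
          desc.addFamily(new HColumnDescriptor("info")); // families are fixed at creation time
          admin.createTable(desc);

          HTable table = new HTable(conf, "users");
          Put put = new Put(Bytes.toBytes("user-1"));
          // Explicit timestamp; leave it out and the region server assigns one
          put.add(Bytes.toBytes("info"), Bytes.toBytes("name"),
              1234567890L, Bytes.toBytes("Ada"));
          table.put(put);
          table.close();
        }
      }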

  • Predicate deletion => Log-Structured Merge-tree (LSM-tree): deletes are written as tombstone markers and only physically applied later, during compaction

  • HFiles are kept per column family: each column family gets its own HFile(s), which makes it easier to hold a family in the in-memory store

  • The "canonical" use of HBase and Bigtable is the webtable: built for crawlers, storing things such as anchor and content

  • A region in HBase amounts to automatic sharding (auto-sharding)

    For HBase and modern hardware, the number would be more like 10 to 1,000 regions per server, but each between 1 GB and 2 GB in size
    A region server manages on the order of a thousand regions, and in principle a row lives in exactly one region (question: what happens if a single row outgrows a region? Shouldn't we avoid designing such row keys in the first place?)

  • Single-row transactions: operations within one row are atomic and transactional; there is no cross-row atomicity. See the sketch below.
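
    A sketch of bundling several mutations to the same row so they apply atomically, using RowMutations (added to the client API in a release slightly newer than the book covers); table and column names are illustrative:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.Delete;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Put;
      import org.apache.hadoop.hbase.client.RowMutations;
      import org.apache.hadoop.hbase.util.Bytes;

      public class SingleRowTxn {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "users");
          byte[] row = Bytes.toBytes("user-1");

          Put put = new Put(row);
          put.add(Bytes.toBytes("info"), Bytes.toBytes("state"), Bytes.toBytes("active"));
          Delete del = new Delete(row);
          del.deleteColumn(Bytes.toBytes("info"), Bytes.toBytes("tmp"));

          RowMutations rm = new RowMutations(row);
          rm.add(put);          // both changes become visible together...
          rm.add(del);          // ...or not at all -- but only within this one row
          table.mutateRow(rm);
          table.close();
        }
      }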

  • MapReduce can read from and write to HBase through InputFormat and OutputFormat implementations; see the sketch below
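
    A sketch of wiring an HBase table into a MapReduce job as its input via TableInputFormat, using the TableMapReduceUtil helper; the mapper body and the table name "users" are placeholders:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.Scan;
      import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
      import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
      import org.apache.hadoop.hbase.mapreduce.TableMapper;
      import org.apache.hadoop.io.IntWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Job;

      public class HBaseMrExample {
        public static class MyMapper extends TableMapper<Text, IntWritable> {
          @Override
          protected void map(ImmutableBytesWritable rowKey, Result columns, Context ctx) {
            // per-row logic goes here; rowKey/columns come straight from the table
          }
        }

        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          Job job = Job.getInstance(conf, "scan-users");
          job.setJarByClass(HBaseMrExample.class);
          Scan scan = new Scan();              // full-table scan; restrict it in real jobs
          TableMapReduceUtil.initTableMapperJob(
              "users", scan, MyMapper.class, Text.class, IntWritable.class, job);
          job.setNumReduceTasks(0);            // map-only sketch
          System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
      }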

  • An HFile is made of blocks, which in turn require a block index for lookups; that index is held in memory (the in-memory block index)

  • ZooKeeper is the counterpart of Bigtable's Chubby.
    It offers filesystem-like access with directories and files (called znodes) that distributed systems can use to negotiate ownership, register services, or watch for updates. Every region server creates its own ephemeral node in ZooKeeper, which the master, in turn, uses to discover available servers. They are also used to track server failures or network partitions.
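
    A sketch of the ephemeral-znode mechanism the quote describes, via the plain ZooKeeper Java client; the connect string and the znode path "/demo-rs1" are made up (HBase uses its own znode layout):

      import org.apache.zookeeper.CreateMode;
      import org.apache.zookeeper.ZooDefs;
      import org.apache.zookeeper.ZooKeeper;

      public class EphemeralRegistration {
        public static void main(String[] args) throws Exception {
          ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
          // An ephemeral node disappears automatically when the session dies,
          // which is how crashed or partitioned servers are detected
          zk.create("/demo-rs1", new byte[0],
              ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
          Thread.sleep(10000);   // stay "registered" while the session is alive
          zk.close();
        }
      }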

  • The master (HMaster) handles
    - ZooKeeper (question: what exactly do ZooKeeper and the HMaster each do? The book is vague on p. 26)
    - load balancing of regions
    - monitoring and managing schema changes and metadata operations, such as creating tables and column families
    - ZooKeeper still relies on a heartbeating mechanism

  • Region server
    - performs region splits (sharding)
    - manages its regions

  • Clients read and write data by talking to the region servers directly; the master is not involved in the data path

  • I do not fully understand this part (p. 27), on scan and lookup complexity:

    Table scans run in linear time and row key lookups or mutations are performed in logarithmic order—or, in extreme cases, even constant order (using Bloom filters). Designing the schema in a way to completely avoid explicit locking, combined with row-level atomicity, gives you the ability to scale your system without any notable effect on read or write performance.
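
    To make the complexity claims concrete, a sketch contrasting a point lookup (Get, logarithmic or near-constant with Bloom filters) with a range scan (Scan, linear in the rows touched); table and key names are illustrative:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.Get;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.ResultScanner;
      import org.apache.hadoop.hbase.client.Scan;
      import org.apache.hadoop.hbase.util.Bytes;

      public class ReadPaths {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "users");

          // Point lookup by row key
          Result one = table.get(new Get(Bytes.toBytes("user-42")));
          System.out.println("found: " + !one.isEmpty());

          // Range scan over [user-0, user-9)
          Scan scan = new Scan(Bytes.toBytes("user-0"), Bytes.toBytes("user-9"));
          ResultScanner scanner = table.getScanner(scan);
          for (Result row : scanner) {
            System.out.println(Bytes.toString(row.getRow()));
          }
          scanner.close();
          table.close();
        }
      }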

  • What is read-modify-write? From Wikipedia:
    In computer science, read–modify–write is a class of atomic operations such as test-and-set, fetch-and-add, and compare-and-swap which both read a memory location and write a new value into it simultaneously, either with a completely new value or some function of the previous value. These operations prevent race conditions in multi-threaded applications. Typically they are used to implement mutexes or semaphores. These atomic operations are also heavily used in non-blocking synchronization.
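
    A self-contained JDK illustration of the idea: a lock-free fetch-and-add built from compare-and-swap, the same read-modify-write pattern the quote describes:

      import java.util.concurrent.atomic.AtomicLong;

      public class RmwDemo {
        private static final AtomicLong counter = new AtomicLong();

        static long fetchAndAdd(long delta) {
          while (true) {
            long current = counter.get();                 // read
            long next = current + delta;                  // modify
            if (counter.compareAndSet(current, next)) {   // write iff unchanged
              return current;                             // success, no lock held
            }
            // another thread raced in between; retry with the fresh value
          }
        }

        public static void main(String[] args) throws InterruptedException {
          Runnable r = () -> { for (int i = 0; i < 100_000; i++) fetchAndAdd(1); };
          Thread a = new Thread(r), b = new Thread(r);
          a.start(); b.start();
          a.join(); b.join();
          System.out.println(counter.get());   // 200000, despite no locking
        }
      }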

3. Open questions:

  • If secondary-index tables are built by hand, how do you keep them fresh relative to the main table? And with several tables, keyed by USERID for example, how do you handle later joins across them? Does it come down to MapReduce?
  • How does HBase's sparseness actually manifest itself?
  • What do the compression algorithms look like?
  • What exactly happens during a compaction?
  • The LSM-tree section (the explanation around p. 25) is still unclear to me; needs further reading
  • What exactly is the ZooKeeper quorum for, and why do I have to configure it on the slaves?
  • I do not understand this sentence: "In addition, it provides push-down predicates, that is, filters, reducing data transferred over the network." The end of chapter 1 says HBase provides push-down predicates; "push down" means to cause to come or go down, so a predicate that reduces the data. I am not sure whether this simply means fetching only a configured subset of the data, and filtering out or deleting data that falls outside a configured validity window (see the filter sketch below).
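
    A sketch of what such a push-down predicate looks like in the client API: a filter attached to a Scan, evaluated on the region servers so that non-matching data never crosses the network; table and column names are illustrative:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.ResultScanner;
      import org.apache.hadoop.hbase.client.Scan;
      import org.apache.hadoop.hbase.filter.CompareFilter;
      import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
      import org.apache.hadoop.hbase.util.Bytes;

      public class PushDownFilter {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "users");
          Scan scan = new Scan();
          // Only rows whose info:state equals "active" come back to the client
          scan.setFilter(new SingleColumnValueFilter(
              Bytes.toBytes("info"), Bytes.toBytes("state"),
              CompareFilter.CompareOp.EQUAL, Bytes.toBytes("active")));
          ResultScanner scanner = table.getScanner(scan);
          for (Result row : scanner) {
            System.out.println(Bytes.toString(row.getRow()));
          }
          scanner.close();
          table.close();
        }
      }
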
4. To think about:
  • Denormalization, Duplication, and Intelligent Keys (DDI)
  • Read the Bigtable paper
  • APPEND? How does it look in physical storage?
  • How does one interact with the MemStore?