Operating on HBase with Spark

Spark is a computation framework. In a Spark environment you can operate not only on individual files and files in HDFS, but also on HBase.

Pulling data out of HBase as an enterprise data source involves reading HBase data. In this article, to make it as easy as possible to practice operating on HBase, we use the Spark Shell for all HBase operations.

1. Environment

Hadoop 2.2.0

HBase 0.96.2-hadoop2, r1581096

Spark 1.0.0

This article assumes the environment is already set up; for Spark setup, see the earlier post on building a Spark/Hadoop cluster.

Note that Hadoop 2.2.0 must be paired with a compatible HBase version; HBase 0.96.2 is used here.

2. Principle

Operating on HBase from Spark works on the same principle as operating on HBase with the Java client:

Scala and Java are both JVM languages; as long as the HBase classes are loaded onto the classpath, they can be called directly, just as in any other framework.

Similarity: both act as a client that connects to the HMaster and then uses the HBase API to operate on HBase.

Difference: the only difference is that Spark can treat HBase data as an RDD and use Spark for parallel computation over it.
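For comparison, here is a minimal sketch of plain HBase client access from Scala with no Spark involved (assuming the 'scores' table used later in this article and an hbase-site.xml on the classpath; this snippet is not part of the original session):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Get, HTable}
import org.apache.hadoop.hbase.util.Bytes

// Plain HBase client access: build a configuration, open the table, read one row.
val conf   = HBaseConfiguration.create()
val table  = new HTable(conf, "scores")
val result = table.get(new Get(Bytes.toBytes("Jim")))
val math   = Bytes.toString(result.getValue(Bytes.toBytes("course"), Bytes.toBytes("math")))
println("Jim's math score: " + math)
table.close()

Spark uses exactly the same client classes; the extra step is wrapping the table scan in an RDD via newAPIHadoopRDD, as shown below.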

3. Practice

1. First check the dependency jars. If the HBase jars are not yet on the spark-shell classpath, they need to be added.
Configuration: add SPARK_CLASSPATH=/home/victor/software/hbase/lib/* to spark-env.sh.
Then restart bin/spark-shell; once startup completes and the Workers have registered successfully, import the jars.

2. Operating on HBase

2.1 Data in HBase

HBase has a 'scores' table with two column families, course and grade. The data looks like this:
hbase(main):001:0> scan 'scores'
ROW                                    COLUMN+CELL                                                                                                     
 Jim                                   column=course:art, timestamp=1404142440676, value=67                                                            
 Jim                                   column=course:math, timestamp=1404142434405, value=77                                                           
 Jim                                   column=grade:, timestamp=1404142422653, value=3                                                                 
 Tom                                   column=course:art, timestamp=1404142407018, value=88                                                            
 Tom                                   column=course:math, timestamp=1404142398986, value=97                                                           
 Tom                                   column=grade:, timestamp=1404142383206, value=5                                                                 
 shengli                               column=course:art, timestamp=1404142468266, value=17                                                            
 shengli                               column=course:math, timestamp=1404142461952, value=27                                                           
 shengli                               column=grade:, timestamp=1404142452157, value=8                                                                 
3 row(s) in 0.3230 seconds

2.2 Initialize the connection parameters

scala> import org.apache.spark._
import org.apache.spark._

scala> import org.apache.spark.rdd.NewHadoopRDD
import org.apache.spark.rdd.NewHadoopRDD

scala> import org.apache.hadoop.conf.Configuration;  
import org.apache.hadoop.conf.Configuration

scala> import org.apache.hadoop.hbase.HBaseConfiguration;  
import org.apache.hadoop.hbase.HBaseConfiguration

scala> import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

scala> val configuration = HBaseConfiguration.create();  // initialize the configuration
configuration: org.apache.hadoop.conf.Configuration = Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hbase-default.xml, hbase-site.xml

scala> configuration.set("hbase.zookeeper.property.clientPort", "2181"); //設置zookeeper client端口
 
scala> configuration.set("hbase.zookeeper.quorum", "localhost");  //設置zookeeper quorum

scala> configuration.set("hbase.master", "localhost:60000");  //設置hbase master

scala> configuration.addResource("/home/victor/software/hbase/conf/hbase-site.xml")  // load the HBase configuration file

scala> configuration.set(TableInputFormat.INPUT_TABLE, "scores")
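Optionally, the scan that TableInputFormat performs can be narrowed before the RDD is built. As a sketch (these extra property settings are my assumption and were not part of the original session), the standard TableInputFormat scan properties can be set like this:

// Restrict the scan to one column family and a row-key range;
// TableInputFormat reads these properties when it creates the input splits.
configuration.set(TableInputFormat.SCAN_COLUMN_FAMILY, "course")
configuration.set(TableInputFormat.SCAN_ROW_START, "Jim")
configuration.set(TableInputFormat.SCAN_ROW_STOP, "Tom")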
scala> import org.apache.hadoop.hbase.client.HBaseAdmin
import org.apache.hadoop.hbase.client.HBaseAdmin
scala> val hadmin = new HBaseAdmin(configuration); // instantiate the HBase admin
2014-07-01 00:39:24,649 INFO  [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0xc7eea5, quorum=localhost:2181, baseZNode=/hbase
2014-07-01 00:39:24,707 INFO  [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0xc7eea5 connecting to ZooKeeper ensemble=localhost:2181
2014-07-01 00:39:24,753 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-07-01 00:39:24,755 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to localhost/127.0.0.1:2181, initiating session
2014-07-01 00:39:24,938 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x146ed61c4ef0015, negotiated timeout = 40000
hadmin: org.apache.hadoop.hbase.client.HBaseAdmin = org.apache.hadoop.hbase.client.HBaseAdmin@1260466
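The admin handle can be used to sanity-check the table before wiring it into an RDD, for example (a small sketch, not in the original session):

// Verify that the target table exists and is being served before scanning it.
if (!hadmin.isTableAvailable("scores")) {
  println("table 'scores' is not available")
}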

Next, use the Hadoop API to create an RDD:
  
scala> val hrdd = sc.newAPIHadoopRDD(configuration, classOf[TableInputFormat], 
     | classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
     | classOf[org.apache.hadoop.hbase.client.Result])
2014-07-01 00:51:06,683 WARN  [main] util.SizeEstimator (Logging.scala:logWarning(70)) - Failed to check whether UseCompressedOops is set; assuming yes
2014-07-01 00:51:06,936 INFO  [main] storage.MemoryStore (Logging.scala:logInfo(58)) - ensureFreeSpace(85877) called with curMem=0, maxMem=308910489
2014-07-01 00:51:06,946 INFO  [main] storage.MemoryStore (Logging.scala:logInfo(58)) - Block broadcast_0 stored as values to memory (estimated size 83.9 KB, free 294.5 MB)
hrdd: org.apache.spark.rdd.RDD[(org.apache.hadoop.hbase.io.ImmutableBytesWritable, org.apache.hadoop.hbase.client.Result)] = NewHadoopRDD[0] at newAPIHadoopRDD at <console>:22

Version 1 (the code below may not work on the latest versions; see Version 2):

Reading records:

Here we take 1 record. We can see the format matches the HadoopRDD we set up: the key is an ImmutableBytesWritable and the value is an HBase Result.
scala> hrdd take 1
2014-07-01 00:51:50,371 INFO  [main] spark.SparkContext (Logging.scala:logInfo(58)) - Starting job: take at <console>:25
2014-07-01 00:51:50,423 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Got job 0 (take at <console>:25) with 1 output partitions (allowLocal=true)
2014-07-01 00:51:50,425 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Final stage: Stage 0(take at <console>:25)
2014-07-01 00:51:50,426 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Parents of final stage: List()
2014-07-01 00:51:50,477 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Missing parents: List()
2014-07-01 00:51:50,478 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Computing the requested partition locally
2014-07-01 00:51:50,509 INFO  [Local computation of job 0] rdd.NewHadoopRDD (Logging.scala:logInfo(58)) - Input split: localhost:,
2014-07-01 00:51:50,894 INFO  [main] spark.SparkContext (Logging.scala:logInfo(58)) - Job finished: take at <console>:25, took 0.522612687 s
res5: Array[(org.apache.hadoop.hbase.io.ImmutableBytesWritable, org.apache.hadoop.hbase.client.Result)] = Array((4a 69 6d,keyvalues={Jim/course:art/1404142440676/Put/vlen=2/mvcc=0, Jim/course:math/1404142434405/Put/vlen=2/mvcc=0, Jim/grade:/1404142422653/Put/vlen=1/mvcc=0}))

Getting the Result object:
scala> val res = hrdd.take(1)
2014-07-01 01:09:13,486 INFO  [main] spark.SparkContext (Logging.scala:logInfo(58)) - Starting job: take at <console>:24
2014-07-01 01:09:13,487 INFO  [spark-akka.actor.default-dispatcher-15] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Got job 4 (take at <console>:24) with 1 output partitions (allowLocal=true)
2014-07-01 01:09:13,487 INFO  [spark-akka.actor.default-dispatcher-15] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Final stage: Stage 4(take at <console>:24)
2014-07-01 01:09:13,487 INFO  [spark-akka.actor.default-dispatcher-15] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Parents of final stage: List()
2014-07-01 01:09:13,488 INFO  [spark-akka.actor.default-dispatcher-15] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Missing parents: List()
2014-07-01 01:09:13,488 INFO  [spark-akka.actor.default-dispatcher-15] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Computing the requested partition locally
2014-07-01 01:09:13,488 INFO  [Local computation of job 4] rdd.NewHadoopRDD (Logging.scala:logInfo(58)) - Input split: localhost:,
2014-07-01 01:09:13,504 INFO  [main] spark.SparkContext (Logging.scala:logInfo(58)) - Job finished: take at <console>:24, took 0.018069267 s
res: Array[(org.apache.hadoop.hbase.io.ImmutableBytesWritable, org.apache.hadoop.hbase.client.Result)] = Array((4a 69 6d,keyvalues={Jim/course:art/1404142440676/Put/vlen=2/mvcc=0, Jim/course:math/1404142434405/Put/vlen=2/mvcc=0, Jim/grade:/1404142422653/Put/vlen=1/mvcc=0}))

scala> res(0)
res33: (org.apache.hadoop.hbase.io.ImmutableBytesWritable, org.apache.hadoop.hbase.client.Result) = (4a 69 6d,keyvalues={Jim/course:art/1404142440676/Put/vlen=2/mvcc=0, Jim/course:math/1404142434405/Put/vlen=2/mvcc=0, Jim/grade:/1404142422653/Put/vlen=1/mvcc=0})

scala> res(0)._2
res34: org.apache.hadoop.hbase.client.Result = keyvalues={Jim/course:art/1404142440676/Put/vlen=2/mvcc=0, Jim/course:math/1404142434405/Put/vlen=2/mvcc=0, Jim/grade:/1404142422653/Put/vlen=1/mvcc=0}

scala> val rs = res(0)._2
rs: org.apache.hadoop.hbase.client.Result = keyvalues={Jim/course:art/1404142440676/Put/vlen=2/mvcc=0, Jim/course:math/1404142434405/Put/vlen=2/mvcc=0, Jim/grade:/1404142422653/Put/vlen=1/mvcc=0}

scala> rs.
asInstanceOf             cellScanner              containsColumn           containsEmptyColumn      containsNonEmptyColumn   copyFrom                 
getColumn                getColumnCells           getColumnLatest          getColumnLatestCell      getExists                getFamilyMap             
getMap                   getNoVersionMap          getRow                   getValue                 getValueAsByteBuffer     isEmpty                  
isInstanceOf             list                     listCells                loadValue                raw                      rawCells                 
setExists                size                     toString                 value                    

Iterate over this record and pull out the value of each cell:
scala> val kv_array = rs.raw
warning: there were 1 deprecation warning(s); re-run with -deprecation for details
kv_array: Array[org.apache.hadoop.hbase.KeyValue] = Array(Jim/course:art/1404142440676/Put/vlen=2/mvcc=0, Jim/course:math/1404142434405/Put/vlen=2/mvcc=0, Jim/grade:/1404142422653/Put/vlen=1/mvcc=0)

Iterating over the record:

scala> for(keyvalue <- kv_array) println("rowkey:"+ new String(keyvalue.getRow)+ " cf:"+new String(keyvalue.getFamily()) + " column:" + new String(keyvalue.getQualifier) + " " + "value:"+new String(keyvalue.getValue()))
warning: there were 4 deprecation warning(s); re-run with -deprecation for details
rowkey:Jim cf:course column:art value:67
rowkey:Jim cf:course column:math value:77
rowkey:Jim cf:grade column: value:3
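The raw / getRow / getFamily / getQualifier / getValue calls above are deprecated in HBase 0.96. A sketch of the same loop against the newer Cell API (assuming CellUtil is on the classpath, which it is with the 0.96 jars) would be:

import org.apache.hadoop.hbase.CellUtil
import org.apache.hadoop.hbase.util.Bytes

// Same traversal using rawCells() and CellUtil instead of the deprecated KeyValue accessors.
for (cell <- rs.rawCells()) {
  println("rowkey:" + Bytes.toString(CellUtil.cloneRow(cell)) +
          " cf:" + Bytes.toString(CellUtil.cloneFamily(cell)) +
          " column:" + Bytes.toString(CellUtil.cloneQualifier(cell)) +
          " value:" + Bytes.toString(CellUtil.cloneValue(cell)))
}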

Counting the records:

scala> hrdd.count
2014-07-01 01:26:03,133 INFO  [main] spark.SparkContext (Logging.scala:logInfo(58)) - Starting job: count at <console>:25
2014-07-01 01:26:03,134 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Got job 5 (count at <console>:25) with 1 output partitions (allowLocal=false)
2014-07-01 01:26:03,134 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Final stage: Stage 5(count at <console>:25)
2014-07-01 01:26:03,134 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Parents of final stage: List()
2014-07-01 01:26:03,135 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Missing parents: List()
2014-07-01 01:26:03,166 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Submitting Stage 5 (NewHadoopRDD[0] at newAPIHadoopRDD at <console>:22), which has no missing parents
2014-07-01 01:26:03,397 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Submitting 1 missing tasks from Stage 5 (NewHadoopRDD[0] at newAPIHadoopRDD at <console>:22)
2014-07-01 01:26:03,401 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.TaskSchedulerImpl (Logging.scala:logInfo(58)) - Adding task set 5.0 with 1 tasks
2014-07-01 01:26:03,427 INFO  [spark-akka.actor.default-dispatcher-16] scheduler.FairSchedulableBuilder (Logging.scala:logInfo(58)) - Added task set TaskSet_5 tasks to pool default
2014-07-01 01:26:03,439 INFO  [spark-akka.actor.default-dispatcher-5] scheduler.TaskSetManager (Logging.scala:logInfo(58)) - Starting task 5.0:0 as TID 0 on executor 0: 192.168.2.105 (PROCESS_LOCAL)
2014-07-01 01:26:03,469 INFO  [spark-akka.actor.default-dispatcher-5] scheduler.TaskSetManager (Logging.scala:logInfo(58)) - Serialized task 5.0:0 as 1305 bytes in 7 ms
2014-07-01 01:26:11,015 INFO  [Result resolver thread-0] scheduler.TaskSetManager (Logging.scala:logInfo(58)) - Finished TID 0 in 7568 ms on 192.168.2.105 (progress: 1/1)
2014-07-01 01:26:11,017 INFO  [Result resolver thread-0] scheduler.TaskSchedulerImpl (Logging.scala:logInfo(58)) - Removed TaskSet 5.0, whose tasks have all completed, from pool default
2014-07-01 01:26:11,036 INFO  [spark-akka.actor.default-dispatcher-4] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Completed ResultTask(5, 0)
2014-07-01 01:26:11,057 INFO  [spark-akka.actor.default-dispatcher-4] scheduler.DAGScheduler (Logging.scala:logInfo(58)) - Stage 5 (count at <console>:25) finished in 7.605 s
2014-07-01 01:26:11,067 INFO  [main] spark.SparkContext (Logging.scala:logInfo(58)) - Job finished: count at <console>:25, took 7.933270634 s
res71: Long = 3
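Since hrdd is an ordinary RDD, the usual transformations and actions apply, which is the whole point of reading HBase into Spark. As a sketch (assuming the same 'scores' table; not part of the original session), the average math score across all rows could be computed like this:

import org.apache.hadoop.hbase.util.Bytes

// Pull course:math out of every Result, drop rows without it, and average the scores in parallel.
val mathScores = hrdd
  .map(_._2)
  .map(result => result.getValue(Bytes.toBytes("course"), Bytes.toBytes("math")))
  .filter(_ != null)
  .map(bytes => Bytes.toString(bytes).toDouble)

val avgMath = mathScores.sum / mathScores.count   // (77 + 97 + 27) / 3 for the data above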

Version 2:

import scala.collection.JavaConverters._  // needed for .asScala on the java.util.List returned by getColumn

hrdd.map(tuple => tuple._2).map(result => (result.getRow, result.getColumn("course".getBytes(), "art".getBytes()))).map(row => {
(
  row._1.map(_.toChar).mkString,
  row._2.asScala.reduceLeft {
    (a, b) => if (a.getTimestamp > b.getTimestamp) a else b
  }.getValue.map(_.toChar).mkString
)
}).take(10)

This gives us the row key and the corresponding column value (the latest version of course:art here).
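On HBase 0.96, getColumn is also deprecated; a possible alternative sketch (my assumption, not from the original post) uses getColumnLatestCell, which already returns the newest version and so avoids the reduceLeft over timestamps:

import org.apache.hadoop.hbase.CellUtil
import org.apache.hadoop.hbase.util.Bytes

// Take the latest course:art cell of every row and pair it with the row key.
hrdd.map(_._2).map { result =>
  val cell = result.getColumnLatestCell(Bytes.toBytes("course"), Bytes.toBytes("art"))
  val key  = Bytes.toString(result.getRow)
  val art  = if (cell != null) Bytes.toString(CellUtil.cloneValue(cell)) else null
  (key, art)
}.take(10)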


4. Summary

Operating on HBase from Spark follows essentially the same flow as operating on HBase with the Java client: the client connects to the HMaster and ultimately uses the Java API to operate on HBase.

The difference is that Spark lets you treat the data as RDDs, and Scala's concise syntax improves programming productivity.
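Putting the pieces together, a self-contained sketch of the whole flow from this article (same assumptions: local ZooKeeper, the 'scores' table, HBase jars on the spark-shell classpath) looks roughly like this:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes

// Configure the connection, build the RDD, then extract (rowkey, grade) pairs.
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "localhost")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set(TableInputFormat.INPUT_TABLE, "scores")

val rdd = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable], classOf[Result])

val grades = rdd.map { case (_, result) =>
  (Bytes.toString(result.getRow),
   Bytes.toString(result.getValue(Bytes.toBytes("grade"), Bytes.toBytes(""))))   // grade has an empty qualifier
}
grades.collect().foreach(println)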


——EOF——

Original article. Please credit the source when reposting: http://blog.csdn.net/oopsoom/article/details/36071323