1. ZooKeeper integration
For running with the bundled ZooKeeper, see the HBase quickstart guide: https://hbase.apache.org/book.html#quickstart
2. Spark/Hadoop integration
Either put the HBase jars into Hadoop's lib directory, or pass the classpath on the Spark submit command, e.g. --driver-class-path /opt/hadoop-2.7.7/lib/*:/opt/hbase-2.1.2/conf/
3. At this point all sorts of odd problems show up (classes not found, and so on)
These are usually jar conflicts; resolving the conflicting jars fixes them. The offenders encountered here:
netty
jackson
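When hunting a conflict it can help to ask the running JVM which jar a class was actually loaded from. A minimal sketch (the helper name `jarOf` is mine, not from the original notes):

```scala
// Ask the JVM which jar (or class directory) a class was loaded from.
// Returns None for bootstrap classes such as java.lang.String.
def jarOf(className: String): Option[String] =
  Option(Class.forName(className).getProtectionDomain.getCodeSource)
    .map(_.getLocation.toString)
```

Run inside spark-shell, something like `jarOf("io.netty.buffer.ByteBuf")` shows which of the competing netty jars actually won on the classpath.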
4. Spark fails at runtime with java.lang.ClassNotFoundException: org.apache.htrace.core.HTraceConfiguration
Copy /hadoop-2.7.7/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar into Spark's jars directory.
Presumably, if the same error comes from Hadoop itself, /hadoop-2.7.7/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar needs to go into hadoop/lib instead.
5. org.apache.hadoop.hbase.TableNotFoundException: hbase:test
Drop the hbase: prefix from the table name, i.e. use test instead of hbase:test (hbase: is HBase's system namespace; a table created without a namespace lives in the default namespace and is addressed by its bare name).
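The rename can be illustrated with a tiny hypothetical helper (not part of the HBase API): strip any `namespace:` qualifier to get the bare name used in the default namespace.

```scala
// Hypothetical helper: given a table name that may carry a "namespace:"
// qualifier, return the bare qualifier to use for a default-namespace table.
def bareTableName(qualified: String): String =
  qualified.split(":", 2).last

// bareTableName("hbase:test") == "test"
// bareTableName("test")       == "test"
```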
6. Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set (reposted)
This came up when an RDD, mapped from a DataFrame obtained via Spark SQL, was saved straight into HBase:
Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set in JobConf.
The save was done with saveAsNewAPIHadoopDataset. The imports involved:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import SparkContext._
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.conf.Configuration
Switching to saveAsHadoopDataset, however, works:
val conf = new Configuration()
val tableName = "test_t1"
val jobConf = new JobConf(conf, this.getClass)
jobConf.set("hbase.zookeeper.quorum", "10.172.10.169,10.172.10.168,10.172.10.170")
jobConf.setOutputKeyClass(classOf[ImmutableBytesWritable])
// jobConf.setOutputValueClass(classOf[Put])
jobConf.setOutputFormat(classOf[org.apache.hadoop.hbase.mapred.TableOutputFormat])
jobConf.set(TableOutputFormat.OUTPUT_TABLE, "test_t1")
rdd1.map(
  x => {
    val put = new Put(Bytes.toBytes(x._1))
    put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(x._2))
    (new ImmutableBytesWritable, put)
  }
).saveAsHadoopDataset(jobConf)
It was not obvious at first where the error came from. Comparing the two versions shows the cause: the failing call was given a JobConf as its configuration instead of sc.hadoopConfiguration, and the new API cannot use the old-style configuration. The working new-API version:
sc.hadoopConfiguration.set("hbase.zookeeper.quorum", "10.172.10.169,10.172.10.168,10.172.10.170")
// sc.hadoopConfiguration.set("zookeeper.znode.parent", "/hbase")
sc.hadoopConfiguration.set(TableOutputFormat.OUTPUT_TABLE, "test_t1")
val job = new Job(sc.hadoopConfiguration)
job.setOutputKeyClass(classOf[ImmutableBytesWritable])
job.setOutputValueClass(classOf[Put]) // the values emitted below are Puts
job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

rdd1.map(
  x => {
    val put = new Put(Bytes.toBytes(x._1))
    put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(x._2))
    (new ImmutableBytesWritable, put)
  }
).saveAsNewAPIHadoopDataset(job.getConfiguration)
7. Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
Copy /hadoop-2.7.7/lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar into Spark's jars directory.
Presumably, if the error comes from Hadoop itself, /hadoop-2.7.7/lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar needs to go into hadoop/lib instead.
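Both htrace errors (items 4 and 7) can be caught before submitting the job by checking that the classes resolve on the driver classpath. A small sketch, assuming it is pasted into spark-shell:

```scala
// Returns true if the class can be loaded, false if it is missing
// from the classpath.
def onClasspath(className: String): Boolean =
  try { Class.forName(className); true }
  catch { case _: ClassNotFoundException => false }

// The two classes from the errors above; both should report true once the
// htrace-core4 and htrace-core jars are in place.
Seq("org.apache.htrace.core.HTraceConfiguration", // htrace-core4-4.2.0
    "org.apache.htrace.SamplerBuilder")           // htrace-core-3.1.0
  .foreach(c => println(s"$c -> ${onClasspath(c)}"))
```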