Spark SQL reports that the MySQL driver cannot be found when reading Hive data

Exception:

Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BoneCP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
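The error is raised by DataNucleus inside the Hive metastore client: Spark SQL is trying to open a direct JDBC connection to the MySQL database backing the metastore, but the MySQL JDBC driver is not on its classpath. A quick workaround, separate from the metastore-service solution below, is to pass the driver jar to Spark explicitly, e.g. $SPARK_HOME/bin/spark-shell --driver-class-path /path/to/mysql-connector-java.jar (the jar path is illustrative; point it at your actual MySQL connector).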

Solution:

1. Add the hive.metastore.uris property to $HIVE_HOME/conf/hive-site.xml, as follows:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://namenode1:9083</value>
  <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>

2. Run $HIVE_HOME/bin/hive --service metastore to start the metastore service.
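To keep the service running after the shell exits, it can be started in the background, e.g. nohup $HIVE_HOME/bin/hive --service metastore > metastore.log 2>&1 & (the log file name is illustrative).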

3. Copy $HIVE_HOME/conf/hive-site.xml into the $SPARK_HOME/conf/ directory.
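For example: cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/ (run this on whichever machine launches the Spark driver).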

4. Start spark-shell to verify: $SPARK_HOME/bin/spark-shell --master spark://namenode1:7077 (or use spark-sql), then run show databases.
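Inside spark-shell the same check can be scripted; a minimal sketch against the Spark 1.x API used in this post, assuming a Spark build with Hive support (sc is the SparkContext that spark-shell predefines):

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
// Should list the same databases the Hive CLI shows, provided hive-site.xml
// was copied correctly and the metastore service is running
hiveContext.sql("show databases").collect().foreach(println)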


Note:
1. When a Spark SQL program is written in IntelliJ IDEA (val hiveContext = new HiveContext(sc); import hiveContext.sql; sql("show databases")), packaged into a .jar, and submitted to the Spark cluster with a spark-submit script (a hypothetical example is sketched below), Spark by default uses Derby for the metastore, i.e. for metadata storage. Run the same job again from a different directory and the databases and tables created earlier cannot be found: the metadata is effectively use-once, then gone. To reach the metadata in Hive instead, first copy hive-site.xml from Hive's conf directory into Spark's conf directory, so that programs running on the Spark cluster can locate the Hive metastore, and then start the service described above (hive --service metastore); the job can then access Hive's data.
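A hypothetical submission script for the Recommendation program listed at the end of this post (jar name, master URL, and output path are all illustrative):

$SPARK_HOME/bin/spark-submit --class com.husor.Hive.Recommendation --master spark://namenode1:7077 recommendation.jar /beibei/output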

2.
/**
* An instance of the Spark SQL execution engine that integrates with data stored in Hive.
* Configuration for Hive is read from hive-site.xml on the classpath.
*/
class HiveContext(sc: SparkContext) extends SQLContext(sc) {

 ....................................

}
 

3. From Spark's deprecation notice for LocalHiveContext:

Use HiveContext instead. It will still create a local metastore if one is not specified. However, note that the default directory is ./metastore_db, not ./metastore.
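If a local metastore is acceptable but its location should not depend on the current working directory, the Derby connection URL can be pinned to a fixed path. A minimal sketch, assuming Spark 1.x with no hive-site.xml on the classpath overriding it (the /data path is illustrative):

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
// javax.jdo.option.ConnectionURL is the standard Hive property naming the
// metastore database; set it before the first query touches the metastore
hiveContext.setConf("javax.jdo.option.ConnectionURL",
  "jdbc:derby:;databaseName=/data/spark/metastore_db;create=true")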

 

The test program is as follows:

package com.husor.Hive

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.hive.HiveContext

/* With the default Derby metastore, what Spark SQL creates here is temporary: use once, then gone */

/**
 * Created by kelvin on 2015/1/27.
 */
object Recommendation {
  def main(args: Array[String]) {

    println("Test is starting......")

    if (args.length < 1) {
      System.err.println("Usage: Recommendation <HDFS_OutputDir>")
      System.exit(1)
    }

    //System.setProperty("hadoop.home.dir", "d:\\winutil\\")

    // Standard setup: HiveContext reads its Hive configuration from the
    // hive-site.xml on the classpath (the copy placed in $SPARK_HOME/conf)
    val conf = new SparkConf().setAppName("Recommendation")
    val spark = new SparkContext(conf)

    val hiveContext = new HiveContext(spark)

    import hiveContext.sql

    /*sql("create database if not exists baby")
    val databases = sql("show databases")
    databases.collect.foreach(println)*/

    sql("use baby")
    /*sql("CREATE EXTERNAL TABLE if not exists origin_orders (oid string, uid INT, gmt_create INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' LOCATION '/beibei/order'")
    sql("CREATE EXTERNAL TABLE if not exists items (iid INT, pid INT, title string, cid INT, brand INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' LOCATION '/beibei/item'")
    sql("CREATE EXTERNAL TABLE if not exists order_item (oid string, iid INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' LOCATION '/beibei/order_item'")
    sql("create table if not exists test_orders(oid string, uid INT, gmt_create INT)")
    sql("create table if not exists verify_orders(oid string, uid INT, gmt_create INT)")
    sql("insert OVERWRITE table test_orders select * from origin_orders where gmt_create <= 1415635200")
    sql("insert OVERWRITE table verify_orders select * from origin_orders where gmt_create > 1415635200")

    val tables = sql("show tables")
    tables.collect.foreach(println)*/

    sql("SET spark.sql.shuffle.partitions = 5")

    val olderTime = System.currentTimeMillis()

    val userOrderData = sql("select i.pid, o.uid, o.gmt_create from items i " +
                                         "join order_item oi " +
                                         "on i.iid = oi.iid     " +
                                         "join test_orders o " +
                                         "on oi.oid = o.oid")

    userOrderData.take(10).foreach(println)

    val newTime = System.currentTimeMillis()

    println("Consume Time: " + (newTime - olderTime))

    userOrderData.saveAsTextFile(args(0))
    spark.stop()

    println("Test is Succeed!!!")

  }

}