Spark SQL: connecting to a MySQL data source

Spark SQL can connect to a database through standard JDBC and use it as a data source.

 

import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.spark.SparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class SparkSql {

    public static SimpleDateFormat sdf = new SimpleDateFormat("_yyyyMMdd_HH_mm_ss");
    private static final String appName = "spark sql test";
    private static final String master = "spark://192.168.1.21:7077";
    private static final String JDBCURL = "jdbc:mysql://192.168.1.18:3306/lng?user=root&password=123456";

    public static void main(String[] args) {
        SparkContext context = new SparkContext(master, appName);
        SQLContext sqlContext = new SQLContext(context);

        // Create a DataFrame backed by the "tsys_user" table in the MySQL database.
        DataFrame df = sqlContext
                .read()
                .format("jdbc")
                .option("url", JDBCURL)
                .option("dbtable", "tsys_user")
                .load();

        // Print the schema of this DataFrame.
        df.printSchema();

        // Count rows grouped by the "customStyle" column.
        DataFrame counts = df.groupBy("customStyle").count();
        counts.show();

        // Save the counts to HDFS in JSON format, with a timestamp suffix on the path.
        counts.write().format("json")
                .save("hdfs://192.168.1.17:9000/administrator/sql-result" + sdf.format(new Date()));
    }

}
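
The same read can also be expressed with the DataFrameReader.jdbc() overload, passing the credentials in a java.util.Properties object instead of embedding them in the URL. A minimal sketch, reusing the host, database, and table from the example above:

import java.util.Properties;

// Credentials and driver class passed separately from the URL.
Properties connProps = new Properties();
connProps.put("user", "root");
connProps.put("password", "123456");
connProps.put("driver", "com.mysql.jdbc.Driver");

DataFrame df = sqlContext.read()
        .jdbc("jdbc:mysql://192.168.1.18:3306/lng", "tsys_user", connProps);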

 

If the MySQL driver is not on the classpath, you will hit a "No suitable driver found for jdbc" error; see http://stackoverflow.com/questions/34764505/no-suitable-driver-found-for-jdbc-in-sparkmysql, which suggests three approaches:

  1. You might want to assemble your application with your build manager (Maven, SBT) so that you do not need to add the dependencies on the spark-submit command line. (That is, package the MySQL driver into the jar you submit to Spark; see the Maven sketch after this list.)
  2. You can use the following option in your spark-submit CLI (changed to the command below; tested, and it works). Alternatively, add export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/local/spark-1.6.1-bin-hadoop2.6/conf/driverLib/mysql-connector-java-5.1.36.jar to conf/spark-env.sh:

    spark-submit --driver-class-path /usr/local/spark-1.6.1-bin-hadoop2.6/conf/driverLib/mysql-connector-java-5.1.36.jar --class com.xxx.SparkSql  /usr/local/spark.jar

    Explanation: supposing that you have all your jars in a lib directory in your project root, this will read all those libraries and add them to the application submit.

  3. You can also try to configure the two variables spark.driver.extraClassPath and spark.executor.extraClassPath in the SPARK_HOME/conf/spark-defaults.conf file, setting their values to the path of the jar file. Ensure that the same path exists on the worker nodes. (Tested; this did not work in my setup. The entries would look like the spark-defaults.conf sketch after this list.)
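
For option 1, a minimal Maven sketch of the dependency to bundle; the coordinates match the mysql-connector-java-5.1.36.jar used above, so adjust the version to your environment. A fat-jar plugin such as maven-shade-plugin then packages it into the jar passed to spark-submit:

    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.36</version>
    </dependency>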
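
For option 3, the entries in SPARK_HOME/conf/spark-defaults.conf would look like the following, with the jar placed at the same path on every worker node (again, this did not work in my test):

    spark.driver.extraClassPath   /usr/local/spark-1.6.1-bin-hadoop2.6/conf/driverLib/mysql-connector-java-5.1.36.jar
    spark.executor.extraClassPath /usr/local/spark-1.6.1-bin-hadoop2.6/conf/driverLib/mysql-connector-java-5.1.36.jar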
