Following the Spark SQL example from the official docs (https://spark.apache.org/docs/1.2.1/sql-programming-guide.html#rdds), I wrote my own script:
```scala
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD

case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)

// Create an RDD of UserLog objects and register it as a table.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .map(_.split("\\^"))
  .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")

// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")

// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId:" + t(0)).collect().foreach(println)
```
Running it failed with the following error:
```
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 50.0 failed 1 times, most recent failure: Lost task 1.0 in stage 50.0 (TID 73, localhost): java.lang.ArrayIndexOutOfBoundsException: 5
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:30)
	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:30)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1319)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
	at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
	at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
	at org.apache.spark.scheduler.Task.run(Task.scala:56)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
```
The log shows an array index out of bounds: `ArrayIndexOutOfBoundsException: 5` means some row's split array has no element at index 5.
To locate the bad row, print the number of fields each line splits into:

```scala
sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^")).foreach(x => println(x.size))
```

This reveals one record whose split produces only 5 fields:
```
6
6
6
6
6
6
6
6
6
6
15/05/21 20:47:37 INFO Executor: Finished task 0.0 in stage 2.0 (TID 4). 1774 bytes result sent to driver
6
6
6
6
6
6
5
6
15/05/21 20:47:37 INFO Executor: Finished task 1.0 in stage 2.0 (TID 5). 1774 bytes result sent to driver
```
The cause is a record with empty trailing fields: `44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^`. Java's one-argument `String.split(regex)` discards trailing empty strings, so this line splits into only 5 elements, and accessing `u(5)` throws the exception.
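This is easy to verify in the Scala REPL; a minimal sketch using the bad record quoted above:

```scala
val line = "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^"

// The one-argument split discards trailing empty strings,
// so the empty fields at the end of the record are lost.
println(line.split("\\^").length)     // prints 5

// A negative limit keeps trailing empty strings, so index 5
// now holds an empty string instead of being out of bounds.
println(line.split("\\^", -1).length) // prints 7
```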
The fix found online is to use the two-argument `split(String, int)` overload: a negative limit tells it to keep trailing empty strings. The revised code:
```scala
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD

case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)

// Create an RDD of UserLog objects and register it as a table.
// Splitting with limit -1 preserves trailing empty fields.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .map(_.split("\\^", -1))
  .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")

// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")

// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId:" + t(0)).collect().foreach(println)
```
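As an alternative, malformed rows can simply be filtered out before building the case class. A minimal sketch reusing the `UserLog` definition above; the minimum field count of 6 is an assumption based on this table's schema:

```scala
// Hypothetical defensive variant: drop any line that does not
// split into at least the 6 fields UserLog expects.
val cleanUsers = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .map(_.split("\\^", -1))
  .filter(_.length >= 6)
  .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
```

Whether to keep such rows (with empty strings) or drop them entirely depends on whether empty trailing fields are legitimate values in this log.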