1. Set the maximum message size
def main(args: Array[String]) {
  // must be set before the SparkContext is created; the value is in MB
  System.setProperty("spark.akka.frameSize", "1024")
}
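The same property can also be set through SparkConf before the context is created; a minimal sketch, where the application name is a placeholder:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("FrameSizeExample")        // placeholder name
  .set("spark.akka.frameSize", "1024")   // maximum Akka message size, in MB
val sc = new SparkContext(conf)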
2. Set the queue when running on YARN
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("WriteParquet")
conf.set("spark.yarn.queue", "wz111")
val sc = new SparkContext(conf)
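To confirm the queue setting actually reached the running application, it can be read back from the context; a small sketch assuming the sc created above, where getOption avoids an exception if the property is unset:

// print the queue the application was submitted to, or "default" if none was set
println(sc.getConf.getOption("spark.yarn.queue").getOrElse("default"))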
3. Allocate resources through YARN at runtime, setting the --num-executors parameter
nohup /home/SASadm/spark-1.4.1-bin-hadoop2.4/bin/spark-submit --name mergePartition --class main.scala.week2.mergePartition --num-executors 30 --master yarn mergePartition.jar >server.log 2>&1 &
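The same resource request can also be expressed in application code instead of on the command line; a sketch with placeholder memory and core values, assuming YARN mode:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("mergePartition")
  .set("spark.executor.instances", "30")   // same effect as --num-executors 30
  .set("spark.executor.memory", "4g")      // placeholder per-executor memory
  .set("spark.executor.cores", "2")        // placeholder cores per executor
val sc = new SparkContext(conf)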
4. Reading parquet written by Impala: handling String columns
sqlContext.setConf("spark.sql.parquet.binaryAsString","true")
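A minimal sketch of the read path with the flag applied first, assuming the sqlContext that spark-shell pre-creates; the path is elided as in the original:

// Impala (and older Parquet writers) store strings as plain binary without metadata;
// this flag tells Spark SQL to read those binary columns back as strings
sqlContext.setConf("spark.sql.parquet.binaryAsString", "true")
val df = sqlContext.read.parquet("hdfs://")
df.printSchema()   // binary columns now show up as string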
5. Writing a parquet file
case class ParquetFormat(usr_id: Long, install_ids: String)   // Long matches r(0).toLong below

val appRdd = sc.textFile("hdfs://")
  .map(_.split("\t"))
  .map(r => ParquetFormat(r(0).toLong, r(1)))
sqlContext.createDataFrame(appRdd).repartition(1).write.parquet("hdfs://")
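If the target directory may already exist, an explicit save mode is needed; a short sketch assuming the appRdd built above:

import org.apache.spark.sql.SaveMode

// overwrite the output directory instead of failing when it already exists
sqlContext.createDataFrame(appRdd)
  .repartition(1)
  .write
  .mode(SaveMode.Overwrite)
  .parquet("hdfs://")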
6. Reading a parquet file
val parquetFile = sqlContext.read.parquet("hdfs://")
parquetFile.registerTempTable("install_running")
val data = sqlContext.sql("select user_id,install_ids from install_running")
data.map(t => "user_id:" + t(0) + " install_ids:" + t(1)).collect().foreach(println)
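The same projection can be done directly with the DataFrame API, without registering a temporary table; a sketch assuming the parquetFile loaded above and the same column names:

// select the two columns and print them, mirroring the SQL version
parquetFile.select("user_id", "install_ids")
  .collect()
  .foreach(println)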
7. Collect all results into a single file when writing
repartition(1)
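A sketch of the idea applied to an RDD before writing; the data and output path are placeholders:

val resultRdd = sc.parallelize(1 to 100)             // placeholder data
// a single partition produces a single part-file in the output directory
resultRdd.repartition(1).saveAsTextFile("hdfs://")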
8. Cache an RDD that is reused
cache()
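A small sketch of caching an RDD that feeds several actions; the path and filters are placeholders:

val logs = sc.textFile("hdfs://").cache()                   // materialized on the first action, then reused
val errorCount = logs.filter(_.contains("ERROR")).count()
val warnCount  = logs.filter(_.contains("WARN")).count()    // served from the cache
logs.unpersist()                                            // release the memory when done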
9. Adding dependency jars to spark-shell
spark-1.4.1-bin-hadoop2.4/bin/spark-shell --master local[4] --jars code.jar
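Multiple jars are passed to --jars as a comma-separated list. A jar can also be attached from inside a running shell or application with SparkContext.addJar; a sketch, noting that addJar only ships the jar to the executors, so classes needed on the driver still have to come in through --jars:

// make code.jar available to tasks launched after this call
sc.addJar("code.jar")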
10. Using spark-shell in YARN mode with a queue
spark-1.4.1-bin-hadoop2.4/bin/spark-shell --master yarn-client --queue wz111
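A quick sanity check from inside the shell that the YARN connection is in effect; a sketch assuming the invocation above:

println(sc.master)                  // should print yarn-client
sc.parallelize(1 to 1000).count()   // runs a small job on the YARN executors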