Spark Operators: Basic RDD Transformations (2) – coalesce, repartition

coalesce

def coalesce(numPartitions: Int, shuffle: Boolean = false)(implicit ord: Ordering[T] = null): RDD[T]

This function repartitions an RDD. The first parameter is the target number of partitions; the second controls whether a shuffle is performed, and defaults to false. When the shuffle is enabled, the data is redistributed using a HashPartitioner.

Consider the following example:

 
    scala> var data = sc.textFile("/tmp/lxw1234/1.txt")
    data: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[53] at textFile at :21

    scala> data.collect
    res37: Array[String] = Array(hello world, hello spark, hello hive, hi spark)

    scala> data.partitions.size
    res38: Int = 2 // RDD data has two partitions by default

    scala> var rdd1 = data.coalesce(1)
    rdd1: org.apache.spark.rdd.RDD[String] = CoalescedRDD[2] at coalesce at :23

    scala> rdd1.partitions.size
    res1: Int = 1 // rdd1 has one partition

    scala> var rdd1 = data.coalesce(4)
    rdd1: org.apache.spark.rdd.RDD[String] = CoalescedRDD[3] at coalesce at :23

    scala> rdd1.partitions.size
    res2: Int = 2 // To increase the partition count beyond the original,
                  // shuffle must be set to true; otherwise the partition
                  // count stays unchanged

    scala> var rdd1 = data.coalesce(4,true)
    rdd1: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[7] at coalesce at :23

    scala> rdd1.partitions.size
    res3: Int = 4
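Why coalesce can shrink but not grow the partition count without a shuffle can be sketched in plain Scala. This is a hypothetical model, not Spark's actual CoalescedRDD code: each output partition is simply a contiguous group of parent partitions, so partitions can be merged locally, but never split.

```scala
// Hypothetical sketch of shuffle-free coalesce: model each partition as a
// Vector and merge contiguous parent partitions into numPartitions groups.
// Merging needs no cross-partition data movement; splitting would, which is
// why growing the partition count requires shuffle = true.
object CoalesceSketch {
  def coalesceNoShuffle[T](parents: Vector[Vector[T]], numPartitions: Int): Vector[Vector[T]] = {
    // Without a shuffle we can only shrink: cap the target at the current count.
    val n = math.min(numPartitions, parents.length)
    // Assign parent i to output group i * n / parents.length (contiguous ranges).
    parents.zipWithIndex
      .groupBy { case (_, i) => i * n / parents.length }
      .toVector.sortBy(_._1)
      .map { case (_, ps) => ps.flatMap(_._1) }
  }

  def main(args: Array[String]): Unit = {
    val parents = Vector(Vector(1, 2), Vector(3), Vector(4, 5), Vector(6))
    println(coalesceNoShuffle(parents, 2)) // two merged partitions
    println(coalesceNoShuffle(parents, 8).length) // still 4: cannot grow without shuffle
  }
}
```

This mirrors what the REPL session shows: `coalesce(1)` merges partitions, while `coalesce(4)` on a two-partition RDD leaves the count at 2 unless shuffle is enabled.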

repartition

def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T]

This function is simply coalesce with the shuffle parameter set to true.

 

 
    scala> var rdd2 = data.repartition(1)
    rdd2: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[11] at repartition at :23

    scala> rdd2.partitions.size
    res4: Int = 1

    scala> var rdd2 = data.repartition(4)
    rdd2: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[15] at repartition at :23

    scala> rdd2.partitions.size
    res5: Int = 4

 

If you found this blog post helpful, please consider supporting the author.
