[Original] Uncle's Experience Sharing (15): How Spark SQL Implements LIMIT

We previously discussed how Hive implements LIMIT; see http://www.javashuo.com/article/p-dochikfw-hc.html
Now let's look at how Spark SQL implements LIMIT, starting with the execution plan:

spark-sql> explain select * from test1 limit 10;
== Physical Plan ==
CollectLimit 10
+- HiveTableScan [id#35], MetastoreRelation temp, test1
Time taken: 0.201 seconds, Fetched 1 row(s)

The LIMIT clause corresponds to the CollectLimit operator in the plan, whose implementation class is

org.apache.spark.sql.execution.CollectLimitExec
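
For context, it is the planner's SpecialLimits strategy that turns a top-level logical Limit into this operator. The snippet below is an abridged paraphrase of the matching case in org.apache.spark.sql.execution.SparkStrategies (Spark 2.x); the surrounding cases are elided:

case logical.Limit(IntegerLiteral(limit), child) =>
  execution.CollectLimitExec(limit, planLater(child)) :: Nil

The abridged source of CollectLimitExec itself: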

case class CollectLimitExec(limit: Int, child: SparkPlan) extends UnaryExecNode {
...
  protected override def doExecute(): RDD[InternalRow] = {
    // Take at most `limit` rows within each partition of the child's output.
    val locallyLimited = child.execute().mapPartitionsInternal(_.take(limit))
    // Shuffle all the locally-limited rows into a single partition.
    val shuffled = new ShuffledRowRDD(
      ShuffleExchange.prepareShuffleDependency(
        locallyLimited, child.output, SinglePartition, serializer))
    // Take the first `limit` rows of that single partition as the final result.
    shuffled.mapPartitionsInternal(_.take(limit))
  }
}

As you can see, the implementation is quite simple: first, SparkPlan.execute is called on the child to get the result RDD; then the first `limit` rows are taken from each of its partitions, producing a locally-limited RDD; that RDD is then shuffled into a single partition, and the first `limit` rows are taken once more to produce the final result.
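The same two-phase pattern can be reproduced with plain RDD operations. What follows is a minimal sketch, not Spark's actual code path: it assumes a local SparkContext, uses coalesce(1) to stand in for the single-partition shuffle, and the names TwoPhaseLimit, rdd, and firstTen are illustrative only.

import org.apache.spark.{SparkConf, SparkContext}

object TwoPhaseLimit {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("two-phase-limit").setMaster("local[4]"))
    val limit = 10

    // Example dataset spread across 8 partitions.
    val rdd = sc.parallelize(1 to 1000, numSlices = 8)

    // Phase 1: each partition keeps at most `limit` rows, so at most
    // numPartitions * limit rows survive to the next phase.
    val locallyLimited = rdd.mapPartitions(_.take(limit))

    // Phase 2: collapse to a single partition (standing in for the
    // single-partition shuffle) and take `limit` rows once more.
    val firstTen = locallyLimited.coalesce(1).mapPartitions(_.take(limit)).collect()

    println(firstTen.mkString(", "))
    sc.stop()
  }
}

The per-partition take in phase 1 is what keeps the final step cheap: no matter how large the table is, at most numPartitions * limit rows are moved to the single partition.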
