Lesson 9: Spark Streaming Source Code Walkthrough of the Receiver's Elegant Implementation and Full Lifecycle on the Driver

Contents of this lesson:

1. Design ideas for starting Receivers

2. A thorough source-code analysis of Receiver startup

 

Why do we need a Receiver?

A Receiver continuously receives data from an external data source and reports that data to the Driver; every BatchDuration, the reported data is used to generate a new Job that executes operations on the resulting RDDs.
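
For orientation, here is a minimal sketch of a receiver-backed input stream (the application name, hostname, and port are placeholders, not from the original text):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// local[2]: one thread for the Receiver, one for processing.
val conf = new SparkConf().setAppName("ReceiverDemo").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(5)) // BatchDuration = 5 seconds
// SocketInputDStream is backed by a SocketReceiver running on an Executor.
val lines = ssc.socketTextStream("localhost", 9999)
// Every 5 seconds the blocks reported by the Receiver become one RDD,
// and this output operation generates one Job over it.
lines.count().print()
ssc.start()
ssc.awaitTermination()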

 

Receivers are started when the application starts.

Receivers and InputDStreams correspond one-to-one.

An RDD[Receiver] has only one Partition, holding a single Receiver instance.

 

Spark Core knows nothing about what makes RDD[Receiver] special; it schedules the corresponding Job like any ordinary RDD's Job. As a result, multiple Receivers might be launched on the same Executor, causing load imbalance and possibly Receiver startup failures.

 

Options for launching Receivers on Executors:

1. Start the different Receivers through different Partitions of one RDD: each Partition represents one Receiver, which at the execution level is one Task, and each Task starts its Receiver when the Task starts.

This approach is simple and clever, but it has a flaw: startup may fail, and if a Receiver fails while running, its Task is retried; after 3 failed retries the Job fails, taking the entire Spark application down with it. Because a single Receiver fault can fail the Job, this scheme is not fault-tolerant. A hypothetical sketch of this approach is shown below.
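
The following is a hypothetical sketch of approach 1, not Spark's actual code; startReceiverSomehow is a made-up placeholder for whatever would run the Receiver inside the Task:

import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.receiver.Receiver

// Hypothetical: one RDD whose Partitions each carry one Receiver,
// started inside an ordinary Spark job.
def launchReceiversNaively(ssc: StreamingContext, receivers: Seq[Receiver[_]]): Unit = {
  val receiverRDD = ssc.sparkContext.makeRDD(receivers, receivers.size)
  receiverRDD.foreachPartition { iter =>
    // Each Task hosts exactly one Receiver. If the Receiver dies, the Task
    // fails; the TaskScheduler retries it, and once the retry limit is hit
    // the whole Job, and hence the application, fails.
    iter.foreach(receiver => startReceiverSomehow(receiver)) // placeholder
  }
}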

 

2. The second approach is the one Spark Streaming actually adopts.

In ReceiverTracker's start method (invoked from JobScheduler.start() when the StreamingContext starts), the RPC endpoint ReceiverTrackerEndpoint is instantiated first, and then the launchReceivers method is called.

/** Start the endpoint and receiver execution thread. */
def start(): Unit = synchronized {
  if (isTrackerStarted) {
    throw new SparkException("ReceiverTracker already started")
  }

  if (!receiverInputStreams.isEmpty) {
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    if (!skipReceiverLaunch) launchReceivers()
    logInfo("ReceiverTracker started")
    trackerState = Started
  }
}

In the launchReceivers method, a Receiver is first obtained from each ReceiverInputDStream, and then a StartAllReceivers message is sent to the endpoint. Each Receiver corresponds to one data source.

/**
 * Get the receivers from the ReceiverInputDStreams, distributes them to the
 * worker nodes as a parallel collection, and runs them.
 */
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map(nis => {
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  })

  runDummySparkJob()

  logInfo("Starting " + receivers.length + " receivers")
  endpoint.send(StartAllReceivers(receivers))
}
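
The runDummySparkJob() call above is worth a note: it runs a trivial job so that, in non-local mode, all Executors have registered with the Driver before any receiver is scheduled; otherwise the receivers might all land on the same node. In the same ReceiverTracker the method looks roughly like this (1.6-era source; details may differ across versions):

/**
 * Run the dummy Spark job to ensure that all slaves have registered. This avoids all the
 * receivers to be scheduled on the same node.
 */
private def runDummySparkJob(): Unit = {
  if (!ssc.sparkContext.isLocal) {
    ssc.sparkContext.makeRDD(1 to 50, 50).map(x => (x, 1)).reduceByKey(_ + _, 20).collect()
  }
  assert(getExecutors.nonEmpty)
}

Only after this barrier does the endpoint receive StartAllReceivers, so scheduling sees the full set of Executors.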

After ReceiverTrackerEndpoint receives the StartAllReceivers message, it first determines which Executors each Receiver should run on, and then calls the startReceiver method.

override def receive: PartialFunction[Any, Unit] = {
  // Local messages
  case StartAllReceivers(receivers) =>
    val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
    for (receiver <- receivers) {
      val executors = scheduledLocations(receiver.streamId)
      updateReceiverScheduledExecutors(receiver.streamId, executors)
      receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
      startReceiver(receiver, executors)
    }
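
schedulingPolicy here is a ReceiverSchedulingPolicy. Its real logic also honors each receiver's preferredLocation and balances by the current receiver count per Executor; the simplified round-robin sketch below (not the actual implementation, and it uses Spark-internal types such as TaskLocation, so it only compiles inside Spark's own packages) conveys just the core idea of spreading receivers evenly across Executors:

import org.apache.spark.scheduler.{ExecutorCacheTaskLocation, TaskLocation}
import org.apache.spark.streaming.receiver.Receiver

// Simplified sketch, NOT ReceiverSchedulingPolicy itself: assign receivers
// to executors round-robin so no Executor hosts more receivers than needed.
def naiveScheduleReceivers(
    receivers: Seq[Receiver[_]],
    executors: Seq[ExecutorCacheTaskLocation]): Map[Int, Seq[TaskLocation]] = {
  if (executors.isEmpty) {
    // No executors registered yet: leave every receiver unscheduled.
    receivers.map(r => r.streamId -> Seq.empty[TaskLocation]).toMap
  } else {
    receivers.zipWithIndex.map { case (receiver, i) =>
      receiver.streamId -> Seq[TaskLocation](executors(i % executors.size))
    }.toMap
  }
}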

The startReceiver method sets the TaskLocation at the Driver level itself, instead of letting Spark Core choose it for us. It has the following properties: stopping a Receiver does not require restarting a Spark Job; a Receiver that has already been started is not started a second time; a dedicated Spark job is submitted just to start a Receiver, one Spark job per Receiver, so each Receiver startup triggers its own Spark job rather than every Receiver being started as one Task of a single Spark job. When the job that starts a Receiver ends, whether it succeeded or failed, while the Receiver should still be running, a RestartReceiver message is sent to restart that Receiver.

/**
 * Start a receiver along with its scheduled executors
 */
private def startReceiver(
    receiver: Receiver[_],
    scheduledLocations: Seq[TaskLocation]): Unit = {
  def shouldStartReceiver: Boolean = {
    // It's okay to start when trackerState is Initialized or Started
    !(isTrackerStopping || isTrackerStopped)
  }

  val receiverId = receiver.streamId
  if (!shouldStartReceiver) {
    onReceiverJobFinish(receiverId)
    return
  }

  val checkpointDirOption = Option(ssc.checkpointDir)
  val serializableHadoopConf =
    new SerializableConfiguration(ssc.sparkContext.hadoopConfiguration)

  // Function to start the receiver on the worker node
  val startReceiverFunc: Iterator[Receiver[_]] => Unit =
    (iterator: Iterator[Receiver[_]]) => {
      if (!iterator.hasNext) {
        throw new SparkException(
          "Could not start receiver as object not found.")
      }
      if (TaskContext.get().attemptNumber() == 0) {
        val receiver = iterator.next()
        assert(iterator.hasNext == false)
        val supervisor = new ReceiverSupervisorImpl(
          receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
        supervisor.start()
        supervisor.awaitTermination()
      } else {
        // It's restarted by TaskScheduler, but we want to reschedule it again. So exit it.
      }
    }

  // Create the RDD using the scheduledLocations to run the receiver in a Spark job
  val receiverRDD: RDD[Receiver[_]] =
    if (scheduledLocations.isEmpty) {
      ssc.sc.makeRDD(Seq(receiver), 1)
    } else {
      val preferredLocations = scheduledLocations.map(_.toString).distinct
      ssc.sc.makeRDD(Seq(receiver -> preferredLocations))
    }
  receiverRDD.setName(s"Receiver $receiverId")
  ssc.sparkContext.setJobDescription(s"Streaming job running receiver $receiverId")
  ssc.sparkContext.setCallSite(Option(ssc.getStartSite()).getOrElse(Utils.getCallSite()))

  val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
    receiverRDD, startReceiverFunc, Seq(0), (_, _) => Unit, ())
  // We will keep restarting the receiver job until ReceiverTracker is stopped
  future.onComplete {
    case Success(_) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
    case Failure(e) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logError("Receiver has been stopped. Try to restart it.", e)
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
  }(submitJobThreadPool)
  logInfo(s"Receiver ${receiver.streamId} started") }
