Apache Spark™ 2.0: MQTT as a Source for Structured Streaming

Apache Spark™ 2.0 introduced Structured Streaming as an alpha release, and Spark contributors have already created a library for reading data from MQTT servers that allows Spark users to process Internet-of-Things data. Specifically, the library allows developers to create a SQL stream from data received through an MQTT server using sqlContext.readStream.
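As a minimal sketch of what this looks like, the snippet below creates a streaming DataFrame from an MQTT topic. The provider class and the "topic" option follow the Bahir sql-streaming-mqtt documentation, while the broker URL and topic name are placeholder assumptions; it uses a SparkSession (spark.readStream), which is equivalent to sqlContext.readStream in Spark 2.0.

```scala
import org.apache.spark.sql.SparkSession

object MqttReadStreamExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("mqtt-structured-streaming")
      .getOrCreate()

    // Create a streaming DataFrame from an MQTT topic. The provider class and
    // the "topic" option follow the Bahir sql-streaming-mqtt documentation;
    // the broker URL and topic name below are placeholders.
    val lines = spark.readStream
      .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
      .option("topic", "sensors/temperature")
      .load("tcp://localhost:1883")

    // Print incoming message payloads to the console as they arrive.
    val query = lines.writeStream
      .outputMode("append")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```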

The work is being done in Apache Bahir™, which "provides extensions to distributed analytic platforms such as Apache Spark". For more detail about Bahir and to get started with MQTT Structured Streaming, check out this step-by-step post from Luciano Resende.

What is MQTT?

For those who might be unfamiliar with MQTT (Message Queuing Telemetry Transport), the mqtt.org site defines it as "a machine-to-machine (M2M)/Internet of Things connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport." MQTT v3.1.1 is an OASIS standard.

MQTT has the potential to be a good fit for Spark because of its efficiency, low networking overhead, and a message protocol governed by Quality of Service (QoS) levels that establish different delivery guarantees. Originally authored in 1999, MQTT is already in wide use, and its various implementations feature easy distribution, high performance, and high availability.
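To make the QoS levels concrete, here is a small illustrative sketch (not part of the original post) that publishes the same payload at QoS 0, 1, and 2 using the Eclipse Paho Java client; the broker URL, client id, and topic are assumptions for the example.

```scala
import org.eclipse.paho.client.mqttv3.MqttClient
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

object QosExample {
  def main(args: Array[String]): Unit = {
    // Placeholder broker URL and client id, for illustration only.
    val client = new MqttClient("tcp://localhost:1883", "qos-demo", new MemoryPersistence())
    client.connect()

    val payload = "22.5".getBytes("UTF-8")

    // QoS 0: at most once -- fire and forget, no acknowledgement from the broker.
    client.publish("sensors/temperature", payload, 0, false)

    // QoS 1: at least once -- the broker acknowledges receipt; duplicates are possible.
    client.publish("sensors/temperature", payload, 1, false)

    // QoS 2: exactly once -- a four-way handshake gives the strongest (and slowest) guarantee.
    client.publish("sensors/temperature", payload, 2, false)

    client.disconnect()
  }
}
```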

High throughput and fault tolerance for structured streaming sources

Currently, Spark provides fault tolerance in the sense that a streaming source can reliably replay the entire sequence of incoming messages. This is true of streaming sources like Kafka, S3, HDFS, and even local filesystems. However, it may not be the case with MQTT. (And it is definitely not the case with native socket-based sources.)

For now, all of the available sources in Structured Streaming are backed by a filesystem (S3, HDFS, etc.). In these cases, all the executors can request offsets from the filesystem sources and process data in parallel.
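For contrast, a file-backed streaming read can be sketched as follows; the directory path is a placeholder, and the point is that offsets correspond to files already sitting on the filesystem, which executors can re-read and process in parallel.

```scala
import org.apache.spark.sql.SparkSession

object FileSourceExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("file-backed-streaming-source")
      .getOrCreate()

    // A file-backed source: each micro-batch's offset is essentially the set of
    // files discovered so far, so executors can fetch and re-read those files
    // in parallel across partitions.
    val lines = spark.readStream
      .format("text")
      .load("hdfs:///data/incoming")   // hypothetical input directory

    val query = lines.writeStream
      .outputMode("append")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```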

However, in cases where a streaming source offers pub-sub as the only option for receiving the stream, supporting high throughput becomes a challenge. MQTT does not currently support reading in parallel and thus does not support high throughput.

Most message queues, including MQTT, lack built-in support for replaying the entire sequence of messages. That deficiency might not be an issue, since not all workloads require the entire sequence to be retained on the server. If so, there should ideally be a limit on when a source can purge stored data without compromising any "exactly once" delivery guarantees. (Note that this is an outstanding issue; see SPARK-16963.)

For our current release, we implement a minimal fault-tolerance guarantee by storing all incoming messages locally on disk. This is an intermediate solution, as it places a limit on how many messages the source can process.
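Conceptually, the approach looks roughly like the sketch below (an illustration only, not the actual Bahir implementation): each message arriving from the broker is written to local disk under a monotonically increasing offset so it can be replayed later, with local disk capacity bounding how many messages can be retained. The broker URL, topic, and storage directory are assumptions.

```scala
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}
import java.util.concurrent.atomic.AtomicLong

import org.eclipse.paho.client.mqttv3.{IMqttDeliveryToken, MqttCallback, MqttClient, MqttMessage}
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

// Conceptual sketch: persist each incoming MQTT message to local disk under an
// increasing offset, so the source can replay stored messages on failure.
object LocalMqttStore {
  private val nextOffset = new AtomicLong(0L)
  private val storeDir = Paths.get("/tmp/mqtt-store")   // hypothetical location

  def persist(payload: Array[Byte]): Long = {
    Files.createDirectories(storeDir)
    val offset = nextOffset.getAndIncrement()
    Files.write(storeDir.resolve(offset.toString), payload)
    offset
  }

  def replay(offset: Long): String =
    new String(Files.readAllBytes(storeDir.resolve(offset.toString)), StandardCharsets.UTF_8)

  def main(args: Array[String]): Unit = {
    val client = new MqttClient("tcp://localhost:1883", "store-demo", new MemoryPersistence())
    client.setCallback(new MqttCallback {
      override def messageArrived(topic: String, message: MqttMessage): Unit = {
        // Every message is written to disk before being exposed to the query,
        // giving a minimal replay capability bounded by local disk space.
        persist(message.getPayload)
      }
      override def connectionLost(cause: Throwable): Unit = ()
      override def deliveryComplete(token: IMqttDeliveryToken): Unit = ()
    })
    client.connect()
    client.subscribe("sensors/temperature")   // hypothetical topic
  }
}
```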

For source code and documentation, visit the related GitHub repository, or visit the Bahir website to learn about other extensions available within the project.
