The output sinks supported by the Spark Structured Streaming API are Console, Memory, File, and Foreach. The Console sink was covered in detail in the previous two posts, and the Memory sink is very simple to use. This post focuses on the File and Foreach sinks, and then shows how to extend the source code with a new output sink.
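Since the memory sink is not covered further below, here is a minimal sketch of how it is typically used; the streaming DataFrame lines and the table name myTable are placeholders for this illustration, not part of the original example:

// Write the stream into an in-memory table and query it with Spark SQL.
// `lines` is assumed to be a streaming DataFrame created via spark.readStream.
val memoryQuery = lines.writeStream
  .format("memory")          // memory sink: keeps results in an in-memory table
  .queryName("myTable")      // name under which the results are registered
  .outputMode("append")
  .start()

// The accumulated results can then be queried like a normal table.
spark.sql("select * from myTable").show()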
Structured Streaming can persist data as files, supporting four file formats: json, text, csv, and parquet. Using the File sink is very simple: just set checkpointLocation and path. checkpointLocation is the directory where checkpoints are saved, while path is the directory where the actual data is written.
A test example is shown below:
// Create DataFrame representing the stream of input lines from connection to host:port
val lines = spark.readStream
  .format("socket")
  .option("host", host)
  .option("port", port)
  .load()

// Split the lines into words
val words = lines.as[String].flatMap(_.split(" "))

// Generate running word count
val wordCounts = words.groupBy("value").count()

// Start running the query that writes the results as json files
val query = wordCounts.writeStream
  .format("json")
  .option("checkpointLocation", "root/jar")
  .option("path", "/root/jar")
  .start()
Note:
The File sink cannot use the "complete" output mode; only "append" is supported. And since an append-mode query cannot contain aggregation operations (unless a watermark is defined), a query that writes its results to external files must not perform aggregations.
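As a minimal sketch of a query that satisfies this restriction, the words stream from the example above can be written out directly, without the groupBy aggregation; the paths here are placeholders, not from the original example:

// Append-mode file sink without aggregation: write the raw words as text files.
val fileQuery = words.writeStream
  .outputMode("append")
  .format("text")
  .option("checkpointLocation", "/root/words-checkpoint")  // placeholder path
  .option("path", "/root/words-output")                    // placeholder path
  .start()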
The foreach sink only requires implementing the ForeachWriter abstract class and its three methods; Structured Streaming invokes these three methods as data is received. A test example is shown below:
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
// scalastyle:off println
package org.apache.spark.examples.sql.streaming
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{ForeachWriter, Row, SparkSession}
import org.apache.spark.sql.types.StructType
/**
 * Counts words in UTF8 encoded, '\n' delimited text received from the network.
 *
 * Usage: StructuredNetworkWordCount <hostname> <port>
 * <hostname> and <port> describe the TCP server that Structured Streaming
 * would connect to receive data.
 *
 * To run this on your local machine, you need to first run a Netcat server
 *    `$ nc -lk 9999`
 * and then run the example
 *    `$ bin/run-example sql.streaming.StructuredNetworkWordCount localhost 9999`
 */
object StructuredNetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: StructuredNetworkWordCount <hostname> <port>")
      System.exit(1)
    }
    val host = args(0)
    val port = args(1).toInt
    val spark = SparkSession
      .builder
      .appName("StructuredNetworkWordCount")
      .getOrCreate()
import spark.implicits._
    // Create DataFrame representing the stream of input lines from connection to host:port
    val lines = spark.readStream
      .format("socket")
      .option("host", host)
      .option("port", port)
      .load()

    // Split the lines into words
    val words = lines.as[String].flatMap(_.split(" "))

    // Generate running word count
    val wordCounts = words.groupBy("value").count()
    // Start running the query that handles each row with a ForeachWriter
    val query = wordCounts.writeStream
      .outputMode("append")
      .foreach(new ForeachWriter[Row] {
        override def open(partitionId: Long, version: Long): Boolean = {
          println("open")
          true
        }

        override def process(value: Row): Unit = {
          val spark = SparkSession.builder.getOrCreate()
          val seq = value.mkString.split(" ")
          val row = Row.fromSeq(seq)
          val rowRDD: RDD[Row] = SparkContext.getOrCreate().parallelize[Row](Seq(row))

          val userSchema = new StructType().add("name", "string").add("age", "string")
          val peopleDF = spark.createDataFrame(rowRDD, userSchema)
          peopleDF.createOrReplaceTempView("myTable")
          spark.sql("select * from myTable").show()
        }

        override def close(errorOrNull: Throwable): Unit = {
          println("close")
        }
      })
      .start()
    query.awaitTermination()
  }
}
// scalastyle:on println
The program above implements the ForeachWriter interface anonymously, providing the open(), process(), and close() methods. If you instead define the writer as an explicit named class, pay attention to how Scala generics are declared, as shown below:
class myForeachWriter[T <: Row](stream: CatalogTable) extends ForeachWriter[T] {
  override def open(partitionId: Long, version: Long): Boolean = {
    println("open")
    true
  }

  override def process(value: T): Unit = {
    println(value)
  }

  override def close(errorOrNull: Throwable): Unit = {
    println("close")
  }
}
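A writer defined this way is plugged into the query just like the anonymous version. A minimal sketch, assuming the streaming DataFrame wordCounts and a CatalogTable instance stream already exist in the surrounding code:

// Use the named writer instead of an anonymous ForeachWriter.
// `wordCounts` and `stream` are assumed to be defined elsewhere.
val query = wordCounts.writeStream
  .outputMode("append")
  .foreach(new myForeachWriter[Row](stream))
  .start()

query.awaitTermination()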
If the output sinks provided by the Spark Structured Streaming API above still do not meet your requirements, there is one more option: modifying the source code.
The following walks through this approach by implementing a custom console sink:
Spark has a Sink interface; implement its addBatch method, whose data parameter carries the received batch of data. The implementation below simply prints it to the console:
class ConsoleSink(streamName: String) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    data.show()
  }
}
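For reference, the Sink interface being implemented here lives in org.apache.spark.sql.execution.streaming (Spark 2.x); its essential shape is roughly the following simplified sketch, not the full source:

// Simplified sketch of the Sink trait (Spark 2.x); addBatch is the method
// a custom sink must provide to receive each micro-batch as a DataFrame.
trait Sink {
  def addBatch(batchId: Long, data: DataFrame): Unit
}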
When a user-defined output format is specified and start() is called, the Spark framework invokes the start() method of the DataStreamWriter class. You can therefore add your custom output branch directly inside that method and hand it an instance of the ConsoleSink class created above, as shown below:
  def start(): StreamingQuery = {
    if (source == "memory") {
      ...
    } else if (source == "foreach") {
      ...
    } else if (source == "consoleSink") {
      val streamName: String = extraOptions.get("streamName") match {
        case Some(str) => str
        case None => throw new AnalysisException("streamName option must be specified for Sink")
      }

      val sink = new ConsoleSink(streamName)
      df.sparkSession.sessionState.streamingQueryManager.startQuery(
        extraOptions.get("queryName"),
        extraOptions.get("checkpointLocation"),
        df,
        sink,
        outputMode,
        useTempCheckpointLocation = true,
        recoverFromCheckpointLocation = false,
        trigger = trigger
      )
    } else {
      ...
    }
  }
Once the two preceding steps are implemented, you can use the normal Structured Streaming API as usual; the only difference is that the output format passed in is the "consoleSink" string, as shown below:
def execute(stream: CatalogTable): Unit = {
  val spark = SparkSession
    .builder
    .appName("StructuredNetworkWordCount")
    .getOrCreate()

  /** 1. Obtain the DataFrame for the input data */
  val lines = spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()

  /** 2. Start streaming and begin receiving data from the source */
  val query: StreamingQuery = lines.writeStream
    .outputMode("append")
    .format("consoleSink")
    .option("streamName", "myStream")
    .start()

  query.awaitTermination()
}