Reposting is welcome; please credit the original source. 徽滬一郎
Spark application development is very hands-on: a great deal of time can easily go into just setting up and running the environment, so good guidance significantly shortens the development cycle. Spark Streaming integrates with many third-party systems, and the documentation on how to actually get the examples in the source tree running is neither plentiful nor detailed.
This post explains how to run KafkaWordCount, which involves setting up a Kafka cluster; the more detail, the better.
Step 1: Download Kafka 0.8.1 and extract it
wget https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.1.1/kafka_2.10-0.8.1.1.tgz
tar zvxf kafka_2.10-0.8.1.1.tgz
cd kafka_2.10-0.8.1.1
Step 2: Start ZooKeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
Step 3: Edit the configuration file config/server.properties and add the following
host.name=localhost
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=localhost
Step 4: Start the Kafka server
bin/kafka-server-start.sh config/server.properties
Step 5: Create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Verify that the topic was created successfully
bin/kafka-topics.sh --list --zookeeper localhost:2181
If everything is fine, this returns test
Step 6: Start a producer and send some messages
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
## Once it has started, type the following to test it
This is a message
This is another message
Step 7: Start a consumer and receive the messages
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
### Once it has started, if all is well it will display what was typed on the producer side
This is a message
This is another message
The KafkaWordCount source file is located at examples/src/main/scala/org/apache/spark/examples/streaming/KafkaWordCount.scala
It does contain usage notes (see below), but without some prior knowledge of Kafka you would have no idea what these parameters mean or how to fill them in.
/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 * Usage: KafkaWordCount <zkQuorum> <group> <topics> <numThreads>
 *   <zkQuorum> is a list of one or more zookeeper servers that make quorum
 *   <group> is the name of kafka consumer group
 *   <topics> is a list of one or more kafka topics to consume from
 *   <numThreads> is the number of threads the kafka consumer should use
 *
 * Example:
 *    `$ bin/run-example \
 *      org.apache.spark.examples.streaming.KafkaWordCount zoo01,zoo02,zoo03 \
 *      my-consumer-group topic1,topic2 1`
 */
object KafkaWordCount {
  def main(args: Array[String]) {
    if (args.length < 4) {
      System.err.println("Usage: KafkaWordCount <zkQuorum> <group> <topics> <numThreads>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(zkQuorum, group, topics, numThreads) = args
    val sparkConf = new SparkConf().setAppName("KafkaWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(2))
    ssc.checkpoint("checkpoint")

    val topicpMap = topics.split(",").map((_, numThreads.toInt)).toMap
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicpMap).map(_._2)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1L))
      .reduceByKeyAndWindow(_ + _, _ - _, Minutes(10), Seconds(2), 2)
    wordCounts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
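One design point in this code worth noting: reduceByKeyAndWindow is called with both a reduce function (_ + _) and an inverse function (_ - _). The inverse-function variant computes each new window incrementally, subtracting the data that slides out instead of re-reducing the whole window, and this variant requires checkpointing to be enabled, which is why the example calls ssc.checkpoint("checkpoint").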
With the motivation for this post out of the way, let's look at how to run KafkaWordCount.
Step 1: Stop the kafka-console-producer and kafka-console-consumer started earlier
Step 2: Run KafkaWordCountProducer
bin/run-example org.apache.spark.examples.streaming.KafkaWordCountProducer localhost:9092 test 3 5
To explain the arguments: localhost:9092 is the broker address and port the producer sends to, test is the topic, 3 is the number of messages to send per second, and 5 is the number of words in each message.
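For reference, here is a minimal sketch of what such a producer looks like against the Kafka 0.8 producer API. The object name WordProducerSketch and the details of message generation are illustrative, not the actual KafkaWordCountProducer source; the argument order mirrors the run-example invocation above.

import java.util.Properties
import scala.util.Random
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

// Illustrative sketch only, assuming the Kafka 0.8 Scala producer API.
object WordProducerSketch {
  def main(args: Array[String]) {
    val Array(brokers, topic, messagesPerSec, wordsPerMessage) = args

    val props = new Properties()
    props.put("metadata.broker.list", brokers)                      // e.g. localhost:9092
    props.put("serializer.class", "kafka.serializer.StringEncoder") // send plain strings

    val producer = new Producer[String, String](new ProducerConfig(props))

    while (true) {
      // Build messagesPerSec messages, each holding wordsPerMessage random digit "words".
      val messages = (1 to messagesPerSec.toInt).map { _ =>
        val words = (1 to wordsPerMessage.toInt).map(_ => Random.nextInt(10).toString)
        new KeyedMessage[String, String](topic, words.mkString(" "))
      }
      producer.send(messages: _*)
      Thread.sleep(1000) // throttle to roughly messagesPerSec messages per second
    }
  }
}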
Step 3: Run KafkaWordCount
bin/run-example org.apache.spark.examples.streaming.KafkaWordCount localhost:2181 test-consumer-group test 1
To explain the arguments: localhost:2181 is the address ZooKeeper listens on, test-consumer-group is the consumer group name (it must match the group.id setting in $KAFKA_HOME/config/consumer.properties), test is the topic, and 1 is the number of threads.
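To make the mapping from command line to code concrete, here is a self-contained sketch that wires these exact four values into KafkaUtils.createStream the same way KafkaWordCount does. The object name KafkaWordCountArgs is made up for illustration, and the windowed word count is omitted to keep the focus on the arguments.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

// Illustrative sketch: the four command-line arguments above, written out as literals.
object KafkaWordCountArgs {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("KafkaWordCountArgs")
    val ssc = new StreamingContext(sparkConf, Seconds(2))

    val zkQuorum = "localhost:2181"     // address ZooKeeper listens on
    val group = "test-consumer-group"   // consumer group name
    val topicMap = Map("test" -> 1)     // topic "test", consumed by 1 thread

    // Each received element is a (key, message) pair; keep only the message.
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
    lines.print()

    ssc.start()
    ssc.awaitTermination()
  }
}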