1.3 Apache Flink Local Installation

What Is Apache Flink

Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink is designed to run in all common cluster environments, performing computations at in-memory speed and at any scale.

Setup: Download and Start Flink

Flink runs on Linux, Mac OS X, and Windows. The only requirement for running Flink is a working Java 8.x installation. Windows users should check out the Flink on Windows guide, which describes how to run Flink on Windows for local setups.

You can check that Java is installed correctly by issuing the following command:

java -version

If you have Java 8, the output will look something like this:

java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)

Download and Unpack

  1. Download a binary from the downloads page. You can pick any Hadoop/Scala combination you like. If you plan to only use the local file system, any Hadoop version will work fine.
  2. Go to the download directory.
  3. Unpack the downloaded archive.
$ cd ~/Downloads # Go to download directory 
$ tar xzf flink-*.tgz # Unpack the downloaded archive 
$ cd flink-1.8.0
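
After unpacking, you should see the standard layout of a Flink distribution. The directories referenced in the rest of this tutorial are bin/ (scripts), conf/ (configuration), examples/ (prebuilt example jobs), and log/ (runtime logs); a quick, abridged listing:

$ ls
bin  conf  examples  lib  log  opt  ...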

For MacOS X users, Flink can also be installed through Homebrew:

$ brew install apache-flink
...
$ flink --version
Version: 1.8.0, Commit ID: 4caec0d

Start a Local Flink Cluster

$ ./bin/start-cluster.sh # Start Flink

Check the Dispatcher's web frontend at http://localhost:8081 and make sure everything is up and running. The web frontend should report a single available TaskManager instance.

[Screenshot: the JobManager web frontend (jobmanager-1.png)]
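
If you prefer the command line, the same overview is exposed through Flink's monitoring REST API, which is served on the same port as the web frontend. A minimal check (assuming the default port 8081; output abridged) could look like this:

$ curl http://localhost:8081/overview
{"taskmanagers":1,"slots-total":1,"slots-available":1,...}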

You can also verify that the system is running by checking the log files in the log directory:

$ tail log/flink-*-standalonesession-*.log
INFO ... - Rest endpoint listening at localhost:8081
INFO ... - http://localhost:8081 was granted leadership ...
INFO ... - Web frontend listening at http://localhost:8081.
INFO ... - Starting RPC endpoint for StandaloneResourceManager at akka://flink/user/resourcemanager .
INFO ... - Starting RPC endpoint for StandaloneDispatcher at akka://flink/user/dispatcher .
INFO ... - ResourceManager akka.tcp://flink@localhost:6123/user/resourcemanager was granted leadership ...
INFO ... - Starting the SlotManager.
INFO ... - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was granted leadership ...
INFO ... - Recovering all persisted jobs.
INFO ... - Registering TaskManager ... under ... at the SlotManager.
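
Another quick sanity check is to list the running JVM processes with jps. In a standalone session you should see one JobManager and one TaskManager process; in Flink 1.8 these typically appear under the main classes shown below (the PIDs are, of course, illustrative):

$ jps
12345 StandaloneSessionClusterEntrypoint
12346 TaskManagerRunner
12347 Jps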

Read the Code

You can find the complete source code of this SocketWindowWordCount example in Scala and in Java on GitHub.

Scala

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object SocketWindowWordCount {

    def main(args: Array[String]) : Unit = {

        // the port to connect to
        val port: Int = try {
            ParameterTool.fromArgs(args).getInt("port")
        } catch {
            case e: Exception => {
                System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'")
                return
            }
        }

        // get the execution environment
        val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

        // get input data by connecting to the socket
        val text = env.socketTextStream("localhost", port, '\n')

        // parse the data, group it, window it, and aggregate the counts
        val windowCounts = text
            .flatMap { w => w.split("\\s") }
            .map { w => WordWithCount(w, 1) }
            .keyBy("word")
            .timeWindow(Time.seconds(5), Time.seconds(1))
            .sum("count")

        // print the results with a single thread, rather than in parallel
        windowCounts.print().setParallelism(1)

        env.execute("Socket Window WordCount")
    }

    // Data type for words with count
    case class WordWithCount(word: String, count: Long)
}

Java

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class SocketWindowWordCount {

    public static void main(String[] args) throws Exception {

        // the port to connect to
        final int port;
        try {
            final ParameterTool params = ParameterTool.fromArgs(args);
            port = params.getInt("port");
        } catch (Exception e) {
            System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'");
            return;
        }

        // get the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // get input data by connecting to the socket
        DataStream<String> text = env.socketTextStream("localhost", port, "\n");

        // parse the data, group it, window it, and aggregate the counts
        DataStream<WordWithCount> windowCounts = text
            .flatMap(new FlatMapFunction<String, WordWithCount>() {
                @Override
                public void flatMap(String value, Collector<WordWithCount> out) {
                    for (String word : value.split("\\s")) {
                        out.collect(new WordWithCount(word, 1L));
                    }
                }
            })
            .keyBy("word")
            .timeWindow(Time.seconds(5), Time.seconds(1))
            .reduce(new ReduceFunction<WordWithCount>() {
                @Override
                public WordWithCount reduce(WordWithCount a, WordWithCount b) {
                    return new WordWithCount(a.word, a.count + b.count);
                }
            });

        // print the results with a single thread, rather than in parallel
        windowCounts.print().setParallelism(1);

        env.execute("Socket Window WordCount");
    }

    // Data type for words with count
    public static class WordWithCount {

        public String word;
        public long count;

        public WordWithCount() {}

        public WordWithCount(String word, long count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public String toString() {
            return word + " : " + count;
        }
    }
}
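
You do not need to compile the example yourself: the binary distribution ships it as a prebuilt jar, which you can confirm before running it:

$ ls examples/streaming/SocketWindowWordCount.jar
examples/streaming/SocketWindowWordCount.jar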

Run the Example

Now we will run this Flink application. It will read text from a socket and, once every second, print the number of occurrences of each distinct word during the previous 5 seconds, i.e. a sliding processing-time window (size 5 seconds, slide 1 second, as in the code above), as long as words are floating in.

  • First of all, we use netcat to start a local server:
$ nc -l 9000
  • Submit the Flink program:
$ ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
Starting execution of program

The program connects to the socket and waits for input. You can check the web interface to verify that the job is running as expected.
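
The same check works from the command line: the flink CLI lists all running jobs together with their job IDs (you will need the ID later if you want to cancel the job rather than the whole cluster):

$ ./bin/flink list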

  • Words are counted in time windows of 5 seconds (processing time, sliding every second) and are printed to stdout. Monitor the TaskManager's output file and write some text in nc (input is sent to Flink line by line after hitting <RETURN>):
$ nc -l 9000
lorem ipsum
ipsum ipsum ipsum
bye

The .out file will print the counts at the end of each time window, as long as words are floating in, e.g.:

$ tail -f log/flink-*-taskexecutor-*.out
lorem : 1
bye : 1
ipsum : 4
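
If you want to stop the job but keep the cluster running, cancel it with the flink CLI; <JobID> below is a placeholder for the ID printed by ./bin/flink list:

$ ./bin/flink cancel <JobID>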

To stop Flink when you are done, type:

$ ./bin/stop-cluster.sh