flink-docker
https://github.com/melentye/flink-docker
https://hub.docker.com/_/flink/?tab=description
https://shekharsingh.com/blog/2016/11/12/apache-flink-rabbimq-streams-processor.html
http://www.54tianzhisheng.cn/2019/01/20/Flink-RabbitMQ-sink/
https://github.com/tydhot/Kafka-Flink-Rabbitmq-Demo
https://github.com/rootcss/flink-rabbitmq
flink-aggregate (sum, max, keyBy)
https://segmentfault.com/a/1190000017571429
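The article above covers Flink's keyed aggregations. As a rough illustration of what `keyBy(...).sum(...)` and `keyBy(...).max(...)` compute per key, here is the same grouping logic in plain Java (the key/value record shape is invented for the example; a real Flink job would use a DataStream):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyedAggregates {
    // Mimics stream.keyBy(key).sum(value): a running total per key.
    public static Map<String, Integer> sumByKey(List<Map.Entry<String, Integer>> events) {
        Map<String, Integer> totals = new HashMap<>();
        for (Map.Entry<String, Integer> e : events) {
            totals.merge(e.getKey(), e.getValue(), Integer::sum);
        }
        return totals;
    }

    // Mimics stream.keyBy(key).max(value): a running maximum per key.
    public static Map<String, Integer> maxByKey(List<Map.Entry<String, Integer>> events) {
        Map<String, Integer> maxima = new HashMap<>();
        for (Map.Entry<String, Integer> e : events) {
            maxima.merge(e.getKey(), e.getValue(), Integer::max);
        }
        return maxima;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> events = List.of(
                Map.entry("a", 1), Map.entry("b", 5), Map.entry("a", 3));
        System.out.println(sumByKey(events)); // a -> 4, b -> 5
        System.out.println(maxByKey(events)); // a -> 3, b -> 5
    }
}
```

In a real job the partitioning by key also decides which parallel task holds the per-key state; the in-memory maps here stand in for that keyed state.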
flink-to-mysql
http://www.54tianzhisheng.cn/2019/01/15/Flink-MySQL-sink/
A MySQL table is a fixed snapshot at the moment the query starts, so a job that only reads the table should be a Flink batch job.
If you also need rows that arrive after the job starts, Flink cannot handle that by itself: it has no way to learn about new data unless you monitor the MySQL binlog.
The usual solution is to use Canal to replicate the binlog from MySQL into Kafka, then run a Flink streaming job that reads from that Kafka topic.
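The binlog-replay idea can be sketched without Flink or Canal: the table is the result of folding an ordered stream of change events keyed by primary key, which is why a streaming job over the binlog can track a live table. The event shape below (op/id/value) is invented for illustration; Canal's real messages are JSON with many more fields:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BinlogReplay {
    // A simplified change event: op is INSERT, UPDATE, or DELETE.
    public record Change(String op, int id, String value) {}

    // Folding the ordered change stream rebuilds the current table state.
    public static Map<Integer, String> replay(Iterable<Change> changes) {
        Map<Integer, String> table = new HashMap<>();
        for (Change c : changes) {
            switch (c.op()) {
                case "INSERT", "UPDATE" -> table.put(c.id(), c.value());
                case "DELETE" -> table.remove(c.id());
                default -> throw new IllegalArgumentException("unknown op: " + c.op());
            }
        }
        return table;
    }

    public static void main(String[] args) {
        Map<Integer, String> state = replay(List.of(
                new Change("INSERT", 1, "alice"),
                new Change("INSERT", 2, "bob"),
                new Change("UPDATE", 1, "alicia"),
                new Change("DELETE", 2, null)));
        System.out.println(state); // only id 1 remains, with value "alicia"
    }
}
```

Because replay order matters, the Canal-to-Kafka setup normally partitions events by primary key so that all changes to one row stay in order.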
flink-to-elastic-search
http://lxwei.github.io/posts/Flink(5)-Sink-%E4%BB%8B%E7%BB%8D%E4%B8%8E%E5%AE%9E%E8%B7%B5.html
flink-redis sink
https://blog.csdn.net/xianpanjia4616/article/details/82534369
https://www.cnblogs.com/jiashengmei/p/9084057.html