Flink Installation and Deployment

I. Downloading Flink

Download the installation package from http://flink.apache.org/downloads.html and choose the Flink build that matches your Hadoop version.

Flink has three deployment modes: Local, Standalone Cluster, and YARN Cluster.

II. Local Mode

In Local mode, the JobManager and TaskManager share a single JVM to run the workload. Local mode is the most convenient way to verify a simple application, but real deployments mostly use Standalone or YARN Cluster mode. Since Local mode only requires unpacking the archive and running ./bin/start-local.sh, it is not demonstrated here.

III. Standalone HA Mode

Standalone mode, as the name implies, schedules and runs jobs on its own cluster without relying on an external scheduler such as YARN. It is usually configured for HA, so that a sudden JobManager failure does not bring down the whole cluster or fail its jobs. The following walks through setting up Standalone HA mode.

While a Flink program is running, a JobManager crash causes the whole program to fail. To remove this single point of failure, JobManager HA can be configured with one leader and multiple standbys, coordinated through ZooKeeper. The HA configuration here covers only standalone mode; YARN mode is not considered.

The plan for this example: JobManagers on hadoop01 and hadoop02 (one active, one standby); TaskManagers on hadoop01, hadoop02, and hadoop03; a ZooKeeper ensemble across all three nodes.

1. Cluster Layout

Node       JobManager (master)  TaskManager (worker)  ZooKeeper
hadoop01   yes                  yes                   yes
hadoop02   yes                  yes                   yes
hadoop03   -                    yes                   yes

2. Unpack the Archive

[hadoop@hadoop01 apps]$ tar -zxvf flink-1.7.2-bin-scala_2.11.tgz -C ./
[hadoop@hadoop01 apps]$ ls
azkaban  flink-1.7.2  flink-1.7.2-bin-scala_2.11.tgz  flume-1.8.0  hadoop-2.7.4  jq  kafka_2.11-0.11  zkdata  zookeeper-3.4.10  zookeeper.out

3. Edit the Configuration Files

Configure the masters file

This file lists the master nodes (the cluster's JobManagers) and their web UI ports. Edit masters and add the following:

[hadoop@hadoop01 conf]$ vim masters
hadoop01:8081
hadoop02:8081

Configure the slaves file, which lists the worker nodes (the cluster's TaskManagers). Add the following:

[hadoop@hadoop01 conf]$ vim slaves
hadoop01
hadoop02
hadoop03

Configure flink-conf.yaml

#jobmanager.rpc.address: hadoop01
high-availability: zookeeper                         # enable ZooKeeper-based high availability (required)
high-availability.zookeeper.quorum: hadoop01:2181,hadoop02:2181,hadoop03:2181   # the ZooKeeper quorum, a replicated group of ZooKeeper servers providing the distributed coordination service (required)
high-availability.storageDir: hdfs://192.168.123.111:9000/flink-metadata/recovery/   # JobManager metadata is persisted to this file system path; only a pointer to it is stored in ZooKeeper (required)
high-availability.zookeeper.path.root: /flink        # the root ZooKeeper node under which all cluster nodes are placed (recommended)
high-availability.cluster-id: /flinkCluster          # a custom cluster id (recommended)
# Where checkpoint snapshots are stored. The default is the JobManager's memory,
# but HA mode requires a location on HDFS, and the path must be created on HDFS first.
state.backend: filesystem
state.checkpoints.dir: hdfs://192.168.123.111:9000/flink-metadata/checkpoints
state.savepoints.dir: hdfs:///flink/checkpoints
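Note that each setting needs a space after the colon; a line like `high-availability:zookeeper` is silently ignored by Flink's simple `key: value` parser. A small sanity check, sketched here against an inline stand-in for conf/flink-conf.yaml (the here-doc and key list mirror the config above):

```shell
# Verify that the required HA keys are present and use "key: value" form.
conf=$(mktemp)
cat > "$conf" <<'EOF'
high-availability: zookeeper
high-availability.zookeeper.quorum: hadoop01:2181,hadoop02:2181,hadoop03:2181
high-availability.storageDir: hdfs://192.168.123.111:9000/flink-metadata/recovery/
state.backend: filesystem
EOF
missing=0
for key in high-availability high-availability.zookeeper.quorum high-availability.storageDir state.backend; do
  # require the space after the colon, as the parser does
  grep -q "^${key}: " "$conf" || { echo "missing or malformed: ${key}"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required keys look good"
rm -f "$conf"
```

Point it at the real conf/flink-conf.yaml on each node before starting the cluster.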

4. Copy the Installation to the Other Nodes

[hadoop@hadoop01 apps]$ scp -r flink-1.7.2/ hadoop@hadoop02:`pwd`
[hadoop@hadoop01 apps]$ scp -r flink-1.7.2/ hadoop@hadoop03:`pwd`
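With more worker nodes, the two scp commands generalize to a loop. A minimal sketch (it only prints the commands; drop the echo to actually copy, and the host list comes from the plan in step 1):

```shell
# Print one scp command per remote node; remove 'echo' to execute them.
for host in hadoop02 hadoop03; do
  echo scp -r flink-1.7.2/ "hadoop@${host}:/home/hadoop/apps"
done
```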

5. Configure Environment Variables

Configure the Flink environment variables on all nodes:

[hadoop@hadoop01 ~]$ vim .bashrc
export FLINK_HOME=/home/hadoop/apps/flink-1.7.2
export PATH=$PATH:$FLINK_HOME/bin
[hadoop@hadoop01 ~]$ source .bashrc
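To confirm the change took effect, you can check that the Flink bin directory is now a PATH entry (a minimal check, assuming the install path above):

```shell
export FLINK_HOME=/home/hadoop/apps/flink-1.7.2
export PATH="$PATH:$FLINK_HOME/bin"
# split PATH into one entry per line and match the Flink bin directory exactly
echo "$PATH" | tr ':' '\n' | grep -Fx "$FLINK_HOME/bin"
```

If the directory prints, commands such as flink and start-cluster.sh resolve from any directory.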

6. Start Flink

[hadoop@hadoop01 bin]$ pwd
/home/hadoop/apps/flink-1.7.2/bin
[hadoop@hadoop01 bin]$ ls
config.sh flink-daemon.sh mesos-appmaster.sh pyflink-stream.sh start-cluster.sh stop-zookeeper-quorum.sh
flink historyserver.sh mesos-taskmanager.sh sql-client.sh start-scala-shell.sh taskmanager.sh
flink.bat jobmanager.sh pyflink.bat standalone-job.sh start-zookeeper-quorum.sh yarn-session.sh
flink-console.sh mesos-appmaster-job.sh pyflink.sh start-cluster.bat stop-cluster.sh zookeeper.sh
[hadoop@hadoop01 bin]$ ./start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host hadoop01.
Starting taskexecutor daemon on host hadoop02.
Starting taskexecutor daemon on host hadoop03.

Check the processes with jps. The master nodes should show a StandaloneSessionClusterEntrypoint process and the worker nodes a TaskManagerRunner process, alongside ZooKeeper's QuorumPeerMain on each node.

7. Check the Web UI

Open http://192.168.123.111:8081 in a browser to view the Flink web UI.
