This guide has two main parts, installing YARN and installing Flink, across three machines:
10.10.10.125
10.10.10.126
10.10.10.127
---------------------------------------------------- YARN installation
wget 'http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.8.5/hadoop-2.8.5.tar.gz'
tar -zxvf hadoop-2.8.5.tar.gz -C /home/zr/hadoop/

vim /etc/profile
export HADOOP_HOME=/home/zr/hadoop/hadoop-2.8.5
export PATH=$HADOOP_HOME/bin:$PATH
source /etc/profile

vim /etc/sysconfig/network
(Never put an underscore in the hostname, or you are in for trouble.)
HOSTNAME=flink125   (different on each machine)

vim /etc/hosts
10.10.10.125 flink125
10.10.10.126 flink126
10.10.10.127 flink127

vim yarn-site.xml and add the following:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>flink125</value>
</property>
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>4</value>
  <description>The maximum number of application master execution attempts</description>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

vi etc/hadoop/core-site.xml and add the following:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://flink125:9000</value>
  </property>
</configuration>

vi etc/hadoop/hdfs-site.xml and add the following:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///data/hadoop/storage/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///data/hadoop/storage/hdfs/data</value>
</property>

vi etc/hadoop/mapred-site.xml and add the following:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
(Note: dfs.replication is normally set in hdfs-site.xml.)

vi slaves and add the following:
flink126
flink127

vim hadoop/etc/hadoop/hadoop-env.sh
and change the line
export JAVA_HOME=$JAVA_HOME
to
export JAVA_HOME=/usr/java/jdk1.8.0_101
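The file edits above are easy to get wrong by hand. A minimal sketch of scripting two of them (core-site.xml and the slaves list), written into a scratch directory that stands in for $HADOOP_HOME/etc/hadoop; hostnames are the ones used in this article:

```shell
# Sketch: write core-site.xml and slaves into a scratch dir and sanity-check
# them. The scratch dir stands in for $HADOOP_HOME/etc/hadoop.
conf_dir=$(mktemp -d)

cat > "$conf_dir/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://flink125:9000</value>
  </property>
</configuration>
EOF

# slaves lists the worker hosts, one per line
printf 'flink126\nflink127\n' > "$conf_dir/slaves"

grep -q 'hdfs://flink125:9000' "$conf_dir/core-site.xml" && echo "core-site.xml OK"
# prints: core-site.xml OK
```

Generating the files once and pushing them with scp keeps the three machines from drifting apart.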
All of the steps above are executed on all three machines.
Next, set up passwordless SSH:
1. On the 125 machine, run:
rm -r ~/.ssh
ssh-keygen
then scp ~/.ssh/id_rsa.pub to the 126 and 127 machines
2. On 126 and 127, run:
cat id_rsa.pub >> .ssh/authorized_keys
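The two steps above can be rehearsed locally before touching the real machines. The sketch below runs the same key flow against a scratch directory standing in for ~/.ssh (ssh-keygen is assumed to be installed; on the real hosts the .pub file travels via scp as shown):

```shell
# Rehearsal of the passwordless-SSH flow against a scratch dir.
# On real machines: step 1 runs on flink125, step 2 on flink126/flink127.
ssh_dir=$(mktemp -d)

# Step 1: generate a fresh RSA key pair with no passphrase.
ssh-keygen -t rsa -N '' -q -f "$ssh_dir/id_rsa"

# Step 2: append the public key to authorized_keys and lock down permissions
# (sshd refuses keys in a group- or world-readable authorized_keys file).
cat "$ssh_dir/id_rsa.pub" >> "$ssh_dir/authorized_keys"
chmod 600 "$ssh_dir/authorized_keys"

grep -c 'ssh-rsa' "$ssh_dir/authorized_keys"
# prints: 1
```

The chmod 600 matters on the real hosts: sloppy permissions are the most common reason key-based login silently falls back to a password prompt.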
Finally, on 125, run the following:
Start Hadoop: start-dfs.sh
Start YARN: start-yarn.sh
---------------------------------------------------- YARN configuration complete
------------------------------------------------------ Flink installation
wget 'http://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.7.2/flink-1.7.2-bin-hadoop28-scala_2.12.tgz'
tar zxvf flink-1.7.2-bin-hadoop28-scala_2.12.tgz -C /home/zr/module/

Edit flink/conf/masters, slaves, and flink-conf.yaml:

vi masters
flink125:8081
flink126:8081

vi slaves
flink126
flink127

vi flink-conf.yaml
taskmanager.numberOfTaskSlots: 2
jobmanager.rpc.address: flink125

sudo vi /etc/profile
export FLINK_HOME=/home/zr/module/flink-1.7.2
export PATH=$PATH:$FLINK_HOME/bin
source /etc/profile

conf/flink-conf.yaml (high-availability settings):
high-availability: zookeeper
high-availability.zookeeper.quorum: flink125:2181,flink126:2181,flink127:2181
high-availability.storageDir: hdfs:///home/zr/flink/recovery
high-availability.zookeeper.path.root: /home/zr/flink
yarn.application-attempts: 4

zoo.cfg:
server.1=flink125:2888:3888
server.2=flink126:2888:3888
server.3=flink127:2888:3888

All of the above is done on all three machines.
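The HA fragments above can likewise be collected into files and checked before being distributed. A sketch, using a scratch directory in place of flink/conf and the ZooKeeper conf directory; note that each ZooKeeper host additionally needs a dataDir/myid file whose number matches its server.N line, a step the article does not show:

```shell
# Sketch: write the HA fragments above into a scratch dir and verify them.
conf=$(mktemp -d)

cat > "$conf/flink-conf.yaml" <<'EOF'
jobmanager.rpc.address: flink125
taskmanager.numberOfTaskSlots: 2
high-availability: zookeeper
high-availability.zookeeper.quorum: flink125:2181,flink126:2181,flink127:2181
high-availability.storageDir: hdfs:///home/zr/flink/recovery
high-availability.zookeeper.path.root: /home/zr/flink
yarn.application-attempts: 4
EOF

cat > "$conf/zoo.cfg" <<'EOF'
server.1=flink125:2888:3888
server.2=flink126:2888:3888
server.3=flink127:2888:3888
EOF

# Each ZooKeeper host also needs a dataDir/myid file containing its own
# server number (1, 2, or 3) -- not shown in the article.
grep -c '^server\.' "$conf/zoo.cfg"
# prints: 3
```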
-------------------------------------------- Flink configuration complete
On the 125 machine, run:
Start the ZooKeeper quorum: ./start-zookeeper-quorum.sh
Start Flink: ./flink run -m yarn-cluster -yn 2 -ytm 2048 /home/zr/module/flink-1.7.2/examples/zr/flink-data-sync-0.1.jar