Spark standalone HA deployment with ZooKeeper

Although the odds of the Spark master going down are low, it still happened to me once. I touched on standalone HA in an earlier article on Spark standalone; here is the deployment procedure in detail. It is actually fairly simple.

I. Machines

ZooKeeper ensemble:

zk1:2181
zk2:2181
zk3:2181

Spark masters:

spark-m1
spark-m2

Spark workers:

any number

II. Steps

1. On spark-m1
Edit conf/spark-env.sh:

vi spark-env.sh
export SPARK_MASTER_IP=spark-m1
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 -Dspark.deploy.zookeeper.dir=/spark"
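With recoveryMode set to ZOOKEEPER, the master persists its recovery state under the znode given by spark.deploy.zookeeper.dir. Once the cluster is up, that znode can be inspected from any ZooKeeper host — a sketch, assuming zkCli.sh is on the PATH there:

```shell
# Connect to the ensemble and list the recovery znode configured above.
# Children appear here after the master has started with ZOOKEEPER recovery mode.
zkCli.sh -server zk1:2181 ls /spark
```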

Start the master and slaves:

./sbin/start-master.sh
./sbin/start-slaves.sh

2. On spark-m2

Edit conf/spark-env.sh:

vi spark-env.sh
export SPARK_MASTER_IP=spark-m2
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 -Dspark.deploy.zookeeper.dir=/spark"

Start the master and slaves:

./sbin/start-master.sh
./sbin/start-slaves.sh

III. Verification

The web UI on spark-m1 shows the master's status.

The web UI on spark-m2 shows it is in the STANDBY state.
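Besides opening the web UI, the masters' states can be checked from a shell via the standalone master's JSON endpoint — a sketch, assuming the default web UI port 8080 and that curl is available:

```shell
# Query each master's JSON status page (same data as the web UI).
# The "status" field should read ALIVE on spark-m1 and STANDBY on spark-m2.
curl -s http://spark-m1:8080/json | grep -o '"status" *: *"[A-Z]*"'
curl -s http://spark-m2:8080/json | grep -o '"status" *: *"[A-Z]*"'
```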

When submitting an application, change the master to:

--master spark://spark-m1:7077,spark-m2:7077
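A full spark-submit invocation against the HA master pair might look like the following sketch; the main class and jar path are placeholders for your own application:

```shell
# Listing both masters lets the driver locate whichever one is currently ALIVE.
# com.example.MyApp and my-app.jar are hypothetical placeholders.
./bin/spark-submit \
  --master spark://spark-m1:7077,spark-m2:7077 \
  --class com.example.MyApp \
  /path/to/my-app.jar
```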

Testing with spark-shell

Start spark-shell on spark-m1:

spark-shell --master spark://spark-m1:7077,spark-m2:7077

After it connects, stop the master on spark-m1:

./sbin/stop-master.sh

You will find that spark-shell does not disconnect; it fails over to the master on spark-m2 and keeps running (the failover takes roughly one minute, during which the workers re-register with spark-m2), and spark-m2 transitions to the ALIVE state.

The spark-m2 master log shows:

15/08/17 14:45:35 INFO ZooKeeperLeaderElectionAgent: We have gained leadership
15/08/17 14:45:36 INFO Master: I have been elected leader! New state: RECOVERING
15/08/17 14:45:36 INFO Master: Trying to recover worker:...
15/08/17 14:45:36 INFO Master: Trying to recover worker: ...
15/08/17 14:45:36 INFO Master: Trying to recover worker: ...
......
15/08/17 14:45:36 INFO Master: Worker has been re-registered: worker-...
15/08/17 14:45:36 INFO Master: Worker has been re-registered: worker-...
15/08/17 14:45:36 INFO Master: Worker has been re-registered: worker-...
...
15/08/17 14:45:36 INFO Master: Recovery complete - resuming operations!

Deployment complete.
