Apache Spark currently supports three distributed deployment modes: standalone, Spark on Mesos, and Spark on YARN; see the official documentation for details.
Hostname | Role |
---|---|
tvm11 | zookeeper |
tvm12 | zookeeper |
tvm13 | zookeeper、spark(master)、spark(slave)、Scala |
tvm14 | spark(backup)、spark(slave)、Scala |
tvm15 | spark(slave)、Scala |
Scala dependency:
Note that support for Java 7, Python 2.6 and old Hadoop versions before 2.6.5 were removed as of Spark 2.2.0. Support for Scala 2.10 was removed as of 2.3.0. Support for Scala 2.11 is deprecated as of Spark 2.4.1 and will be removed in Spark 3.0.
ZooKeeper: the standalone Master is a single point of failure, so ZooKeeper is used and at least two Master nodes are started to provide high availability; the configuration is fairly simple.
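Before wiring Spark to ZooKeeper, it is worth confirming the quorum is healthy. A minimal check, assuming a standard ZooKeeper installation on tvm11/tvm12/tvm13 with zkServer.sh on the PATH (the nc probe assumes netcat is installed and the four-letter-word commands are enabled):

```bash
# Run on each ZooKeeper node: one should report "leader", the others "follower".
$ zkServer.sh status

# Or probe a node remotely; a healthy node answers "imok".
$ echo ruok | nc tvm11 2181
```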
As the note above makes clear, Spark is fairly strict about its Scala version: spark-2.4.5 depends on scala-2.12.x, so scala-2.12.x has to be installed first; scala-2.12.10 is used here, installed from the binary package.
Download the package; extracting it is all that is needed:
$ wget https://downloads.lightbend.com/scala/2.12.10/scala-2.12.10.tgz
$ tar zxvf scala-2.12.10.tgz -C /path/to/scala_install_dir
If the system environment should also use the same Scala version, add it to the user environment variables (.bashrc or .bash_profile).
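For example, something like the following could be appended (a sketch; the actual path is wherever the archive was extracted):

```bash
# ~/.bashrc or ~/.bash_profile
export SCALA_HOME=/path/to/scala_install_dir/scala-2.12.10
export PATH=$SCALA_HOME/bin:$PATH
```

After `source ~/.bashrc`, `scala -version` should report 2.12.10.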
Set up a passwordless SSH channel for the work user across the three Spark machines.
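A minimal sketch of that step, assuming the work user exists on all three hosts:

```bash
# Run as the work user on each Spark node (or at least on the two master candidates):
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ ssh-copy-id work@tvm13
$ ssh-copy-id work@tvm14
$ ssh-copy-id work@tvm15
```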
Download the Spark package onto the master machine, tvm13.
Pay attention to the notes on the download page, and to the Hadoop version (it must match your existing environment; if no prebuilt package matches, take the source release and compile it yourself).
Extract it into the installation directory.
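A sketch of the download and extraction; the archive URL and the Hadoop 2.7 prebuilt package are assumptions that should be checked against the Apache download page, while the target directory matches the SPARK_HOME used later in this post:

```bash
$ wget https://archive.apache.org/dist/spark/spark-2.4.5/spark-2.4.5-bin-hadoop2.7.tgz
$ tar zxvf spark-2.4.5-bin-hadoop2.7.tgz -C /data/template/s/spark
```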
Spark has two main service configuration files: spark-env.sh and slaves.
spark-env.sh: configures environment variables for the Spark runtime
slaves: lists the worker servers
Configure spark-env.sh: cp spark-env.sh.template spark-env.sh
export JAVA_HOME=/data/template/j/java/jdk1.8.0_201
export SCALA_HOME=/data/template/s/scala/scala-2.12.10
export SPARK_WORKER_MEMORY=2048m
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=2
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=tvm11:2181,tvm12:2181,tvm13:2181 -Dspark.deploy.zookeeper.dir=/data/template/s/spark"

# Meaning of the SPARK_DAEMON_JAVA_OPTS properties:
# -Dspark.deploy.recoveryMode=ZOOKEEPER                         # use the ZooKeeper service for recovery when the master fails
# -Dspark.deploy.zookeeper.url=master.hadoop,slave1.hadoop,...  # the ZooKeeper hostnames
# -Dspark.deploy.zookeeper.dir=/spark                           # directory in ZooKeeper where Spark stores its recovery data
# Other parameters: https://blog.csdn.net/u010199356/article/details/89056304
Configure slaves: cp slaves.template slaves
# A Spark Worker will be started on each of the machines listed below.
tvm13
tvm14
tvm15
Configure spark-defaults.conf, which mainly controls how Spark runs jobs (these settings can also be overridden on the command line):
# http://spark.apache.org/docs/latest/configuration.html#configuring-logging
# spark-defaults.conf
spark.app.name                                YunTuSpark
spark.driver.cores                            2
spark.driver.memory                           2g
spark.master                                  spark://tvm13:7077,tvm14:7077
spark.eventLog.enabled                        true
spark.eventLog.dir                            hdfs://cluster01/tmp/event/logs
spark.serializer                              org.apache.spark.serializer.KryoSerializer
spark.serializer.objectStreamReset            100
spark.executor.logs.rolling.time.interval     daily
spark.executor.logs.rolling.maxRetainedFiles  30
spark.ui.enabled                              true
spark.ui.killEnabled                          true
spark.ui.liveUpdate.period                    100ms
spark.ui.liveUpdate.minFlushPeriod            3s
spark.ui.port                                 4040
spark.history.ui.port                         18080
spark.ui.retainedJobs                         100
spark.ui.retainedStages                       100
spark.ui.retainedTasks                        1000
spark.ui.showConsoleProgress                  true
spark.worker.ui.retainedExecutors             100
spark.worker.ui.retainedDrivers               100
spark.sql.ui.retainedExecutions               100
spark.streaming.ui.retainedBatches            100
spark.ui.retainedDeadExecutors                100
# spark.executor.extraJavaOptions             -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Because spark.eventLog.dir points at HDFS storage, the corresponding directory has to be created in HDFS first:
hdfs dfs -mkdir -p hdfs://cluster01/tmp/event/logs
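The defaults above also set spark.history.ui.port, so if the event logs written to this directory should be browsable after jobs finish, the history server can be pointed at the same path and started. A sketch, not part of the original setup:

```bash
# Add to spark-defaults.conf so the history server knows where to read event logs from:
#   spark.history.fs.logDirectory   hdfs://cluster01/tmp/event/logs
$ $SPARK_HOME/sbin/start-history-server.sh    # UI on port 18080 as configured above
```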
Edit ~/.bashrc:
export SPARK_HOME=/data/template/s/spark/spark-2.4.5-bin-hadoop2.7
export PATH=$SPARK_HOME/bin/:$PATH
Once the above configuration is done, distribute /path/to/spark-2.4.5-bin-hadoop2.7 to every slave node and configure the environment variables on each of them, as sketched below.
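A sketch of the distribution step, assuming rsync is available and the installation path is identical on every node:

```bash
# Push the configured installation from tvm13 to the other nodes.
for host in tvm14 tvm15; do
  rsync -az /data/template/s/spark/spark-2.4.5-bin-hadoop2.7 "${host}":/data/template/s/spark/
done
# Then append the same SPARK_HOME/PATH lines to ~/.bashrc on each node.
```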
First start all services on the master node: ./sbin/start-all.sh
Then start the master service separately on the backup node: ./sbin/start-master.sh
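The running daemons can then be checked with jps on each node; with the layout above one would expect a Master plus two Worker processes on tvm13, a standby Master plus two Workers on tvm14, and two Workers on tvm15, since SPARK_WORKER_INSTANCES=2:

```bash
$ jps    # look for Master and Worker entries
```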
Once startup finishes, check the status in the web UI:
master (port 8081): Status: ALIVE
backup (port 8080): Status: STANDBY
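As a final check, a job can be submitted against the HA master pair. A sketch using the bundled SparkPi example; the exact examples jar name depends on the Scala version of the build, hence the wildcard:

```bash
$ spark-submit \
    --master spark://tvm13:7077,tvm14:7077 \
    --class org.apache.spark.examples.SparkPi \
    $SPARK_HOME/examples/jars/spark-examples_*.jar 100
```

If the ALIVE master is later stopped, the STANDBY master should take over and running applications reconnect to it.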