Spark HA Deployment

1. Install the JDK and Scala

2. Install ZooKeeper

3. Install Hadoop

4. Install Spark
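
Before moving on, it is worth a quick sanity check that every node can see all four prerequisites. A minimal sketch, assuming the binaries are on the PATH and match the versions used in the configuration below:

    java -version        # expect 1.8.0_65
    scala -version       # expect 2.11.8
    hadoop version       # expect 2.7.2
    zkServer.sh status   # each ZooKeeper node should report Mode: leader or follower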

  Step 1. Edit spark/conf/spark-env.sh

    export JAVA_HOME=/usr/java/jdk1.8.0_65
    export SCALA_HOME=/usr/scala-2.11.8
    export HADOOP_HOME=/usr/hadoop-2.7.2
    export HADOOP_CONF_DIR=/usr/hadoop-2.7.2/etc/hadoop
    export SPARK_MASTER_IP=node1
    export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181 -Dspark.deploy.zookeeper.dir=/spark"
    export SPARK_WORKER_MEMORY=1g
    export SPARK_EXECUTOR_MEMORY=1g
    export SPARK_DRIVER_MEMORY=1G
    export SPARK_WORKER_CORES=2
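
  The SPARK_DAEMON_JAVA_OPTS line is what enables HA: master recovery state is kept under the /spark znode, so any master pointed at the same ZooKeeper quorum can take over after a failure. Once a master is running, you can inspect that state with the ZooKeeper CLI; a sketch (Spark typically creates leader_election and master_status children under the configured dir):

    zkCli.sh -server node1:2181
    # then, inside the zkCli shell:
    ls /spark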

  Step 2. Edit spark/conf/slaves

    node2
    node3
    node4
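
  Note that start-all.sh reaches these workers over SSH, so the node running it needs passwordless login to every host listed here. A one-time setup sketch, run on node1:

    ssh-keygen -t rsa                                    # accept the defaults
    for h in node2 node3 node4; do ssh-copy-id $h; done  # push the public key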

  Step 3. Edit spark/conf/spark-defaults.conf

    spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
    spark.eventLog.enabled true
    spark.eventLog.dir hdfs://mycluster/historyServerforSpark
    spark.yarn.historyServer.address node1:18080
    spark.history.fs.logDirectory hdfs://mycluster/historyServerforSpark

  Step 4. Create the /historyServerforSpark directory on HDFS (the event-log directory configured above must exist before the history server can use it)
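
  For example, from any node with the HDFS client configured (mycluster is the nameservice used in spark-defaults.conf above):

    hdfs dfs -mkdir -p /historyServerforSpark
    hdfs dfs -ls /          # confirm the directory exists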

  Step 5. Copy the configuration to every machine in the cluster
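
  A minimal sketch using scp, run on node1. SPARK_HOME here is a placeholder for your actual install path, which must be the same on every host; if Spark is already unpacked on each node, syncing conf/ is enough:

    for h in node2 node3 node4; do
        scp -r $SPARK_HOME/conf/ $h:$SPARK_HOME/
    done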

  Step 6. Start the Spark cluster and the history server

   ./start-all.sh               # run from $SPARK_HOME/sbin on node1

   ./start-history-server.sh    # also under $SPARK_HOME/sbin
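
  To confirm everything came up, check the daemon processes and the web UIs (the ports below are Spark's defaults):

    jps                  # node1: Master, HistoryServer; node2-4: Worker
    # master UI:          http://node1:8080   -> Status: ALIVE
    # history server UI:  http://node1:18080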

 

PS: The standby masters on the other machines must be started there by running ./start-master.sh on each of those machines.
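
To verify that failover actually works, kill the active Master and watch a standby take over. A sketch, assuming node2 runs a standby master:

    # on node1 (the active master)
    jps | grep Master    # note the Master pid
    kill <pid>           # <pid> stands for the pid printed above
    # within a minute or two, node2's UI at http://node2:8080
    # should switch from Status: STANDBY to Status: ALIVE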
