4) Spark cluster setup

1. Install Spark

Unpack under /opt:

tar zxvf spark-1.6.1-bin-hadoop2.6.tgz
mv spark-1.6.1-bin-hadoop2.6 spark

Set the environment variables:

vi ~/.bashrc

    export SPARK_HOME=/opt/spark

    export PATH=$SPARK_HOME/bin:$PATH

    export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

source ~/.bashrc
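After re-sourcing, the variables can be spot-checked. A minimal sanity check, re-stating the exports above so it stands on its own:

```shell
# Same exports as in ~/.bashrc above
export SPARK_HOME=/opt/spark
export PATH=$SPARK_HOME/bin:$PATH

# SPARK_HOME should resolve to the unpacked directory
echo "$SPARK_HOME"

# PATH should now contain Spark's bin directory
case ":$PATH:" in
  *":$SPARK_HOME/bin:"*) echo "PATH ok" ;;
  *)                     echo "PATH missing $SPARK_HOME/bin" ;;
esac
```

If `PATH ok` is not printed, re-check the `export PATH` line in `~/.bashrc`.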

2. Edit spark-env.sh

cd /opt/spark/conf
mv spark-env.sh.template spark-env.sh
vi spark-env.sh

    export JAVA_HOME=/usr/jdk1.7.0_55

    export SCALA_HOME=/opt/scala

    export SPARK_MASTER_IP=192.168.252.164

    export SPARK_WORKER_MEMORY=1g

    export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop

3. Edit slaves (list the worker hostnames, one per line)

    hadoop001

    hadoop002

    hadoop003
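The file can also be written in one step. A sketch (using a temp directory as a stand-in for /opt/spark/conf, where the real file belongs):

```shell
# Stand-in for /opt/spark/conf in this sketch
conf_dir=$(mktemp -d)

# One worker hostname per line
cat > "$conf_dir/slaves" <<'EOF'
hadoop001
hadoop002
hadoop003
EOF

# Count the worker entries
wc -l < "$conf_dir/slaves"
```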

4. Nodes 2 and 3

Copy the spark directory to nodes 2 and 3:

cd /opt
scp -r spark hadoop002:/opt
scp -r spark hadoop003:/opt

Copy the environment variables to nodes 2 and 3:

scp ~/.bashrc hadoop002:~
scp ~/.bashrc hadoop003:~
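The two copy steps can be combined into one loop over the worker hosts (same hosts and paths as above; this needs the cluster, so it is shown as a sketch without expected output):

```shell
# Push the Spark install and the shell profile to each worker node
for host in hadoop002 hadoop003; do
  scp -r /opt/spark "$host":/opt
  scp ~/.bashrc "$host":~
done
```

Note that the copied `~/.bashrc` only takes effect on the workers at their next login shell.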

5. Start

Start the Spark cluster:

    /opt/spark/sbin/start-all.sh

Verification

    Verify the cluster started with jps, or via the web UI on port 8080:

    hadoop001: Master

    hadoop002: Worker

    hadoop003: Worker

    Launch spark-shell to check that everything works.
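A minimal smoke test inside spark-shell (the master URL assumes the SPARK_MASTER_IP set in step 2 and the standalone master's default port 7077; this needs the running cluster, so no expected output is asserted):

```shell
# Run a trivial job against the standalone master;
# the count of 1 to 100 should come back as 100
spark-shell --master spark://192.168.252.164:7077 <<'EOF'
println(sc.parallelize(1 to 100).count())
EOF
```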
