Installing Spark on Ubuntu

1. Download Spark 2.4.3, the "with user-provided Hadoop" build (spark-2.4.3-bin-without-hadoop), extract it under /usr/local, and rename the directory to spark.
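
A minimal sketch of this step, assuming you fetch the tarball from the Apache archive (use a mirror if you prefer):

# download, extract to /usr/local, and rename the directory to "spark"
wget https://archive.apache.org/dist/spark/spark-2.4.3/spark-2.4.3-bin-without-hadoop.tgz
sudo tar -zxf spark-2.4.3-bin-without-hadoop.tgz -C /usr/local
sudo mv /usr/local/spark-2.4.3-bin-without-hadoop /usr/local/spark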

2. Make the current user the owner of the spark directory.
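
For example, assuming the user who runs Hadoop is the current login user:

# give the current user ownership of the whole tree
sudo chown -R $USER /usr/local/spark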

3. Set the environment variables.

(1) Add the following to ~/.bashrc, then reload it:

export SPARK_HOME=/usr/local/spark

source ~/.bashrc

(2) cp /usr/local/spark/conf/spark-env.sh.template /usr/local/spark/conf/spark-env.sh

(3) Edit /usr/local/spark/conf/spark-env.sh and add:

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)

export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop

SPARK_LOCAL_IP="127.0.0.1"
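
SPARK_DIST_CLASSPATH puts Hadoop's jars on Spark's classpath, which the "without Hadoop" build requires; HADOOP_CONF_DIR tells Spark where to find the Hadoop and YARN configuration; SPARK_LOCAL_IP pins the driver to the loopback address, which avoids hostname-resolution problems on a single machine.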

4. Local (single-machine) mode

/usr/local/spark/bin/spark-shell
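
Once the prompt appears, a quick sanity check (a minimal example, not part of the original) is:

scala> sc.parallelize(1 to 100).sum()   // res0: Double = 5050.0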

5. Add properties to /usr/local/hadoop/etc/hadoop/yarn-site.xml, inside its outermost <configuration> tag, as sketched below.
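
The original does not reproduce the snippet to add; a commonly used one for running Spark on YARN (an assumption here, not taken from the original) disables YARN's physical- and virtual-memory checks so the shell's containers are not killed on small machines:

<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

Restart YARN afterwards (stop-yarn.sh, then start-yarn.sh) so the change takes effect.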

Run (YARN mode):

/usr/local/spark/bin/spark-shell --master yarn --deploy-mode client
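
In client mode the driver runs locally while the executors run in YARN containers; with default ports, the running application should be visible in the ResourceManager web UI at http://localhost:8088.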
