Installing Spark

1. Install Hadoop

The Hadoop installation process is not covered here; there are plenty of guides online.
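
Before continuing, it may help to confirm that the existing Hadoop installation actually works. A minimal sanity check, assuming Hadoop lives under /usr/local/hadoop as in the configuration below:

# Verify the Hadoop installation before setting up Spark
/usr/local/hadoop/bin/hadoop version

# Optionally confirm HDFS is reachable (assumes the HDFS daemons are running)
/usr/local/hadoop/bin/hdfs dfs -ls /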

2. Download Spark, choosing an appropriate version

https://spark.apache.org/downloads.html
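
As an illustration, fetching and extracting a release from the command line might look like the following. The version and package name here are placeholders; substitute whatever you selected on the downloads page:

# Hypothetical release; match the version and Hadoop build to your environment
wget https://archive.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz
tar -xzf spark-2.3.1-bin-hadoop2.7.tgz -C /usr/local
cd /usr/local/spark-2.3.1-bin-hadoop2.7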

3. Install Scala
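
A sketch of a manual Scala install, assuming the /usr/local/share/scala location that SCALA_HOME points to below (the 2.11.12 version is a placeholder; pick one compatible with your Spark build):

# Hypothetical version; Spark 2.x prebuilt packages generally target Scala 2.11
wget https://downloads.lightbend.com/scala/2.11.12/scala-2.11.12.tgz
tar -xzf scala-2.11.12.tgz
sudo mv scala-2.11.12 /usr/local/share/scala
export PATH=$PATH:/usr/local/share/scala/bin
scala -version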

4. Extract the Spark package and copy the configuration file templates

cp conf/spark-env.sh.template conf/spark-env.sh
cp conf/slaves.template conf/slaves

The contents of spark-env.sh are as follows:

export SCALA_HOME=/usr/local/share/scala

export JAVA_HOME=/usr/local/jdk-10.0.2

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)

(Note: if at this point you see the warning

WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

then add the line export LD_LIBRARY_PATH=/usr/local/hadoop/lib/native, where /usr/local/hadoop is the directory where you installed Hadoop, and put the hadoop-native-64-2.7.0.tar package into that directory.)
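
A sketch of that fix, assuming Hadoop is installed under /usr/local/hadoop and the tarball has already been downloaded:

# Unpack the prebuilt native libraries into Hadoop's native-library directory
tar -xf hadoop-native-64-2.7.0.tar -C /usr/local/hadoop/lib/native

# Point Spark at them via spark-env.sh
echo 'export LD_LIBRARY_PATH=/usr/local/hadoop/lib/native' >> conf/spark-env.sh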

The contents of slaves are as follows:

127.0.0.1
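
The slaves file lists one worker host per line, so 127.0.0.1 starts a single worker on the local machine. On a real cluster each worker node would be named instead; the hostnames below are hypothetical:

# Hypothetical multi-node slaves file: one worker hostname per line
worker1.example.com
worker2.example.com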

5. Startup commands

./sbin/start-all.sh

./bin/spark-shell
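
To confirm the cluster came up, one option is to check for the Master and Worker JVMs and the master's web UI (8080 is the default port for the standalone master UI):

# The Master and Worker processes should appear in the JVM process list
jps

# The standalone master's web UI defaults to port 8080
curl -s http://localhost:8080 | head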

If running spark-shell at this point fails with the error below (based on a quick online search, the cause is reportedly an interaction between Scala 2.8+ and Spark), add a parameter when running the spark-shell command, i.e.

./bin/spark-shell -Dscala.usejavacp=true

Failed to initialize compiler: object scala.runtime in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programatically, settings.usejavacp.value = true.
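
Once the shell starts cleanly, a quick end-to-end smoke test is to run the SparkPi example via the run-example script that ships with the distribution:

# Approximate pi on the freshly started cluster as a smoke test
./bin/run-example SparkPi 10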