As introduced previously, Spark programs can run in three modes:
1. Local mode
2. Standalone mode
3. YARN/Mesos mode
This article covers Spark installation and the first and third of these run modes.
spark-2.1.0-bin-hadoop2.7.tgz size: 195 MB
Download link: https://pan.baidu.com/s/1bphB3Q3 password: 9v5h
Installation steps:
1. Local mode:
1. Place the tgz package in any directory; I put it under /home/mfz/resources.
2. Extract it:
tar -xzvf spark-2.1.0-bin-hadoop2.7.tgz
3. Enter the spark-2.1.0-bin-hadoop2.7 directory and start Spark:
bin/spark-shell --master local
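Plain local runs Spark with a single worker thread. To use more threads, you can pass local[N] (N threads) or local[*] (one thread per CPU core) instead, for example:
bin/spark-shell --master local[2]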
4. You can now write Scala code at the Spark shell prompt.
When the shell starts, Spark provides a SparkContext instance named sc, which is the entry point for creating RDDs. Let's use it to compute a word count:
Create a text file /home/mfz/scalaWordCount.txt.
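The contents are arbitrary; a small hypothetical sample to test with might look like:
hello spark
hello hadoop
hello world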
The Scala commands are as follows:
val wordtxt = sc.textFile("file:///home/mfz/scalaWordCount.txt") // load the text file scalaWordCount.txt
// Split the text on spaces into (word, 1) pairs, sum the counts per word with reduceByKey,
// and write the result to the wordResult.txt directory (the file:// scheme means local disk).
wordtxt.flatMap(_.split(" ")).map(x => (x,1)).reduceByKey(_+_).saveAsTextFile("file:///home/mfz/wordResult.txt")
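To peek at the counts directly in the shell instead of writing them to disk, you can collect the RDD to the driver; this sketch is only sensible for a small test file, since collect pulls the whole result into driver memory:
// same pipeline, but print each (word, count) pair at the console
wordtxt.flatMap(_.split(" ")).map(x => (x,1)).reduceByKey(_+_).collect().foreach(println)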
View the results.
Then look at the Spark web UI:
For Scala syntax details, see: https://yq.aliyun.com/topic/69
2. Running on YARN
To run on YARN, we need to start the HDFS and YARN services. For Hadoop installation steps, see the post "Big Data Series: Hadoop Distributed Cluster Deployment".
1. Edit the Spark configuration file:
vim /home/mfz/spark-2.1.0-bin-hadoop2.7/conf/spark-env.sh
# add the Hadoop configuration directory environment variable
export HADOOP_CONF_DIR=/home/mfz/hadoop-2.7.3/etc/hadoop
2. Create spark-defaults.conf from the template and edit it:
cp spark-defaults.conf.template spark-defaults.conf
vim spark-defaults.conf
Add the following (master is the hostname of the Hadoop server):
spark.master=local
# configure the history server
spark.yarn.historyServer.address=master:18080
spark.history.ui.port=18080
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///tmp/spark/events
spark.history.fs.logDirectory=hdfs:///tmp/spark/events
Note that spark.master is left as local here; the --master flag passed to spark-shell in step 7 takes precedence over this default.
3. Edit yarn-site.xml under $HADOOP_HOME/etc/hadoop:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:18030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:18025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:18141</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:18088</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://master:19888/jobhistory/logs</value>
  </property>
</configuration>
4. Start the HDFS, YARN, and job history services:
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
5. Verify that the services started successfully.
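One quick check (assuming a single-node setup where everything runs on master) is the jps command; with HDFS, YARN, and the history server up, its output should include processes along these lines:
jps
# NameNode, DataNode, SecondaryNameNode,
# ResourceManager, NodeManager, JobHistoryServer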
6. Create the HDFS directories:
hdfs dfs -mkdir -p /tmp/spark/events
hdfs dfs -mkdir -p /tmp/spark/history
# list the directories
hdfs dfs -ls /tmp/spark
7. Start Spark on YARN:
cd spark-2.1.0-bin-hadoop2.7
bin/spark-shell --master yarn-client
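Note that Spark 2.x deprecates the yarn-client master URL; the equivalent newer form is:
bin/spark-shell --master yarn --deploy-mode client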
8. Now let's run the WordCount job once more; unlike in local mode, this time the output is written to HDFS.
val wordtxt = sc.textFile("file:///home/mfz/scalaWordCount.txt") // load the text file scalaWordCount.txt
wordtxt.flatMap(_.split(" ")).map(x => (x,1)).reduceByKey(_+_).saveAsTextFile("/tmp/wordResult")
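Once the job finishes, the output files can be read back with the HDFS CLI:
hdfs dfs -cat /tmp/wordResult/part-*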
9. The results are as follows:
10. Open the YARN web UI at master:18088. The ID in the red box, application_1492617622120_0001, is exactly the application ID of the Spark on YARN session we started above. Clicking that application ID in the YARN web UI
takes you to the Spark web UI, where you can review all of the operations we just ran.
Done~~