Download address: spark.apache.org
Clone a separate virtual machine and name it c
Change its IP to 192.168.56.200
Change its hostname to c: hostnamectl set-hostname c
Edit /etc/hosts and add an entry resolving this host
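Assuming the IP and hostname chosen above, the added /etc/hosts entry would look like:

```
192.168.56.200 c
```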
Restart the network service: systemctl restart network
Upload the Spark installation archive to root's home directory
Extract Spark under /usr/local and rename the extracted directory to spark
cd /usr/local/spark
./bin/spark-submit --class org.apache.spark.examples.SparkPi ./examples/jars/spark-examples_2.11-2.1.0.jar 10000
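The SparkPi example estimates π by Monte Carlo sampling: throw random points into the unit square and count how many land inside the quarter circle. A plain-Python sketch of the same idea (not Spark's actual code; the sample count and seed are arbitrary):

```python
import random

def estimate_pi(num_samples: int, seed: int = 42) -> float:
    """Estimate pi from the fraction of random points in the unit
    square that fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # area ratio: quarter circle / square = pi/4
    return 4.0 * inside / num_samples

print(estimate_pi(200_000))  # close to 3.14; accuracy improves with more samples
```

The `10000` argument passed to spark-submit plays the same role as `num_samples` here: more slices/samples give a better estimate.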
Create a text file hello.txt under /root
./bin/spark-shell
Connect a second terminal and watch the processes with jps; you will see the SparkSubmit process
sc
sc.textFile("/root/hello.txt")
val lineRDD = sc.textFile("/root/hello.txt")
lineRDD.foreach(println)
Observe the status in the web UI
val wordRDD = lineRDD.flatMap(line => line.split(" "))
wordRDD.collect
val wordCountRDD = wordRDD.map(word => (word,1))
wordCountRDD.collect
val resultRDD = wordCountRDD.reduceByKey((x,y)=>x+y)
resultRDD.collect
val orderedRDD = resultRDD.sortByKey(false)
orderedRDD.collect
orderedRDD.saveAsTextFile("/root/result")
Check the results under /root/result
Shorthand version: sc.textFile("/root/hello.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortByKey().collect
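To see what each step of the flatMap → map → reduceByKey → sortByKey pipeline produces, the same logic can be mimicked in plain Python (the sample input lines are made up):

```python
from collections import defaultdict

lines = ["hello spark", "hello world"]  # stand-in for the contents of hello.txt

# flatMap(_.split(" ")): flatten all lines into one list of words
words = [w for line in lines for w in line.split(" ")]

# map((_, 1)): pair each word with an initial count of 1
pairs = [(w, 1) for w in words]

# reduceByKey(_ + _): sum the counts for each distinct word
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n

# sortByKey(): sort by the word; sortByKey(false) would reverse the order
result = sorted(counts.items())
print(result)  # [('hello', 2), ('spark', 1), ('world', 1)]
```

In Spark these steps run distributed across partitions, but the per-record semantics are the same as this single-machine sketch.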
Start HDFS: start-dfs.sh
Run in spark-shell: sc.textFile("hdfs://192.168.56.100:9000/hello.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortByKey().collect (you can replace the IP with master after adding it to /etc/hosts)
sc.textFile("hdfs://192.168.56.100:9000/hello.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortByKey().saveAsTextFile("hdfs://192.168.56.100:9000/output1")
Extract Spark on the master and on every slave
On the master, edit conf/slaves and add the slave hostnames
Edit conf/spark-env.sh: export SPARK_MASTER_HOST=master
Copy spark-env.sh to every slave
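For illustration, the two edited files could look like the following (the worker hostnames are hypothetical; conf/slaves lists one worker hostname per line):

```
# conf/slaves (hypothetical worker names)
slave1
slave2

# conf/spark-env.sh
export SPARK_MASTER_HOST=master
```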
cd /usr/local/spark
./sbin/start-all.sh
On c, run: ./bin/spark-shell --master spark://192.168.56.100:7077 (this can also be set in the config file)
Check http://master:8080