Spark covers the various kinds of computing frameworks commonly found in the big data field.
That said, Spark cannot simply replace Hadoop:
we can only use Spark Core in place of MR for offline computation, and data storage still has to rely on HDFS.
The combination of Spark + Hadoop is the hottest and most promising pairing in the big data field going forward!
Spark's strengths:
Speed
Ease of use
A one-stack solution
Able to run on any platform
MapReduce's shortcomings:
The level of abstraction is low; everything must be coded by hand, making it hard to get started with
It offers only two operations, Map and Reduce, so expressive power is limited
A job has only two phases, Map and Reduce
Intermediate results are also written to the HDFS file system (slow)
Latency is high; it suits only batch processing and supports interactive and real-time processing poorly
Performance on iterative computation is poor
==Therefore, it is the natural course of technological development for Hadoop MapReduce to be replaced by a new generation of big data processing platforms, and among that new generation, Spark has so far gained the widest recognition and support==
Prepare the installation package spark-2.2.0-bin-hadoop2.7.tgz
tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz -C /opt/
mv /opt/spark-2.2.0-bin-hadoop2.7 /opt/spark
Edit spark-env.sh
export JAVA_HOME=/opt/jdk
export SPARK_MASTER_IP=uplooking01
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=4
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=2g
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
Configure the environment variables
#Spark environment variables
export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
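After editing /etc/profile, reload it in the current shell and do a quick sanity check that the Spark binaries are on the PATH (a minimal check, assuming the variables above were added to /etc/profile):
source /etc/profile
spark-submit --version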
Start the single-node Spark
start-all-spark.sh
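Once the script finishes, jps should show one Master and one Worker process on this host (start-all-spark.sh is presumably Spark's sbin/start-all.sh renamed so that it does not collide with Hadoop's script of the same name):
jps
#expected to list, among others, a Master and a Worker process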
Check that it started
http://uplooking01:8080
Configure spark-env.sh for the fully distributed cluster
[root@uplooking01 /opt/spark/conf]
export JAVA_HOME=/opt/jdk
#host of the master
export SPARK_MASTER_IP=uplooking01
#port the master uses for communication
export SPARK_MASTER_PORT=7077
#number of CPU cores Spark uses on each worker
export SPARK_WORKER_CORES=4
#one worker instance per host
export SPARK_WORKER_INSTANCES=1
#each worker uses 2 GB of memory
export SPARK_WORKER_MEMORY=2g
#directory containing Hadoop's configuration files
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
Configure slaves
[root@uplooking01 /opt/spark/conf]
uplooking03
uplooking04
uplooking05
Distribute Spark to the other nodes
[root@uplooking01 /opt/spark/conf]
scp -r /opt/spark uplooking02:/opt/
scp -r /opt/spark uplooking03:/opt/
scp -r /opt/spark uplooking04:/opt/
scp -r /opt/spark uplooking05:/opt/
Distribute the environment variables configured on uplooking01
[root@uplooking01 /]
scp -r /etc/profile uplooking02:/etc/
scp -r /etc/profile uplooking03:/etc/
scp -r /etc/profile uplooking04:/etc/
scp -r /etc/profile uplooking05:/etc/
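/etc/profile is only read by login shells, so the copied variables take effect automatically on the next login; in any session that is already open, source the file by hand and verify (a minimal check, relying on the variables shown above):
source /etc/profile
echo $SPARK_HOME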
Start Spark
[root@uplooking01 /]
start-all-spark.sh
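A quick way to confirm that every daemon came up is to run jps across the nodes; with the layout above, expect a Master on uplooking01 and a Worker on each of uplooking03, uplooking04, and uplooking05 (a sketch, assuming passwordless ssh and jps on the non-interactive PATH):
for host in uplooking01 uplooking03 uplooking04 uplooking05; do
    echo "== $host =="
    ssh $host jps
done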
To make the master highly available with ZooKeeper, first stop the Spark cluster that is currently running
Edit spark-env.sh
#comment out the following two lines
#export SPARK_MASTER_IP=uplooking01
#export SPARK_MASTER_PORT=7077
Add the following
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=uplooking03:2181,uplooking04:2181,uplooking05:2181 -Dspark.deploy.zookeeper.dir=/spark"
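With recoveryMode set to ZOOKEEPER, the masters store their recovery state under the /spark znode configured above. After the cluster is restarted you can inspect it with ZooKeeper's own CLI (a sketch, assuming zkCli.sh from the ZooKeeper installation is on the PATH):
zkCli.sh -server uplooking03:2181
#inside the CLI, the configured znode should exist:
#ls /spark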
Distribute the modified configuration
scp /opt/spark/conf/spark-env.sh uplooking02:/opt/spark/conf
scp /opt/spark/conf/spark-env.sh uplooking03:/opt/spark/conf
scp /opt/spark/conf/spark-env.sh uplooking04:/opt/spark/conf
scp /opt/spark/conf/spark-env.sh uplooking05:/opt/spark/conf
Start the cluster
[root@uplooking01 /]
start-all-spark.sh
[root@uplooking02 /]
start-master.sh
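The second start-master.sh brings up a standby master on uplooking02; its web UI on port 8080 should report the status STANDBY while uplooking01's reports ALIVE. To exercise the failover, kill the active master and watch the standby take over (a sketch; the pipeline assumes jps labels the process "Master"):
#on uplooking01, stop the active master
kill $(jps | grep -w Master | awk '{print $1}')
#then watch http://uplooking02:8080 switch from STANDBY to ALIVE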
spark-shell --master spark://uplooking01:7077
#spark-shell can specify, at startup, the resources this application uses (total cores, memory used on each worker)
spark-shell --master spark://uplooking01:7077 --total-executor-cores 6 --executor-memory 1g
#if not specified, it defaults to using all the cores on every worker and 1g of memory on each worker
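Before running the word count below, there needs to be some comma-separated text under hdfs://ns1/sparktest/. A minimal way to put a sample file there (words.txt is a hypothetical file name; ns1 is the HDFS nameservice used in the next step):
echo "hello,spark,hello,hadoop" > words.txt
hdfs dfs -mkdir -p /sparktest
hdfs dfs -put words.txt /sparktest/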
//word count: split each line on commas, map each word to (word, 1), sum the counts per word, and bring the result back to the driver
sc.textFile("hdfs://ns1/sparktest/").flatMap(_.split(",")).map((_,1)).reduceByKey(_+_).collect
Roles in a Spark standalone cluster:
Master: manages the cluster and allocates resources to applications
Worker: runs on each slave node and starts executors for applications
Spark-Submitter ===> Driver: the process that submits the application acts as the Driver, which creates the SparkContext and drives the job
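To see these roles in action, submit the SparkPi example that ships with Spark; in the default client deploy mode the submitting process itself becomes the Driver, while the Master allocates executors on the Workers (the jar path below is the one bundled with spark-2.2.0-bin-hadoop2.7; adjust it if your build differs):
spark-submit \
    --master spark://uplooking01:7077 \
    --class org.apache.spark.examples.SparkPi \
    /opt/spark/examples/jars/spark-examples_2.11-2.2.0.jar 100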