Spark is an open-source cluster computing system built on in-memory computation, designed to make data analysis faster. Spark is a cluster computing environment similar to Hadoop, but with some useful differences that make it superior for certain workloads: Spark uses in-memory distributed datasets, which let it serve interactive queries and also optimize iterative workloads. Although Spark was created to support iterative jobs on distributed datasets, in practice it complements Hadoop and can run in parallel on top of the Hadoop file system.
CDH5 Spark Installation
1 Spark packages
spark-core: the core Spark package
spark-worker: scripts for managing a Spark worker
spark-master: scripts for managing the Spark master
spark-python: the Python client for Spark
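If you want to confirm these packages are visible from your configured CDH repository before installing, something like the following should work on an apt-based system (this check is a suggestion, not part of the original procedure):

apt-cache show spark-core spark-master spark-worker spark-python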
2 Runtime dependencies for Spark
CDH5
JDK
3 Installing Spark
apt-get install spark-core spark-master spark-worker spark-python
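The command above assumes a Debian/Ubuntu system; on RHEL/CentOS with the CDH5 yum repository configured, the equivalent would presumably be:

yum install spark-core spark-master spark-worker spark-python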
4 Configuring and Running Spark (Standalone Mode)
1 Configuring Spark (/etc/spark/conf/spark-env.sh)
SPARK_MASTER_IP, to bind the master to a different IP address or hostname
SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports
SPARK_WORKER_CORES, to set the number of cores to use on this machine
SPARK_WORKER_MEMORY, to set how much memory to use (for example 1000MB, 2GB)
SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports
SPARK_WORKER_INSTANCES, to set the number of worker processes per node
SPARK_WORKER_DIR, to set the working directory of worker processes
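As a concrete sketch, here is a minimal spark-env.sh using these variables; every value below (hostname, ports, cores, memory, directory) is an illustrative placeholder, not a recommendation:

export SPARK_MASTER_IP=master01.example.com   # placeholder hostname
export SPARK_MASTER_PORT=7077                 # placeholder master port
export SPARK_MASTER_WEBUI_PORT=18080          # matches the web UI address mentioned below
export SPARK_WORKER_CORES=4                   # cores this worker may use
export SPARK_WORKER_MEMORY=2g                 # memory this worker may use
export SPARK_WORKER_INSTANCES=1               # worker processes on this node
export SPARK_WORKER_DIR=/var/run/spark/work   # placeholder working directory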
2 Starting, Stopping, and Running Spark
service spark-master start
service spark-worker start
There is also a web UI at <master_host>:18080.
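Stopping (per the heading above) presumably mirrors starting via the same init scripts; a sketch:

service spark-worker stop
service spark-master stop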
5 Running Spark Applications
1 Spark applications can run in three modes:
Standalone mode: the default mode.
YARN client mode: the application is submitted to YARN, and the Spark driver runs in the Spark client process.
YARN cluster mode: the application is submitted to YARN, and the Spark driver runs in the ApplicationMaster.
2 Running SparkPi in Standalone mode
source /etc/spark/conf/spark-env.sh
CLASSPATH=$CLASSPATH:/your/additional/classpath $SPARK_HOME/bin/spark-class [<spark-config-options>] \
    org.apache.spark.examples.SparkPi \
    spark://$SPARK_MASTER_IP:$SPARK_MASTER_PORT 10

Spark runtime configuration options are documented at http://spark.apache.org/docs/0.9.0/configuration.html
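For illustration, with no additional classpath entries or config options the invocation reduces to the sketch below. In standalone mode the driver runs in this process, so SparkPi's result ("Pi is roughly ...") prints to the local console; the trailing 10 is the number of slices the estimate is split across.

source /etc/spark/conf/spark-env.sh
# no extra CLASSPATH or <spark-config-options>; master address comes from spark-env.sh
$SPARK_HOME/bin/spark-class org.apache.spark.examples.SparkPi \
    spark://$SPARK_MASTER_IP:$SPARK_MASTER_PORT 10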
3 Running SparkPi in YARN Client mode
In YARN client and YARN cluster mode, you must first upload the Spark assembly JAR to HDFS and then set the SPARK_JAR environment variable:

source /etc/spark/conf/spark-env.sh
hdfs dfs -mkdir -p /user/spark/share/lib
hdfs dfs -put $SPARK_HOME/assembly/lib/spark-assembly_*.jar /user/spark/share/lib/spark-assembly.jar
SPARK_JAR=hdfs://<nn>:<port>/user/spark/share/lib/spark-assembly.jar

Then run SparkPi:

source /etc/spark/conf/spark-env.sh
SPARK_CLASSPATH=/your/additional/classpath
SPARK_JAR=hdfs://<nn>:<port>/user/spark/share/lib/spark-assembly.jar
$SPARK_HOME/bin/spark-class [<spark-config-options>] \
    org.apache.spark.examples.SparkPi yarn-client 10
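Before submitting, it may be worth listing the upload directory to confirm the assembly JAR landed where SPARK_JAR points:

hdfs dfs -ls /user/spark/share/lib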
4 Running SparkPi in YARN Cluster mode

source /etc/spark/conf/spark-env.sh
SPARK_JAR=hdfs://<nn>:<port>/user/spark/share/lib/spark-assembly.jar
APP_JAR=$SPARK_HOME/examples/lib/spark-examples_<version>.jar
$SPARK_HOME/bin/spark-class org.apache.spark.deploy.yarn.Client \
    --jar $APP_JAR \
    --class org.apache.spark.examples.SparkPi \
    --args yarn-standalone \
    --args 10
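In YARN cluster mode the driver runs inside the ApplicationMaster, so SparkPi's result does not print on the client console. Assuming YARN log aggregation is enabled, one way to retrieve it after the job finishes (replace the application ID placeholder with the real ID reported at submission time):

# list finished applications to find the SparkPi run
yarn application -list -appStates FINISHED
# pull the aggregated logs and look for the driver's output line
yarn logs -applicationId <application_id> | grep "Pi is roughly"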