Spark Cluster Setup: A Brief Guide

1 Building Spark

1.1 Download the source code

git clone https://github.com/apache/spark.git -b branch-1.6

1.2 Modify the pom file

Add a profile for cdh5.0.2, as follows:
<profile>
  <id>cdh5.0.2</id>
  <properties>
    <hadoop.version>2.3.0-cdh5.0.2</hadoop.version>
    <hbase.version>0.96.1.1-cdh5.0.2</hbase.version>
    <flume.version>1.4.0-cdh5.0.2</flume.version>
    <zookeeper.version>3.4.5-cdh5.0.2</zookeeper.version>
  </properties>
</profile>

1.3 Build

build/mvn -Pyarn -Pcdh5.0.2 -Phive -Phive-thriftserver -Pnative -DskipTests package

If the command above fails because maven.twttr.com is unreachable (it is blocked in some regions), add the line `199.16.156.89 maven.twttr.com` to /etc/hosts and run the build again.
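The hosts workaround can be applied in one line (the IP address is the one quoted above from the original note; it may have changed since, so verify it before relying on it):

```shell
# Append the mapping so Maven can resolve maven.twttr.com
# (IP taken from the text above; re-check it if downloads still fail)
echo "199.16.156.89 maven.twttr.com" | sudo tee -a /etc/hosts
```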

2 Spark Cluster Setup [Spark on YARN]

2.1 Modify the configuration files

--spark-env.sh--
export SPARK_SSH_OPTS="-p9413"
export HADOOP_CONF_DIR=/opt/hadoop/hadoop-cluster/modules/hadoop-2.3.0-cdh5.0.2/etc/hadoop
export SPARK_EXECUTOR_INSTANCES=1
export SPARK_EXECUTOR_CORES=4
export SPARK_EXECUTOR_MEMORY=1G
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/
--slaves--
# one worker hostname per line; map the names to their IPs
# (192.168.3.211-214) in /etc/hosts on every node
hadoop-dev-211
hadoop-dev-212
hadoop-dev-213
hadoop-dev-214

2.2 Cluster layout and startup

--Cluster layout--
hadoop-dev-211  Master, Worker
hadoop-dev-212  Worker
hadoop-dev-213  Worker
hadoop-dev-214  Worker
--Start the Master--
sbin/start-master.sh
--Start the Workers--
sbin/start-slaves.sh
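Once both scripts have run, a quick way to confirm the daemons came up is `jps`, which ships with the JDK (a sketch; the SSH port matches the SPARK_SSH_OPTS setting above):

```shell
# On the master node: expect one Master and one Worker process
jps | grep -E 'Master|Worker'
# On each remaining node: expect a single Worker
ssh -p 9413 hadoop-dev-212 'jps | grep Worker'
```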

2.3 View the web UI

The standalone Master serves a web UI on port 8080 by default (http://hadoop-dev-211:8080), and each Worker serves its own UI on port 8081. Running applications are listed on the Master page.

3 Integrating Hive

Copy hive-site.xml and hive-log4j.properties into Spark's conf directory.
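For example (a sketch: the Hive path below is the one used later in the spark-shell command, and `$SPARK_HOME` is assumed to point at your Spark install; adjust both to your layout):

```shell
# Hive install directory as used elsewhere in this guide
HIVE_HOME=/opt/hadoop/hadoop-cluster/modules/apache-hive-1.2.1-bin
# Make Spark pick up the Hive metastore and logging configuration
cp "$HIVE_HOME/conf/hive-site.xml" \
   "$HIVE_HOME/conf/hive-log4j.properties" \
   "$SPARK_HOME/conf/"
```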

4 Spark Examples

4.1 Loading MySQL data into Hive

# Step 1: start spark-shell
bin/spark-shell --jars lib_managed/jars/hadoop-lzo-0.4.17.jar \
--driver-class-path /opt/hadoop/hadoop-cluster/modules/apache-hive-1.2.1-bin/lib/mysql-connector-java-5.6-bin.jar
# Step 2: read the MySQL data
val jdbcDF = sqlContext.read.format("jdbc").options(Map(
  "url" -> "jdbc:mysql://hadoop-dev-212:3306/hive",
  "dbtable" -> "VERSION",
  "user" -> "hive",
  "password" -> "123456")).load()
# Step 3: save it as a Hive table
jdbcDF.write.saveAsTable("test")
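To check the result without opening another spark-shell, the imported table can be queried from the Spark SQL CLI (a sketch; this assumes the build above, which included the -Phive profile, and the Hive configuration from section 3):

```shell
# Query the table written by saveAsTable through the Hive metastore
bin/spark-sql -e "SELECT * FROM test"
```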