1. Create the config files
cd /opt/spark-1.4.1-bin-hadoop2.6/conf
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves
2. Add environment variables to the Spark configuration
vi spark-env.sh
export JAVA_HOME=/opt/jdk1.7.0_75
export SCALA_HOME=/opt/scala-2.11.6
export HADOOP_CONF_DIR=/opt/hadoop-2.6.0/etc/hadoop
# Automatically clean up temporary files in Spark's work directory, once every half hour
export SPARK_WORKER_DIR="/home/hadoop/spark/worker/"
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.interval=1800"
vi slaves
Enter the hostname of each worker node, one per line, as in the sketch below.
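As an example, a conf/slaves file for a three-node cluster contains only the worker hostnames (node1/node2/node3 are hypothetical; substitute your own):

# conf/slaves — one worker hostname per line (names below are placeholders)
node1
node2
node3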
3. Add Spark to the system environment variables
vi /etc/profile
export SPARK_HOME=/opt/spark-1.4.1-bin-hadoop2.6
export PATH=$SPARK_HOME/bin:$PATH
Run source /etc/profile afterwards so the change takes effect in the current shell.
4. Start the cluster
cd ../sbin/
./start-all.sh
5. Check that the processes have started
jps
4211 Master
4367 Worker
6. Open Spark's web UI at http://spore:8080/
7. Use spark-shell
cd ../bin/
./spark-shell
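Once the shell is up, the SparkContext is already bound to sc; a minimal smoke test (the data here is arbitrary):

// Run inside spark-shell, where sc (SparkContext) is predefined.
// Sanity check: parallelize a local range and sum it.
val nums = sc.parallelize(1 to 100)
println(nums.reduce(_ + _))  // prints 5050

// A tiny word count over an in-memory collection.
val words = sc.parallelize(Seq("spark", "hive", "spark"))
words.map((_, 1)).reduceByKey(_ + _).collect().foreach(println)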
8. The Spark application UI: http://spore:4040
Source reading: to see which SQL keywords Spark supports, look at:
spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SQLParser.scala
An example of user-defined functions (UDFs) in Spark SQL:
http://colobu.com/2014/12/11/spark-sql-quick-start/
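A minimal sketch of registering and calling a UDF from spark-shell on Spark 1.4 (the strLen name and the people table are made up for illustration):

// Inside spark-shell, sqlContext (SQLContext) is predefined.
// Register a UDF returning the length of a string; the name "strLen"
// and the table "people" are hypothetical.
sqlContext.udf.register("strLen", (s: String) => s.length)
sqlContext.sql("SELECT name, strLen(name) FROM people").show()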
To use the bin/spark-sql command, the Hive metastore service must be running, and conf/hive-site.xml must contain the hive.metastore.uris setting, for example:
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://byd0087:9083</value>
  </property>
</configuration>
Start bin/spark-sql and you can run Hive's HQL statements directly; this is considerably faster than running them in Hive itself.
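The same HQL can also be run programmatically from spark-shell through a HiveContext, assuming a Hive-enabled Spark build and the metastore configured above (the src table is hypothetical):

// In spark-shell: build a HiveContext on top of the predefined sc.
import org.apache.spark.sql.hive.HiveContext
val hiveContext = new HiveContext(sc)

// Ordinary HQL works; "src" is a hypothetical Hive table.
hiveContext.sql("SHOW TABLES").show()
hiveContext.sql("SELECT COUNT(*) FROM src").show()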