System: Windows x64
Memory: 4 GB
Spark version: spark-1.6.0-bin-hadoop2.6
JDK version: jdk1.7.0_031
Spark installation steps:
1. Download the Spark package from https://spark.apache.org/downloads.html
2. Extract the downloaded package; it seems best that the path contains no spaces (I have not tested what happens if it does).
3. Set SPARK_HOME in the environment variables, and add the bin directory under SPARK_HOME to PATH.
4. Since we are running in local mode here, Hadoop does not need to be installed, but on Windows you still have to set HADOOP_HOME and put a winutils.exe file under HADOOP_HOME/bin; for details see https://github.com/spring-projects/spring-hadoop/wiki/Using-a-Windows-client-together-with-a-Linux-cluster
5. Open CMD and check whether the spark-shell command runs successfully (example values for steps 3-5 are sketched after this list).
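For concreteness, the configuration from steps 3-5 might look like the following. The paths C:\spark-1.6.0-bin-hadoop2.6 and C:\hadoop are assumptions for this sketch, not requirements; adjust them to wherever you actually extracted Spark and placed winutils.exe. The set commands only affect the current CMD session, so for a permanent setup use the system environment variable dialog as described in steps 3 and 4.

rem Example values only - adjust the paths to your own machine
set SPARK_HOME=C:\spark-1.6.0-bin-hadoop2.6
set HADOOP_HOME=C:\hadoop
set PATH=%PATH%;%SPARK_HOME%\bin
rem winutils.exe must exist at %HADOOP_HOME%\bin\winutils.exe
spark-shell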
Possible issue 1: a problem with the xerces.jar package, probably caused by conflicting jars; the most direct fix is to download a fresh JDK.
Possible issue 2: spark-shell throws java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. This appears to be a Hive bug; for details see https://issues.apache.org/jira/browse/SPARK-10528
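For the /tmp/hive error specifically, a workaround widely reported around the linked issue (not part of the steps above, so treat it as something to try rather than a guaranteed fix) is to make the directory writable using the winutils.exe from step 4:

rem Run this from the drive on which spark-shell was started; \tmp\hive is resolved relative to that drive
%HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp\hive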
Setting up the Java development environment in Eclipse:
Developing Spark programs in Java depends on a single jar, located at SPARK_HOME/lib/spark-assembly-1.6.0-hadoop2.6.0.jar; simply import it into Eclipse (for example, add it to the project's Java Build Path as an external JAR). Note that the Spark runtime only supports a Java environment of 1.6 or later.
Finally, here is a WordCount program whose input and output files go through HDFS:
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.*;
import org.apache.spark.api.java.function.*;
import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        // The "local" master runs Spark inside this JVM, so no cluster is needed on the Spark side.
        SparkConf conf = new SparkConf().setAppName("WordCount").setMaster("local");
        JavaSparkContext context = new JavaSparkContext(conf);
        // Read the input file from HDFS, one RDD element per line.
        JavaRDD<String> textFile = context.textFile("hdfs://192.168.1.201:8020/data/test/sequence/sequence_in/file1.txt");
        // Split each line into words.
        JavaRDD<String> words = textFile.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String s) { return Arrays.asList(s.split(" ")); }
        });
        // Pair each word with an initial count of 1.
        JavaPairRDD<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) { return new Tuple2<String, Integer>(s, 1); }
        });
        // Sum the counts per word.
        JavaPairRDD<String, Integer> counts = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer a, Integer b) { return a + b; }
        });
        // Write the result back to HDFS; the output directory must not already exist.
        counts.saveAsTextFile("hdfs://192.168.1.201:8020/data/test/sequence/sequence_out/");
        context.stop();
    }
}
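To run it, execute the class as an ordinary Java application in Eclipse. Because the master is set to "local", no Spark cluster is required, but the two hdfs:// paths above point to the author's own namenode (192.168.1.201:8020) and have to be replaced with your own HDFS address, or with local file:/// paths. Also note that saveAsTextFile fails if the output directory already exists, so remove sequence_out/ between runs.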