Stream Processing Environment Setup

1 Spark Background

Spark Components

The Spark stack (BDAS, short for the Berkeley Data Analytics Stack) is a platform that integrates algorithms, machines, and people at scale to deliver big data applications. It is also a technology solution for big data, cloud computing, and communications.

Its main components are listed below; a minimal usage sketch follows the list.

Spark Core: abstracts distributed data as Resilient Distributed Datasets (RDDs); it implements task scheduling, RPC, serialization, and compression, and provides the APIs on which the higher-level components run.

Spark SQL: Spark's package for working with structured data. It lets us query data with SQL statements and supports multiple data sources, including Hive tables, Parquet, and JSON.

Spark Streaming: Spark's component for stream computation over real-time data.

MLlib: a library of implementations of commonly used machine learning algorithms.

GraphX: a distributed graph-computation framework for efficient graph processing.

BlinkDB: an approximate query engine for interactive SQL over massive data.

Tachyon: a memory-centric, highly fault-tolerant distributed file system.
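
As a quick illustration of the first two components, here is a minimal Scala sketch that counts words with the Spark Core RDD API and then runs a SQL query through Spark SQL. The class name and the JSON path are placeholders for illustration only; the JSON file is assumed to contain a name field.

import org.apache.spark.sql.SparkSession

object ComponentsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ComponentsSketch").getOrCreate()

    // Spark Core: distributed data as an RDD, transformed and reduced in parallel
    val counts = spark.sparkContext
      .parallelize(Seq("spark core", "spark sql"))
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.collect().foreach(println)

    // Spark SQL: register structured data (here a hypothetical JSON file) and query it with SQL
    val people = spark.read.json("hdfs://gc64:9000/user/sms/test/people.json")
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people").show()

    spark.stop()
  }
}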

JDK version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

Hadoop version
Hadoop 2.6.5
Subversion https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997
Compiled by sjlee on 2016-10-02T23:43Z
Compiled with protoc 2.5.0
From source with checksum f05c9fa095a395faa9db9f7ba5d754
This command was run using /utxt/hadoop-2.6.5/share/hadoop/common/hadoop-common-2.6.5.jar

Scala version
Scala code runner version 2.10.5 -- Copyright 2002-2013, LAMP/EPFL

Spark version
spark-2.4.0-bin-hadoop2.6

2 Environment Variables

#hadoop setting
export HADOOP_HOME=/utxt/hadoop-2.6.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH


#SPARK setting
export SPARK_HOME=/utxt/spark-2.4.0-bin-hadoop2.6
export PATH=$SPARK_HOME/bin:$SPARK_HOME/sbin:$PATH

#SCALA setting
export SCALA_HOME=/utxt/scala-2.10.5
export PATH=$SCALA_HOME/bin:$PATH


#java settings
#export PATH
export JAVA_HOME=/u01/app/software/jdk1.8.0_144
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

 

3 Spark Configuration

In /utxt/spark-2.4.0-bin-hadoop2.6/conf, add the following lines to spark-env.sh:
export SCALA_HOME=/utxt/scala-2.10.5
export SPARK_MASTER_IP=gc64
export SPARK_WORKER_MEMORY=1500m
export JAVA_HOME=/u01/app/software/jdk1.8.0_144

Add the following line to slaves:
gc64

 

4 Starting Spark

Start the master:
start-master.sh

Open the following URL in a browser:
http://gc64:8080/

Start the workers:
start-slaves.sh spark://gc64:7077

Start spark-shell:
spark-shell --master spark://gc64:7077

5 Running the Examples

spark-shell word count (start Hadoop first):
val file=sc.textFile("hdfs://gc64:9000/user/sms/test/test.txt")
val rdd = file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
rdd.collect()
rdd.foreach(println)

Jar submission test:
spark-submit --class JavaWordCount --executor-memory 1G --total-executor-cores 2 /utxt/test/spark-0.0.1.jar hdfs://gc64:9000/user/sms/test/test.txt

Java WordCount code

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import scala.Tuple2;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.SparkSession;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public final class JavaWordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {

        if (args.length < 1) {
            System.err.println("Usage: JavaWordCount <file>");
            System.exit(1);
        }

        SparkSession spark = SparkSession
                .builder()
                .appName("JavaWordCount")
                .getOrCreate();

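        // Read the input file as an RDD of lines, then split each line into words on single spaces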
        JavaRDD<String> lines = spark.read().textFile(args[0]).javaRDD();
        JavaRDD<String> words = lines.flatMap(s -> Arrays.asList(SPACE.split(s)).iterator());
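        // Pair each word with a count of 1, then sum the counts for each distinct word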
        JavaPairRDD<String, Integer> ones = words.mapToPair(s -> new Tuple2<>(s, 1));
        JavaPairRDD<String, Integer> counts = ones.reduceByKey((i1, i2) -> i1 + i2);
        List<Tuple2<String, Integer>> output = counts.collect();

        for (Tuple2<?,?> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }
        spark.stop();
    }
}

Scala logistic regression code

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// scalastyle:off println
package org.apache.spark.examples

import java.util.Random

import scala.math.exp

import breeze.linalg.{DenseVector, Vector}

import org.apache.spark.sql.SparkSession

/**
 * Logistic regression based classification.
 * Usage: SparkLR [partitions]
 *
 * This is an example implementation for learning how to use Spark. For more conventional use,
 * please refer to org.apache.spark.ml.classification.LogisticRegression.
 */
object SparkLR {
  val N = 10000  // Number of data points
  val D = 10   // Number of dimensions
  val R = 0.7  // Scaling factor
  val ITERATIONS = 5
  val rand = new Random(42)

  case class DataPoint(x: Vector[Double], y: Double)

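  // Generate N labelled points: labels alternate between -1 and +1, with Gaussian features shifted by y * R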
  def generateData: Array[DataPoint] = {
    def generatePoint(i: Int): DataPoint = {
      val y = if (i % 2 == 0) -1 else 1
      val x = DenseVector.fill(D) {rand.nextGaussian + y * R}
      DataPoint(x, y)
    }
    Array.tabulate(N)(generatePoint)
  }

  def showWarning() {
    System.err.println(
      """WARN: This is a naive implementation of Logistic Regression and is given as an example!
        |Please use org.apache.spark.ml.classification.LogisticRegression
        |for more conventional use.
      """.stripMargin)
  }

  def main(args: Array[String]) {

    showWarning()

    val spark = SparkSession
      .builder
      .appName("SparkLR")
      .getOrCreate()

    val numSlices = if (args.length > 0) args(0).toInt else 2
    val points = spark.sparkContext.parallelize(generateData, numSlices).cache()

    // Initialize w to a random value
    val w = DenseVector.fill(D) {2 * rand.nextDouble - 1}
    println(s"Initial w: $w")

    for (i <- 1 to ITERATIONS) {
      println(s"On iteration $i")
      val gradient = points.map { p =>
        p.x * (1 / (1 + exp(-p.y * (w.dot(p.x)))) - 1) * p.y
      }.reduce(_ + _)
      w -= gradient
    }

    println(s"Final w: $w")

    spark.stop()
  }
}
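
If the bundled Spark examples are used rather than a custom jar, this class can typically be launched with the distribution's run-example script, e.g. run-example SparkLR 2 (the optional argument is the number of partitions).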

 

For other examples, see spark-2.4.0-bin-hadoop2.6/examples/src/main

 

6 Common Problems

Failed to initialize mapreduce.shuffle
The default value of the yarn.nodemanager.aux-services property is "mapreduce.shuffle".
Solution
Change the value of yarn.nodemanager.aux-services to "mapreduce_shuffle".
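
The property lives in yarn-site.xml under $HADOOP_CONF_DIR; a minimal sketch of the corrected setting is:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>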

The complete startup sequence (Hadoop first, then Spark) is:
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh  start historyserver
start-master.sh
start-slaves.sh spark://gc64:7077  
start-history-server.sh 

 

7 References

[1] Setting up a single-node Spark cluster (搭建Spark的單機版集羣): http://www.javashuo.com/article/p-yskoupgp-m.html
[2] http://spark.apache.org/

[3]  https://blog.csdn.net/snail_bing/article/details/82905539
