Installing Spark in standalone mode (spark-1.6.1-bin-hadoop2.6.tgz) (master, slave1, and slave2)

 Earlier posts

 An overview of Spark run modes

Spark standalone: introduction and running wordcount (master, slave1, and slave2)

Before we begin, know what each file in Spark's conf directory does:

  (1) spark-env.sh is the environment-variable configuration file

  (2) spark-defaults.conf holds the default properties for submitted jobs

  (3) slaves is the worker-node configuration file

  (4) metrics.properties configures monitoring

  (5) log4j.properties configures logging

  (6) fairscheduler.xml configures fair-scheduler pools

  (7) docker.properties holds Docker-related settings

  (8) My Spark standalone installation here uses three machines: master, slave1, and slave2.

  (9) A Spark standalone installation does not actually require Hadoop. (I did not install Hadoop here; among the posts I have seen, some authors install it and some do not.)

  (10) For coordination, ZooKeeper is also installed (managing master, slave1, and slave2).

 First, a word about the setup this post uses for the Spark standalone installation.

My disk partitioning is as follows, identical on every machine.

 How to disable the firewall

  I won't go into detail here; please see:

A summary of fixes for the Hadoop 50070 web UI being inaccessible
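For CentOS 6 the linked post boils down to two commands; a minimal sketch, run as root on each of master, slave1, and slave2:

```shell
service iptables stop        # stop the firewall immediately
chkconfig iptables off       # keep it disabled across reboots
service iptables status      # should now report that iptables is not running
```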

How to configure a static IP and networking

  I won't go into detail here; my settings are below. Please see:

Setting a static IP on CentOS 6.5 (works for both NAT and bridged networking)

# master (192.168.80.10): /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:A9:45:18
TYPE=Ethernet
UUID=50fc177a-f282-4c83-bfbc-cb0f00b92507
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static

DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"

IPADDR=192.168.80.10
BCAST=192.168.80.255
GATEWAY=192.168.80.2
NETMASK=255.255.255.0

DNS1=192.168.80.2
DNS2=8.8.8.8

# slave1 (192.168.80.11): /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:18:ED:4A
TYPE=Ethernet
UUID=b5d059e4-3b92-41ef-889b-68f2f5684fac
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static

DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
IPADDR=192.168.80.11
BCAST=192.168.80.255
GATEWAY=192.168.80.2
NETMASK=255.255.255.0

DNS1=192.168.80.2
DNS2=8.8.8.8

# slave2 (192.168.80.12): /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:8B:DE:B0
TYPE=Ethernet
UUID=1ba7be29-2c80-4875-8c11-1ed2a47c0a67
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static

DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
IPADDR=192.168.80.12
BCAST=192.168.80.255
GATEWAY=192.168.80.2
NETMASK=255.255.255.0

DNS1=192.168.80.2
DNS2=8.8.8.8
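After editing ifcfg-eth0 on each machine, the settings take effect once the network service is restarted; a quick way to verify on CentOS 6:

```shell
service network restart      # re-read the ifcfg-eth0 settings
ip addr show eth0            # the configured address should appear, e.g. 192.168.80.10/24
ping -c 3 192.168.80.2       # the gateway should answer
```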

Creating the user group and user

  I won't go into detail here; my user is spark. Please see:

Creating and deleting user groups, users, and user passwords (works on CentOS and Ubuntu)
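The user setup in the linked post comes down to a few commands; a sketch, run as root on every node (the spark user name matches the rest of this post):

```shell
groupadd spark               # the user group
useradd -g spark spark       # the user, placed in that group
passwd spark                 # set its password interactively
id spark                     # verify: uid and gid should both mention spark
```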

Installing SSH, passwordless login within and between machines, and time synchronization

  I won't go into detail here; please see the posts below. From hard experience with this step, I strongly recommend taking a VM snapshot first; otherwise it is very easy to make a mistake!

  Within each machine: master with master, slave1 with slave1, slave2 with slave2.

  Between machines: master with slave1, master with slave2, and slave1 with slave2.

Cluster setup with hadoop-2.6.0.tar.gz + spark-1.5.2-bin-hadoop2.6.tgz (works for both 3-node and 5-node clusters)

Cluster setup with hadoop-2.6.0.tar.gz (5 nodes)
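For reference, passwordless SSH for the spark user can be sketched like this; run it on each of master, slave1, and slave2 in turn, so every machine trusts itself and the other two:

```shell
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # key pair with an empty passphrase
ssh-copy-id spark@master                   # the machine with itself, too
ssh-copy-id spark@slave1
ssh-copy-id spark@slave2
ssh slave1 date                            # should log in and print the date with no password prompt
```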

 How to uninstall the bundled OpenJDK and install the Oracle JDK

  I won't go into detail here; I use jdk-8u60-linux-x64.tar.gz. Please see:

  My JDK is installed under /usr/local/jdk; remember to assign the ownership: chown -R spark:spark jdk

Uninstalling OpenJDK and installing Sun's JDK on CentOS 6.5, with environment-variable configuration

#java
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
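After appending the block above to /etc/profile (or the spark user's ~/.bashrc), re-source it and check the installation:

```shell
source /etc/profile
echo $JAVA_HOME              # /usr/local/jdk/jdk1.8.0_60
java -version                # should report version 1.8.0_60
```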

 How to install Scala

  I won't go into detail here; I use scala-2.10.5.tgz. Please see:

  My Scala is installed under /usr/local/scala; remember to assign the ownership: chown -R spark:spark scala

Cluster setup with hadoop-2.6.0.tar.gz + spark-1.6.1-bin-hadoop2.6.tgz (single node) (CentOS)

#scala
export SCALA_HOME=/usr/local/scala/scala-2.10.5
export PATH=$PATH:$SCALA_HOME/bin
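Again, after sourcing the profile, a quick sanity check:

```shell
source /etc/profile
scala -version               # should report Scala 2.10.5
```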

 How to install Spark

  I won't go into detail here; please see the post below.

  My Spark is installed under /usr/local/spark/; remember to assign the ownership: chown -R spark:spark spark

    Only follow the post below for the installation itself; for the configuration, see the walkthrough of the Spark standalone configuration files later in this post.

Cluster setup with hadoop-2.6.0.tar.gz + spark-1.6.1-bin-hadoop2.6.tgz (single node) (CentOS)

#spark
export SPARK_HOME=/usr/local/spark/spark-1.6.1-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
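With SPARK_HOME on the PATH, you can sanity-check the unpacked distribution before configuring anything:

```shell
source /etc/profile
spark-submit --version       # should print version 1.6.1
```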

Installing ZooKeeper

  I won't go into detail here; please see:

A 3-node cluster setup with hadoop-2.6.0-cdh5.4.5.tar.gz (CDH), including a ZooKeeper cluster

 And for how to configure ZooKeeper in Spark afterwards:

Spark standalone: introduction and running wordcount (master, slave1, and slave2)
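For completeness, wiring standalone master recovery to ZooKeeper is done in spark-env.sh via SPARK_DAEMON_JAVA_OPTS. A sketch, assuming the ZooKeeper ensemble runs on all three nodes at the default port 2181 (the znode path /spark here is my own choice, not a requirement):

```shell
# In spark-env.sh on every node:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=master:2181,slave1:2181,slave2:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```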

Now, let me walk you through the official documentation:

http://spark.apache.org/docs/latest

http://spark.apache.org/docs/latest/spark-standalone.html

http://spark.apache.org/docs/latest/spark-standalone.html#starting-a-cluster-manually

Spark standalone deployment configuration — starting the cluster via scripts

Modify the following files:

● slaves — specifies which nodes run workers.

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# A Spark Worker will be started on each of the machines listed below.
slave1
slave2

spark-defaults.conf — the default configuration used when submitting Spark jobs

#
# (Apache license header omitted; identical to the one in slaves above)
#

# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.

# Example:
# spark.master                     spark://master:7077
# spark.eventLog.enabled           true
# spark.eventLog.dir               hdfs://namenode:8021/directory
# spark.serializer                 org.apache.spark.serializer.KryoSerializer
# spark.driver.memory              5g
# spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"

  You can pin these properties down in this file so you never need to pass them on the command line again. Of course, you can also leave it unconfigured. (I usually don't touch this file.)

spark-defaults.conf (optional: the same properties can also be passed to spark-submit, and leaving the file alone avoids fixing them permanently; I usually don't touch it)

spark.master                      spark://master:7077
spark.eventLog.enabled            true
spark.eventLog.dir                hdfs://master:9000/sparkHistoryLogs
spark.eventLog.compress           true
spark.history.fs.update.interval  5
spark.history.ui.port             7777
spark.history.fs.logDirectory     hdfs://master:9000/sparkHistoryLogs
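Instead of fixing these in spark-defaults.conf, the same properties can be passed per job on the spark-submit command line, which is why I leave the file untouched. A sketch using the bundled SparkPi example (the examples jar name may differ slightly in your distribution):

```shell
spark-submit \
  --master spark://master:7077 \
  --conf spark.eventLog.enabled=true \
  --conf spark.eventLog.dir=hdfs://master:9000/sparkHistoryLogs \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
```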

spark-env.sh — Spark's environment variables

#!/usr/bin/env bash

#
# (Apache license header omitted; identical to the one in slaves above)
#

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append

# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of executors to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: 'default')
# - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
# - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.

# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR      Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR       Where log files are stored. (Default: ${SPARK_HOME}/logs)
# - SPARK_PID_DIR       Where the pid file is stored. (Default: /tmp)
# - SPARK_IDENT_STRING  A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)



export JAVA_HOME=/usr/local/jdk/jdk1.8.0_60
export SCALA_HOME=/usr/local/scala/scala-2.10.5

export SPARK_MASTER_IP=192.168.80.10
export SPARK_WORKER_MEMORY=1g          # the official docs use 1g
# export SPARK_MASTER_WEBUI_PORT=8888  # you can change the web UI port here; I won't demonstrate that

Note: SPARK_MASTER_PORT defaults to 7077, and SPARK_MASTER_WEBUI_PORT defaults to 8080.

   As I said, this post is specifically about installing Spark in standalone mode, which can run without Hadoop, so there is no Hadoop to configure here.

If you see HADOOP_HOME, HADOOP_CONF_DIR, and the like being configured elsewhere, that is the Spark-on-YARN installation. (Take note!)

Start the cluster on the node that will act as master: sbin/start-all.sh
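A sketch of starting and verifying the cluster, run as the spark user on master:

```shell
$SPARK_HOME/sbin/start-all.sh   # starts a Master here and a Worker on every host listed in slaves
jps                             # master should show a Master process; slave1/slave2 a Worker
# then open the master web UI: http://192.168.80.10:8080
```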