========================================================================================
1. Basic Environment
========================================================================================
I. Server Layout
10.217.145.244    primary NameNode
10.217.145.245    secondary NameNode
10.217.145.246    DataNode 1
10.217.145.247    DataNode 2
10.217.145.248    DataNode 3
--------------------------------------------------------------------------------------------------------------------------------------------
II. /etc/hosts Setup
Add the following entries to the /etc/hosts file on every server:
10.217.145.244    namenode1
10.217.145.245    namenode2
10.217.145.246    datanode1
10.217.145.247    datanode2
10.217.145.248    datanode3
-------------------------------------------------------------------------------------------------------------------------------------------
III. Passwordless SSH Login
See the following article for reference:
http://blog.csdn.net/codepeak/article/details/14447627
......
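As a rough sketch (assuming the cluster is administered as root, matching the # prompts used throughout this article), passwordless login from the primary NameNode to every node can be set up roughly like this:
# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    ## generate a key pair without a passphrase
# for host in namenode1 namenode2 datanode1 datanode2 datanode3; do ssh-copy-id root@$host; done    ## push the public key to each node
# ssh datanode1 hostname    ## verify: should log in without prompting for a password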
========================================================================================
2. Hadoop 2.2.0 Build and Installation [the official binary release is 32-bit; it must be recompiled for 64-bit environments]
========================================================================================
I. JDK Installation
http://download.oracle.com/otn-pub/java/jdk/7u45-b18/jdk-7u45-linux-x64.tar.gz
# tar xvzf jdk-7u45-linux-x64.tar.gz -C /usr/local
# cd /usr/local
# ln -s jdk1.7.0_45 jdk
# vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin
# source /etc/profile
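A quick sanity check (an added step, not from the original article) that the new JDK is the one found on PATH:
# java -version    ## should report 1.7.0_45
# which java       ## should resolve to /usr/local/jdk/bin/java, assuming no other JDK appears earlier in PATH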
------------------------------------------------------------------------------------------------------------------------------------------
II. Maven Installation
http://mirror.bit.edu.cn/apache/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.tar.gz
# tar xvzf apache-maven-3.1.1-bin.tar.gz -C /usr/local
# cd /usr/local
# ln -s apache-maven-3.1.1 maven
# vim /etc/profile
export MAVEN_HOME=/usr/local/maven
export PATH=$PATH:$MAVEN_HOME/bin
# source /etc/profile
# mvn -v
------------------------------------------------------------------------------------------------------------------------------------------
III. Protobuf Installation
https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
# tar xvzf protobuf-2.5.0.tar.gz
# cd protobuf-2.5.0
# ./configure --prefix=/usr/local/protobuf
# make && make install
# vim /etc/profile
export PROTO_HOME=/usr/local/protobuf
export PATH=$PATH:$PROTO_HOME/bin
# source /etc/profile
# vim /etc/ld.so.conf
/usr/local/protobuf/lib
# /sbin/ldconfig
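A quick check (an added verification step) that the protoc binary the Hadoop build will use is the freshly installed one:
# protoc --version    ## should print "libprotoc 2.5.0"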
------------------------------------------------------------------------------------------------------------------------------------------
IV. Other Dependency Libraries (CMake, ncurses, OpenSSL)
http://www.cmake.org/files/v2.8/cmake-2.8.12.1.tar.gz
http://ftp.gnu.org/pub/gnu/ncurses/ncurses-5.9.tar.gz
http://www.openssl.org/source/openssl-1.0.1e.tar.gz
# tar xvzf cmake-2.8.12.1.tar.gz
# cd cmake-2.8.12.1
# ./bootstrap --prefix=/usr/local
# gmake && gmake install
# tar xvzf ncurses-5.9.tar.gz
# cd ncurses-5.9
# ./configure --prefix=/usr/local
# make && make install
# tar xvzf openssl-1.0.1e.tar.gz
# cd openssl-1.0.1e
# ./config shared --prefix=/usr/local
# make && make install
# /sbin/ldconfig
------------------------------------------------------------------------------------------------------------------------------------------
V. Building Hadoop
http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz
(1) Maven mirror setup [add inside the <mirrors></mirrors> element]
# vim /usr/local/maven/conf/settings.xml
<mirror>
    <id>nexus-osc</id>
    <mirrorOf>*</mirrorOf>
    <name>Nexusosc</name>
    <url>http://maven.oschina.net/content/groups/public/</url>
</mirror>
(2) Build Hadoop
# tar xvzf hadoop-2.2.0-src.tar.gz
# cd hadoop-2.2.0-src
# mvn clean install -DskipTests
# mvn package -Pdist,native -DskipTests -Dtar
## After a successful build, the binary distribution is located at:
hadoop-dist/target/hadoop-2.2.0
# cp -a hadoop-dist/target/hadoop-2.2.0 /usr/local
# cd /usr/local
# ln -s hadoop-2.2.0 hadoop
[Note: the build may fail midway; it sometimes takes several attempts to complete]
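To confirm the rebuilt native library is really 64-bit (an added check, not in the original article):
# file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0    ## the output should mention "ELF 64-bit"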
========================================================================================
3. Hadoop YARN Distributed Cluster Configuration [Note: apply the same configuration on every node]
========================================================================================
I. Environment Variables
# vim /etc/profile
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_PID_DIR=/data/hadoop/pids
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# source /etc/profile
------------------------------------------------------------------------------------------------------------------------------------------
II. Create Required Directories
mkdir -p /data/hadoop/{pids,storage}
mkdir -p /data/hadoop/storage/{hdfs,tmp}
mkdir -p /data/hadoop/storage/hdfs/{name,data}
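If the daemons will run as a dedicated hadoop user rather than root (the hadoop.proxyuser.hadoop.* entries in core-site.xml below suggest such a user exists), the storage directories must also be owned by that user. This is an assumption, so adjust the user and group to your actual setup:
# chown -R hadoop:hadoop /data/hadoop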
------------------------------------------------------------------------------------------------------------------------------------------
III. Configure core-site.xml
# vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode1:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/data/hadoop/storage/tmp</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.native.lib</name>
        <value>true</value>
    </property>
</configuration>
------------------------------------------------------------------------------------------------------------------------------------------
IV. Configure hdfs-site.xml
# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>namenode2:9000</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/hadoop/storage/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/hadoop/storage/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
------------------------------------------------------------------------------------------------------------------------------------------
V. Configure mapred-site.xml
# vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>namenode1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>namenode1:19888</value>
    </property>
</configuration>
------------------------------------------------------------------------------------------------------------------------------------------
VI. Configure yarn-site.xml
# vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode1:80</value>
    </property>
</configuration>
------------------------------------------------------------------------------------------------------------------------------------------
VII. Configure hadoop-env.sh, mapred-env.sh, and yarn-env.sh [add at the top of each file]
File paths:
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
/usr/local/hadoop/etc/hadoop/mapred-env.sh
/usr/local/hadoop/etc/hadoop/yarn-env.sh
Content to add:
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_PID_DIR=/data/hadoop/pids
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
------------------------------------------------------------------------------------------------------------------------------------------
VIII. DataNode Configuration
# vim /usr/local/hadoop/etc/hadoop/slaves
datanode1
datanode2
datanode3
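Since every node must carry identical configuration, one way to keep them in sync (a sketch that relies on the passwordless root SSH from section 1 and assumes /usr/local/hadoop already exists on each node) is to push the configuration directory from namenode1 to the other nodes:
# for host in namenode2 datanode1 datanode2 datanode3; do scp -r /usr/local/hadoop/etc/hadoop root@$host:/usr/local/hadoop/etc/; done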
------------------------------------------------------------------------------------------------------------------------------------------
IX. Basic Hadoop Tests
# cd /usr/local/hadoop
## When starting the cluster for the first time, run the following [on the primary NameNode]
# hdfs namenode -format
# sbin/start-dfs.sh
## Check that the processes started correctly
# jps
Primary NameNode:
Secondary NameNode:
DataNodes:
## HDFS and MapReduce tests
# hdfs dfs -mkdir -p /user/rocketzhang
# hdfs dfs -put bin/hdfs.cmd /user/rocketzhang
# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /user/rocketzhang /user/out
# hdfs dfs -ls /user/out
## View HDFS status
# hdfs dfsadmin -report
# hdfs fsck / -files -blocks
## Routine cluster maintenance
# sbin/start-all.sh
# sbin/stop-all.sh
## Monitoring web UI
http://10.217.145.244:80/
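mapred-site.xml above points the job history service at namenode1:10020/19888, but the history server is not launched by start-dfs.sh or start-all.sh. If the history web UI is needed, start it separately on namenode1 (an added step, not in the original article):
# sbin/mr-jobhistory-daemon.sh start historyserver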
========================================================================================
4. Spark Distributed Cluster Configuration [Note: apply the same configuration on every node]
========================================================================================
I. Scala Installation
http://www.scala-lang.org/files/archive/scala-2.9.3.tgz
# tar xvzf scala-2.9.3.tgz -C /usr/local
# cd /usr/local
# ln -s scala-2.9.3 scala
# vim /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
# source /etc/profile
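A quick check (an added step) that the Scala on PATH is the expected release:
# scala -version    ## should report version 2.9.3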
------------------------------------------------------------------------------------------------------------------------------------------
II. Spark Installation
http://d3kbcqa49mib13.cloudfront.net/spark-0.8.1-incubating-bin-hadoop2.tgz
# tar xvzf spark-0.8.1-incubating-bin-hadoop2.tgz -C /usr/local
# cd /usr/local
# ln -s spark-0.8.1-incubating-bin-hadoop2 spark
# vim /etc/profile
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
# source /etc/profile
# cd /usr/local/spark/conf
# mv spark-env.sh.template spark-env.sh
# vim spark-env.sh
export JAVA_HOME=/usr/local/jdk
export SCALA_HOME=/usr/local/scala
export HADOOP_HOME=/usr/local/hadoop
## List of worker node hostnames
# vim slaves
datanode1
datanode2
datanode3
# mv log4j.properties.template log4j.properties
## Run on the master node
# cd /usr/local/spark && ./bin/start-all.sh
## Check that the processes started ["Master" should appear on the master node, "Worker" on each slave node]
# jps
Master node:
Slave nodes:
------------------------------------------------------------------------------------------------------------------------------------------
III. Tests
## Monitoring web UI
http://10.217.145.244:8080/
## First change to the /usr/local/spark directory
(1) Local mode
# ./run-example org.apache.spark.examples.SparkPi local
(2) Standalone cluster mode
# ./run-example org.apache.spark.examples.SparkPi spark://namenode1:7077
# ./run-example org.apache.spark.examples.SparkLR spark://namenode1:7077
# ./run-example org.apache.spark.examples.SparkKMeans spark://namenode1:7077 file:/usr/local/spark/kmeans_data.txt 2 1
(3) Cluster mode with HDFS
# hadoop fs -put README.md .
# MASTER=spark://namenode1:7077 ./spark-shell
scala> val file = sc.textFile("hdfs://namenode1:9000/user/root/README.md")
scala> val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_+_)
scala> count.collect()
scala> :quit
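Optionally, before `:quit`, the word-count result can also be written back to HDFS instead of only being collected to the driver; the output path below is made up for illustration:
scala> count.saveAsTextFile("hdfs://namenode1:9000/user/root/wordcount_out")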
(4) YARN mode
# SPARK_JAR=./assembly/target/scala-2.9.3/spark-assembly_2.9.3-0.8.1-incubating-hadoop2.2.0.jar \
./spark-class org.apache.spark.deploy.yarn.Client \
--jar examples/target/scala-2.9.3/spark-examples_2.9.3-assembly-0.8.1-incubating.jar \
--class org.apache.spark.examples.SparkPi \
--args yarn-standalone \
--num-workers 3 \
--master-memory 4g \
--worker-memory 2g \
--worker-cores 1
Execution result:
/usr/local/hadoop/logs/userlogs/application_*/container*_000001/stdout
(5) Other example programs
examples/src/main/scala/org/apache/spark/examples/
(6) Troubleshooting [logs on the data nodes]
/data/hadoop/storage/tmp/nodemanager/logs
(7) Tuning
# vim /usr/local/spark/conf/spark-env.sh
export SPARK_WORKER_MEMORY=16g    ## set according to the actual amount of memory on each node
......
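A few other spark-env.sh knobs that are commonly tuned together with worker memory; the values and the scratch path below are placeholders, not recommendations from the original article:
export SPARK_WORKER_CORES=8            ## CPU cores each worker may use
export SPARK_WORKER_INSTANCES=1        ## worker processes per node
export SPARK_JAVA_OPTS="-Dspark.local.dir=/data/spark/tmp"    ## scratch directory for shuffle data (hypothetical path)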
(8) Final directory layout
========================================================================================
5. Shark Data Warehouse [to be added later]
========================================================================================
https://github.com/amplab/shark/releases
This article comes from the blog 「人生理想在於堅持不懈」; please retain this attribution: http://sofar.blog.51cto.com/353572/1352713