2018-07-08: Hadoop Single-Node Pseudo-Distributed Cluster Configuration

1. Installation Media

Download URL: http://archive.apache.org/dist/hadoop/core/

Version: hadoop-2.4.1.tar.gz

2. Installation Steps

1) Extract hadoop-2.4.1.tar.gz

[root@hadoop-server01 hadoop-2.4.1]# tar -xvf hadoop-2.4.1.tar.gz -C /usr/local/apps/

[root@hadoop-server01 hadoop-2.4.1]# pwd

/usr/local/apps/hadoop-2.4.1

[root@hadoop-server01 hadoop-2.4.1]# ll

total 52

drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 bin

drwxr-xr-x. 3 67974 users  4096 Jun 20  2014 etc

drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 include

drwxr-xr-x. 3 67974 users  4096 Jun 20  2014 lib

drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 libexec

-rw-r--r--. 1 67974 users 15458 Jun 20  2014 LICENSE.txt

-rw-r--r--. 1 67974 users   101 Jun 20  2014 NOTICE.txt

-rw-r--r--. 1 67974 users  1366 Jun 20  2014 README.txt

drwxr-xr-x. 2 67974 users  4096 Jun 20  2014 sbin

drwxr-xr-x. 4 67974 users  4096 Jun 20  2014 share

[root@hadoop-server01 hadoop-2.4.1]#
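Optionally, you can export HADOOP_HOME and extend PATH so the hadoop commands can be run from any directory. This is not part of the original steps (the commands below always cd into bin/ or sbin/ first); a minimal sketch, assuming the install path shown above:

[root@hadoop-server01 hadoop-2.4.1]# vi /etc/profile

# optional addition at the end of /etc/profile (not part of the original steps)
export HADOOP_HOME=/usr/local/apps/hadoop-2.4.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

[root@hadoop-server01 hadoop-2.4.1]# source /etc/profile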

2) Modify the configuration files

[root@hadoop-server01 etc]# cd /usr/local/apps/hadoop-2.4.1/etc/hadoop/

-- Edit hadoop-env.sh

[root@hadoop-server01 hadoop]# vi hadoop-env.sh

# The only required environment variable is JAVA_HOME.  All others are

# optional.  When running a distributed configuration it is best to

# set JAVA_HOME in this file, so that it is correctly defined on

# remote nodes.

# The java implementation to use.

export JAVA_HOME=/usr/local/apps/jdk1.7.0_80/

# The jsvc implementation to use. Jsvc is required to run secure datanodes.

#export JSVC_HOME=${JSVC_HOME}
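The JAVA_HOME value above assumes the JDK is installed at /usr/local/apps/jdk1.7.0_80/; adjust it to wherever your JDK actually lives. A quick sanity check that the path is valid:

-- should print the JDK version if JAVA_HOME points at a real JDK
[root@hadoop-server01 hadoop]# /usr/local/apps/jdk1.7.0_80/bin/java -version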

-- Edit core-site.xml

[root@hadoop-server01 hadoop]# vi core-site.xml

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>

<name>fs.defaultFS</name>

<value>hdfs://hadoop-server01:9000/</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>/usr/local/apps/hadoop-2.4.1/tmp/</value>

</property>

</configuration>
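The hadoop.tmp.dir path above does not exist in a fresh extraction. The NameNode format step later creates it, but creating it explicitly up front does no harm; the path below simply mirrors the value configured above:

[root@hadoop-server01 hadoop]# mkdir -p /usr/local/apps/hadoop-2.4.1/tmp/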

-- Edit hdfs-site.xml

[root@hadoop-server01 hadoop]# vi hdfs-site.xml

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

</configuration>

-- Edit mapred-site.xml

[root@hadoop-server01 hadoop]# mv mapred-site.xml.template mapred-site.xml

[root@hadoop-server01 hadoop]# vi mapred-site.xml

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

</configuration>

-- Edit yarn-site.xml

[root@hadoop-server01 hadoop]# vi yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->

<property>

<name>yarn.resourcemanager.hostname</name>

<value>hadoop-server01</value>

</property>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

</configuration>

-- Edit slaves

[root@hadoop-server01 hadoop]# vi slaves

hadoop-server01

hadoop-server02

hadoop-server03


3) Start the services

-- Format the NameNode

[root@hadoop-server01 hadoop]# cd /usr/local/apps/hadoop-2.4.1/bin/

[root@hadoop-server01 bin]# ./hadoop namenode -format

18/06/15 00:44:09 INFO util.GSet: capacity      = 2^15 = 32768 entries

18/06/15 00:44:09 INFO namenode.AclConfigFlag: ACLs enabled? false

18/06/15 00:44:09 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1300855425-192.168.1.201-1529048649163

18/06/15 00:44:09 INFO common.Storage: Storage directory /usr/local/apps/hadoop-2.4.1/tmp/dfs/name has been successfully formatted.

18/06/15 00:44:09 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

18/06/15 00:44:09 INFO util.ExitUtil: Exiting with status 0

18/06/15 00:44:09 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at hadoop-server01/192.168.1.201

************************************************************/
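If the format succeeded (exit status 0 and the "successfully formatted" line above), the NameNode metadata directory now exists under hadoop.tmp.dir. A quick way to confirm; it should contain a VERSION file and an initial fsimage:

[root@hadoop-server01 bin]# ls /usr/local/apps/hadoop-2.4.1/tmp/dfs/name/current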

3.1 Manual startup

(1) Start HDFS

[root@hadoop-server01 sbin]# ./hadoop-daemon.sh start namenode

[root@hadoop-server01 sbin]# ./hadoop-daemon.sh start datanode

[root@hadoop-server01 sbin]# ./hadoop-daemon.sh start secondarynamenode

[root@hadoop-server01 sbin]# jps

28993 Jps

28925 SecondaryNameNode

4295 DataNode

4203 NameNode

-- Access URL (NameNode web UI)

http://192.168.1.201:50070/
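To confirm that HDFS is actually usable, and not just that the daemons are running, a simple smoke test is to create a directory and upload a file:

-- /test is only an example directory name, not part of the original steps
[root@hadoop-server01 sbin]# cd /usr/local/apps/hadoop-2.4.1/bin/

[root@hadoop-server01 bin]# ./hdfs dfs -mkdir -p /test

[root@hadoop-server01 bin]# ./hdfs dfs -put /etc/hosts /test/

[root@hadoop-server01 bin]# ./hdfs dfs -ls /test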

(2) Start YARN

[root@hadoop-server01 sbin]# ./yarn-daemon.sh start resourcemanager

[root@hadoop-server01 sbin]# ./yarn-daemon.sh start nodemanager

[root@hadoop-server01 sbin]# jps

29965 NodeManager

28925 SecondaryNameNode

29062 ResourceManager

4295 DataNode

4203 NameNode
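The ResourceManager web UI should now be reachable at http://192.168.1.201:8088/ (8088 is the default port). To verify that YARN can actually run MapReduce jobs, you can submit the bundled pi example; the jar path is where Hadoop 2.4.1 ships its examples, and the "2 5" arguments are just small test values (2 maps, 5 samples per map):

[root@hadoop-server01 sbin]# cd /usr/local/apps/hadoop-2.4.1/

[root@hadoop-server01 hadoop-2.4.1]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi 2 5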

3.2 Automatic startup via scripts

-- Prerequisite: passwordless SSH login must be configured

[root@hadoop-server01 sbin]# ssh-keygen

[root@hadoop-server01 sbin]# ssh-copy-id hadoop-server01

[root@hadoop-server01 sbin]# ssh hadoop-server01

(1) Start HDFS

[root@hadoop-server01 sbin]# ./start-dfs.sh

[root@hadoop-server01 sbin]# jps

31538 Jps

31423 SecondaryNameNode

31271 DataNode

31152 NameNode

(2) Start YARN

[root@hadoop-server01 sbin]# ./start-yarn.sh

[root@hadoop-server01 sbin]# jps

32009 Jps

31423 SecondaryNameNode

31271 DataNode

31697 NodeManager

31593 ResourceManager

31152 NameNode
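To shut the cluster down again, the matching stop scripts live in the same sbin directory:

[root@hadoop-server01 sbin]# ./stop-yarn.sh

[root@hadoop-server01 sbin]# ./stop-dfs.sh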


Note: All configuration in this document uses hostnames, so the hosts file must be configured first. On non-Windows systems edit /etc/hosts; on Windows edit C:\Windows\System32\drivers\etc\hosts. Entry format: IP    hostname
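A minimal example of such hosts entries, using the hostnames from this walkthrough; 192.168.1.201 is the address seen in the format log above, while .202 and .203 are assumed values to be replaced with your own:

# example entries; adjust the IPs to your environment
192.168.1.201    hadoop-server01
192.168.1.202    hadoop-server02
192.168.1.203    hadoop-server03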
