Installing and Configuring a Hadoop 2.6.0 Cluster

1. Configure the Hadoop environment on node bonnie1

(1) Download hadoop-2.6.0 and unpack it

[root@bonnie1 ~]# wget http://apache.fayea.com/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz -P /usr/local

[root@bonnie1 ~]# cd /usr/local

[root@bonnie1 local]# tar -xvf hadoop-2.6.0.tar.gz

 

(2) Configure environment variables

[root@bonnie1 local]# vi /etc/profile

export JAVA_HOME=/usr/local/jdk1.7.0_79

export HADOOP_HOME=/usr/local/hadoop-2.6.0

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_YARN_HOME=$HADOOP_HOME

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6

export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf:$PATH

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
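
Reload the profile so the new variables take effect in the current shell:

[root@bonnie1 local]# source /etc/profile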

[root@bonnie1 local]# cd hadoop-2.6.0/etc/hadoop/

[root@bonnie1 hadoop]# vi hadoop-env.sh

# Append the following lines

export JAVA_HOME=/usr/local/jdk1.7.0_79

export HADOOP_PREFIX=/usr/local/hadoop-2.6.0

[root@bonnie1 hadoop]# vi core-site.xml

<configuration>

        <property>

                <name>fs.defaultFS</name>

                <value>hdfs://ns1</value>

        </property>

        <property>

                <name>hadoop.tmp.dir</name>

                <value>/usr/local/hadoop-2.6.0/tmp</value>

        </property>

        <property>

                <name>ha.zookeeper.quorum</name>

                <value>bonnie1:2181,bonnie2:2181,bonnie3:2181</value>

        </property>

</configuration>
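
As a quick sanity check, hdfs getconf reads the value back from the local configuration (no daemons need to be running):

[root@bonnie1 hadoop]# hdfs getconf -confKey fs.defaultFS

hdfs://ns1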

[root@bonnie1 hadoop]# vi hdfs-site.xml

<configuration>

        <property>

                <name>dfs.nameservices</name>

                <value>ns1</value>

        </property>

        <property>

                <name>dfs.ha.namenodes.ns1</name>

                <value>nn1,nn2</value>

        </property>

        <property>

                <name>dfs.namenode.rpc-address.ns1.nn1</name>

                <value>bonnie1:9000</value>

        </property>

        <property>

                <name>dfs.namenode.http-address.ns1.nn1</name>

                <value>bonnie1:50070</value>

        </property>

        <property>

                <name>dfs.namenode.rpc-address.ns1.nn2</name>

                <value>bonnie2:9000</value>

        </property>

        <property>

                <name>dfs.namenode.http-address.ns1.nn2</name>

                <value>bonnie2:50070</value>

        </property>

        <property>

                <name>dfs.namenode.shared.edits.dir</name>

                <value>qjournal://bonnie1:8485;bonnie2:8485;bonnie3:8485/ns1</value>

        </property>

        <property>

                <name>dfs.journalnode.edits.dir</name>

                <value>/usr/local/hadoop-2.6.0/journal</value>

        </property>

        <property>

                <name>dfs.ha.automatic-failover.enabled</name>

                <value>true</value>

        </property>

        <property>

                <name>dfs.client.failover.proxy.provider.ns1</name>

                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>

        </property>

 

        <property>

                <name>dfs.ha.fencing.methods</name>

                <value>

                        sshfence

                        shell(/bin/true)

                </value>

        </property>

        <property>

                <name>dfs.ha.fencing.ssh.private-key-files</name>

                <value>/root/.ssh/id_rsa</value>

        </property>

        <property>

                <name>dfs.ha.fencing.ssh.connect-timeout</name>

                <value>30000</value>

        </property>

</configuration>
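
The stock 2.6.0 distribution ships only mapred-site.xml.template, so create the file from the template before editing:

[root@bonnie1 hadoop]# cp mapred-site.xml.template mapred-site.xml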

[root@bonnie1 hadoop]# vi mapred-site.xml

<configuration>

        <property>

                <name>mapreduce.framework.name</name>

                <value>yarn</value>

        </property>

</configuration>

[root@bonnie1 hadoop]# vi yarn-site.xml 

<configuration>

        <property>

                <name>yarn.resourcemanager.hostname</name>

                <value>bonnie3</value>

        </property>

        <property>

                <name>yarn.nodemanager.aux-services</name>

                <value>mapreduce_shuffle</value>

        </property>

</configuration>

[root@bonnie1 hadoop]# vi slaves
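
# one worker host per line; these hosts run the DataNode and NodeManager daemons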

bonnie1

bonnie2

bonnie3

2. Copy the configured hadoop directory to the other nodes

[root@bonnie1 hadoop]# cd /usr/local

[root@bonnie1 local]# scp -r hadoop-2.6.0 bonnie2:/usr/local/
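
Repeat for the third node:

[root@bonnie1 local]# scp -r hadoop-2.6.0 bonnie3:/usr/local/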

3. Cluster initialization

(1) Start ZooKeeper (see the previous post, ZooKeeper Installation and Configuration, for how to install it)

[root@bonnie1 hadoop]# cd /usr/local/zookeeper-3.4.6/bin/

[root@bonnie1 bin]# ./zkServer.sh start

[root@bonnie2 hadoop]# cd /usr/local/zookeeper-3.4.6/bin/

[root@bonnie2 bin]# ./zkServer.sh start

[root@bonnie3 hadoop]# cd /usr/local/zookeeper-3.4.6/bin/

[root@bonnie3 bin]# ./zkServer.sh start

# Check ZooKeeper status

[root@bonnie1 bin]# ./zkServer.sh status

[root@bonnie2 bin]# ./zkServer.sh status 

[root@bonnie3 bin]# ./zkServer.sh status
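
# Typical status output for zookeeper-3.4.6; the Mode line shows the role:

JMX enabled by default

Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg

Mode: follower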

One leader and two followers

(2) Start the JournalNodes

* Run the start command on bonnie1, bonnie2, and bonnie3

[root@bonnie1 ~]# cd /usr/local/hadoop-2.6.0/sbin

[root@bonnie1 sbin]# ./hadoop-daemon.sh start journalnode

[root@bonnie2 sbin]# ./hadoop-daemon.sh start journalnode

[root@bonnie3 sbin]# ./hadoop-daemon.sh start journalnode

* Run the jps command to verify that the JournalNode process is up

[root@bonnie1 sbin]# jps

5335 JournalNode

7303 Jps

3124 QuorumPeerMain

(3) Format HDFS

* Run the format command on node bonnie1

[root@bonnie1 sbin]# hdfs namenode -format

# The following log line indicates success

16/12/04 14:19:38 INFO common.Storage: Storage directory /usr/local/hadoop-2.6.0/tmp/dfs/name has been successfully formatted.

To keep the two HA NameNodes in sync, copy the tmp directory from bonnie1 to bonnie2.

Formatting populates the directory set by hadoop.tmp.dir in core-site.xml (here /usr/local/hadoop-2.6.0/tmp), so copy /usr/local/hadoop-2.6.0/tmp into /usr/local/hadoop-2.6.0/ on bonnie2:

[root@bonnie1 hadoop-2.6.0]# scp -r tmp/ bonnie2:/usr/local/hadoop-2.6.0/
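
Alternatively, with the JournalNodes up and the freshly formatted NameNode on bonnie1 started, the standby can pull the metadata itself:

[root@bonnie2 ~]# hdfs namenode -bootstrapStandby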

(4) Format ZK

[root@bonnie1 hadoop-2.6.0]# hdfs zkfc -formatZK 

The following log line indicates success:

16/12/04 14:20:48 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns1 in ZK.
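
The new znode can be checked from any node with the ZooKeeper CLI:

[root@bonnie1 bin]# ./zkCli.sh -server bonnie1:2181

[zk: bonnie1:2181(CONNECTED) 0] ls /hadoop-ha

[ns1]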

  

4. Start the cluster

(1) Use jps to list the PIDs and kill -9 [pid] to stop all running processes except QuorumPeerMain, for example:
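
# e.g., using the JournalNode PID reported by jps earlier

[root@bonnie1 sbin]# kill -9 5335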

(2) Start HDFS (run on bonnie1)

sbin/start-dfs.sh

(3) Start YARN (run on bonnie3)

sbin/start-yarn.sh
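
# Quick sanity check with jps on bonnie1 (PIDs omitted; yours will differ).

# bonnie3 runs a ResourceManager as well, instead of a DFSZKFailoverController:

[root@bonnie1 ~]# jps

NameNode

DataNode

JournalNode

DFSZKFailoverController

NodeManager

QuorumPeerMain

Jps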

 

 

5. The hadoop-2.6.0 cluster is now configured; verify it in a browser

 

http://10.211.55.21:50070 

NameNode 'bonnie1:9000' (active) 

http://10.211.55.22:50070 

NameNode 'bonnie2:9000' (standby) 

http://10.211.55.23:8088 

ResourceManager (YARN)

 

6. HDFS test: uploading and downloading files

hadoop fs -mkdir -p /tmp/input                     # create a directory on HDFS

hadoop fs -put input1.txt /tmp/input               # upload local file input1.txt to /tmp/input on HDFS

hadoop fs -get /tmp/input/input1.txt input1.txt    # download an HDFS file to the local filesystem

hadoop fs -ls /tmp/output                          # list an HDFS directory

hadoop fs -cat /tmp/output/output1.txt             # print an HDFS file

hadoop fs -rm -r /home/less/hadoop/tmp/output      # delete an HDFS directory

 

# Check HDFS status, e.g. which datanodes exist and the state of each

hdfs dfsadmin -report 

 

hdfs dfsadmin -safemode leave  # leave safe mode

hdfs dfsadmin -safemode enter  # enter safe mode

 

 

7. YARN test: WordCount

vi test.csv

hello tomcat

help yumily

cat bonnie

Upload the test data file

hadoop fs -put test.csv /

Run wordcount

cd hadoop-2.6.0/share/hadoop/mapreduce/

hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount /test.csv /out
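
When the job finishes, the word counts land under /out; the reducer output can be viewed with:

hadoop fs -cat /out/part-r-00000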

 

8. Hadoop startup and shutdown procedure

(1) Startup procedure

Run the start command on bonnie1, bonnie2, and bonnie3

[root@bonnie1 ~]# cd /usr/local/zookeeper-3.4.6/bin/

[root@bonnie1 bin]# ./zkServer.sh start

(2) Check the status of each ZK node

[root@bonnie1 bin]# ./zkServer.sh status

One leader and two followers

(3) Start HDFS (on bonnie1)

[root@bonnie1 hadoop-2.6.0]# sbin/start-dfs.sh

(4) Start YARN (on bonnie3)

[root@bonnie3 hadoop-2.6.0]# sbin/start-yarn.sh 

9. Shutdown procedure

(1) Stop HDFS (on bonnie1)

[root@bonnie1 hadoop-2.6.0]# sbin/stop-dfs.sh

(2) Stop YARN (on bonnie3)

[root@bonnie3 hadoop-2.6.0]# sbin/stop-yarn.sh

(3) Stop ZooKeeper (on bonnie1, bonnie2, and bonnie3)

[root@bonnie1 hadoop-2.6.0]# cd /usr/local/zookeeper-3.4.6/bin/

[root@bonnie1 bin]# ./zkServer.sh stop
