Installing and Configuring a Fully Distributed Hadoop 2.4.1 Cluster on Ubuntu under VMware

1 Installing and Configuring Ubuntu in the VMs
1.1 Install the Ubuntu system
 This step is not covered here; if you are unfamiliar with it, refer to other tutorials.
 
1.2 Cluster layout
    Build a cluster of three machines:
IP user/passwd hostname role System
192.168.174.160 hadoop/hadoop master nn/snn/rm Ubuntu-14.04-32bit
192.168.174.161 hadoop/hadoop slave1 dn/nm Ubuntu-14.04-32bit
192.168.174.162 hadoop/hadoop slave2 dn/nm Ubuntu-14.04-32bit
        nn:    namenode
        snn:  secondary namenode
        dn:    datanode
        rm:    resourcemanager
        nm:    nodemanager
 
1.3 Create the hadoop user
    On each machine I create a user named hadoop with password hadoop, then edit /etc/sudoers (preferably with visudo) and add the line hadoop  ALL=(ALL) ALL to give the account sudo privileges.
root@master:/home/duanwf# useradd --create-home hadoop

root@master:/home/duanwf# passwd hadoop
 
root@master:~# vi /etc/sudoers
# User privilege specification
root ALL=(ALL:ALL) ALL
duanwf ALL=(ALL:ALL) ALL
hadoop ALL=(ALL:ALL) ALL

 

1.4 Set a static IP address on each machine
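  The details are not covered in this post. On Ubuntu 14.04 a static address can be set in /etc/network/interfaces; the sketch below is only an example using the master's address from section 1.2, and the interface name and gateway are assumptions that must be checked against your own VMware network settings:

root@master:~# vi /etc/network/interfaces
# static address for the master; use .161 / .162 on the slaves
# "eth0" is an assumption - check the real interface name with ifconfig -a
auto eth0
iface eth0 inet static
    address 192.168.174.160
    netmask 255.255.255.0
    # gateway is a guess for a VMware NAT network; check the Virtual Network Editor
    gateway 192.168.174.2

root@master:~# ifdown eth0 && ifup eth0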
 
1.5 Set the hostname of each host
  Open the /etc/hostname file:
root@master:~# vi /etc/hostname 
master

  Change the machine name in /etc/hostname to whatever name you want; the change only takes effect after a reboot.

 
1.6 Add the hostnames above to /etc/hosts on all three machines (if the 127.0.1.1 line still maps to the old hostname, it is usually best to update or remove it so the hostname does not resolve to the loopback address)
root@master:~# vi /etc/hosts 
127.0.0.1 localhost 
127.0.1.1 ubuntu 

# The following lines are desirable for IPv6 capable hosts 
::1 ip6-localhost ip6-loopback 
fe00::0 ip6-localnet 
ff00::0 ip6-mcastprefix 
ff02::1 ip6-allnodes 
ff02::2 ip6-allrouters 

192.168.174.160 master
192.168.174.161 slave1
192.168.174.162 slave2
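  A quick optional check: after saving /etc/hosts, the hostnames should resolve and respond from master:

root@master:~# ping -c 1 slave1
root@master:~# ping -c 1 slave2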

   

1.7 Set up passwordless SSH login
  Install SSH:
duanwf@master:~$ sudo apt-get install ssh 

 

  Check whether SSH was installed successfully, and its version:

duanwf@master:~$ ssh -V 
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014

 

  After installation, a hidden .ssh directory is created under ~ (the current user's home directory, here /home/hadoop); hidden files can be listed with ls -a. If the directory does not exist, create it yourself (mkdir .ssh).

duanwf@master:~$ cd /home/hadoop 

duanwf@master:~$ ls -a

duanwf@master:~$ mkdir .ssh

 

  Enter the .ssh directory:

duanwf@master:~$ cd .ssh

 

  Generate the key pair:

duanwf@master:~/.ssh$ ssh-keygen -t rsa 
Generating public/private rsa key pair. 
Enter file in which to save the key (/home/duanwf/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/duanwf/.ssh/id_rsa. 
Your public key has been saved in /home/duanwf/.ssh/id_rsa.pub. 
The key fingerprint is: 
49:ad:12:42:36:15:c8:f6:42:08:c1:d9:a6:04:27:a1 duanwf@master 
The key's randomart image is: 
+--[ RSA 2048]----+ 
|O++o+oo. | 
|.*.==. . | 
|E oo... . . | 
| . ...o o | 
| .. S | 
| . | 
| | 
| | 
| | 
+-----------------+

 

  Append id_rsa.pub to the authorized keys:

duanwf@master:~/.ssh$ cat id_rsa.pub >> authorized_keys
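  If passwordless login still prompts for a password later, the usual cause is that sshd rejects the key because ~/.ssh or authorized_keys is too permissive; tightening the permissions is a safe extra step:

duanwf@master:~/.ssh$ chmod 700 ~/.ssh
duanwf@master:~/.ssh$ chmod 600 ~/.ssh/authorized_keys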

 

  Restart the SSH service so the change takes effect. (As the output below shows, the restart is rejected when run as a normal user; run it with sudo instead.)

duanwf@master:~/.ssh$ service ssh restart 
stop: Rejected send message, 1 matched rules; type="method_call", sender=":1.109" (uid=1000 pid=8874 comm="stop ssh ") interface="com.ubuntu.Upstart0_6.Job" member="Stop" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init") 
start: Rejected send message, 1 matched rules; type="method_call", sender=":1.110" (uid=1000 pid=8868 comm="start ssh ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")

 

   Append master's authorized_keys to the authorized_keys on slave1 and slave2:

  Complete this step only after all hosts have been set up.

   Here there is only one master; if there are multiple namenodes or resourcemanagers, passwordless login must be set up from every master to all of the remaining nodes.

duanwf@master:~/.ssh$ scp authorized_keys duanwf@slave1:~/.ssh/authorized_keys_from_master 
The authenticity of host 'slave1 (192.168.174.161)' can't be established. 
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42. 
Are you sure you want to continue connecting (yes/no)? yes 
Warning: Permanently added 'slave1,192.168.174.161' (ECDSA) to the list of known hosts. 
duanwf@slave1's password: 
authorized_keys 100% 395 0.4KB/s 00:00
 
duanwf@master:~/.ssh$ scp authorized_keys duanwf@slave2:~/.ssh/authorized_keys_from_master 
The authenticity of host 'slave2 (192.168.174.162)' can't be established. 
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42. 
Are you sure you want to continue connecting (yes/no)? yes 
Warning: Permanently added 'slave2,192.168.174.162' (ECDSA) to the list of known hosts. 
duanwf@slave2's password: 
authorized_keys 100% 395 0.4KB/s 00:00

 

  Enter the .ssh directory on slave1 and slave2:

duanwf@slave1:~$ cd .ssh
duanwf@slave1:~/.ssh$ ssh -V 
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014
duanwf@slave1:~/.ssh$ cat authorized_keys_from_master >> authorized_keys 
duanwf@slave1:~/.ssh$ ls 
authorized_keys authorized_keys_from_master
 
duanwf@slave2:~/.ssh$ ssh -V 
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014 
duanwf@slave2:~/.ssh$ cat authorized_keys_from_master >> authorized_keys 
duanwf@slave2:~/.ssh$ ls 
authorized_keys authorized_keys_from_master

 

  Verify passwordless SSH login:

duanwf@master:~/.ssh$ ssh slave1 
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic i686) 

* Documentation: https://help.ubuntu.com/ 

208 packages can be updated. 
110 updates are security updates. 

Last login: Tue Oct 7 18:25:31 2014 from 192.168.174.1

 

  The output above shows that the setup succeeded. On the first login you are asked whether to continue connecting; type yes to proceed.

  Strictly speaking, passwordless login is not required to install Hadoop, but without it you have to type a password for every datanode each time you start the cluster. Since Hadoop clusters commonly have dozens or hundreds of machines, passwordless SSH is almost always configured.

 

2 Installing and Configuring the JDK
2.1 Download the JDK
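Download jdk-7u51-linux-i586.tar.gz (the 32-bit Linux package) from Oracle's Java SE 7 download/archive page. Since the cluster runs 32-bit Ubuntu, the i586 package is the right one; you can confirm the VM architecture first:

hadoop@master:~$ uname -m 
i686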
 
2.2 Extract the downloaded JDK into /opt
hadoop@master:/home/duanwf/Installpackage$ sudo tar zxvf jdk-7u51-linux-i586.tar.gz -C /opt/

 

2.3 Configure the JDK environment variables

Run the following command in a terminal to edit /etc/profile:

hadoop@master:~$ sudo vi /etc/profile 

 

Append the following at the end of the file:

export JAVA_HOME=/opt/jdk1.7.0_51 
export JRE_HOME=/opt/jdk1.7.0_51/jre 
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH 
export PATH=$JAVA_HOME/bin:$PATH

 

Apply the changes by running:

hadoop@master:~$ source /etc/profile

 

Verify that the Java environment variables are configured correctly:

hadoop@master:~$ java -version 
java version "1.7.0_51" 
Java(TM) SE Runtime Environment (build 1.7.0_51-b13) 
Java HotSpot(TM) Client VM (build 24.51-b03, mixed mode)

 

3 Firewall Configuration
To disable the firewall on Ubuntu, run:
hadoop@master:~$ sudo ufw disable
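You can confirm the firewall is off with an optional status check, which should report "Status: inactive":

hadoop@master:~$ sudo ufw status 
Status: inactive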

 

4 Installing and Configuring Hadoop 2.4.1
4.1 Build hadoop-2.4.1-src.tar.gz from source
On a 64-bit operating system the source package needs to be recompiled, because the native libraries bundled with the binary release are 32-bit. The 32-bit VMs used here can use the binary release directly.
 
4.2 Extract the hadoop-2.4.1.tar.gz package
hadoop@master:/home/duanwf/Installpackage$ sudo tar zxvf hadoop-2.4.1.tar.gz -C /opt/

 

4.3 Configure the Hadoop environment variables
Edit /etc/profile and add the following (the paths below assume the Hadoop directory ends up in /home/hadoop/hadoop-2.4.1; adjust them if you keep it under /opt as extracted in 4.2):
hadoop@master:~$ sudo vi /etc/profile
export HADOOP_DEV_HOME=/home/hadoop/hadoop-2.4.1/
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop

export PATH=$HADOOP_DEV_HOME/bin:$HADOOP_DEV_HOME/sbin:$PATH

 

Make the configuration take effect:
hadoop@master:~$ source /etc/profile

 

Check whether the Hadoop environment variables are in effect:

hadoop@master:~$ hadoop 
Usage: hadoop [--config confdir] COMMAND 
where COMMAND is one of: 
fs run a generic filesystem user client 
version print the version 
jar <jar> run a jar file 
checknative [-a|-h] check native hadoop and compression libraries availability 
distcp <srcurl> <desturl> copy file or directories recursively 
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive 
classpath prints the class path needed to get the 
Hadoop jar and the required libraries 
daemonlog get/set the log level for each daemon 
or 
CLASSNAME run the class named CLASSNAME 

Most commands print help when invoked w/o parameters.

     

4.4 Configure Hadoop

Before configuring, create the following directories on the master's local file system:

~/dfs/name

~/dfs/data

~/temp
hadoop@master:~$ mkdir ~/dfs 

hadoop@master:~$ mkdir ~/temp 

hadoop@master:~$ mkdir ~/dfs/name 

hadoop@master:~$ mkdir ~/dfs/data
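Equivalently, the three directories can be created with a single command (mkdir -p also creates the parent ~/dfs):

hadoop@master:~$ mkdir -p ~/dfs/name ~/dfs/data ~/temp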

     

Seven configuration files are involved:

~/hadoop-2.4.1/etc/hadoop/hadoop-env.sh

~/hadoop-2.4.1/etc/hadoop/yarn-env.sh

~/hadoop-2.4.1/etc/hadoop/slaves

~/hadoop-2.4.1/etc/hadoop/core-site.xml

~/hadoop-2.4.1/etc/hadoop/hdfs-site.xml

~/hadoop-2.4.1/etc/hadoop/mapred-site.xml (copied from mapred-site.xml.template)

~/hadoop-2.4.1/etc/hadoop/yarn-site.xml

 
<---------------------------------- hadoop-env.sh --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/opt/jdk1.7.0_51/
<---------------------------------- yarn-env.sh --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi yarn-env.sh
# some Java parameters
export JAVA_HOME=/opt/jdk1.7.0_51/
<---------------------------------- slaves --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi slaves 
slave1 
slave2
<---------------------------------- core-site.xml --------------------------------->
hadoop@master:~/hadoop-2.4.1/etc/hadoop$ sudo vi core-site.xml
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://master:9000</value>
        </property>
        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/temp/</value>
                <description>A base for other temporary directories.</description>
        </property>
        <property>
                <name>hadoop.proxyuser.hduser.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.hduser.groups</name>
                <value>*</value>
        </property>
</configuration>
<---------------------------------- hdfs-site.xml --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi hdfs-site.xml
<configuration>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>master:9001</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/hadoop/dfs/name/</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/hadoop/dfs/data/</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
</configuration>
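Note that Hadoop only reads mapred-site.xml; a file left as mapred-site.xml.template is ignored. So before editing the MapReduce settings, copy the template (same config directory as the other commands):

hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo cp mapred-site.xml.template mapred-site.xml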
<---------------------------------- mapred-site.xml --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi mapred-site.xml
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>master:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>master:19888</value>
        </property>
</configuration>
<---------------------------------- yarn-site.xml --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>master:8032</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>master:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>master:8031</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address</name>
                <value>master:8033</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>master:8088</value>
        </property>
</configuration>

   

4.5 Copy to the other nodes
  On slave1:
hadoop@slave1:~$ scp -r hadoop@master:/home/hadoop/hadoop-2.4.1/ /home/hadoop/

  On slave2:

hadoop@slave2:~$ scp -r hadoop@master:/home/hadoop/hadoop-2.4.1/ /home/hadoop/

 

4.6 Start Hadoop
(1) Format HDFS
hadoop@master:~/hadoop-2.4.1$ ./bin/hdfs namenode -format
14/10/08 18:43:05 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting NameNode 
STARTUP_MSG: host = master/192.168.174.160 
STARTUP_MSG: args = [-format] 
STARTUP_MSG: version = 2.4.1 
STARTUP_MSG: classpath = /home/hadoop/hadoop-2.4.1//etc/hadoop:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/xz-1.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-el-1.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/hadoop-annotations-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/junit-4.8.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jsch-0.1.42.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/asm-3.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-net-3.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-digester-1.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/activation-1.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/guava-11.0.2.jar:/hom
e/hadoop/hadoop-2.4.1//share/hadoop/common/lib/hadoop-auth-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/hadoop-nfs-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/hadoop-common-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/hadoop-common-2.4.1-tests.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/asm-3.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/hadoop-hdfs-2.4.1-tests.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/hadoop-hdfs-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/xz-1.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/asm-3.2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hadoop/hado
op-2.4.1/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/activation-1.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/guice-3.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-client-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-common-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-common-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-api-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hadoop/hadoop-2.4.1/share/h
adoop/mapreduce/lib/javax.inject-1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.1-tests.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.1.jar:/contrib/capacity-scheduler/*.jar 
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common -r 1604318; compiled by 'jenkins' on 2014-06-21T05:43Z 
STARTUP_MSG: java = 1.7.0_51 
************************************************************/ 
14/10/08 18:43:05 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 
14/10/08 18:43:05 INFO namenode.NameNode: createNameNode [-format] 
14/10/08 18:43:06 WARN common.Util: Path /home/hadoop/dfs/name/ should be specified as a URI in configuration files. Please update hdfs configuration. 
14/10/08 18:43:06 WARN common.Util: Path /home/hadoop/dfs/name/ should be specified as a URI in configuration files. Please update hdfs configuration. 
Formatting using clusterid: CID-f1441872-89ef-4733-98df-454c18da5043 
14/10/08 18:43:06 INFO namenode.FSNamesystem: fsLock is fair:true 
14/10/08 18:43:06 INFO namenode.HostFileManager: read includes: 
HostSet( 
) 
14/10/08 18:43:06 INFO namenode.HostFileManager: read excludes: 
HostSet( 
) 
14/10/08 18:43:06 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000 
14/10/08 18:43:06 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true 
14/10/08 18:43:06 INFO util.GSet: Computing capacity for map BlocksMap 
14/10/08 18:43:06 INFO util.GSet: VM type = 32-bit 
14/10/08 18:43:06 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB 
14/10/08 18:43:06 INFO util.GSet: capacity = 2^22 = 4194304 entries 
14/10/08 18:43:06 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false 
14/10/08 18:43:06 INFO blockmanagement.BlockManager: defaultReplication = 3 
14/10/08 18:43:06 INFO blockmanagement.BlockManager: maxReplication = 512 
14/10/08 18:43:06 INFO blockmanagement.BlockManager: minReplication = 1 
14/10/08 18:43:06 INFO blockmanagement.BlockManager: maxReplicationStreams = 2 
14/10/08 18:43:06 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false 
14/10/08 18:43:06 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000 
14/10/08 18:43:06 INFO blockmanagement.BlockManager: encryptDataTransfer = false 
14/10/08 18:43:06 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000 
14/10/08 18:43:06 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE) 
14/10/08 18:43:06 INFO namenode.FSNamesystem: supergroup = supergroup 
14/10/08 18:43:06 INFO namenode.FSNamesystem: isPermissionEnabled = true 
14/10/08 18:43:06 INFO namenode.FSNamesystem: HA Enabled: false 
14/10/08 18:43:06 INFO namenode.FSNamesystem: Append Enabled: true 
14/10/08 18:43:06 INFO util.GSet: Computing capacity for map INodeMap 
14/10/08 18:43:06 INFO util.GSet: VM type = 32-bit 
14/10/08 18:43:06 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB 
14/10/08 18:43:06 INFO util.GSet: capacity = 2^21 = 2097152 entries 
14/10/08 18:43:06 INFO namenode.NameNode: Caching file names occuring more than 10 times 
14/10/08 18:43:06 INFO util.GSet: Computing capacity for map cachedBlocks 
14/10/08 18:43:06 INFO util.GSet: VM type = 32-bit 
14/10/08 18:43:06 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB 
14/10/08 18:43:06 INFO util.GSet: capacity = 2^19 = 524288 entries 
14/10/08 18:43:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 
14/10/08 18:43:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0 
14/10/08 18:43:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000 
14/10/08 18:43:06 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 
14/10/08 18:43:06 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 
14/10/08 18:43:06 INFO util.GSet: Computing capacity for map NameNodeRetryCache 
14/10/08 18:43:06 INFO util.GSet: VM type = 32-bit 
14/10/08 18:43:06 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB 
14/10/08 18:43:06 INFO util.GSet: capacity = 2^16 = 65536 entries 
14/10/08 18:43:06 INFO namenode.AclConfigFlag: ACLs enabled? false 
Re-format filesystem in Storage Directory /home/hadoop/dfs/name ? (Y or N) Y 
14/10/08 18:43:10 INFO namenode.FSImage: Allocated new BlockPoolId: BP-215877782-192.168.174.160-1412764990823 
14/10/08 18:43:10 INFO common.Storage: Storage directory /home/hadoop/dfs/name has been successfully formatted. 
14/10/08 18:43:11 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0 
14/10/08 18:43:11 INFO util.ExitUtil: Exiting with status 0 
14/10/08 18:43:11 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.174.160 
************************************************************/

 

(2) Start HDFS
Run the following command to start HDFS; it automatically starts the namenode (and secondary namenode) on master and the datanodes on slave1 and slave2:

 

hadoop@master:~/hadoop-2.4.1$ ./sbin/start-dfs.sh
Starting namenodes on [master] 
The authenticity of host 'master (192.168.174.160)' can't be established. 
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42. 
Are you sure you want to continue connecting (yes/no)? yes 
master: Warning: Permanently added 'master,192.168.174.160' (ECDSA) to the list of known hosts. 
hadoop@master's password: 
master: mkdir: cannot create directory '/opt/hadoop-2.4.1/logs': Permission denied 
master: chown: cannot access '/opt/hadoop-2.4.1/logs': No such file or directory 
master: starting namenode, logging to /opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out 
master: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 151: /opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out: No such file or directory 
master: head: cannot open '/opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out' for reading: No such file or directory 
master: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 166: /opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out: No such file or directory 
master: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 167: /opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out: No such file or directory 
The authenticity of host 'slave2 (192.168.174.162)' can't be established. 
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42. 
Are you sure you want to continue connecting (yes/no)? The authenticity of host 'slave1 (192.168.174.161)' can't be established. 
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42. 
Are you sure you want to continue connecting (yes/no)? yes 
slave2: Warning: Permanently added 'slave2,192.168.174.162' (ECDSA) to the list of known hosts. 
hadoop@slave2's password: Please type 'yes' or 'no': 
slave1: Warning: Permanently added 'slave1,192.168.174.161' (ECDSA) to the list of known hosts. 
hadoop@slave1's password: 
slave2: mkdir: cannot create directory '/opt/hadoop-2.4.1/logs': Permission denied 
slave2: chown: cannot access '/opt/hadoop-2.4.1/logs': No such file or directory 
slave2: starting datanode, logging to /opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out 
slave2: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 151: /opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out: No such file or directory 
slave2: head: cannot open '/opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out' for reading: No such file or directory 
slave2: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 166: /opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out: No such file or directory 
slave2: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 167: /opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out: No such file or directory

 

[Problem]
mkdir: cannot create directory '/home/hadoop/hadoop-2.4.1/logs': Permission denied 
 
[Solution]
Run on master:
hadoop@master:~$ sudo chown -R hadoop:hadoop hadoop-2.4.1/

The same command must be run on slave1 and slave2.

 

Restart HDFS:
hadoop@master:~/hadoop-2.4.1$ ./sbin/start-dfs.sh 
Starting namenodes on [master] 
hadoop@master's password: 
master: starting namenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out 
hadoop@slave1's password: hadoop@slave2's password: 
slave1: starting datanode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave1.out 


hadoop@slave2's password: slave2: Permission denied, please try again. 

slave2: starting datanode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out 
Starting secondary namenodes [master] 
hadoop@master's password: 
master: starting secondarynamenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-secondarynamenode-master.out

 

To check whether the Hadoop cluster is up, run jps on master; if a NameNode process is listed, master is working:
hadoop@master:~/hadoop-2.4.1$ jps 
31711 SecondaryNameNode 
31464 NameNode 
31857 Jps

 

Run jps on slave1; if a DataNode process is listed, slave1 is working.
hadoop@slave1:~$ jps 
5529 DataNode 
5610 Jps

 

Run jps on slave2; if a DataNode process is listed, slave2 is working.
hadoop@slave2:~$ jps 
8119 Jps 
8035 DataNode
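With HDFS up, the resourcemanager and nodemanagers configured in yarn-site.xml still need to be started; a brief sketch of that remaining step (not captured in the run above):

hadoop@master:~/hadoop-2.4.1$ ./sbin/start-yarn.sh
# jps on master should then also show ResourceManager,
# and jps on slave1/slave2 should show NodeManager;
# the web UI is the yarn.resourcemanager.webapp.address set earlier, http://master:8088/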

 


This article originally appeared on the "Forever Love" blog; please keep this attribution when reposting: http://www.cnblogs.com/dwf07223/p/4012406.html
