This walkthrough installs Hadoop inside a Red Hat virtual machine running on Windows 7. The required software is:
VirtualBox-5.0.16-105871-Win.exe
rhel-server-5.4-x86_64-dvd.iso
First install the virtualization software, then install Red Hat inside it. When installing Red Hat, remember to turn off the firewall and disable the other unneeded services.
Next, create a shared folder on the Windows 7 host and place the following software in it:
jdk-7u71-linux-x64.tar.gz
hadoop-2.7.2.tar (the original Hadoop 2.7.2 download is a .gz file; decompressing it locally yields this tar file)
After installing Red Hat, install the VirtualBox Guest Additions; once they are in place you can see the Windows shared folder. Copy the JDK and Hadoop files into the home directory. Before going further, set up a passwordless SSH environment (search online for a detailed guide).
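For reference, the passwordless-SSH step typically looks like the following. This is a minimal sketch for logging into localhost as the current user; adjust paths and permissions if your setup differs:

```shell
# Create an RSA key pair with an empty passphrase, unless one already exists
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Authorize the public key for SSH logins to this machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# "ssh localhost" should now succeed without a password prompt
```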
After installing the JDK, edit /etc/profile and append the following at the end:
export JAVA_HOME=/home/jdk1.7
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Run source /etc/profile and check that the JDK was installed successfully (for example, with java -version).
Extract the Hadoop archive. The file structure under the home directory looks like this:
Create the data directories under /home/hadoop2: tmp, hdfs, hdfs/data, and hdfs/name.
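Creating those directories can be done in one command; this assumes Hadoop was extracted to /home/hadoop2 as above:

```shell
# Create the local directories referenced by core-site.xml and hdfs-site.xml below
mkdir -p /home/hadoop2/tmp /home/hadoop2/hdfs/data /home/hadoop2/hdfs/name
```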
Edit hadoop-env.sh and change the following:
export JAVA_HOME=/home/jdk1.7
export HADOOP_CONF_DIR=/home/hadoop2/etc/hadoop
Edit yarn-env.sh and change the following:
export JAVA_HOME=/home/jdk1.7

Edit core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://127.0.0.1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop2/tmp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131702</value>
  </property>
</configuration>

Edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>127.0.0.1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>127.0.0.1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>127.0.0.1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>127.0.0.1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>127.0.0.1:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>768</value>
  </property>
</configuration>

Edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop2/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop2/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>127.0.0.1:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

After the installation finishes for the first time, you need to run the following command:
hadoop namenode -format
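Note that the yarn.* properties above belong in yarn-site.xml, and mapred-site.xml itself is not configured in these steps. To actually run MapReduce jobs on YARN, a minimal mapred-site.xml (usually created by copying mapred-site.xml.template in the same directory) would look something like this:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```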
Then start Hadoop:
[root@localhost sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/03/29 00:56:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost.localdomain]
localhost.localdomain: starting namenode, logging to /home/hadoop2/logs/hadoop-root-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /home/hadoop2/logs/hadoop-root-datanode-localhost.localdomain.out
Starting secondary namenodes [localhost.localdomain]
localhost.localdomain: starting secondarynamenode, logging to /home/hadoop2/logs/hadoop-root-secondarynamenode-localhost.localdomain.out
16/03/29 00:56:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop2/logs/yarn-root-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /home/hadoop2/logs/yarn-root-nodemanager-localhost.localdomain.out

Use jps to check the processes; if there are six of them, the installation succeeded:
[root@localhost sbin]# jps
4571 NameNode
3065 DataNode
3479 NodeManager
3373 ResourceManager
3221 SecondaryNameNode
3774 Jps

You can also access the relevant resources from a browser, for example the ResourceManager web UI at http://127.0.0.1:8088 (the address set in yarn-site.xml above).
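A quick way to confirm all five daemons are up is a small loop over the expected process names (a sketch; it assumes jps from the JDK is on the PATH, and reports "not detected" otherwise):

```shell
# Report which of the five Hadoop daemons jps can see
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if command -v jps >/dev/null 2>&1 && jps | grep -q "$d"; then
    echo "$d: running"
  else
    echo "$d: not detected"
  fi
done
```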
Common problems:
1. The shared folder cannot be found. This usually means the Guest Additions failed to build, and you need to install or update gcc, the kernel packages, and so on.
2. The error "Unable to load native-hadoop library for your platform" is reported. If you follow the steps above, this error generally does not occur.
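For issue 1, the usual fix before re-running the Guest Additions installer is to install a build toolchain and matching kernel headers. The package names below are the stock RHEL ones and are an assumption about your repository setup; run as root:

```shell
# Toolchain and kernel headers needed to build the Guest Additions modules
PKGS="gcc make kernel kernel-devel"
if command -v yum >/dev/null 2>&1; then
  yum install -y $PKGS || echo "yum install failed (are you root?)"
else
  echo "yum not found; install manually: $PKGS"
fi
# Then re-run the VirtualBox Guest Additions installer and reboot the guest
```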