Ubuntu 12.04.2 LTS hadoop-1.0.3
sudo apt-get install openjdk-7-jdk
Edit /etc/hostname on each machine:

namenode: hadoop-namenode
datanode: hadoop-datanode1

and so on for the remaining datanodes.
Edit /etc/profile and add:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_HOME=/home/sfuser/hadoop
export PATH=$PATH:/home/sfuser/hadoop/bin:/home/sfuser/hbase/bin:/home/sfuser/flume/bin:/home/sfuser/lib/grails-2.1.0/bin:/usr/lib/jvm/java-7-openjdk
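After editing /etc/profile, reload it and confirm the variables took effect. A minimal self-contained check (it re-states the same exports so it runs on its own; on the real machine you would `source /etc/profile` instead):

```shell
# These mirror the lines added to /etc/profile above.
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_HOME=/home/sfuser/hadoop
export PATH=$PATH:$HADOOP_HOME/bin

# Both variables should echo back non-empty.
echo "JAVA_HOME=$JAVA_HOME"
echo "HADOOP_HOME=$HADOOP_HOME"
```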
Edit /etc/hosts:

127.0.0.1 localhost
10.6.115.8 hadoop-secondnamenode
10.6.115.9 hadoop-namenode
10.6.115.10 hadoop-datanode1
10.6.115.11 hadoop-datanode2
10.6.115.12 hadoop-datanode3
10.6.115.13 hadoop-datanode4
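One step this walkthrough assumes implicitly: start-all.sh launches the slave daemons over SSH, so the master needs passwordless SSH to every node above. A dry-run sketch (commands are echoed rather than executed; drop the echo on the real master, and sfuser is the account used throughout these notes):

```shell
# One-time on the master: generate a passphrase-less key pair.
echo 'ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa'
# Then push the public key to each node listed in /etc/hosts.
for host in hadoop-datanode1 hadoop-datanode2 hadoop-datanode3 hadoop-datanode4; do
  echo "ssh-copy-id sfuser@$host"
done
```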
Edit conf/hadoop-env.sh (feel free to ignore the HBase-specific parts here):
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HBASE_HOME=/home/sfuser/hbase
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/hbase-0.94.1.jar:$HBASE_HOME/lib/zookeeper-3.4.3.jar:$HBASE_HOME/lib/protobuf-java-2.4.0a.jar:$HBASE_HOME/lib/guava-11.0.2.jar:$HBASE_HOME/lib:$HBASE_HOME/conf
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
export HADOOP_PID_DIR=/tmp/
Edit conf/mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop-namenode:54311</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
</configuration>
Edit conf/hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>65535</value>
  </property>
</configuration>
Edit conf/core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-namenode:54310</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/sfuser/hadoop/dir</value>
  </property>
</configuration>
Edit conf/masters:
hadoop-namenode
Edit conf/slaves:
hadoop-datanode1
hadoop-datanode2
hadoop-datanode3
hadoop-datanode4
Copy the configured hadoop directory to every datanode:

scp -r hadoop hadoop-datanode1:/home/sfuser/hadoop
scp -r hadoop hadoop-datanode2:/home/sfuser/hadoop
scp -r hadoop hadoop-datanode3:/home/sfuser/hadoop
scp -r hadoop hadoop-datanode4:/home/sfuser/hadoop
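The four scp commands differ only in the host number, so a loop does the same job (echoed here as a dry run; remove the echo to actually copy):

```shell
for i in 1 2 3 4; do
  echo scp -r hadoop "hadoop-datanode$i:/home/sfuser/hadoop"
done
```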
Format the namenode (note: a single dash before format):

hadoop namenode -format
Run on the master:
hadoop/bin/start-all.sh
Run the jps command on the master; it should show something like:
14745 Jps
11214 SecondaryNameNode
10903 NameNode
11316 JobTracker
Run the jps command on a slave; it should show something like:
29543 Jps
18606 TaskTracker
18382 DataNode
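Rather than eyeballing the jps output, a small loop can flag any missing daemon. A sketch for the master (jps ships with the JDK; on a slave, substitute DataNode and TaskTracker for the daemon names):

```shell
# Daemons expected on the master, per the jps listing above.
jps_out=$(jps 2>/dev/null || true)
for d in NameNode SecondaryNameNode JobTracker; do
  echo "$jps_out" | grep -qw "$d" && echo "$d: running" || echo "$d: MISSING"
done
```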
Once configuration is done, you can open http://hadoop-namenode:50070 to view node (HDFS) status,
and http://hadoop-namenode:50030 to view job (MapReduce) status.
To stop the cluster, run on the master:
hadoop/bin/stop-all.sh
Troubleshooting

If HDFS is stuck in safe mode, leave it manually:

hadoop dfsadmin -safemode leave
If datanodes fail to start after a reformat, check whether the current/VERSION files are consistent across nodes; make them consistent and the problem goes away.
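The value that has to agree is the namespaceID in each node's current/VERSION file under hadoop.tmp.dir (dfs/name/current on the namenode, dfs/data/current on a datanode). A self-contained sketch of the comparison, using two fabricated sample VERSION files in /tmp as stand-ins:

```shell
# Fabricated stand-ins for the namenode's and a datanode's VERSION files.
mkdir -p /tmp/vcheck
printf 'namespaceID=123456789\nlayoutVersion=-32\n' > /tmp/vcheck/nn-VERSION
printf 'namespaceID=987654321\nlayoutVersion=-32\n' > /tmp/vcheck/dn-VERSION

nn_id=$(grep '^namespaceID=' /tmp/vcheck/nn-VERSION | cut -d= -f2)
dn_id=$(grep '^namespaceID=' /tmp/vcheck/dn-VERSION | cut -d= -f2)
if [ "$nn_id" = "$dn_id" ]; then
  echo "namespaceID consistent"
else
  echo "namespaceID mismatch: namenode=$nn_id datanode=$dn_id"
fi
```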
On a freshly built environment, you can simply delete the files under <hadoop.tmp.dir> on every node and reformat the namenode.
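For the fresh-environment case the cleanup boils down to the following (echoed as a dry run; /home/sfuser/hadoop/dir is the hadoop.tmp.dir configured in core-site.xml above):

```shell
# Wipe the tmp dir on every node, then reformat from the master.
for host in hadoop-namenode hadoop-datanode1 hadoop-datanode2 hadoop-datanode3 hadoop-datanode4; do
  echo ssh "$host" "rm -rf /home/sfuser/hadoop/dir/*"
done
echo "hadoop namenode -format"
```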
The common ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times may also be caused by this same version mismatch.