(6) Hadoop Series: Setting Up a Hadoop Distributed Cluster

Configure hadoop (master, slave1, slave2)
  Notes:
  	NameNode: master
  	DataNode: slave1, slave2

  --------------------------------------------------------
   A. Edit the masters and slaves files on the master host
      i. Configure slaves
         # vi hadoop/conf/slaves
         Add: 192.168.126.20
              192.168.126.30
              ... the DataNode IPs
               
               
      ii. Configure masters
         # vi hadoop/conf/masters
         Add: 192.168.126.10
              ... the master (NameNode) IP
   -------------------------------------------------------- 
                
   B. Configure the .xml files on master
        i. Configure core-site.xml
		<?xml version="1.0"?>
		<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
		<!-- Put site-specific property overrides in this file. -->
		<configuration>
			<property>
				<name>hadoop.tmp.dir</name>
				<value>/home/had/hadoop/data</value>
				<description>A base for other temporary directories.</description>
			</property>
			<property>
				<name>fs.default.name</name>
				<value>hdfs://192.168.126.10:9000</value>
			</property>
		</configuration>
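		Note that hadoop.tmp.dir above points at /home/had/hadoop/data, so that directory should exist and be writable by the user that runs Hadoop before the file system is formatted. A minimal sketch (the path is taken from the config above; the hadoop user/group names are assumptions):

		(run on master, slave1 and slave2, as root)
		# mkdir -p /home/had/hadoop/data
		# chown -R hadoop:hadoop /home/had/hadoop/data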

	 ii. Configure hdfs-site.xml
		<?xml version="1.0"?>
		<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
		<!-- Put site-specific property overrides in this file. -->
		<configuration>
			<property>
				<name>dfs.replication</name>
				<value>3</value>
				<description>Default block replication.
					The actual number of replications can be specified when the file is created.
					The default is used if replication is not specified in create time.
				</description>
			</property>
		</configuration>
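		As the description notes, the replication factor can also be set per file rather than cluster-wide. A sketch (the target path is just the README.txt used in the test later; -D and -setrep are standard fs shell options in this version):

		$ bin/hadoop fs -D dfs.replication=2 -put ./README.txt test1        (upload with a one-off factor of 2)
		$ bin/hadoop fs -setrep -w 2 /user/hadoop/test1/README.txt          (change an existing file and wait for it to take effect)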
		
	iii. Configure mapred-site.xml
		<?xml version="1.0"?>
		<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
		<!-- Put site-specific property overrides in this file. -->
		<configuration>
			<property>
				<name>mapred.job.tracker</name>
				<value>192.168.126.10:9001</value>
			</property>
		</configuration>
			
		-------------------------------------------------------------	
   C. Configure slave1 and slave2 (same settings as above; see the scp sketch below)
       i. core-site.xml
       ii. mapred-site.xml
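       A convenient way to push the edited files from master to the slaves is scp, assuming passwordless ssh is already in place and Hadoop is installed at the same path on every node (a sketch; the install path is taken from section D below). Run from the hadoop-0.20.2 directory on master:

       $ scp conf/core-site.xml conf/mapred-site.xml hadoop@192.168.126.20:/home/hadoop/hadoop-0.20.2/conf/
       $ scp conf/core-site.xml conf/mapred-site.xml hadoop@192.168.126.30:/home/hadoop/hadoop-0.20.2/conf/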
	 ---------------------------------------------------------------	
	 	
   D. Configure the Hadoop environment on master, slave1 and slave2
     	$ vi /home/hadoop/.bashrc
    	Add:
			export HADOOP_HOME=/home/hadoop/hadoop-0.20.2
			export HADOOP_CONF_DIR=$HADOOP_HOME/conf    
			export PATH=/home/hadoop/hadoop-0.20.2/bin:$PATH
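			After saving .bashrc, reload it and check that the hadoop command resolves, for example:

			$ source /home/hadoop/.bashrc
			$ which hadoop        (should print /home/hadoop/hadoop-0.20.2/bin/hadoop)
			$ hadoop version      (should report 0.20.2)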
   ----------------------------------------------------------------

 

Initialize the file system
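The usual command for this step in Hadoop 0.20.x, run on master as the hadoop user from the installation directory, is:

  $ cd /home/hadoop/hadoop-0.20.2
  $ bin/hadoop namenode -format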

Note: sometimes the following error appears:

 ...

 11/08/18 17:02:35 INFO ipc.Client: Retrying connect to server: localhost/192.168.126.10:9000. Already tried 0 time(s).

 Bad connection to FS. command aborted.

 When this happens, delete the contents of the tmp directory under the root directory and then reformat.

Start Hadoop:
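In this version the cluster is started from the bin/ scripts on master; jps (from the JDK) can then be used to confirm the daemons are up:

  $ bin/start-all.sh
  $ jps        (master should show NameNode, SecondaryNameNode and JobTracker;
                slave1/slave2 should show DataNode and TaskTracker)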

After startup completes, run a test:

Test
						
  $ bin/hadoop fs -put ./README.txt test1
  $ bin/hadoop fs -ls
  Found 1 items
  drwxr-xr-x   - hadoop supergroup          0 2013-07-14 00:51 /user/hadoop/test1
  $ hadoop jar hadoop-0.20.2-examples.jar wordcount /user/hadoop/test1/README.txt output1
The run then hit the following problems:

 

Note: a few error messages may come up during testing. Below are the problems I ran into during installation.

1. org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /home/hadoop/hadoop-datastore/hadoop-hadoop/mapred/system/job_201307132331_0005. Name node is in safe mode.
   Fix: turn off safe mode: bin/hadoop dfsadmin -safemode leave

2. org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/test1/README.txt could only be replicated to 0 nodes, instead of 1
   Case 1: the disk holding hadoop.tmp.dir is out of space. Fix: move it to a disk with enough free space.
   Case 2: check the firewall state with /etc/init.d/iptables status, and stop it with /etc/init.d/iptables stop.
   Case 3: start the namenode first and the datanodes afterwards (this was my case).

Reference: http://sjsky.iteye.com/blog/1124545
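After applying the relevant fix, the cluster state can be checked before re-running the job; both dfsadmin commands below are standard options in this version, and output1/part* is where the wordcount results land:

 $ bin/hadoop dfsadmin -report          (should list the two datanodes as live)
 $ bin/hadoop dfsadmin -safemode get    (should print: Safe mode is OFF)
 $ bin/hadoop jar hadoop-0.20.2-examples.jar wordcount /user/hadoop/test1/README.txt output1
 $ bin/hadoop fs -cat output1/part*     (prints the word counts)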

 The final run looks like this:

 View HDFS status (web): http://192.168.126.10:50070/dfshealth.jsp

 

 View Map-Reduce status (web): http://192.168.126.10:50030/jobtracker.jsp

 


This completes the Hadoop cluster setup.
