Hadoop cluster

Over the Dragon Boat Festival I had some spare time and tried setting up a Hadoop cluster. The deployment worked, so I'm writing down the relevant details here, mainly for my own reference.

master 192.168.234.20

node1 192.168.234.21
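These hostnames need to resolve on both machines; a minimal /etc/hosts sketch, assuming no DNS is in place:

192.168.234.20   master
192.168.234.21   node1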


# Edit the configuration files (full contents are listed further down):

vi /opt/modules/hadoop/hadoop-1.0.3/conf/core-site.xml

vi /opt/modules/hadoop/hadoop-1.0.3/conf/hdfs-site.xml

vi /opt/modules/hadoop/hadoop-1.0.3/conf/mapred-site.xml


# Create the local data directories:

mkdir -p /opt/data/hadoop/

mkdir -p /opt/data/hadoop/mapred/mrlocal

mkdir -p /opt/data/hadoop/mapred/mrsystem

mkdir -p /opt/data/hadoop/hdfs/name

mkdir -p /opt/data/hadoop/hdfs/data

mkdir -p /opt/data/hadoop/hdfs/namesecondary

chown -R hadoop:hadoop /opt/data/*
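The DataNode and TaskTracker on node1 need their directories as well; a rough sketch, assuming you can reach node1 as root over SSH:

ssh root@node1 "mkdir -p /opt/data/hadoop/hdfs/data /opt/data/hadoop/mapred/mrlocal /opt/data/hadoop/mapred/mrsystem && chown -R hadoop:hadoop /opt/data"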


# Format the HDFS NameNode:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop namenode -format

# Start the NameNode on the master:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start namenode

# Start the JobTracker:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start jobtracker

# Start the SecondaryNameNode:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start secondarynamenode

# Start the DataNode and TaskTracker:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start datanode

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start tasktracker
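Once the daemons are up, the JDK's jps tool is a quick sanity check; which processes appear depends on which daemons you started on the host you run it on:

jps
# on the master you would expect to see NameNode and JobTracker (plus SecondaryNameNode / DataNode / TaskTracker if they also run there)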



# Stop the daemons the same way:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop namenode

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop jobtracker

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop secondarynamenode

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop datanode

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop tasktracker
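As an alternative to driving each daemon by hand, Hadoop 1.0.3 also ships cluster-wide scripts that read conf/masters and conf/slaves and start everything over SSH (this assumes passwordless SSH is already configured, as described below):

/opt/modules/hadoop/hadoop-1.0.3/bin/start-all.sh

/opt/modules/hadoop/hadoop-1.0.3/bin/stop-all.sh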



# Web UIs: NameNode (port 50070) and JobTracker (port 50030)

http://master:50070

http://master:50030/

http://192.168.80.200:50070/dfshealth.jsp

http://node1:50070

http://node1:50030/
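Roughly the same health information is available from the command line, which helps when the web UI is unreachable:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop dfsadmin -report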


# Clean up stale JVM performance data under /tmp:

rm -r /tmp/hsperfdata_root/*

rm -r /tmp/hsperfdata_hadoop/*

# Inspect and remove old daemon logs:

ll /opt/modules/hadoop/hadoop-1.0.3/logs/hadoop-hadoop*

rm -r /opt/modules/hadoop/hadoop-1.0.3/logs/hadoop-hadoop*


After deleting /tmp as the hadoop user, logging in failed with:

GDM could not write to your authorization file. This could mean that you are out of disk space or that your home directory could not be opened for writing. Please contact your system administrator.

Log in as root and run:

chown hadoop:hadoop /tmp
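chown hadoop:hadoop /tmp works around the error, but the conventional setup for /tmp is root ownership with the sticky bit; an alternative fix, run as root:

mkdir -p /tmp
chown root:root /tmp
chmod 1777 /tmp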



==========core-site.xml==========

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/opt/data/hadoop/hdfs/namesecondary</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>1800</value>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>33554432</value>
  </property>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
</configuration>
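With fs.trash.interval set to 1440 minutes, files removed through the shell are parked in the user's .Trash directory for a day rather than deleted outright. A quick way to see it in action (the file path here is just an example):

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop fs -rm /user/hadoop/test.txt

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop fs -ls /user/hadoop/.Trash/Current

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop fs -expunge    # empty the trash immediately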

==============hdfs-site.xml==============

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/opt/data/hadoop/hdfs/name</value>
    <!-- Where the HDFS NameNode stores its fsimage files -->
    <description></description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/opt/data/hadoop/hdfs/data</value>
    <description></description>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>node1:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <!-- 1 GB reserved per volume -->
    <value>1073741824</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <!-- 128 MB block size -->
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
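After some data has been written, fsck shows whether dfs.replication and dfs.block.size are actually in effect; note that with only one or two DataNodes a replication factor of 3 will show up as under-replicated blocks:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop fsck / -files -blocks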

==========mapred-site.xml==========

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/opt/data/hadoop/mapred/mrlocal</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/opt/data/hadoop/mapred/mrsystem</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
    <final>true</final>
  </property>
  <property>
    <name>io.sort.mb</name>
    <value>32</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx64M</value>
  </property>
  <property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
  </property>
</configuration>
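To exercise the JobTracker and TaskTracker end to end, the examples jar that ships in the 1.0.3 tarball can run a small job (jar name and location assume the stock tarball layout):

cd /opt/modules/hadoop/hadoop-1.0.3

bin/hadoop jar hadoop-examples-1.0.3.jar pi 2 10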



The SSH notes below are adapted from: http://www.cnblogs.com/jdksummer/articles/2521550.html


Setting up passwordless SSH login on Linux

SSH configuration


Host A: 10.0.5.199

Host B: 10.0.5.198

We need to configure host A to log in to both host A and host B without a password.

First make sure the firewall is turned off on all hosts.

Run the following on host A:

 1. $cd ~/.ssh

 2. $ssh-keygen -t rsa    # keep pressing Enter; with the default options the key pair is saved as .ssh/id_rsa and .ssh/id_rsa.pub

 3. $cp id_rsa.pub authorized_keys

        After this step you should normally be able to log in to the local machine without a password, i.e. ssh localhost no longer prompts for one.

 4. $scp authorized_keys summer@10.0.5.198:/home/summer/.ssh    # copy the freshly generated authorized_keys file over to host B

 5. $chmod 600 authorized_keys

     Go into the .ssh directory on host B and change the permissions of its authorized_keys file.

   (Steps 4 and 5 can be combined into one command: $ssh-copy-id -i summer@10.0.5.198)


Normally, once the steps above are complete, SSH connections from host A's machine to host A or host B only require a password on the first login; after that none is needed.
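A quick verification from host A (hostname here is just an arbitrary remote command):

ssh summer@10.0.5.198 hostname    # should print host B's hostname without asking for a password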


Problems you may run into:


1. When logging in via ssh you get: "Agent admitted failure to sign using the key".

  Run: $ssh-add

  to force the private key to be added.

2. If there is no error message and password login works, but passwordless login still does not, run the following on the machine being connected to (e.g. if A connects to B via ssh, run this on B):

  $chmod o-w ~/

  $chmod 700 ~/.ssh

  $chmod 600 ~/.ssh/authorized_keys

3. If step 2 still does not give you passwordless login, try the following:

  $ps -Af | grep agent

       Check whether an ssh agent is already running. If one is, kill it and then run the command below to start a fresh agent; if none is running, just run it directly:

      $ssh-agent

  If that still does not work, restart the ssh service:

      $sudo service sshd restart

4. ssh-add fails with "Could not open a connection to your authentication agent"

Run: ssh-agent bash

====================================================================================================

error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-*/mapred/system. Name node is in safe mode.

Don't worry: the NameNode automatically leaves safe mode at the end of its startup phase and then comes up normally. If you do not want to wait, you can force it out with:

bin/hadoop dfsadmin -safemode leave
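You can also check whether the NameNode is still in safe mode before forcing it out:

bin/hadoop dfsadmin -safemode get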

====================================================================================================
