Ubuntu Pseudo-Distributed Hadoop Installation

1. Install Ubuntu with user name gavin/gavin in VMware.

2. In the terminal, type sudo su, then enter the current user's password; this switches to the root user.

[After installing Ubuntu I suddenly realized I had never set a root password, and without the password there is no way to get into the root account. A quick search online explained it: Ubuntu's default root password is random, i.e. there is a fresh root password on every boot. Run sudo passwd, enter the current user's password, press Enter, and the terminal prompts for a new password and its confirmation; that becomes root's new password. Once the change succeeds, run su root and enter the new password.]

3. Create a java folder and give it full access:

sudo mkdir /usr/local/java

sudo chmod 777 /usr/local/java

4. Copy the downloaded JDK archive into the java folder (drag and drop in VMware):

jdk-7u9-linux-i586.tar.gz

5. Untar the JDK archive:

tar xzvf /usr/local/java/jdk-7u9-linux-i586.tar.gz

6. Add environment variables to /etc/profile:

gedit /etc/profile

Add the following lines to the profile file:

export JAVA_HOME=/usr/local/java/jdk1.7.0_09     
export JRE_HOME=/usr/local/java/jdk1.7.0_09/jre  
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH  
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$JAVA_HOME:$PATH

7. Activate the environment variables:

source /etc/profile
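After sourcing the profile, it is worth checking that the variables really point at the JDK. A minimal sketch, assuming the jdk1.7.0_09 extract path used above (adjust if your directory differs):

```shell
# Recreate the exports from /etc/profile and sanity-check them
JAVA_HOME=/usr/local/java/jdk1.7.0_09
JRE_HOME=$JAVA_HOME/jre
PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

# PATH should now contain the JDK bin directory
echo "$PATH" | grep -q "$JAVA_HOME/bin" && echo "PATH ok"
# On a real install, `java -version` should then report 1.7.0_09
```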

8. Create the hadoop user and group

sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
su to root (sudo su)
chmod 640 /etc/sudoers
gedit /etc/sudoers
Below the line root ALL=(ALL:ALL) ALL, add: hadoop ALL=(ALL:ALL) ALL
chmod 440 /etc/sudoers
exit
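The sudoers change above can be sketched as follows. This runs against a throwaway copy so it needs no root; the real edit goes into /etc/sudoers itself (ideally via visudo, which checks the syntax):

```shell
# Sketch of the sudoers edit, done on a temporary copy for illustration
SUDOERS=$(mktemp)
printf 'root\tALL=(ALL:ALL) ALL\n' > "$SUDOERS"
# the line added for the hadoop user:
printf 'hadoop\tALL=(ALL:ALL) ALL\n' >> "$SUDOERS"
grep '^hadoop' "$SUDOERS"
```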

9. Install the ssh service:
sudo apt-get install openssh-server

Switch to user hadoop:
sudo -i -u hadoop

10. Set up passwordless ssh login to the local machine.

Generate an ssh key pair using RSA:
ssh-keygen -t rsa
After pressing Enter, two files are generated under ~/.ssh/: id_rsa and id_rsa.pub (the two come as a pair).
Go into ~/.ssh/ and append id_rsa.pub to the authorized_keys file (there is no authorized_keys file at first):
cat id_rsa.pub >> authorized_keys  or  cp id_rsa.pub authorized_keys
Log in to localhost without a password:
ssh localhost

For the prompt below, answer yes:

The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is 86:07:88:db:34:94:f8:09:6d:f4:7d:19:48:67:fe:e1.
Are you sure you want to continue connecting (yes/no)? yes
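The key setup above can also be done non-interactively. A sketch, shown in a temporary directory so it is safe to rerun; in practice the files live in ~/.ssh:

```shell
# Non-interactive key setup (illustrative directory, not ~/.ssh)
KEYDIR=$(mktemp -d)
# -P "" gives an empty passphrase, -q suppresses the banner
ssh-keygen -t rsa -P "" -f "$KEYDIR/id_rsa" -q
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 600 "$KEYDIR/authorized_keys"
# both halves of the key pair plus authorized_keys now exist
ls "$KEYDIR"
```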

11. Install Hadoop

cd /usr/local    (install hadoop under /usr/local)
sudo tar -xzf hadoop-0.20.2.tar.gz
sudo mv hadoop-0.20.2 hadoop
Change the owner of the hadoop folder to the hadoop user:
sudo chown -R hadoop:hadoop hadoop  (mind the spaces)
cd hadoop/conf/
Configure conf/hadoop-env.sh: find the line #export JAVA_HOME=..., remove the #, and set it to the local JDK path:
vim hadoop-env.sh
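The hadoop-env.sh edit can also be done with sed instead of vim. A sketch on a temporary copy; the commented default path below is illustrative, not necessarily what your hadoop-env.sh contains:

```shell
# Uncomment and set JAVA_HOME in a stand-in for conf/hadoop-env.sh
F=$(mktemp)
echo '# export JAVA_HOME=/usr/lib/j2sdk1.5-sun' > "$F"
sed -i 's|^# export JAVA_HOME=.*|export JAVA_HOME=/usr/local/java/jdk1.7.0_09|' "$F"
cat "$F"
```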
Edit conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
Edit conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
Edit conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
Edit conf/masters and add the hostname that will act as the secondarynamenode; for a single-machine setup just use localhost.
Edit conf/slaves and add the slave hostnames, one per line; for a single-machine setup just use localhost.
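The config files above can also be written from the shell with a here-document, which avoids editor typos in the XML. A sketch for core-site.xml, written to a temporary file here (the real target is conf/core-site.xml):

```shell
# Write the core-site.xml contents shown above via a here-document
F=$(mktemp)
cat > "$F" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
# quick sanity check: exactly one <property> element was written
grep -c '<property>' "$F"
```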

 

12. Run Hadoop on the single machine.

On the first run, format HDFS (from the Hadoop install directory):
bin/hadoop namenode -format
Start the Hadoop processes:
bin/start-all.sh
Check which processes started:
jps
Check the web interfaces:
http://localhost:50030 ---for jobtracker
http://localhost:50070 ---for namenode
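A quick way to confirm the cluster is healthy is to check that all five daemons show up in jps. In the sketch below, SAMPLE stands in for real jps output so the check itself runs anywhere; the PIDs are made up, and the daemon names are the ones Hadoop 0.20.2's start-all.sh launches:

```shell
# Check that every expected Hadoop daemon appears in (sample) jps output
SAMPLE='1234 NameNode
2345 DataNode
3456 SecondaryNameNode
4567 JobTracker
5678 TaskTracker
6789 Jps'
ok=1
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  echo "$SAMPLE" | grep -q "$d" || ok=0
done
[ "$ok" -eq 1 ] && echo "all daemons running"
```

On a live system, replace `echo "$SAMPLE"` with a call to `jps` itself.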
