Hadoop single node

Environment: RHEL 6.5
Hadoop installation and testing
[root@server6 ~]# useradd -u 800 hadoop
##the uid is arbitrary, but note that it must be identical on all nodes
[root@server6 ~]# id hadoop
uid=800(hadoop) gid=800(hadoop) groups=800(hadoop)
[root@server6 ~]# su - hadoop
[hadoop@server6 ~]$ ls
hadoop-2.7.3.tar.gz jdk-7u79-linux-x64.tar.gz
[hadoop@server6 ~]$ tar -zxf hadoop-2.7.3.tar.gz
[hadoop@server6 ~]$ tar -zxf jdk-7u79-linux-x64.tar.gz
[hadoop@server6 ~]$ ln -s hadoop-2.7.3 hadoop
[hadoop@server6 ~]$ ln -s jdk1.7.0_79/ jdk
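The transcript sources the profile without showing the edit it implies; given the echo that follows, ~/.bash_profile would gain something like this (a sketch; the PATH line is an assumption for convenience):
[hadoop@server6 ~]$ vim ~/.bash_profile
export JAVA_HOME=$HOME/jdk ##confirmed by the echo below
export PATH=$PATH:$JAVA_HOME/bin ##assumption, not shown in the transcript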
[hadoop@server6 ~]$ source ~/.bash_profile
[hadoop@server6 ~]$ echo $JAVA_HOME
/home/hadoop/jdk
[hadoop@server6 ~]$ cd hadoop
[hadoop@server6 hadoop]$ mkdir input
[hadoop@server6 hadoop]$ cp etc/hadoop/*.xml input/
[hadoop@server6 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
[hadoop@server6 hadoop]$ ls output/
part-r-00000 _SUCCESS
[hadoop@server6 hadoop]$ cat output/*
1	dfsadmin
[hadoop@server6 hadoop]$ vim etc/hadoop/hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/home/hadoop/jdk
##mind the Java path here, otherwise the daemons will not start later.
[hadoop@server6 hadoop]$ rm -rf output ##the grep run above already wrote output; MapReduce refuses to reuse an existing output directory
[hadoop@server6 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount input output
僞分佈式操做(須要ssh免密)
[hadoop@server6 hadoop]$ vim etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.25.35.6:9000</value>
    </property>
</configuration>
[hadoop@server6 hadoop]$ vim etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
[hadoop@server6 hadoop]$ vim etc/hadoop/slaves
172.25.35.6 ##replace the default localhost with this master host's IP address
Passwordless SSH
[hadoop@server6 hadoop]$ exit
logout
[root@server6 ~]# passwd hadoop
[root@server6 ~]# su - hadoop
[hadoop@server6 ~]$ ssh-keygen
[hadoop@server6 ~]$ ssh-copy-id 172.25.35.6
[hadoop@server6 ~]$ ssh 172.25.35.6 ##test the login; if no password is required, it works
Alternatively:
[hadoop@server6 ~]$ ssh-keygen
After the key pair is generated:
[hadoop@server6 ~]$ cd .ssh/
[hadoop@server6 .ssh]$ ls
[hadoop@server6 .ssh]$ mv id_rsa.pub authorized_keys
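If passwordless login still prompts for a password after this, file permissions are the usual culprit; a hedged fix (not in the original):
[hadoop@server6 .ssh]$ chmod 700 ~/.ssh
[hadoop@server6 .ssh]$ chmod 600 ~/.ssh/authorized_keys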
[hadoop@server6 hadoop]$ bin/hdfs namenode -format ##format the NameNode
[hadoop@server6 hadoop]$ sbin/start-dfs.sh ##start Hadoop
[hadoop@server6 hadoop]$ jps ##use jps to verify the daemons started; seeing the following four processes means success
6376 DataNode
6274 NameNode
6544 SecondaryNameNode
6687 Jps
Open 172.25.35.6:50070 in a browser (the NameNode web UI).
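The DataNode's registration can also be confirmed from the command line (a supplementary check, not in the original transcript):
[hadoop@server6 hadoop]$ bin/hdfs dfsadmin -report ##should list one live datanode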

僞分佈的操做:
[hadoop@server6 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server6 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server6 hadoop]$ bin/hdfs dfs -put input test ##upload the local input directory, renamed to test
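The next step reads output without showing the job that produced it; presumably a run like the earlier wordcount example, this time against the uploaded test directory on HDFS:
[hadoop@server6 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount test output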
How do you view the result? With the command below:
[hadoop@server6 hadoop]$ bin/hdfs dfs -cat output/*
...
within 1
without 1
work 1
writing, 8
you 9
[hadoop@server6 hadoop]$ bin/hdfs dfs -get output . ##download output to the local machine
[hadoop@server6 hadoop]$ ls
bin include libexec logs output sbin
etc lib LICENSE.txt NOTICE.txt README.txt share
[hadoop@server6 hadoop]$ bin/hdfs dfs -rm -r output ##delete it from HDFS
17/10/24 21:11:24 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted output
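The deletion log above shows a trash interval of 0 minutes, i.e. the trash is disabled and -rm -r is permanent. To get a recovery window, one could add to core-site.xml (a sketch, not part of the original setup):
<property>
    <name>fs.trash.interval</name>
    <value>1440</value> <!-- minutes to keep deleted files; 1440 = one day, value is an example -->
</property>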

Hadoop fully distributed mode setup
With an NFS network filesystem there is no need to install everything on every node; rpcbind and nfs must be running.
[hadoop@server6 hadoop]$ sbin/stop-dfs.sh
[hadoop@server6 hadoop]$ logout
[root@server6 ~]# yum install -y rpcbind
[root@server6 ~]# yum install -y nfs-utils
[root@server6 ~]# vim /etc/exports
/home/hadoop *(rw,anonuid=800,anongid=800)
[root@server6 ~]# /etc/init.d/rpcbind start
[root@server6 ~]# /etc/init.d/rpcbind status
[root@server6 ~]# /etc/init.d/nfs restart
Starting NFS services:  [  OK  ]
Starting NFS quotas:    [  OK  ]
Starting NFS mountd:    [  OK  ]
Starting NFS daemon:    [  OK  ]
[root@server6 ~]# showmount -e
Export list for server6:
/home/hadoop

[root@server6 ~]# exportfs -v
/home/hadoop <world>(rw,wdelay,root_squash,no_subtree_check,anonuid=800,anongid=800)
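On each additional node the shared home directory would then be mounted, so every machine sees the same Hadoop tree, Java, and SSH keys. A sketch, assuming server7/server8 (the hosts behind the zookeeper addresses below) are the workers; as noted at the top, the hadoop user must exist with the same uid 800:
[root@server7 ~]# yum install -y nfs-utils
[root@server7 ~]# useradd -u 800 hadoop
[root@server7 ~]# mount 172.25.35.6:/home/hadoop /home/hadoop
[root@server7 ~]# su - hadoop ##same files as on the master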
[hadoop@server6 hadoop]$ vim etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://masters</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>172.25.35.7:2181,172.25.35.8:2181</value>
    </property>
</configuration>
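An fs.defaultFS of hdfs://masters together with a zookeeper quorum implies HDFS HA, which also needs a matching nameservice definition in hdfs-site.xml. The original cuts off here, so the following is only a minimal sketch of the usual companion settings; the namenode ids and addresses are assumptions:
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>masters</value> <!-- must match the name used in fs.defaultFS -->
    </property>
    <property>
        <name>dfs.ha.namenodes.masters</name>
        <value>nn1,nn2</value> <!-- logical namenode ids; assumed -->
    </property>
    <property>
        <name>dfs.namenode.rpc-address.masters.nn1</name>
        <value>172.25.35.6:9000</value> <!-- assumed: this host as one of the namenodes -->
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
nn2's rpc-address, dfs.namenode.shared.edits.dir (the journalnode quorum), and dfs.client.failover.proxy.provider.masters are also required; they are left out here because the original gives no hosts for them.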
