1. Getting Started
HDFS: storage
MapReduce: computation
Spark, Flink
Yarn: resource and job scheduling
Pseudo-distributed deployment
Requirements: environment configuration files, parameter files, passwordless ssh, start the daemons
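For the start step, a minimal sketch using the standard sbin scripts (the jps output below should then show the three HDFS daemons):
$ sbin/start-dfs.sh   # starts NameNode, DataNode and SecondaryNameNode
$ jps                 # verify that all three daemons are up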
The jps command
[hadoop@hadoop002 ~]$ jps
28288 NameNode NN
27120 Jps
28410 DataNode DN
28575 SecondaryNameNode SNN
2. MapReduce job on Yarn
[hadoop@hadoop002 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@hadoop002 hadoop]$ ssh
Configure parameters as follows:
etc/hadoop/mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
Start ResourceManager daemon and NodeManager daemon:
$ sbin/start-yarn.sh
Open the web UI: http://47.75.249.8:8088/
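To double-check that YARN came up, jps should now also list the two YARN daemons (a quick sketch; stop-yarn.sh is the matching shutdown script):
$ jps | grep -E 'ResourceManager|NodeManager'   # both should appear alongside the HDFS processes
$ sbin/stop-yarn.sh                             # stops YARN again when needed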
3. Run an MR job
Linux file system: mkdir, ls, ...
HDFS: a distributed file storage system with an analogous command set
Format the NameNode once before the first start: hdfs namenode -format
Then operate on HDFS with hdfs dfs -???
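A few illustrative hdfs dfs commands that mirror their Linux counterparts (the paths are made up for the example):
$ bin/hdfs dfs -mkdir -p /tmp/demo        # like mkdir -p
$ bin/hdfs dfs -ls /                      # like ls
$ bin/hdfs dfs -put README.txt /tmp/demo  # copy a local file into HDFS
$ bin/hdfs dfs -cat /tmp/demo/README.txt  # like cat
$ bin/hdfs dfs -rm -r /tmp/demo           # like rm -r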
Make the HDFS directories required to execute MapReduce jobs:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
Copy the input files into the distributed filesystem:
$ bin/hdfs dfs -put etc/hadoop input
Run some of the examples provided:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar grep input output 'dfs[a-z.]+'
Examine the output files:
Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hdfs dfs -get output output
$ cat output/*
or
View the output files on the distributed filesystem:
$ bin/hdfs dfs -cat output/*
-------------------------------------------------
bin/hdfs dfs -mkdir /user/hadoop/input
bin/hdfs dfs -put etc/hadoop/core-site.xml /user/hadoop/input
bin/hadoop jar \
share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar \
grep \
/user/hadoop/input \
/user/hadoop/output \
'fs[a-z.]+'
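To inspect the result of this run, something like the following works (part-r-00000 assumes the default MapReduce output file naming):
bin/hdfs dfs -ls /user/hadoop/output
bin/hdfs dfs -cat /user/hadoop/output/part-r-00000
bin/hdfs dfs -rm -r /user/hadoop/output   # the job fails if the output directory already exists, so clean up before re-running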
4. Start the three HDFS processes so that they all bind to hadoop002
NN: the fs.defaultFS parameter in core-site.xml
DN: the slaves file
SNN: the following properties in hdfs-site.xml:
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop002:50090</value>
</property>
<property>
<name>dfs.namenode.secondary.https-address</name>
<value>hadoop002:50091</value>
</property>
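For reference, a minimal sketch of the matching NN and DN settings (the hdfs://hadoop002:9000 URI follows the single-node convention in the Hadoop docs; adjust the port to your own deployment):
etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop002:9000</value>
    </property>
</configuration>
etc/hadoop/slaves:
hadoop002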
5. jps
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ jps
16188 DataNode
16379 SecondaryNameNode
16566 Jps
16094 NameNode
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$
5.1 Location
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ which jps
/usr/java/jdk1.7.0_80/bin/jps
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$
5.2 Other users
[root@hadoop002 ~]# jps
16188 -- process information unavailable
16607 Jps
16379 -- process information unavailable
16094 -- process information unavailable
[root@hadoop002 ~]#
[root@hadoop002 ~]# useradd jepson
[root@hadoop002 ~]# su - jepson
[jepson@hadoop002 ~]$ jps
16664 Jps
[jepson@hadoop002 ~]$
process information unavailable: in this case the process is genuinely still running
[root@hadoop002 ~]# kill -9 16094
[root@hadoop002 ~]#
[root@hadoop002 ~]# jps
16188 -- process information unavailable
16379 -- process information unavailable
16702 Jps
16094 -- process information unavailable
[root@hadoop002 ~]#
[root@hadoop002 ~]# ps -ef|grep 16094
root 16722 16590 0 22:19 pts/4 00:00:00 grep 16094
[root@hadoop002 ~]#
process information unavailable: in this case the process is genuinely gone
The correct way to handle "process information unavailable" (a command sketch follows after the link below):
1. Find the process id (pid).
2. Check with ps -ef | grep pid whether the process still exists.
3. If it exists, step 2 also tells you which user runs the process;
   su - to that user and inspect it there.
   Note: if you rm -f the /tmp/hsperfdata_${user}/pid file while the process is alive,
   the process does not die, but jps stops showing it, and any scripts that rely on jps will break.
4. If it does not exist, clear the leftover information:
   rm -f /tmp/hsperfdata_${user}/pid
http://blog.itpub.net/30089851/viewspace-1994344/
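A sketch of the procedure above as shell commands (<pid> and <user> are placeholders):
jps                                   # note the pid reported as "process information unavailable"
ps -ef | grep <pid>                   # does the process really exist, and under which user?
su - <user>                           # if it exists, switch to that user and run jps there
rm -f /tmp/hsperfdata_<user>/<pid>    # only if the process is truly gone: remove the leftover file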
6. Additional commands
ssh root@ip -p 22
ssh root@47.75.249.8 date
rz / sz: upload and download files between the local machine and the Linux server through the terminal
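If rz/sz are missing, they come from the lrzsz package (assuming a yum-based distribution such as CentOS):
yum install -y lrzsz
rz            # pick a local file to upload to the server
sz test.log   # send test.log from the server to the local machine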
How do you transfer files between two Linux systems?
hadoop000-->hadoop002
[ruoze@hadoop000 ~]$ scp test.log root@47.75.249.8:/tmp/
scp a file from the current Linux machine to the remote machine
hadoop000<--hadoop002
[ruoze@hadoop002 ~]$ scp test.log root@hadoop000:/tmp/
But hadoop002 is a production machine that you may not be allowed to log in to, so pull the file from hadoop000 instead:
scp root@47.75.249.8:/tmp/test.log /tmp/rz.log
However, in production you will never be given the password,
so configure mutual ssh trust among the machines (a sketch follows after the pitfalls below).
http://blog.itpub.net/30089851/viewspace-1992210/
Pitfalls:
transferring the pub key file between machines with scp
configuring the IPs and hostnames of all machines in /etc/hosts
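A minimal sketch of one-way trust from hadoop000 to hadoop002 (user and host names follow the examples above; ssh-copy-id is assumed to be installed):
[ruoze@hadoop000 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[ruoze@hadoop000 ~]$ ssh-copy-id ruoze@hadoop002   # appends the local pub key to hadoop002's ~/.ssh/authorized_keys
[ruoze@hadoop000 ~]$ ssh ruoze@hadoop002 date      # should no longer ask for a password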
--------------------------------------------
Homework:
1. Yarn pseudo-distributed deployment: +1 blog post
2. MR job: +1 blog post
3. Starting the HDFS processes on hadoop002: +1 blog post
4. Organize the jps notes into 1 blog post
5. Install one more VM;
   set up ssh trust among multiple machines: 1 blog post
6. Extension: after rm -rf ~/.ssh, for machine A to access machine B without a password, whose pub file is copied to whom?