| Hostname | External IP | Internal IP | OS         | Notes        | Installed software                   |
| -------- | ----------- | ----------- | ---------- | ------------ | ------------------------------------ |
| mini01   | 10.0.0.11   | 172.16.1.11 | CentOS 7.4 | ssh port: 22 | Hadoop [NameNode, SecondaryNameNode] |
| mini02   | 10.0.0.12   | 172.16.1.12 | CentOS 7.4 | ssh port: 22 | Hadoop [ResourceManager]             |
| mini03   | 10.0.0.13   | 172.16.1.13 | CentOS 7.4 | ssh port: 22 | Hadoop [DataNode, NodeManager]       |
| mini04   | 10.0.0.14   | 172.16.1.14 | CentOS 7.4 | ssh port: 22 | Hadoop [DataNode, NodeManager]       |
| mini05   | 10.0.0.15   | 172.16.1.15 | CentOS 7.4 | ssh port: 22 | Hadoop [DataNode, NodeManager]       |
Add entries to /etc/hosts so that every machine can reach every other one by hostname (verify with ping).
[root@mini01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11  mini01
10.0.0.12  mini02
10.0.0.13  mini03
10.0.0.14  mini04
10.0.0.15  mini05
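The per-host reachability check can be scripted; a minimal sketch, assuming the five hostnames above already resolve via /etc/hosts (`seq -f` builds the mini01..mini05 list):

```shell
# Build the node list mini01..mini05 and ping each host once.
nodes=$(seq -f 'mini%02g' 1 5)
for h in $nodes; do
    if ping -c 1 -W 1 "$h" > /dev/null 2>&1; then
        echo "$h reachable"
    else
        echo "$h UNREACHABLE"
    fi
done
```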
# Use a dedicated user instead of working as root directly
# Create the user, set its home directory and its password
useradd -d /app yun && echo '123456' | /usr/bin/passwd --stdin yun
# Grant sudo privileges
echo "yun ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# Let other regular users enter the directory and read its contents
chmod 755 /app/
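The same account must exist on every node. A sketch that repeats the setup remotely, assuming root SSH access still works at this early stage (hostnames are those from the plan above):

```shell
# Create the yun user with home /app on each remaining node, as root.
for h in mini02 mini03 mini04 mini05; do
    ssh root@"$h" "useradd -d /app yun \
        && echo '123456' | /usr/bin/passwd --stdin yun \
        && echo 'yun ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers \
        && chmod 755 /app/"
done
```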
Goal: following the plan, enable passwordless (key-based) login from mini01 to mini01, mini02, mini03, mini04, mini05, and from mini02 to mini01, mini02, mini03, mini04, mini05.
# Either IPs or hostnames may be used, but since the cluster is planned to communicate by hostname, hostnames are used here
# Keys distributed by hostname also allow remote login by IP as well as by hostname
# Enable passwordless login from mini01 to mini02, mini03, mini04, mini05
[yun@mini01 ~]$ ssh-keygen -t rsa   # just press Enter at every prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/app/.ssh/id_rsa):
Created directory '/app/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /app/.ssh/id_rsa.
Your public key has been saved in /app/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:rAFSIyG6Ft6qgGdVl/7v79DJmD7kIDSTcbiLtdKyTQk yun@mini01
The key's randomart image is:
+---[RSA 2048]----+
|. o.o .          |
|.. o . o..       |
|... . . o=       |
|..o. oE+B        |
|.o .. .*S*       |
|o .. +oB.. .= .  |
|o.o .* ..++ +    |
|oo . . oo.       |
|. .++o           |
+----[SHA256]-----+
# This creates a ".ssh" directory under the user's home directory
[yun@mini01 ~]$ ll -d .ssh/
drwx------ 2 yun yun 38 Jun  9 19:17 .ssh/
[yun@mini01 ~]$ ll .ssh/
total 8
-rw------- 1 yun yun 1679 Jun  9 19:17 id_rsa
-rw-r--r-- 1 yun yun  392 Jun  9 19:17 id_rsa.pub
# Either IP or hostname works; since the cluster communicates by hostname, hostname is used here
[yun@mini01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.1.11   # by IP (not used here)
# Distribute the key
[yun@mini01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini03   # by hostname (do the same for every node, mini01 through mini05)
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/app/.ssh/id_rsa.pub"
The authenticity of host '[mini03]:22 ([10.0.0.13]:22)' can't be established.
ECDSA key fingerprint is SHA256:pN2NUkgCTt+b9P5TfQZcTh4PF4h7iUxAs6+V7Slp1YI.
ECDSA key fingerprint is MD5:8c:f0:c7:d6:7c:b1:a8:59:1c:c1:5e:d7:52:cb:5f:51.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
yun@mini03's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh -p '22' 'mini03'"
and check to make sure that only the key(s) you wanted were added.
Distribute the key from mini01
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini01
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini02
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini03
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini04
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini05
Distribute the key from mini02
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini01
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini02
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini03
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini04
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini05
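The ten ssh-copy-id calls above can be collapsed into a loop; a sketch to run once on mini01 and once on mini02 (each iteration prompts for yun's password until that host has the key):

```shell
# Fan the local public key out to every node, including this one.
for h in mini01 mini02 mini03 mini04 mini05; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$h"
done
```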
Remote login test (best to test every pair)
[yun@mini02 ~]$ ssh mini05
Last login: Sat Jun  9 19:47:43 2018 from 10.0.0.11

Welcome You Login

[yun@mini05 ~]$   # remote login succeeded
[yun@mini01 .ssh]$ pwd
/app/.ssh
[yun@mini01 .ssh]$ ll
total 16
-rw------- 1 yun yun  784 Jun  9 19:43 authorized_keys
-rw------- 1 yun yun 1679 Jun  9 19:17 id_rsa
-rw-r--r-- 1 yun yun  392 Jun  9 19:17 id_rsa.pub
-rw-r--r-- 1 yun yun 1332 Jun  9 19:41 known_hosts
########################################################################################
authorized_keys : public keys allowed to log in without a password; this file accumulates the keys of multiple machines
id_rsa          : the generated private key
id_rsa.pub      : the generated public key
known_hosts     : host keys of machines this account has already connected to
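sshd silently refuses key-based login when these files are too permissive; if passwordless login still prompts for a password despite a correct authorized_keys, checking the modes on the target machine is a good first step (a sketch):

```shell
# Tighten the modes sshd expects on the login target.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_rsa        # the private key must not be group/world readable
```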
[yun@mini01 software]$ pwd
/app/software
[yun@mini01 software]$ tar xf jdk1.8.0_112.tar.gz
[yun@mini01 software]$ ll
total 201392
drwxr-xr-x 8   10  143      4096 Dec 20 13:27 jdk1.8.0_112
-rw-r--r-- 1 root root 189815615 Mar 12 16:47 jdk1.8.0_112.tar.gz
[yun@mini01 software]$ mv jdk1.8.0_112/ /app/
[yun@mini01 software]$ cd /app/
[yun@mini01 app]$ ll
total 8
drwxr-xr-x 8 10 143 4096 Dec 20 13:27 jdk1.8.0_112
[yun@mini01 app]$ ln -s jdk1.8.0_112/ jdk
[yun@mini01 app]$ ll
total 8
lrwxrwxrwx 1 root root   13 May 16 23:19 jdk -> jdk1.8.0_112/
drwxr-xr-x 8   10  143 4096 Dec 20 13:27 jdk1.8.0_112
[root@mini01 app]# pwd
/app
[root@mini01 app]# ll -d jdk*   # pick the JDK version to suit your situation; JDK 1.8 is backward compatible with 1.7
lrwxrwxrwx 1 yun yun   11 Mar 15 14:58 jdk -> jdk1.8.0_112
drwxr-xr-x 8 yun yun 4096 Dec 20 13:27 jdk1.8.0_112
[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# cat jdk.sh
# Java environment variables
export JAVA_HOME=/app/jdk
export JRE_HOME=/app/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
[root@mini01 profile.d]# source /etc/profile
[root@mini01 profile.d]# java -version
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
[yun@mini01 software]$ pwd
/app/software
[yun@mini01 software]$ ll
total 194152
-rw-r--r-- 1 yun yun 198811365 Jun  8 16:36 CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ tar xf CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ mv hadoop-2.7.6/ /app/
[yun@mini01 software]$ cd
[yun@mini01 ~]$ ln -s hadoop-2.7.6/ hadoop
[yun@mini01 ~]$ ll
total 4
lrwxrwxrwx 1 yun yun  13 Jun  9 16:21 hadoop -> hadoop-2.7.6/
drwxr-xr-x 9 yun yun 149 Jun  8 16:36 hadoop-2.7.6
lrwxrwxrwx 1 yun yun  12 May 26 11:18 jdk -> jdk1.8.0_112
drwxr-xr-x 8 yun yun 255 Sep 23  2016 jdk1.8.0_112
[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# vim hadoop.sh
export HADOOP_HOME="/app/hadoop"
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[root@mini01 profile.d]# source /etc/profile   # apply the change
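After sourcing the profile, a quick sanity check that both variables took effect (a sketch; the expected values follow from the paths set above):

```shell
# Confirm the JDK and Hadoop are visible to new shells.
source /etc/profile
echo "$JAVA_HOME"      # expected: /app/jdk
echo "$HADOOP_HOME"    # expected: /app/hadoop
command -v hadoop      # expected: /app/hadoop/bin/hadoop
hadoop version         # first line should name the Hadoop release
```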
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
……………………
<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- URI of the default file system; points at the HDFS master (NameNode) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mini01:9000</value>  <!-- mini01 is the hostname -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>

  <!-- Enable the trash feature; interval in minutes -->
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>

</configuration>
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
………………
<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- number of HDFS replicas -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <!-- Either name works. The SecondaryNameNode periodically merges the fsimage and edits log and keeps the edits log bounded. Ideally it runs on a different machine from the NameNode, since it needs as much memory as the NameNode. -->
    <!-- <name>dfs.secondary.http.address</name> -->
    <name>dfs.namenode.secondary.http-address</name>
    <value>mini01:50090</value>
  </property>
  <!-- NameNode metadata directories; several directories can be listed, each mounted on a different disk. Every directory holds identical files, which amounts to a backup. -->
  <!-- Uncomment if needed
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file://${hadoop.tmp.dir}/dfs/name,file://${hadoop.tmp.dir}/dfs/name1,file://${hadoop.tmp.dir}/dfs/name2</value>
  </property>
  -->
  <!-- dfs.datanode.data.dir can likewise be set to several directories, which adds storage capacity -->
</configuration>
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ mv mapred-site.xml.template mapred-site.xml
[yun@mini01 hadoop]$ vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
………………
<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- run MapReduce jobs on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim yarn-site.xml
<?xml version="1.0"?>
……………………
<configuration>

<!-- Site specific YARN configuration properties -->
  <!-- address of the YARN master (ResourceManager) -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>mini02</value>  <!-- per the plan, mini02 is the ResourceManager -->
  </property>
  <!-- how reducers fetch data -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
# This file is not used by the Hadoop daemons themselves; it only drives the batch start/stop scripts
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ cat slaves
mini03
mini04
mini05
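Everything so far was installed and configured on mini01 only, yet every node needs the same JDK, Hadoop tree, and profile scripts. A sketch of one way to copy them out over the passwordless SSH set up earlier (paths are those used above; the sudo step relies on the NOPASSWD rule granted to yun):

```shell
# Push the JDK and Hadoop trees plus the profile snippets to each node.
for h in mini02 mini03 mini04 mini05; do
    scp -r /app/jdk1.8.0_112 /app/hadoop-2.7.6 "$h":/app/
    ssh "$h" 'cd /app && ln -sf jdk1.8.0_112 jdk && ln -sf hadoop-2.7.6 hadoop'
    scp /etc/profile.d/jdk.sh /etc/profile.d/hadoop.sh "$h":/tmp/
    ssh "$h" 'sudo mv /tmp/jdk.sh /tmp/hadoop.sh /etc/profile.d/'
done
```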
(This initializes the NameNode.)
[yun@mini01 hadoop]$ hdfs namenode -format
18/06/09 17:44:56 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = mini01/10.0.0.11
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.6
………………
STARTUP_MSG:   java = 1.8.0_112
************************************************************/
18/06/09 17:44:56 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/06/09 17:44:56 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-72e356f5-7723-4960-885a-72e522e19be1
18/06/09 17:44:57 INFO namenode.FSNamesystem: No KeyProvider found.
18/06/09 17:44:57 INFO namenode.FSNamesystem: fsLock is fair: true
18/06/09 17:44:57 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
18/06/09 17:44:57 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/06/09 17:44:57 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/06/09 17:44:57 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/06/09 17:44:57 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jun 09 17:44:57
18/06/09 17:44:57 INFO util.GSet: Computing capacity for map BlocksMap
18/06/09 17:44:57 INFO util.GSet: VM type       = 64-bit
18/06/09 17:44:57 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
18/06/09 17:44:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/06/09 17:44:57 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/06/09 17:44:57 INFO blockmanagement.BlockManager: defaultReplication         = 3
18/06/09 17:44:57 INFO blockmanagement.BlockManager: maxReplication             = 512
18/06/09 17:44:57 INFO blockmanagement.BlockManager: minReplication             = 1
18/06/09 17:44:57 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/06/09 17:44:57 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/06/09 17:44:57 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/06/09 17:44:57 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/06/09 17:44:57 INFO namenode.FSNamesystem: fsOwner             = yun (auth:SIMPLE)
18/06/09 17:44:57 INFO namenode.FSNamesystem: supergroup          = supergroup
18/06/09 17:44:57 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/06/09 17:44:57 INFO namenode.FSNamesystem: HA Enabled: false
18/06/09 17:44:57 INFO namenode.FSNamesystem: Append Enabled: true
18/06/09 17:44:58 INFO util.GSet: Computing capacity for map INodeMap
18/06/09 17:44:58 INFO util.GSet: VM type       = 64-bit
18/06/09 17:44:58 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
18/06/09 17:44:58 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/06/09 17:44:58 INFO namenode.FSDirectory: ACLs enabled? false
18/06/09 17:44:58 INFO namenode.FSDirectory: XAttrs enabled? true
18/06/09 17:44:58 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/06/09 17:44:58 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/06/09 17:44:58 INFO util.GSet: Computing capacity for map cachedBlocks
18/06/09 17:44:58 INFO util.GSet: VM type       = 64-bit
18/06/09 17:44:58 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
18/06/09 17:44:58 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/06/09 17:44:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/06/09 17:44:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/06/09 17:44:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18/06/09 17:44:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/06/09 17:44:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/06/09 17:44:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/06/09 17:44:58 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/06/09 17:44:58 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/06/09 17:44:58 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/06/09 17:44:58 INFO util.GSet: VM type       = 64-bit
18/06/09 17:44:58 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
18/06/09 17:44:58 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/06/09 17:44:58 INFO namenode.FSImage: Allocated new BlockPoolId: BP-925531343-10.0.0.11-1528537498201
18/06/09 17:44:58 INFO common.Storage: Storage directory /app/hadoop/tmp/dfs/name has been successfully formatted.
18/06/09 17:44:58 INFO namenode.FSImageFormatProtobuf: Saving image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/06/09 17:44:58 INFO namenode.FSImageFormatProtobuf: Image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 319 bytes saved in 0 seconds.
18/06/09 17:44:58 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/06/09 17:44:58 INFO util.ExitUtil: Exiting with status 0
18/06/09 17:44:58 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at mini01/10.0.0.11
************************************************************/
[yun@mini01 hadoop]$ pwd
/app/hadoop
[yun@mini01 hadoop]$ ll
total 112
drwxr-xr-x 2 yun yun   194 Jun  8 16:36 bin
drwxr-xr-x 3 yun yun    20 Jun  8 16:36 etc
drwxr-xr-x 2 yun yun   106 Jun  8 16:36 include
drwxr-xr-x 3 yun yun    20 Jun  8 16:36 lib
drwxr-xr-x 2 yun yun   239 Jun  8 16:36 libexec
-rw-r--r-- 1 yun yun 86424 Jun  8 16:36 LICENSE.txt
-rw-r--r-- 1 yun yun 14978 Jun  8 16:36 NOTICE.txt
-rw-r--r-- 1 yun yun  1366 Jun  8 16:36 README.txt
drwxr-xr-x 2 yun yun  4096 Jun  8 16:36 sbin
drwxr-xr-x 4 yun yun    31 Jun  8 16:36 share
drwxrwxr-x 3 yun yun    17 Jun  9 17:44 tmp   # this directory did not exist before the format
[yun@mini01 hadoop]$ ll tmp/
total 0
drwxrwxr-x 3 yun yun 18 Jun  9 17:44 dfs
[yun@mini01 hadoop]$ ll tmp/dfs/
total 0
drwxrwxr-x 3 yun yun 21 Jun  9 17:44 name
[yun@mini01 hadoop]$ ll tmp/dfs/name/
total 0
drwxrwxr-x 2 yun yun 112 Jun  9 17:44 current
[yun@mini01 hadoop]$ ll tmp/dfs/name/current/
total 16
-rw-rw-r-- 1 yun yun 319 Jun  9 17:44 fsimage_0000000000000000000
-rw-rw-r-- 1 yun yun  62 Jun  9 17:44 fsimage_0000000000000000000.md5
-rw-rw-r-- 1 yun yun   2 Jun  9 17:44 seen_txid
-rw-rw-r-- 1 yun yun 199 Jun  9 17:44 VERSION
# Start on mini01
[yun@mini01 sbin]$ pwd
/app/hadoop/sbin
[yun@mini01 sbin]$ ./hadoop-daemon.sh start namenode   # stop with: hadoop-daemon.sh stop namenode
starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini01.out
[yun@mini01 sbin]$ jps
6066 Jps
5983 NameNode
[yun@mini01 sbin]$ ps -ef | grep 'hadoop'
yun 5983 1 6 17:55 pts/0 00:00:07 /app/jdk/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/app/hadoop-2.7.6/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/app/hadoop-2.7.6 -Dhadoop.id.str=yun -Dhadoop.root.logger=INFO,console -Djava.library.path=/app/hadoop-2.7.6/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/app/hadoop-2.7.6/logs -Dhadoop.log.file=hadoop-yun-namenode-mini01.log -Dhadoop.home.dir=/app/hadoop-2.7.6 -Dhadoop.id.str=yun -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/app/hadoop-2.7.6/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
yun 6160 2337 0 17:57 pts/0 00:00:00 grep --color=auto hadoop
[yun@mini01 sbin]$ netstat -lntup | grep '5983'
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 5983/java
tcp 0 0 10.0.0.11:9000 0.0.0.0:* LISTEN 5983/java
http://10.0.0.11:50070
# Start the datanode on mini03, mini04 and mini05
# Thanks to the environment variables, this can be run from any directory
[yun@mini02 ~]$ hadoop-daemon.sh start datanode   # stop with: hadoop-daemon.sh stop datanode
starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini02.out
[yun@mini02 ~]$ jps
5349 Jps
5263 DataNode
# As planned, start on mini01
[yun@mini01 hadoop]$ start-dfs.sh
Starting namenodes on [mini01]
mini01: starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini01.out
mini04: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini04.out
mini03: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini03.out
mini05: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini05.out
Starting secondary namenodes [mini01]
mini01: starting secondarynamenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-secondarynamenode-mini01.out
URL (HDFS management UI):
http://10.0.0.11:50070
# As planned, start on mini02
# Start YARN
[yun@mini02 hadoop]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-resourcemanager-mini02.out
mini05: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini05.out
mini04: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini04.out
mini03: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini03.out
URL (MapReduce management UI):
http://10.0.0.12:8088
##### mini01
[yun@mini01 hadoop]$ jps
16336 NameNode
16548 SecondaryNameNode
16686 Jps
##### mini02
[yun@mini02 hadoop]$ jps
10936 ResourceManager
11213 Jps
##### mini03
[yun@mini03 ~]$ jps
9212 Jps
8957 DataNode
9039 NodeManager
##### mini04
[yun@mini04 ~]$ jps
4130 NodeManager
4296 Jps
4047 DataNode
##### mini05
[yun@mini05 ~]$ jps
7011 DataNode
7091 NodeManager
7308 Jps
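With all daemons reporting in jps, a small end-to-end smoke test confirms HDFS actually accepts and serves data (a sketch; the file name and HDFS path are arbitrary, run as yun on any node):

```shell
# Write a file into HDFS and read it back.
echo 'hadoop smoke test' > /tmp/smoke.txt
hdfs dfs -mkdir -p /test
hdfs dfs -put -f /tmp/smoke.txt /test/
hdfs dfs -cat /test/smoke.txt    # should echo the line back
hdfs dfs -ls /test               # should list smoke.txt with replication 3
```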