Error message:
./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-master.out
spark-slave1: ssh: Could not resolve hostname spark-slave1: Name or service not known
spark-slave2: ssh: Could not resolve hostname spark-slave2: Name or service not known
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn--resourcemanager-master.out
spark-slave2: ssh: Could not resolve hostname spark-slave2: Name or service not known
spark-slave1: ssh: Could not resolve hostname spark-slave1: Name or service not known
Analysis: the error essentially says the hostnames spark-slave1 and spark-slave2 cannot be resolved, yet my worker nodes are clearly named node1 and node2. After a long search I finally found the cause.

In the slaves file:

vim /usr/local/hadoop/etc/hadoop/slaves

spark-slave1
spark-slave2
The default worker hostnames had been left in place; changing them to my own worker nodes fixes it:
vim /usr/local/hadoop/etc/hadoop/slaves
node1
node2
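Note that slaves only tells Hadoop which names to contact over ssh; for node1 and node2 to actually resolve, each name must map to an IP address on the master (and on the workers). A minimal /etc/hosts sketch, with placeholder 172.16.0.x addresses standing in for the real ones:

# /etc/hosts (example addresses; substitute your nodes' real IPs)
172.16.0.100  master
172.16.0.101  node1
172.16.0.102  node2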
Then restart Hadoop:
./stop-all.sh   # stop
./start-all.sh  # start
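Incidentally, the startup log warns that start-all.sh is deprecated. A sketch of the recommended equivalent, assuming the same /usr/local/hadoop/sbin directory, stops and starts HDFS and YARN with their dedicated scripts:

cd /usr/local/hadoop/sbin
./stop-dfs.sh    # stop NameNode / DataNodes / SecondaryNameNode
./stop-yarn.sh   # stop ResourceManager / NodeManagers
./start-dfs.sh   # start HDFS daemons
./start-yarn.sh  # start YARN daemons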
After that, the errors were gone and the worker nodes started successfully:
@master:/usr/local/hadoop/sbin# jps
5698 ResourceManager
6403 Jps
5547 SecondaryNameNode
5358 NameNode

@node1:~# jps
885 Jps
744 NodeManager
681 DataNode

@node2:~# jps
914 Jps
773 NodeManager
710 DataNode
Summary: the official documentation explains Hadoop's slaves file as follows: one machine in the cluster is designated the NameNode and a different machine is designated the JobTracker; these machines are the masters. The remaining machines act as both DataNode and TaskTracker; these are the slaves. List the hostname or IP address of every slave in the slaves file, one per line. In short, the file holds the worker nodes' hostnames or IPs, and IP addresses (for instance in the 172.x private range) work just as well.
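For example, an equivalent slaves file written with IP addresses instead of hostnames might look like this (placeholder addresses matching the /etc/hosts sketch above):

vim /usr/local/hadoop/etc/hadoop/slaves

172.16.0.101
172.16.0.102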