Hadoop basic commands

 bin/zkServer.sh start

 

 air00:

 hdfs zkfc -formatZK

 hadoop-daemon.sh start journalnode

 hadoop namenode -format mycluster

 hadoop-daemon.sh start namenode

 hadoop-daemon.sh start zkfc

 hadoop-daemon.sh start datanode

 yarn-daemon.sh start resourcemanager

 yarn-daemon.sh start nodemanager

 mr-jobhistory-daemon.sh start historyserver
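The air00 sequence above can be sketched as one script. This is a minimal sketch, assuming `zkServer.sh`, the `hadoop-daemon.sh` family, and `hdfs` are on PATH; with `DRY_RUN=1` (the default here) it only prints each command in order, which is the point: the ZooKeeper quorum and JournalNodes must be up before formatting and starting the NameNode.

```shell
#!/bin/sh
# Dry-run wrapper for the air00 start-up order above.
# Set DRY_RUN=0 to actually execute on a configured node.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$@"          # print the command instead of running it
    else
        "$@"               # execute for real
    fi
}

run bin/zkServer.sh start
run hdfs zkfc -formatZK
run hadoop-daemon.sh start journalnode
run hadoop namenode -format mycluster
run hadoop-daemon.sh start namenode
run hadoop-daemon.sh start zkfc
run hadoop-daemon.sh start datanode
run yarn-daemon.sh start resourcemanager
run yarn-daemon.sh start nodemanager
run mr-jobhistory-daemon.sh start historyserver
```

The same `run` wrapper works for the air01 and air02 sequences below; only the command list changes per host.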

 air01:

  hadoop-daemon.sh start journalnode

  hadoop-daemon.sh start datanode

  yarn-daemon.sh start nodemanager

 air02:

  hadoop-daemon.sh start journalnode

  hdfs namenode -bootstrapStandby

  hadoop-daemon.sh start namenode

  hadoop-daemon.sh start zkfc

  hadoop-daemon.sh start datanode

  yarn-daemon.sh start resourcemanager

  yarn-daemon.sh start nodemanager

  

namespaceID=284002105

clusterID=CID-2d58d0cc-6eb2-4f0a-b53a-6f6939960491

blockpoolID=BP-462449294-192.168.119.135-1415001575870

layoutVersion=-55
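The IDs above come from the `current/VERSION` file under the NameNode's metadata directory; DataNodes must carry the same `clusterID`, or they will refuse to join after a reformat. A small sketch, using the values from the listing above as a sample file, shows how to pull the `clusterID` out for comparison:

```shell
# Write a sample VERSION file (values copied from the listing above)
# and extract its clusterID, as you would when checking that a
# DataNode's VERSION file matches the NameNode's after a reformat.
cat > /tmp/VERSION.sample <<'EOF'
namespaceID=284002105
clusterID=CID-2d58d0cc-6eb2-4f0a-b53a-6f6939960491
blockpoolID=BP-462449294-192.168.119.135-1415001575870
layoutVersion=-55
EOF

cluster_id=$(sed -n 's/^clusterID=//p' /tmp/VERSION.sample)
echo "$cluster_id"
```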




Format the NameNode

# hdfs namenode -format

# hadoop namenode -format

Start the HDFS daemons

# hadoop-daemon.sh start namenode

# hadoop-daemon.sh start datanode

Or start both at once:

# start-dfs.sh

Start the YARN daemons

# yarn-daemon.sh start resourcemanager

# yarn-daemon.sh start nodemanager

Or start both at once:

# start-yarn.sh

Check that the daemons are running

# jps


2539 NameNode
2744 NodeManager
3075 Jps
3030 DataNode
2691 ResourceManager
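Checking that output by eye does not scale past a couple of nodes. A sketch of automating it: every expected daemon name must appear in the `jps` listing. Here the sample output from above stands in for a live `jps` call via a heredoc; on a real node, substitute `jps_out=$(jps)`.

```shell
# Verify that each expected daemon appears in the jps output.
# The heredoc below reuses the sample listing from above.
jps_out=$(cat <<'EOF'
2539 NameNode
2744 NodeManager
3075 Jps
3030 DataNode
2691 ResourceManager
EOF
)

missing=0
for daemon in NameNode DataNode ResourceManager NodeManager; do
    # jps prints "<pid> <name>", so anchor on " <name>" at end of line
    echo "$jps_out" | grep -q " $daemon$" || { echo "missing: $daemon"; missing=1; }
done
echo "missing=$missing"
```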

Browse the web UI

Open localhost:8088 to view the ResourceManager page

hdfs dfsadmin -refreshNodes  refresh the node list

 hdfs dfsadmin -report  report node health
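The report output can also be scripted against, e.g. to alert on dead DataNodes. This sketch parses an assumed summary line (the exact report format varies across Hadoop versions); in practice, pipe the real `hdfs dfsadmin -report` output in place of the heredoc.

```shell
# Extract the dead-node count from a dfsadmin -report summary line.
# The sample line below is an assumed format, not captured output.
report=$(cat <<'EOF'
Datanodes available: 3 (4 total, 1 dead)
EOF
)

dead=$(echo "$report" | sed -n 's/.*(\([0-9]*\) total, \([0-9]*\) dead).*/\2/p')
echo "dead=$dead"
```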



Getting the name of the file backing the current split inside the map method

// Inside Mapper.map(): getInputSplit() already returns an InputSplit,
// so only the FileSplit cast is needed to reach the file path.
InputSplit inputSplit = context.getInputSplit();
String filename = ((FileSplit) inputSplit).getPath().getName();

vim /etc/sysconfig/network


NETWORKING=yes

HOSTNAME=zw_76_42

On Fedora 19, change the hostname in:

/etc/hostname 
