Start each daemon individually:
sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager
A shell script (xxx.sh) is simply a file of commands such as ls or mkdir. Collect the four start commands into one script so they can be run together:
hadoop-start.sh:
sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager
Make it executable:
chmod 744 hadoop-start.sh
1. Relative path
./hadoop-start.sh
2. Absolute path
/opt/install/hadoop-2.5.2/hadoop-stop.sh
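The script creation above can be sketched end to end; the heredoc below simply writes the four start commands into hadoop-start.sh and marks it executable (file name and permissions as in the text):

```shell
# Write hadoop-start.sh with the four daemon start commands, then make
# it executable (744: owner rwx, group/other read-only).
cat > hadoop-start.sh <<'EOF'
#!/bin/bash
# Start the HDFS and YARN daemons one by one.
sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager
EOF
chmod 744 hadoop-start.sh
```

Mode 744 keeps execute rights with the owner only, which is usually what you want for root-run admin scripts.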
Principles of HDFS cluster configuration
SSH passwordless login
Generate a public/private key pair:
ssh-keygen -t rsa
Send the public key to the remote host:
ssh-copy-id user@ip
Modify the slaves file:
vi /opt/install/hadoop-2.5.2/etc/hadoop/slaves
Add each slave's IP address (or hostname), one per line.
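Distributing the public key to every host in the slaves file can be wrapped in a small helper. The function name `copy_key_to_slaves` and the `SSH_COPY_ID` override are illustrative, not part of Hadoop:

```shell
# Push the local public key to each slave host so that start-dfs.sh can
# log in without a password. SSH_COPY_ID is overridable only so the
# sketch can be dry-run; by default it calls the real ssh-copy-id.
copy_key_to_slaves() {
    local cmd=${SSH_COPY_ID:-ssh-copy-id}
    local host
    for host in "$@"; do          # pass the hostnames from the slaves file
        "$cmd" "root@$host"
    done
}

# Real use: copy_key_to_slaves hadoop1 hadoop2
```

Run ssh-keygen once on the master first; ssh-copy-id then appends the key to each slave's authorized_keys.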
HDFS cluster setup
SSH passwordless login:
ssh-keygen -t rsa
ssh-copy-id user@ip
Clear the stale MAC address binding (the udev rule left over from a cloned VM):
rm -rf /etc/udev/rules.d/70-persistent-net.rules
Configure the network:
1. Set the IP address
2. Set the hostname and the hosts mapping
3. Disable the firewall
4. Disable SELinux
Install Hadoop and the JDK
1. Install the JDK
2. Unpack Hadoop
3. Edit the configuration files (they must be identical on every node): hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, slaves
4. Format the NameNode, on the node where it will run (first clear any old contents of data/tmp):
bin/hdfs namenode -format
5. Start the services:
sbin/start-dfs.sh
Output like the following means success (if a slave node fails to connect, ssh to it manually first to confirm that login needs no password or prompt):
[root@hadoop hadoop-2.5.2]# sbin/start-dfs.sh
19/01/23 04:09:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop]
hadoop: starting namenode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-namenode-hadoop.out
hadoop2: starting datanode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-datanode-hadoop2.out
hadoop: starting datanode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-datanode-hadoop.out
hadoop1: starting datanode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-datanode-hadoop1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-secondarynamenode-hadoop.out
19/01/23 04:10:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
6. Run jps on the master:
[root@hadoop hadoop-2.5.2]# jps
3034 DataNode
3178 SecondaryNameNode
3311 Jps
2946 NameNode
2824 GetConf
7. Run jps on each slave; output like the following is normal:
[root@hadoop1 etc]# jps
1782 Jps
1715 DataNode
Visit hadoop:50070 to check the DataNodes.
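The jps checks in steps 6 and 7 can be scripted; this sketch (the `check_daemons` helper is hypothetical) scans jps output for the daemons shown above and reports any that are missing:

```shell
# Scan jps output for the daemons expected on the master node and
# report any that are missing; returns non-zero when something is down.
check_daemons() {
    local jps_output=$1
    local missing=0
    local d
    for d in NameNode DataNode SecondaryNameNode; do
        if ! echo "$jps_output" | grep -qw "$d"; then
            echo "missing: $d"
            missing=1
        fi
    done
    return $missing
}

# Real use: check_daemons "$(jps)" || echo "cluster unhealthy"
```

On a slave you would check only for DataNode, matching step 7.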
NameNode persistence [background]
What is NameNode persistence? The NameNode keeps the HDFS metadata in memory and persists it to disk as a checkpoint file (FSImage) plus a journal of the changes made since that checkpoint (EditsLog).
Default storage location of the FSImage and EditsLog files
# Default storage location: /opt/install/hadoop-2.5.2/data/tmp/dfs/name
hadoop.tmp.dir=/opt/install/hadoop-2.5.2/data/tmp
dfs.namenode.name.dir=file://${hadoop.tmp.dir}/dfs/name
dfs.namenode.edits.dir=${dfs.namenode.name.dir}
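To confirm what is actually in the name directory, a helper like the one below (illustrative, not part of Hadoop) can pick out the newest checkpoint; Hadoop's offline image viewer, bin/hdfs oiv, can then dump it to XML:

```shell
# Print the newest fsimage checkpoint in a NameNode name directory
# (checkpoints are named fsimage_<transaction-id>, zero-padded, so a
# lexicographic sort finds the latest; .md5 companions are skipped).
latest_fsimage() {
    local dir=$1
    ls "$dir"/fsimage_* 2>/dev/null | grep -v '\.md5$' | sort | tail -n 1
}

# Real use:
# latest_fsimage /opt/install/hadoop-2.5.2/data/tmp/dfs/name/current
# bin/hdfs oiv -p XML -i <that file> -o fsimage.xml
```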
How to customize the FSImage and EditsLog storage locations?
hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value>/xxx/xxx</value>
</property>
<property>
<name>dfs.namenode.edits.dir</name>
<value>/xxx/xxx</value>
</property>
Safe mode (safemode)
Every time the NameNode restarts, it merges the EditsLog into the FSImage. To prevent user writes from interfering while this merge runs, HDFS enters safe mode (safemode), in which user write operations are not allowed. When the merge completes, safe mode exits automatically.
Manually controlling safe mode:
bin/hdfs dfsadmin -safemode enter|leave|get
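Building on the get subcommand, a script can poll until safe mode ends (Hadoop also ships bin/hdfs dfsadmin -safemode wait, which blocks for you); the `HDFS` override below exists only so the sketch can run without a cluster:

```shell
# Poll "hdfs dfsadmin -safemode get" until the NameNode leaves safe mode.
# HDFS is overridable so the loop can be exercised without a live cluster.
wait_safemode_off() {
    local hdfs=${HDFS:-bin/hdfs}
    local tries=${1:-30}
    local i
    for i in $(seq "$tries"); do
        if "$hdfs" dfsadmin -safemode get | grep -q 'Safe mode is OFF'; then
            return 0
        fi
        sleep 1
    done
    return 1              # still in safe mode after all attempts
}

# Real use: wait_safemode_off 60 && echo "safe to write"
```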
SecondaryNameNode
Periodically merges the FSImage and EditsLog.
If the NameNode process dies and the disk copies of FSImage and EditsLog are damaged, it can partially restore the NameNode's data.
The FSImage and EditsLog that the SecondaryNameNode fetches are stored under /opt/install/hadoop-2.5.2/data/tmp/dfs/namesecondary
How the SecondaryNameNode restores the NameNode's data:
1. Point the NameNode's persisted FSImage and EditsLog at new locations in hdfs-site.xml:
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/install/nn/fs</value>
</property>
<property>
<name>dfs.namenode.edits.dir</name>
<value>file:///opt/install/nn/edits</value>
</property>
2. kill the NameNode process (only to simulate a crash). Check its log at /logs/hadoop-root-namenode-hadoop.log; tail -100 shows the latest 100 lines.
3. Restore the NameNode from the SecondaryNameNode:
sbin/hadoop-daemon.sh start namenode -importCheckpoint
If the NameNode does not start, check whether the data/tmp/dfs/namesecondary directory is locked; if so, delete the in_use.lock file inside it:
rm -rf /opt/install/hadoop-2.5.2/data/tmp/dfs/namesecondary/in_use.lock
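Steps 2–3 of the recovery, including the in_use.lock cleanup, can be sketched as one helper; `restore_namenode` and the `DAEMON` override are illustrative names, not Hadoop tooling:

```shell
# Restore NameNode metadata from the SecondaryNameNode checkpoint:
# remove a stale lock if one was left behind, then start the NameNode
# with -importCheckpoint. DAEMON is overridable so the sketch can be
# exercised without a running cluster.
restore_namenode() {
    local secondary_dir=$1    # e.g. /opt/install/hadoop-2.5.2/data/tmp/dfs/namesecondary
    local daemon=${DAEMON:-sbin/hadoop-daemon.sh}
    # A leftover in_use.lock from the crashed NameNode blocks the import.
    rm -f "$secondary_dir/in_use.lock"
    "$daemon" start namenode -importCheckpoint
}

# Real use:
# restore_namenode /opt/install/hadoop-2.5.2/data/tmp/dfs/namesecondary
```

The import only partially restores state: changes made after the SecondaryNameNode's last checkpoint are lost, as noted above.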