Tools: Java, Xshell
Install packages (Linux):
hadoop-2.6.0.tar.gz -> 2.4.1: http://archive.apache.org/dist/hadoop/core/hadoop-2.4.1/
----------5/19/2017----------start
https://archive.apache.org/dist/hadoop/common/hadoop-2.5.0/hadoop-2.5.0.tar.gz
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u112-b15/jdk-8u112-linux-x64.tar.gz
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" https://archive.apache.org/dist/hadoop/common/hadoop-2.5.0/hadoop-2.5.0.tar.gz
----------5/19/2017----------end
jdk-7u9-linux-i586.tar.gz
Packages used later:
hbase-0.94.2.tar.gz
hive-0.9.0.tar.gz
pig-0.10.0.tar.gz
zookeeper-3.4.3.tar.gz
Add the user and group
groupadd hadoop
useradd hadoop -g hadoop
Switch to that user
su hadoop
Exit
exit
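If you also want to log in as this user over SSH (rather than only su from root), give it a password first; a minimal sketch:
passwd hadoop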
Install the JDK (as the root user)
plan a: rpm
plan b: just extract the tarball
mkdir /usr/java
tar -zxvf jdk-7u9-linux-i586.tar.gz -C /usr/java
Create a symlink:
ln -s /usr/java/jdk1.7.0_09 /usr/java/jdk #point it at whatever directory the tarball actually extracted to
Configure the environment variables:
Edit /etc/profile (vi /etc/profile) and append at the end:
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
Apply the changes: source /etc/profile
Verify with echo $PATH and java -version
--------------------------------------------------------------------------
SSH and passwordless login
Install the SSH client:
yum -y install openssh-clients
=> At this point the VM can be cloned
ssh master
Generate a passwordless key pair:
ssh-keygen -t rsa
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
(Later you can push the public key to other machines, e.g. ssh-copy-id 192.168.137.44)
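If passwordless login still prompts for a password, it is almost always a permissions problem on the key files; a quick check (a sketch, run as the hadoop user):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh master #should now log in without a password prompt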
--------------------------------------------------------------------------
Clone the VM
Clone -> Full clone
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Edit it to match the VM's real MAC address (shown under Settings -> Network):
DEVICE="eth1" HWADDR=... IPADDR=192.168.56.3
Change eth0 to eth1
mv /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
Restart the network: service network restart
The steps above can be repeated to clone as many VMs as needed
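On CentOS 6 a full clone also keeps the old MAC in the udev rules and the old hostname, which is worth cleaning up; a sketch (the hostname slave1 is just an example):
rm -f /etc/udev/rules.d/70-persistent-net.rules #regenerated with the new MAC on reboot
vi /etc/sysconfig/network #set HOSTNAME=slave1
reboot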
---------------------------------------------------------------
Install Hadoop
Download: http://archive.apache.org/dist/hadoop/core/stable
Extract:
tar -zxvf hadoop-2.6.0.tar.gz -C /opt/ #older guides install under /usr/local; nowadays /opt is the usual place
mv /opt/hadoop-2.6.0 /opt/hadoop #rename for convenience (if you do this, use /opt/hadoop instead of /opt/hadoop-2.6.0 in the configs below)
chown -R hadoop:hadoop /opt/hadoop #hand the directory over to the hadoop user
su hadoop #do the remaining configuration as the hadoop user
Config 0:
vi /etc/profile
export JAVA_HOME=/usr/java/jdk
export HADOOP_HOME=/opt/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
source /etc/profile
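The start/stop scripts in Hadoop 2.x live under sbin, so it is convenient to add that to the PATH as well; a sketch reusing the variables above:
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin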
Config 1:
hadoop-env.sh
export JAVA_HOME=/usr/java/jdk
Config 2: vim core-site.xml (a hostname is recommended rather than a raw IP)
<configuration>
    <!-- Address of the HDFS NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.137.2:9000</value>
    </property>
    <!-- Directory for the files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.6.0/tmp</value>
    </property>
</configuration>
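As the note above says, a hostname is preferable to a hard-coded IP; a sketch of the same value using the name master, assuming every node has the mapping in /etc/hosts:
/etc/hosts (on every node):  192.168.137.2  master
core-site.xml:               <value>hdfs://master:9000</value>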
Config 3: hdfs-site.xml
<configuration>
    <!-- Number of replicas HDFS keeps of each block -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
Config 4: mv mapred-site.xml.template mapred-site.xml
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Config 5: yarn-site.xml
<configuration>
    <!-- NodeManagers serve map output to reducers via the shuffle service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Hostname of the YARN ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
</configuration>
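If DataNodes/NodeManagers are also supposed to run on the cloned VMs, list their hostnames in the slaves file so the start scripts can reach them; a sketch (slave1 and slave2 are made-up names):
$HADOOP_HOME/etc/hadoop/slaves:
master
slave1
slave2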
Initialize HDFS
hdfs namenode -format
This creates the tmp directory (the hadoop.tmp.dir configured above)
Start Hadoop
./start-all.sh #under $HADOOP_HOME/sbin
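start-all.sh is marked deprecated in Hadoop 2.x; the equivalent is to start HDFS and YARN separately (also under $HADOOP_HOME/sbin):
./start-dfs.sh
./start-yarn.sh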
Verify with the jps command; the following processes should be present:
ResourceManager
NodeManager
NameNode
Jps
SecondaryNameNode
DataNode
http://192.168.137.2:50070/dfsnodelist.jsp?whatNodes=LIVE
http://192.168.137.2:50075/browseDirectory.jsp?dir=%2F&go=go&namenodeInfoPort=50070&nnaddr=192.168.137.2%3A9000
http://192.168.137.2:8088
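Besides the web UIs, a quick smoke test of HDFS itself (a sketch; /test and the file copied in are arbitrary choices):
hdfs dfs -mkdir /test
hdfs dfs -put /etc/profile /test
hdfs dfs -ls /test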
If the pages cannot be reached, stop the firewall: service iptables stop
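service iptables stop only lasts until the next reboot; on CentOS 6 you can also keep the firewall off across reboots:
chkconfig iptables off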
Error:
Could not get the namenode ID of this node.
The key is dfs.ha.namenode.id (defined in hdfs-default.xml inside hadoop-hdfs-2.6.0.jar)
Background: http://blog.csdn.net/chenpingbupt/article/details/7922004
public static String getNameNodeId(Configuration conf, String nsId) {
    String namenodeId = conf.getTrimmed(DFS_HA_NAMENODE_ID_KEY);
    if (namenodeId != null) {
        return namenodeId;
    }
    String suffixes[] = DFSUtil.getSuffixIDs(conf, DFS_NAMENODE_RPC_ADDRESS_KEY,
        nsId, null, DFSUtil.LOCAL_ADDRESS_MATCHER);
    if (suffixes == null) {
        String msg = "Configuration " + DFS_NAMENODE_RPC_ADDRESS_KEY +
            " must be suffixed with nameservice and namenode ID for HA " +
            "configuration.";
        throw new HadoopIllegalArgumentException(msg);
    }
    return suffixes[1];
}
DFS_HA_NAMENODE_ID_KEY = "dfs.ha.namenode.id";
DFS_NAMENODE_RPC_ADDRESS_KEY = "dfs.namenode.rpc-address";
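In other words, under HA the RPC address keys have to carry a nameservice and NameNode ID suffix so the code above can work out which NameNode the local machine is; a sketch of that naming (ns1, nn1, nn2 are made-up identifiers):
<name>dfs.nameservices</name>                  <value>ns1</value>
<name>dfs.ha.namenodes.ns1</name>              <value>nn1,nn2</value>
<name>dfs.namenode.rpc-address.ns1.nn1</name>  <value>master:9000</value>
<name>dfs.namenode.rpc-address.ns1.nn2</name>  <value>slave1:9000</value>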
Make sure iptables is stopped first
0 Check every configuration file on every machine
1 Check whether any configuration file is missing
2 Check that passwordless SSH between the machines works
=> Here the cause turned out to be the NameNode being configured on the wrong machine
Tips:
Copying files across machines
e.g. scp ./id_rsa.pub root@10.28.8.20:/home/hadoop
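scp also handles whole directories, which is handy for pushing the finished Hadoop configuration to the other nodes; a sketch (slave1 is a made-up hostname):
scp -r /opt/hadoop-2.6.0/etc/hadoop hadoop@slave1:/opt/hadoop-2.6.0/etc/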