Concepts:
Hadoop is reliable, scalable, open-source software for distributed computing.
It is a framework that allows the distributed processing of large data sets across clusters of computers using a simple programming model (MapReduce).
It scales from a single server up to thousands of machines, with each node providing local computation and storage.
Rather than relying on hardware for high availability (HA), failure handling is implemented at the application layer.
The 4 Vs of big data:
volume: large data volume
velocity: high speed of data generation and processing
variety: many data formats
value: low value density
Modules:
Hadoop Common: common utilities that support the other modules
HDFS (Hadoop Distributed File System): Hadoop's distributed file system
Hadoop YARN: a framework for job scheduling and cluster resource management
Hadoop MapReduce: a YARN-based system for parallel processing of large data sets
Hostname | IP address | Node roles
---|---|---
hadoop-1 | 172.20.2.203 | namenode/datanode/nodemanager
hadoop-2 | 172.20.2.204 | secondarynamenode/datanode/nodemanager
hadoop-3 | 172.20.2.205 | resourcemanager/datanode/nodemanager
a. Configure the Java environment
```
yum install java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel -y
cat >/etc/profile.d/java.sh<<EOF
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-3.b14.el6_9.x86_64
export CLASSPATH=.:\$JAVA_HOME/jre/lib/rt.jar:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar
export PATH=\$PATH:\$JAVA_HOME/bin
EOF
source /etc/profile.d/java.sh
```
b. Set the hostname and add hosts entries
```
hostname hadoop-1
cat >>/etc/hosts<<EOF
172.20.2.203 hadoop-1
172.20.2.204 hadoop-2
172.20.2.205 hadoop-3
EOF
```
c. Create the hadoop user and directories
```
useradd hadoop
echo "hadoopwd" | passwd hadoop --stdin
mkdir -pv /data/hadoop/hdfs/{nn,snn,dn}
chown -R hadoop:hadoop /data/hadoop/hdfs/
mkdir -p /var/log/hadoop/yarn
mkdir -p /dbapps/hadoop/logs
chmod g+w /dbapps/hadoop/logs/
chown -R hadoop:hadoop /dbapps/hadoop/
```
d. Configure the Hadoop environment variables
```
cat >/etc/profile.d/hadoop.sh<<EOF
export HADOOP_PREFIX=/usr/local/hadoop
export PATH=\$PATH:\$HADOOP_PREFIX/bin:\$HADOOP_PREFIX/sbin
export HADOOP_COMMON_HOME=\${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=\${HADOOP_PREFIX}
export HADOOP_MAPRED_HOME=\${HADOOP_PREFIX}
export HADOOP_YARN_HOME=\${HADOOP_PREFIX}
EOF
source /etc/profile.d/hadoop.sh
```
e. Download and extract the package
```
mkdir /software
cd /software
wget -c http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
tar -zxf hadoop-2.6.5.tar.gz -C /usr/local
ln -sv /usr/local/hadoop-2.6.5/ /usr/local/hadoop
chown -R hadoop:hadoop /usr/local/hadoop-2.6.5/
```
f. Configure passwordless SSH for the hadoop user
```
su - hadoop
ssh-keygen -t rsa
for num in `seq 1 3`; do ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@hadoop-$num; done
```
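Optionally, a quick check that passwordless login works from hadoop-1 to every node; this loop is a minimal sketch and assumes only the hadoop user and hostnames defined above:

```
# Run as the hadoop user on hadoop-1: each command should print the
# remote hostname without prompting for a password.
for num in `seq 1 3`; do ssh hadoop-$num hostname; done
```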
Configure the master node
The hadoop-1 node runs the namenode/datanode/nodemanager; edit the Hadoop configuration files on hadoop-1.
core-site.xml
(defines the namenode)
```
cat >/usr/local/hadoop/etc/hadoop/core-site.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-1:8020</value>
    <final>true</final>
  </property>
</configuration>
EOF
```
hdfs-site.xml
Set dfs.replication to the number of replicas to keep (it cannot exceed the number of datanodes; 2 is used here) and define the secondary namenode address.
```
cat >/usr/local/hadoop/etc/hadoop/hdfs-site.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop-2:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/hdfs/nn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/hdfs/dn</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>file:///data/hadoop/hdfs/snn</value>
  </property>
  <property>
    <name>fs.checkpoint.edits.dir</name>
    <value>file:///data/hadoop/hdfs/snn</value>
  </property>
</configuration>
EOF
```
Add mapred-site.xml
```
cat >/usr/local/hadoop/etc/hadoop/mapred-site.xml <<EOF
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF
```
yarn-site.xml
Set the corresponding values to the hostname of the resourcemanager node (hadoop-3 in this cluster).
```
cat >/usr/local/hadoop/etc/hadoop/yarn-site.xml <<EOF
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop-3:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop-3:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop-3:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop-3:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop-3:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
</configuration>
EOF
```
slaves
(defines the data nodes)
```
cat >/usr/local/hadoop/etc/hadoop/slaves <<EOF
hadoop-1
hadoop-2
hadoop-3
EOF
```
Apply the same steps to hadoop-2 and hadoop-3; the simplest approach is to distribute the files from hadoop-1 directly to hadoop-2/3 (see the sketch below).
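The original post does not show a distribution command; a minimal sketch, assuming steps a-f have already been done on all three hosts (so /usr/local/hadoop exists and is owned by hadoop everywhere) and using the passwordless SSH configured above (scp -r works equally well):

```
# Run as the hadoop user on hadoop-1: push the finished configuration
# directory to the other two nodes.
for num in 2 3; do
  rsync -a /usr/local/hadoop/etc/hadoop/ hadoop-$num:/usr/local/hadoop/etc/hadoop/
done
```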
Run the format on the NameNode machine (hadoop-1):
```
# Run as the hadoop user.
hdfs namenode -format
```
Start the services
On the NameNode (hadoop-1), run start-all.sh (a sketch follows).
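A minimal start-up sketch, assuming the sbin directory is on the PATH via HADOOP_PREFIX as set earlier; start-all.sh is deprecated in Hadoop 2.x but still works and is equivalent to start-dfs.sh followed by start-yarn.sh. The resourcemanager itself is started on hadoop-3 in the next step:

```
# Run as the hadoop user on hadoop-1.
su - hadoop
start-all.sh    # starts HDFS daemons and the nodemanagers on all slaves
```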
Start the resourcemanager service on hadoop-3 (a sketch follows).
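The original post does not show the exact command; a minimal sketch using the standard Hadoop 2.x daemon script, run as the hadoop user on hadoop-3:

```
# On hadoop-3, as the hadoop user:
/usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager
```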
Check the services on hadoop-2
Check the services on hadoop-3
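The original screenshots are not reproduced here; a quick way to check which daemons are running on each node is jps (shipped with the JDK installed above). The commented process lists are what this layout is expected to show, not captured output:

```
# On each node, as the hadoop user:
jps
# hadoop-1: NameNode, DataNode, NodeManager
# hadoop-2: SecondaryNameNode, DataNode, NodeManager
# hadoop-3: ResourceManager, DataNode, NodeManager
```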
Run the bundled pi example to verify that MapReduce jobs run on YARN:

```
yarn jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar pi 2 10
```
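Optionally, a basic HDFS smoke test; the paths below are hypothetical and not from the original post:

```
# Create a directory, upload a local file, and list it back.
hdfs dfs -mkdir -p /tmp/smoke
hdfs dfs -put /etc/hosts /tmp/smoke/
hdfs dfs -ls /tmp/smoke
```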
HDFS NameNode web UI: http://hadoop-1:50070 (the default NameNode HTTP port in Hadoop 2.x)
YARN ResourceManager web UI: http://hadoop-3:8088 (yarn.resourcemanager.webapp.address configured above)