Hadoop Installation Guide

Part 1. Preparation

  These preparation steps must be carried out on all three machines. First create one virtual machine with VMware; this machine is the master, and it will later be cloned to produce the two slaves.

    Click OK, then power on the virtual machine.

    Then add the /boot partition, 1 GB in size, with the ext4 file system.

    

    Then add the swap partition. Note that the swap partition should be twice the size of RAM, and its file system type is swap.

    

    Then click Done.

    Then wait for the installation to finish and click Reboot.

    

    

    At this point the OS installation is complete. Next, configure the network.

    

    

    After clicking OK, go back in and check the gateway.

    

    

    Click Cancel and make a note of this gateway.

    1. First, change the hostname:

[root@localhost ~]# hostnamectl set-hostname wangmaster 
[root@localhost ~]# hostname wangmaster 
[root@localhost ~]# exit

    2. Log back in, then configure the network interface:

[root@wangmaster ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736  
TYPE=Ethernet 
BOOTPROTO=static 
DEFROUTE=yes 
PEERDNS=yes 
PEERROUTES=yes 
IPV4_FAILURE_FATAL=no 
IPV6INIT=yes 
IPV6_AUTOCONF=yes 
IPV6_DEFROUTE=yes 
IPV6_PEERDNS=yes 
IPV6_PEERROUTES=yes 
IPV6_FAILURE_FATAL=no 
NAME=eno16777736 
DEVICE=eno16777736 
ONBOOT=yes  //bring the interface up at boot
IPADDR=192.168.225.100  //set the IP address
NETMASK=255.255.255.0  //set the netmask
GATEWAY=192.168.225.2  //set the gateway (the one noted earlier)
DNS1=114.114.114.114 //set the DNS server
DNS2=114.114.114.115   //set the backup DNS server
[root@wangmaster ~]# systemctl restart network.service //restart the network service
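
    To confirm the new settings took effect, a quick optional check is to look at the interface and ping the gateway (a minimal sketch, assuming the interface name and addresses used above):

[root@wangmaster ~]# ip addr show eno16777736   //should show 192.168.225.100/24
[root@wangmaster ~]# ping -c 3 192.168.225.2    //the gateway noted earlier should reply
[root@wangmaster ~]# ping -c 3 114.114.114.114  //confirms outbound routing works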

    3. Set up a network YUM repository

    

    Log in with a remote terminal tool (Xshell works well) to carry out the remaining steps.

    

    Click the file transfer button to open Xftp and transfer the files.

    

    Upload the repo file saved earlier into the /etc/yum.repos.d directory.
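
    If you would rather fetch the repo file directly on the server instead of uploading it with Xftp, a single wget also works (a sketch; the mirrors.163.com download URL is an assumption based on the file name, so verify it first):

[root@wangmaster ~]# wget -O /etc/yum.repos.d/CentOS7-Base-163.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo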

        

[root@wangmaster ~]# cd /etc/yum.repos.d/ 
[root@wangmaster yum.repos.d]# ls
CentOS7-Base-163.repo  CentOS-Debuginfo.repo  CentOS-Sources.repo 
CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo 
CentOS-CR.repo         CentOS-Media.repo 
[root@wangmaster yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak //disable the original repo file
[root@wangmaster yum.repos.d]# yum clean all  //clear the yum cache
Loaded plugins: fastestmirror
Cleaning repos: base extras updates
Cleaning up everything
[root@wangmaster yum.repos.d]# yum repolist  //rebuild the yum repo list
Loaded plugins: fastestmirror
base                                                     | 3.6 kB     00:00      
extras                                                   | 3.4 kB     00:00      
updates                                                  | 3.4 kB     00:00      
(1/4): base/7/x86_64/group_gz                              | 155 kB   00:00      
(2/4): extras/7/x86_64/primary_db                          | 139 kB   00:00      
(3/4): base/7/x86_64/primary_db                            | 5.6 MB   00:09      
(4/4): updates/7/x86_64/primary_db                         | 3.9 MB   00:11      
Determining fastest mirrors 
repo id                        repo name                                 status
base/7/x86_64                  CentOS-7 - Base - 163.com                 9,363 
extras/7/x86_64                CentOS-7 - Extras - 163.com               311 
updates/7/x86_64               CentOS-7 - Updates - 163.com              1,126 
repolist: 10,800 
[root@wangmaster yum.repos.d]# 
[root@wangmaster yum.repos.d]# yum install -y vim //install the vim editor

    4. Disable SELinux

[root@wangmaster yum.repos.d]# vim /etc/selinux/config  
# This file controls the state of SELinux on the system. 
# SELINUX= can take one of these three values: 
#     enforcing - SELinux security policy is enforced. 
#     permissive - SELinux prints warnings instead of enforcing. 
#     disabled - No SELinux policy is loaded. 
SELINUX=disabled 
# SELINUXTYPE= can take one of three two values: 
#     targeted - Targeted processes are protected, 
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection. 
SELINUXTYPE=targeted
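
    SELINUX=disabled only takes effect after the reboot done in step 6 below. To stop enforcement immediately in the current session as well, you can additionally run:

[root@wangmaster ~]# setenforce 0   //switch to permissive mode for the current boot only
[root@wangmaster ~]# getenforce     //should now report Permissive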

    5. Stop the firewall

[root@wangmaster ~]# systemctl stop firewalld.service     //stop the firewall
[root@wangmaster ~]# systemctl disable firewalld.service  //disable firewall autostart at boot
[root@wangmaster ~]# systemctl status firewalld           //check the firewall status

    6. Plan for 3 virtual machines named master, slave1 and slave2, and record them in the /etc/hosts file:

[root@wangmaster ~]# vim /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 
localhost4.localdomain4 
::1         localhost localhost.localdomain localhost6 
localhost6.localdomain6 
 
192.168.225.100 wangmaster 
192.168.225.101 wangslave1 
192.168.225.102 wangslave2 

    Then reboot the virtual machine (the reboot is required, because the SELinux change only takes effect after a reboot).

    (Note: make the same changes on all three virtual machines, adjusting the IP addresses to your actual environment.)

    7. Install the following tools online with the commands below:

[root@wangmaster ~]# yum install -y wget
[root@wangmaster ~]# yum install -y net-tools

    8. Create the working directory

[root@wangmaster ~]# mkdir /opt/bigdata

    9. Copy the JDK into the /opt/bigdata directory on 192.168.225.100

[root@wangmaster bigdata]# ls 
hadoop-2.7.3.tar.gz  jdk1.8.tar.gz 
The required packages were uploaded into bigdata ahead of time here.
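
    If you are not using Xftp, the packages can also be pushed from another machine with scp (a sketch; the local file names and paths are assumptions):

scp hadoop-2.7.3.tar.gz jdk1.8.tar.gz root@192.168.225.100:/opt/bigdata/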

    10. Create the hadoop user on the master

[root@wangmaster bigdata]# useradd hadoop 
[root@wangmaster bigdata]# id hadoop 
uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
[root@wangmaster ~]# passwd hadoop
Changing password for user hadoop. (I set the password to 123456; it has to be typed twice.)
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
[root@wangmaster ~]# 

    11. Make the user a sudoer. As root, edit the /etc/sudoers file as follows:

[root@wangmaster bigdata]# vim /etc/sudoers 
## Allow root to run any commands anywhere 
root    ALL=(ALL)       ALL 
hadoop  ALL=(ALL)       ALL 
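
    Optionally verify the entry; visudo -c checks /etc/sudoers for syntax errors, and sudo -l -U hadoop lists the rules granted to the new user:

[root@wangmaster bigdata]# visudo -c
[root@wangmaster bigdata]# sudo -l -U hadoop   //should list (ALL) ALL for hadoop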

    12. Change the permissions of the /opt/bigdata directory

[root@wangmaster ~]# chmod -R 777 /opt/bigdata 
[root@wangmaster ~]# chown -R hadoop.hadoop /opt/bigdata 
[root@wangmaster ~]# ll /opt 
total 4
drwxrwxrwx. 2 hadoop hadoop 4096 Apr  9 05:28 bigdata

    13. Install the JDK runtime

[hadoop@wangmaster bigdata]$ tar -zxvf jdk1.8.tar.gz
[hadoop@wangmaster bigdata]$ mv /opt/bigdata/jdk1.8 /opt/bigdata/
[hadoop@wangmaster bigdata]$ ls
hadoop-2.7.3.tar.gz  jdk1.8  jdk1.8.tar.gz  opt

    14. Edit the /etc/profile file to configure the Java environment:

[root@wangmaster ~]# vim /etc/profile
#java configuration 
JAVA_HOME=/opt/bigdata/jdk1.8 
JAVA_BIN=/opt/bigdata/jdk1.8/bin 
PATH=$PATH:$JAVA_HOME/bin 
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib/rt.jar 
export JAVA_HOME 
export JAVA_BIN  
export PATH  
export CLASSPATH 
[hadoop@wangmaster ~]$ source /etc/profile
[hadoop@wangmaster bigdata]$ java -version 
java version "1.8.0_111" 
Java(TM) SE Runtime Environment (build 1.8.0_111-b14) 
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode) 
[hadoop@wangmaster bigdata]$ javac -version 
javac 1.8.0_111 

    (Note: JAVA_HOME above should be the path of the JDK you installed.)

    15. Install Hadoop

[hadoop@wangmaster bigdata]$ tar -zxvf hadoop-2.7.3.tar.gz  
[hadoop@wangmaster bigdata]$ ll 
total 386500
drwxr-xr-x. 9 hadoop hadoop      4096 Aug 18  2016 hadoop-2.7.3
-rwxrwxrwx. 1 hadoop hadoop 214092195 Mar 13 19:16 hadoop-2.7.3.tar.gz
drwxrwxrwx. 8 hadoop hadoop      4096 Mar 13 00:14 jdk1.8
-rwxrwxrwx. 1 hadoop hadoop 181668321 Mar 22 23:31 jdk1.8.tar.gz

    16. Create a tmp directory under the hadoop directory and set its permissions to 777

[hadoop@wangmaster bigdata]$ cd hadoop-2.7.3  
[hadoop@wangmaster hadoop-2.7.3]$ mkdir tmp 
[hadoop@wangmaster hadoop-2.7.3]$ chmod 777 tmp 
[hadoop@wangmaster hadoop-2.7.3]$ mkdir dfs 
[hadoop@wangmaster hadoop-2.7.3]$ mkdir dfs/name 
[hadoop@wangmaster hadoop-2.7.3]$ mkdir dfs/data

Part 2. Hadoop Installation and Configuration

    Hadoop is the cornerstone of the big data ecosystem, so we begin with installing and configuring Hadoop.

    1. Enter the installation directory

[hadoop@wangmaster ~]$ cd /opt/bigdata/hadoop-2.7.3

    2. Environment configuration

[hadoop@wangmaster hadoop-2.7.3]$ cd etc/hadoop 
[hadoop@wangmaster hadoop]$ vim yarn-env.sh 
# some Java parameters 
export JAVA_HOME=/opt/bigdata/jdk1.8 
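
    Many Hadoop 2.7.x setups also set JAVA_HOME explicitly in hadoop-env.sh in the same directory; treat this as an optional, assumed step and check your own tree, but the edit mirrors the one above:

[hadoop@wangmaster hadoop]$ vim hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/opt/bigdata/jdk1.8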

    3. core-site.xml configuration

[hadoop@wangmaster hadoop]$ vim core-site.xml  
<configuration>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/bigdata/hadoop-2.7.3/tmp</value>
</property>
<property>
    <name>fs.default.name</name>
    <value>hdfs://wangmaster:9000</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>
</configuration>

    4. hdfs-site.xml configuration

[hadoop@wangmaster hadoop]$ vim hdfs-site.xml 
<configuration> 
<property> 
    <name>dfs.replication</name>   
    <value>3</value> 
</property> 
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/bigdata/hadoop-2.7.3/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/bigdata/hadoop-2.7.3/dfs/data</value>
</property>
<property>
    <name>dfs.web.ugi</name>
    <value>hdfs,hadoop</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
</configuration>

    5. yarn-site.xml configuration

[hadoop@wangmaster hadoop]$ vim yarn-site.xml 
<configuration> 
 
<!-- Site specific YARN configuration properties --> 
<property> 
    <name>yarn.resourcemanager.hostname</name> 
    <value>wangmaster</value> 
</property> 
 
<property> 
    <name>yarn.resourcemanager.webapp.address</name> 
    <value>wangmaster:8088</value> 
</property> 
 
<property> 
    <name>yarn.resourcemanager.scheduler.address</name> 
    <value>wangmaster:8081</value> 
</property>
<property> 
    <name>yarn.resourcemanager.resource-tracker.address</name> 
    <value>wangmaster:8082</value> 
</property> 
 
<property> 
    <name>yarn.nodemanager.aux-services</name> 
    <value>mapreduce_shuffle</value> 
</property> 
<property> 
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> 
    <value>org.apache.hadoop.mapred.ShuffleHandler</value> 
</property> 
<property> 
    <name>yarn.web-proxy.address</name> 
    <value>wangmaster:54315</value> 
</property> 
 
</configuration> 

    6. mapred-site.xml configuration

[hadoop@wangmaster hadoop]$ vim mapred-site.xml 
<configuration> 
<property> 
    <name>mapreduce.framework.name</name> 
    <value>yarn</value> 
</property> 
<property> 
    <name>mapred.job.tracker</name>   
    <value>wangmaster:9001</value> 
</property> 
<property>    
      <name>mapreduce.jobhistory.address</name>    
      <value>wangmaster:10020</value>    
</property> 
</configuration> 
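
    Note: a stock Hadoop 2.7.3 tree normally ships only mapred-site.xml.template, so if mapred-site.xml is missing it has to be created from the template first (check your own etc/hadoop directory):

[hadoop@wangmaster hadoop]$ cp mapred-site.xml.template mapred-site.xml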

    7. slaves configuration (master, slave1 and slave2 all act as datanodes)

[hadoop@wangmaster hadoop]$ vim slaves
wangmaster
wangslave1
wangslave2

    8. Configure the system environment

[root@wangmaster bin]# vim /etc/profile  
Add these two lines at the end:
export HADOOP_HOME=/opt/bigdata/hadoop-2.7.3 
export PATH=$HADOOP_HOME/bin:$PATH 

    Make the configuration take effect:

[hadoop@wangmaster hadoop]$ source /etc/profile 
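
    A quick check that the new PATH is picked up:

[hadoop@wangmaster hadoop]$ hadoop version   //should report Hadoop 2.7.3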

    9. Clone the virtual machine into wangslave1 and wangslave2.

    

    Rename the CentOS 64-bit Minimal VM to wangmaster, then clone wangmaster to create the wangslave1 and wangslave2 nodes.

    First, shut down wangmaster.

    The cloned virtual machines cannot be used as-is; the following changes are needed:


 [root@wangmaster ~]# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736 
HWADDR=00:50:56:36:BF:60  //change this MAC address to the newly generated one
TYPE="Ethernet" 
BOOTPROTO="static" 
DEFROUTE="yes" 
PEERDNS="yes" 
PEERROUTES="yes" 
IPV4_FAILURE_FATAL="no" 
IPV6INIT="yes" 
IPV6_AUTOCONF="yes" 
IPV6_DEFROUTE="yes" 
IPV6_PEERDNS="yes" 
IPV6_PEERROUTES="yes" 
IPV6_FAILURE_FATAL="no" 
NAME="eno16777736" 
DEVICE="eno16777736" 
ONBOOT="yes" 
IPADDR=192.168.225.101 //change this to wangslave1's IP, 192.168.225.101
NETMASK=255.255.255.0 
GATEWAY=192.168.225.2 
DNS1=114.114.114.114 
DNS2=114.114.114.115 
[root@wangmaster ~]# systemctl restart network.service //restart the network service

    Then change the hostname:

[root@wangmaster ~]# hostnamectl set-hostname wangslave1 
[root@wangmaster ~]# hostname wangslave1 
[root@wangmaster ~]# exit 
Log back in
Test by pinging the machine itself
[root@wangslave1 ~]# ping wangslave1 
PING wangslave1 (192.168.225.101) 56(84) bytes of data. 
64 bytes from wangslave1 (192.168.225.101): icmp_seq=1 ttl=64 time=0.012ms

    Repeat the steps above to clone and configure wangslave2.

    10. Set up passwordless SSH between master and slaves under the hadoop user

    The master sends its key to itself and to slave1 and slave2:

[hadoop@wangmaster ~]$ ssh-keygen 
[hadoop@wangmaster ~]$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@wangslave1 
[hadoop@wangmaster ~]$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@wangslave2 
[hadoop@wangmaster ~]$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@wangmaster 
Once this is done, you should be able to log in to the local machine without a password, i.e. ssh localhost no longer prompts for one.
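
    A quick way to verify is to run a one-shot remote command from the master; each should print the slave's hostname without asking for a password:

[hadoop@wangmaster ~]$ ssh wangslave1 hostname
[hadoop@wangmaster ~]$ ssh wangslave2 hostname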

    Then slave1 sends its key to the master:

[hadoop@wangslave1 ~]$ ssh-keygen
[hadoop@wangslave1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@wangmaster

    Then slave2 sends its key to the master (same steps, omitted).

    11. Before the first start, format HDFS:

[hadoop@wangmaster ~]$ hdfs namenode -format

    Start the components

[hadoop@wangmaster ~]$ cd /opt/bigdata/hadoop-2.7.3/sbin

    Start everything:

[hadoop@wangmaster sbin]$ ./start-all.sh  
Verify
[hadoop@wangmaster sbin]$ jps 
1666 DataNode 
2099 NodeManager 
2377 Jps 
1853 SecondaryNameNode 
1998 ResourceManager 
1567 NameNode 
[hadoop@wangslave1 ~]$ jps 
1349 NodeManager 
1452 Jps 
1245 DataNode 
[hadoop@wangslave2 ~]$ jps 
1907 Jps 
1703 DataNode 
1807 NodeManager 

    Open http://192.168.225.100:50070 in a browser to check whether startup succeeded.
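
    The same information is also available from the command line; hdfs dfsadmin -report summarizes the cluster, and with the configuration above it should show three live datanodes:

[hadoop@wangmaster ~]$ hdfs dfsadmin -report | grep -i "live datanodes"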

    12. Stop everything

[hadoop@wangmaster sbin]$ ./stop-all.sh

Part 3. Experiments

    1. Basic usage

[hadoop@wangmaster ~]$ hadoop fs -mkdir /wang 
[hadoop@wangmaster ~]$ cd /opt/bigdata/hadoop-2.7.3 
[hadoop@wangmaster hadoop-2.7.3]$ hadoop fs -put LICENSE.txt  /wang 
[hadoop@wangmaster ~]$ hadoop fs -ls /wang

    2. Experiment: run the wordcount program (optional). It is a word-count program that counts how many times each word appears in the text files placed in the input directory.

[hadoop@wangmaster hadoop-2.7.3]$ hadoop fs -ls /wang  //list the input
Found 1 items
-rw-r--r--   3 hadoop supergroup      84854 2017-04-09 07:34 /wang/LICENSE.txt
[hadoop@wangmaster hadoop-2.7.3]$ cd /opt/bigdata/hadoop-2.7.3/share/hadoop/mapreduce
[hadoop@wangmaster mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /wang /output  //run the program
[hadoop@wangmaster mapreduce]$ hadoop fs -ls /output  //list the job's output files
Found 2 items
-rw-r--r--   3 hadoop supergroup          0 2017-04-09 07:38 /output/_SUCCESS
-rw-r--r--   3 hadoop supergroup      22002 2017-04-09 07:38 /output/part-r-00000
[hadoop@wangmaster mapreduce]$ hadoop fs -cat /output/part-r-00000  //view the result
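
    If you rerun the job, note that MapReduce refuses to write into an output directory that already exists, so the old /output has to be removed first:

[hadoop@wangmaster mapreduce]$ hadoop fs -rm -r /output   //required before running wordcount again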