hadoop-2.6.0-cdh5.15.0 Cluster Environment Setup

1. Pre-deployment Preparation

1. Number of machines

For a test environment, three servers are enough to build the cluster:

Hostname IP OS
data1 192.168.66.152 CentOS7
data2 192.168.66.153 CentOS7
data3 192.168.66.154 CentOS7

2. Component versions and download links

Component Version Download
hadoop hadoop-2.6.0-cdh5.15.0 https://archive.cloudera.com/...
hive hive-1.1.0-cdh5.15.0 https://archive.cloudera.com/...
zookeeper zookeeper-3.4.5-cdh5.15.0 https://archive.cloudera.com/...
hbase hbase-1.2.0-cdh5.15.0 https://archive.cloudera.com/...
kafka kafka_2.12-0.11.0.3 http://kafka.apache.org/downl...
flink flink-1.10.1-bin-scala_2.12 https://flink.apache.org/down...
jdk jdk-8u251-linux-x64 https://www.oracle.com/java/t...

3. Cluster node planning

Machine Services
data1 NameNode、DataNode、ResourceManager、NodeManager、JournalNode、QuorumPeerMain、DFSZKFailoverController、HMaster、HRegionServer、Kafka
data2 NameNode、DataNode、ResourceManager、NodeManager、JournalNode、QuorumPeerMain、DFSZKFailoverController、HMaster、HRegionServer、Kafka
data3 DataNode、NodeManager、HRegionServer、JournalNode、QuorumPeerMain、Kafka

2. Deployment

1. Change the hostname

The default hostname of all three servers is localhost. To make it easier to communicate by hostname later, change the hostname on each of the three servers.
Log in to each server, edit its /etc/hostname file, and name the three machines data1, data2, and data3 respectively.
Then edit /etc/hosts on all three machines and add the mutual mappings for the three hosts, appending the entries shown in the sketch below to the end of the file.
The hostname change only takes effect after a reboot.
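Based on the host/IP plan in section 1 above, the lines to append to /etc/hosts on all three machines would be:

192.168.66.152 data1
192.168.66.153 data2
192.168.66.154 data3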

2. Add the hadoop user and group

On all three servers, create a dedicated group and user named hadoop, which will be used to operate the Hadoop cluster.

# Add the hadoop group
sudo groupadd hadoop

# Add the hadoop user and assign it to the hadoop group
sudo useradd -g hadoop hadoop

# Set a password for the hadoop user
sudo passwd hadoop

# Grant sudo privileges to the hadoop user by editing /etc/sudoers
sudo vi /etc/sudoers

# Add the following line below "root    ALL=(ALL)     ALL"
hadoop  ALL=(ALL)       ALL

# Switch to the newly created hadoop user; all subsequent installation steps are performed as this user
su hadoop

3. Passwordless SSH configuration

During the Hadoop cluster installation, the configured packages need to be copied to the other machines several times. To avoid entering a password for every ssh/scp, set up passwordless SSH login.

# On data1, generate a public/private key pair with ssh-keygen
# -t specifies the RSA algorithm
# -P sets the passphrase; -P '' means an empty passphrase (without -P you would have to press Enter three times, with -P only once)
# -f specifies the file path for the generated key
ssh-keygen  -t rsa -P '' -f ~/.ssh/id_rsa

# Enter the .ssh directory; it now contains id_rsa (private key) and id_rsa.pub (public key)
cd ~/.ssh

# Append the public key to an authorized_keys file
cat id_rsa.pub >> authorized_keys

# Copy the generated authorized_keys to data2 and data3
scp ~/.ssh/authorized_keys hadoop@data2:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@data3:~/.ssh/authorized_keys

# Set the permissions of authorized_keys to 600
chmod 600 ~/.ssh/authorized_keys

# Verify that passwordless SSH login works
# If ssh data2 or ssh data3 from data1 no longer prompts for a password, the setup succeeded
ssh data2
ssh data3

4. Disable the firewall

Since the Hadoop cluster is deployed on an internal network, it is recommended to disable the firewall in advance to avoid strange problems during deployment.

# Check the firewall status (systemctl status firewalld also works)
firewall-cmd --state

# Stop the firewall temporarily (does not persist across reboots)
sudo systemctl stop firewalld

# Disable the firewall at boot
sudo systemctl disable firewalld

5. Server time synchronization

In a cluster environment some services require the servers' clocks to be synchronized, HBase in particular: if the clocks of the three machines drift too far apart, the HBase services will fail to start, so time synchronization must be configured in advance. The two common options are ntp and chrony (chrony is recommended). On CentOS 7 chrony is installed by default, so only configuration is needed.

5.1 Chrony server configuration

We use data1 as the chrony server and the other two machines (data2 and data3) as chrony clients, i.e. data2 and data3 will synchronize their time from data1.

# Log in to data1
# Edit /etc/chrony.conf
sudo vi /etc/chrony.conf

# Comment out the default time servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

# Add one server line of our own
# This IP is data1's own IP, meaning data1 serves its own clock as the reference (useful when there is no internet access)
# With internet access, the Aliyun NTP servers can be used instead, for example:
# server ntp1.aliyun.com iburst 
# server ntp2.aliyun.com iburst
# server ntp3.aliyun.com iburst
# server ntp4.aliyun.com iburst
server 192.168.66.152 iburst

# Allow clients in this subnet to synchronize from this machine
allow 192.168.66.0/24

# Set the stratum level served locally
local stratum 10

# Restart the chrony service
sudo systemctl restart chronyd.service

# Enable the chrony service at boot
sudo systemctl enable chronyd.service

# Check the chrony service status
systemctl status chronyd.service

5.2 Chrony client configuration

Perform the following on data2 and data3:

# Log in to data2 and data3
# Edit /etc/chrony.conf
sudo vi /etc/chrony.conf

# Comment out the default time servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

# Add one server line of our own
# This IP is data1's IP, meaning the machine will synchronize its time from data1
server 192.168.66.152 iburst

# Restart the chrony service
sudo systemctl restart chronyd.service

# Enable the chrony service at boot
sudo systemctl enable chronyd.service

# Check the chrony service status
systemctl status chronyd.service

5.3 Verify synchronization

# Use timedatectl to verify synchronization; run it on data1, data2, and data3
timedatectl

# The command returns output like the following:
      Local time: Wed 2020-06-17 18:46:41 CST
  Universal time: Wed 2020-06-17 10:46:41 UTC
        RTC time: Wed 2020-06-17 10:46:40
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes  (shows yes after a successful sync)
 RTC in local TZ: no
      DST active: n/a

# If NTP synchronized shows no, synchronization failed; check that the configuration is correct
# If the configuration is correct but it still shows no, try the following:
sudo timedatectl set-local-rtc 0
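Besides timedatectl, chrony's own client can be used to inspect the sync sources; a quick check (run on data2 or data3, output details vary by version):

# List the time sources; the data1 entry should be marked with '*' once it is selected
chronyc sources -v

# Show detailed tracking status against the selected source
chronyc tracking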

6. Install the JDK

Install JDK 8 on all three machines and configure the environment variables; remember to source the profile after editing it.
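A minimal sketch of the JDK setup, assuming the archive is unpacked to /usr/local/jdk1.8.0_251 (the path referenced later in hadoop-env.sh; adjust to your environment):

# Unpack the JDK on each machine
tar -zxvf jdk-8u251-linux-x64.tar.gz -C /usr/local/

# Append to /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_251
export PATH=$JAVA_HOME/bin:$PATH

# Reload the profile and verify
source /etc/profile
java -version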

7. Deploy ZooKeeper

# Upload the ZooKeeper package to data1
# Extract the package and move it to the target directory
tar -zxvf zookeeper-3.4.5-cdh5.15.0.tar.gz -C /usr/local/
mv /usr/local/zookeeper-3.4.5-cdh5.15.0 /usr/local/zookeeper

# Go to ZooKeeper's conf directory to edit the configuration
# Copy the default zoo_sample.cfg and rename it to zoo.cfg
cp zoo_sample.cfg zoo.cfg

# Edit the zoo.cfg file
vi zoo.cfg

# Change the dataDir=/tmp/zookeeper parameter to
dataDir=/usr/local/zookeeper/data

# Append the ZooKeeper ensemble configuration to the end of zoo.cfg
# The template is server.X=A:B:C, where X is a number identifying which server this is (the myid)
# A is the server's IP address or hostname
# B is the port the server uses to exchange messages with the ensemble leader
# C is the port used for leader election
server.1=data1:2888:3888
server.2=data2:2888:3888
server.3=data3:2888:3888
# The remaining parameters in zoo.cfg can keep their defaults

# Create the data directory specified by the dataDir parameter above
mkdir /usr/local/zookeeper/data

# Inside that data directory, create a myid file containing a unique numeric id
# This id uniquely identifies the server and must be unique across the whole ensemble
# ZooKeeper uses it to pick up the matching server.X entry; e.g. id 1 maps to server.1 in zoo.cfg
cd /usr/local/zookeeper/data
touch myid
echo 1 > myid

# At this point the ZooKeeper configuration on data1 is done
# Now distribute it from data1 to the other two machines (data2 and data3)
scp -r /usr/local/zookeeper hadoop@data2:/usr/local/zookeeper
scp -r /usr/local/zookeeper hadoop@data3:/usr/local/zookeeper

# On data2 and data3, edit /usr/local/zookeeper/data/myid
# On data2, change the myid content to 2
# On data3, change the myid content to 3
vi /usr/local/zookeeper/data/myid

# Configure the ZooKeeper environment variables on all three machines
# To avoid having to cd into ZooKeeper's bin directory for every command, add it to the PATH
sudo vi /etc/profile

# Append the following two lines to the end of the file
export ZK_HOME=/usr/local/zookeeper
export PATH=$ZK_HOME/bin:$PATH

# Remember to source the file after editing
source /etc/profile

# With everything configured, start the ZooKeeper service on each of the three machines
zkServer.sh start

# After startup, check the current ZooKeeper status
zkServer.sh status
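For reference, on a healthy ensemble the status output looks roughly like the following (exact wording varies by version), with one node reporting leader and the other two follower:

JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower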

8. Deploy Hadoop

8.1 Install Hadoop

# Upload the Hadoop package to data1
# Extract the package and move it to the target directory
tar -zxvf hadoop-2.6.0-cdh5.15.0.tar.gz -C /usr/local/
mv /usr/local/hadoop-2.6.0-cdh5.15.0 /usr/local/hadoop

# Configure the Hadoop environment variables
sudo vi /etc/profile

# Append to the end of the file
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

# Remember to source the file after editing
source /etc/profile

8.2 Edit hadoop-env.sh

# Go to the Hadoop configuration directory
cd /usr/local/hadoop/etc/hadoop

# Edit hadoop-env.sh
vi hadoop-env.sh

# Change export JAVA_HOME={JAVA_HOME} to the JDK installation directory
export JAVA_HOME=/usr/local/jdk1.8.0_251

8.3 Edit core-site.xml

# By default this file contains only an empty <configuration> tag; add the following properties inside it
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cdhbds</value>
  <description>
   The name of the default file system.  
   A URI whose scheme and authority determine the FileSystem implementation.  
   The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class.  
   The uri's authority is used to determine the host, port, etc. for a filesystem.
  </description>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadooptmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>io.native.lib.available</name>
  <value>true</value>
  <description>Should native hadoop libraries, if present, be used.</description>
</property>

<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
  <description>
    A comma-separated list of the compression codec classes that can
    be used for compression/decompression. In addition to any classes specified
    with this property (which take precedence), codec classes on the classpath
    are discovered using a Java ServiceLoader.</description>
</property>

<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
  <description>Number of minutes between trash checkpoint. if zero, the trash feature is disabled.</description>
</property>

<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>1440</value>
  <description> 
    Number of minutes between trash checkpoints. Should be smaller or equal to fs.trash.interval. 
    If zero, the value is set to the value of fs.trash.interval </description>
</property>

<property>
    <name>ha.zookeeper.quorum</name>
    <value>data1:2181,data2:2181,data3:2181</value>
    <description>The three ZooKeeper nodes</description>
</property>

8.4 Edit hdfs-site.xml

# By default this file contains only an empty <configuration> tag; add the following properties inside it
<property>
    <name>dfs.nameservices</name>
    <value>cdhbds</value>
    <description>
        Comma-separated list of nameservices.
    </description>
</property>

<property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50010</value>
    <description>
       The datanode server address and port for data transfer.
       If the port is 0 then the server will start on a free port.
    </description>
</property>

<property>
    <name>dfs.datanode.balance.bandwidthPerSec</name>
    <value>52428800</value>
</property>

<property>
    <name>dfs.datanode.balance.max.concurrent.moves</name>
    <value>250</value>
</property>

<property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50075</value>
    <description>
       The datanode http server address and port.
       If the port is 0 then the server will start on a free port.
    </description>
</property>

<property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>
    <description>
       The datanode ipc server address and port.
       If the port is 0 then the server will start on a free port.
    </description>
</property>

<property>
    <name>dfs.ha.namenodes.cdhbds</name>
    <value>nn1,nn2</value>
    <description></description>
</property>

<property>
    <name>dfs.namenode.rpc-address.cdhbds.nn1</name>
    <value>data1:8020</value>
    <description>RPC address of NameNode nn1</description>
</property>
                        
<property>
    <name>dfs.namenode.rpc-address.cdhbds.nn2</name>
    <value>data2:8020</value>
    <description>RPC address of NameNode nn2</description>
</property>
                                    
<property>
    <name>dfs.namenode.http-address.cdhbds.nn1</name>
    <value>data1:50070</value>
    <description>HTTP address of NameNode nn1</description>
</property>
                                                
<property>
    <name>dfs.namenode.http-address.cdhbds.nn2</name>
    <value>data2:50070</value>
    <description>HTTP address of NameNode nn2</description>
</property>

<property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/namenode</value>
    <description>
      Determines where on the local filesystem the DFS name node should store the name table.
      If this is a comma-delimited list of directories,then name table is replicated in all of the directories,
      for redundancy.</description>
    <final>true</final>
</property>

<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/data/checkpoint</value>
    <description></description>
</property>

<property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/datanode</value>
    <description>Determines where on the local filesystem an DFS data node should store its blocks.
         If this is a comma-delimited list of directories,then data will be stored in all named directories,
         typically on different devices.Directories that do not exist are ignored.
    </description>
<final>true</final>
</property>

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>

<property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
    <description>
   Boolean which enables backend datanode-side support for the experimental DistributedFileSystem*getFileVBlockStorageLocations API.
    </description>
</property>

<property>
    <name>dfs.permissions.enabled</name>
    <value>true</value>
    <description>
        If "true", enable permission checking in HDFS.
        If "false", permission checking is turned off,but all other behavior is unchanged.
        Switching from one parameter value to the other does not change the mode,owner or group of files or directories.
    </description>
</property>

<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://data1:8485;data2:8485;data3:8485/cdhbds</value>
    <description>The edit log metadata is stored on these three JournalNode hosts; this is the host and port list</description>
</property>
            
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/journaldata/</value>
    <description>Local storage path for JournalNode data</description>
</property>

<property>
    <name>dfs.journalnode.rpc-address</name>
    <value>0.0.0.0:8485</value>
</property>
        
<property>
    <name>dfs.journalnode.http-address</name>
    <value>0.0.0.0:8480</value>
</property>

<property>
    <name>dfs.client.failover.proxy.provider.cdhbds</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    <description>This class is used to determine which NameNode is currently active</description>
</property>

<property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
</property>

<property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>10000</value>
</property>

<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
    <description>
          Whether automatic failover is enabled. See the HDFS High Availability documentation for details 
          on automatic HA configuration.
    </description>
</property>

<property>
     <name>dfs.namenode.handler.count</name>
     <value>20</value>
     <description>The number of server threads for the namenode.</description>
</property>

8.5 Edit mapred-site.xml

# Copy mapred-site.xml.template and rename it to mapred-site.xml
cp mapred-site.xml.template mapred-site.xml

# Edit mapred-site.xml and add the following inside the <configuration> tag
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>

<property>
    <name>mapreduce.shuffle.port</name>
    <value>8350</value>
</property>

<property>
    <name>mapreduce.jobhistory.address</name>
    <value>0.0.0.0:10121</value>
</property>

<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>0.0.0.0:19868</value>
</property>

<property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>0.0.0.0:50330</value>
</property>

<property>
    <name>mapreduce.tasktracker.http.address</name>
    <value>0.0.0.0:50360</value>
</property>

<property>
    <name>mapreduce.map.output.compress</name> 
    <value>true</value>
</property>
              
<property>
    <name>mapreduce.map.output.compress.codec</name> 
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

<property>
    <name>mapred.output.compression.type</name>
    <value>BLOCK</value>
</property>

<property>
    <name>mapreduce.job.counters.max</name>
    <value>560</value>
    <description>Limit on the number of counters allowed per job.</description>
</property>

<property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx4096m</value>
</property>

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>3072</value>
</property>

<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
</property>

<property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
</property>

<property>
    <name>mapreduce.reduce.cpu.vcores</name>
    <value>1</value>
</property>

<property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>300</value>
</property>

8.6 Edit yarn-env.sh

# Edit yarn-env.sh
vi yarn-env.sh

# Change export JAVA_HOME={JAVA_HOME} to the JDK installation directory
export JAVA_HOME=/usr/local/jdk1.8.0_251

8.7 Edit yarn-site.xml

# Add the following inside the <configuration> tag
<!-- Site specific YARN configuration properties -->
<property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>2000</value>
</property>

<property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
</property>

<property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>

<property>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
</property>

<property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-rm-cluster</value>
</property>

<property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
</property>

<property>
    <description>Id of the current ResourceManager. Must be set explicitly on each ResourceManager to the appropriate value.</description>
    <name>yarn.resourcemanager.ha.id</name>
    <value>rm1</value>
</property>

<property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
</property>

<property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>

<property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>data1:2181,data2:2181,data3:2181</value>
</property>

<property>
    <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
    <value>5000</value>
</property>

<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>

<!-- The fair scheduler relies on an allocation file, which is configured in the next step -->
<property>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>fair-scheduler.xml</value>
</property>

<!-- RM1 configs -->
<property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>data1:8032</value>
</property>

<property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>data1:8030</value>
</property>

<property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>data1:50030</value>
</property>

<property>
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
    <value>data1:8031</value>
</property>

<property>
    <name>yarn.resourcemanager.admin.address.rm1</name>
    <value>data1:8033</value>
</property>

<property>
    <name>yarn.resourcemanager.ha.admin.address.rm1</name>
    <value>data1:8034</value>
</property>

<!-- RM2 configs -->
<property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>data2:8032</value>
</property>

<property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>data2:8030</value>
</property>

<property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>data2:50030</value>
</property>

<property>
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
    <value>data2:8031</value>
</property>

<property>
    <name>yarn.resourcemanager.admin.address.rm2</name>
    <value>data2:8033</value>
</property>

<property>
    <name>yarn.resourcemanager.ha.admin.address.rm2</name>
    <value>data2:8034</value>
</property>

<!-- Node Manager Configs -->
<property>
    <description>Address where the localizer IPC is.</description>
    <name>yarn.nodemanager.localizer.address</name>
    <value>0.0.0.0:23344</value>
</property>

<property>
    <description>NM Webapp address.</description>
    <name>yarn.nodemanager.webapp.address</name>
    <value>0.0.0.0:23999</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>112640</value>
</property>

<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>

<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>31</value>
</property>

<property>
    <name>yarn.scheduler.increment-allocation-mb</name>
    <value>512</value>
</property>

<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>

<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/yarn/local</value>
</property>

<property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data/yarn/logs</value>
</property>

8.8 Create fair-scheduler.xml

# Create a fair-scheduler.xml file with the following content
<?xml version="1.0" encoding="UTF-8"?>
<!--
       Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!--
  This file contains pool and user allocations for the Fair Scheduler.
  Its format is explained in the Fair Scheduler documentation at
  http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html.
  The documentation also includes a sample config file.
-->
<allocations>
<!-- Define a custom queue named dev and specify its minimum and maximum resources -->
<queue name="dev">
    <minResources>10240 mb, 10 vcores</minResources>
    <maxResources>51200 mb, 18 vcores</maxResources>
    <schedulingMode>fair</schedulingMode>
    <weight>5</weight>
    <maxRunningApps>30</maxRunningApps>
</queue>
</allocations>

8.9 Edit the slaves file

# Edit the slaves file
vi slaves

# Replace localhost with the following three lines, marking these machines as the cluster's worker nodes
# i.e. the DataNode and NodeManager services will be started on these three machines
data1
data2
data3

8.10 Distribute the Hadoop package

# Distribute the configured Hadoop package from data1 to the other two machines (data2 and data3)
scp -rp /usr/local/hadoop hadoop@data2:/usr/local/hadoop
scp -rp /usr/local/hadoop hadoop@data3:/usr/local/hadoop

# After distribution, one setting in yarn-site.xml on data2 still has to be changed
# data1 and data2 are the two machines running ResourceManager in HA mode
# Change the following property from rm1 to rm2, otherwise starting ResourceManager on data2 fails because the data1 port appears to be in use
<property>
    <description>Id of the current ResourceManager. Must be set explicitly on each ResourceManager to the appropriate value.</description>
    <name>yarn.resourcemanager.ha.id</name>
    <value>rm2</value>
</property>

# Configure the Hadoop environment variables on data2 and data3 as well
sudo vi /etc/profile

# Append to the end of the file
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

# Remember to source the file after editing
source /etc/profile

8.11 Initialize and start the cluster

# First make sure the ZooKeeper ensemble started successfully in the earlier step
# Start the JournalNode daemon on each of the three machines
hadoop-daemon.sh start journalnode

# Format the NameNode on data1
hadoop namenode -format

# Copy the NameNode metadata directory from data1 to data2 so both NameNodes start from identical metadata
# i.e. the directory /data/namenode configured in hdfs-site.xml:
# <property>
#    <name>dfs.namenode.name.dir</name>
#    <value>/data/namenode</value>
#    <description>
#      Determines where on the local filesystem the DFS name node should store the name table.
#      If this is a comma-delimited list of directories,then name table is replicated in all of the directories,
#      for redundancy.</description>
#    <final>true</final>
# </property>
scp -rp /data/namenode hadoop@data2:/data/namenode

# Format ZKFC on data1
hdfs zkfc -formatZK

# Start the HDFS distributed storage system from data1
start-dfs.sh

# Start the YARN cluster from data1
# This starts the ResourceManager service on data1 and the NodeManager service on data1, data2, and data3
start-yarn.sh

# Start the ResourceManager service on data2
yarn-daemon.sh start resourcemanager
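At this point, jps on each machine should roughly match the node plan in section 1 (the HMaster/HRegionServer and Kafka processes only appear after the later steps). For example, on data1 one would expect something like:

# jps on data1 (data2 is similar; data3 has no NameNode, DFSZKFailoverController, or ResourceManager)
jps
# NameNode
# DataNode
# JournalNode
# QuorumPeerMain
# DFSZKFailoverController
# ResourceManager
# NodeManager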

8.12 Verify the Hadoop cluster

(1) Access the HDFS web UI
From a Windows machine, open http://data1:50070 to see basic information about the HDFS cluster.

(2) Access the YARN web UI
From a Windows machine, open http://data1:50030 to see basic information about the YARN cluster's resources.

9. Deploy Hive

Because the Hive metastore data is stored in MySQL, a MySQL database must be installed in advance.
(1) Install the Hive package

# Upload the Hive package to data1 and extract it to the target directory
tar -zxvf hive-1.1.0-cdh5.15.0.tar.gz -C /usr/local/
mv /usr/local/hive-1.1.0-cdh5.15.0 /usr/local/hive

# Configure the Hive environment variables
# Add the Hive PATH entries to /etc/profile
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH

# Source /etc/profile after changing the environment variables
source /etc/profile

(2) Copy the MySQL driver jar

Copy the MySQL JDBC driver jar into the lib directory of the Hive installation, i.e. /usr/local/hive/lib, as sketched below.
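A sketch of the copy, assuming a driver jar named mysql-connector-java-5.1.47.jar in the current directory (the jar name and version are only an example):

# Copy the MySQL JDBC driver into Hive's lib directory
cp mysql-connector-java-5.1.47.jar /usr/local/hive/lib/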

(3) Edit hive-env.sh

# In Hive's conf directory, copy hive-env.sh.template and rename it to hive-env.sh
cp hive-env.sh.template hive-env.sh

# Set the following two parameters in hive-env.sh
HADOOP_HOME=/usr/local/hadoop
export HIVE_CONF_DIR=/usr/local/hive/conf

(4) Edit hive-site.xml

# Edit hive-site.xml so that it contains the following
<configuration>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.66.240:3306/hive?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=UTF-8</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
  <description>password to use against metastore database</description>
</property>

<property>
  <name>hive.exec.compress.output</name>
  <value>true</value>
  <description> This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) is compressed. The compression codec and other options are determined from Hadoop config variables mapred.output.compress* </description>
</property>

<property>
  <name>hive.exec.compress.intermediate</name>
  <value>true</value>
  <description> This controls whether intermediate files produced by Hive between multiple map-reduce jobs are compressed. The compression codec and other options are determined from Hadoop config variables mapred.output.compress* </description>
</property>

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
  <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
</property>

<property>
  <name>hive.mapjoin.check.memory.rows</name>
  <value>100000</value>
  <description>The number means after how many rows processed it needs to check the memory usage</description>
</property>

<property>
  <name>hive.auto.convert.join</name>
  <value>true</value>
  <description>Whether Hive enables the optimization about converting common join into mapjoin based on the input file size</description>
</property>

<property>
  <name>hive.auto.convert.join.noconditionaltask</name>
  <value>true</value>
  <description>Whether Hive enables the optimization about converting common join into mapjoin based on the input file 
    size. If this parameter is on, and the sum of size for n-1 of the tables/partitions for a n-way join is smaller than the
    specified size, the join is directly converted to a mapjoin (there is no conditional task).
  </description>
</property>

<property>
  <name>hive.auto.convert.join.noconditionaltask.size</name>
  <value>10000000</value>
  <description>If hive.auto.convert.join.noconditionaltask is off, this parameter does not take affect. However, if it
    is on, and the sum of size for n-1 of the tables/partitions for a n-way join is smaller than this size, the join is directly
    converted to a mapjoin(there is no conditional task). The default is 10MB
  </description>
</property>

<property>
  <name>hive.auto.convert.join.use.nonstaged</name>
  <value>false</value>
  <description>For conditional joins, if input stream from a small alias can be directly applied to join operator without
    filtering or projection, the alias need not to be pre-staged in distributed cache via mapred local task.
    Currently, this is not working with vectorization or tez execution engine.
  </description>
</property>

<property>
  <name>hive.mapred.mode</name>
  <value>nonstrict</value>
  <description>The mode in which the Hive operations are being performed.
     In strict mode, some risky queries are not allowed to run. They include:
       Cartesian Product.
       No partition being picked up for a query.
       Comparing bigints and strings.
       Comparing bigints and doubles.
       Orderby without limit.
  </description>
</property>

<property>
  <name>hive.exec.parallel</name>
  <value>true</value>
  <description>Whether to execute jobs in parallel</description>
</property>

<property>
  <name>hive.exec.parallel.thread.number</name>
  <value>8</value>
  <description>How many jobs at most can be executed in parallel</description>
</property>

<property>
  <name>hive.exec.dynamic.partition</name>
  <value>true</value>
  <description>Whether or not to allow dynamic partitions in DML/DDL.</description>
</property>

<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>nonstrict</value>
  <description>In strict mode, the user must specify at least one static partition in case the user accidentally overwrites all partitions.</description>
</property>

<property>  
  <name>hive.metastore.uris</name>  
  <value>thrift://data1:9083</value>  
  <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>  
</property>

<property>
<name>hive.server2.enable.impersonation</name>
<description>Enable user impersonation for HiveServer2</description>
<value>false</value>
</property>

<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>

<property>
  <name>hive.input.format</name>
  <value>org.apache.hadoop.hive.ql.io.CombineHiveInputFormat</value>
</property>

<property>
  <name>hive.merge.mapfiles</name>
  <value>true</value>
</property>

<property>
  <name>hive.merge.mapredfiles</name>
  <value>true</value>
</property>

<property>
  <name>hive.merge.size.per.task</name>
  <value>256000000</value>
</property>

<property>
  <name>hive.merge.smallfiles.avgsize</name>
  <value>256000000</value>
</property>

<property>
    <name>hive.server2.logging.operation.enabled</name>
    <value>true</value>
</property>

<!--SENTRY META STORE-->
<!-- <property>
<name>hive.metastore.filter.hook</name>
<value>org.apache.sentry.binding.metastore.SentryMetaStoreFilterHook</value>
</property>

<property>  
    <name>hive.metastore.pre.event.listeners</name>  
    <value>org.apache.sentry.binding.metastore.MetastoreAuthzBinding</value>  
    <description>list of comma separated listeners for metastore events.</description>
</property>

<property>
    <name>hive.metastore.event.listeners</name>  
    <value>org.apache.sentry.binding.metastore.SentryMetastorePostEventListener</value>  
    <description>list of comma separated listeners for metastore, post events.</description>
</property> -->

<!--SENTRY SESSION-->
<!--<property>
   <name>hive.security.authorization.task.factory</name>
   <value>org.apache.sentry.binding.hive.SentryHiveAuthorizationTaskFactoryImpl</value>
</property>

<property>
   <name>hive.server2.session.hook</name>
   <value>org.apache.sentry.binding.hive.HiveAuthzBindingSessionHook</value>
</property>

<property>
   <name>hive.sentry.conf.url</name>
   <value>file:///usr/local/hive-1.1.0-cdh5.15.0/conf/sentry-site.xml</value>
</property> -->
</configuration>

(5) Edit hive-log4j.properties

# Copy hive-log4j.properties.template and rename it to hive-log4j.properties
cp hive-log4j.properties.template hive-log4j.properties

# Edit hive-log4j.properties and change the Hive log directory
hive.log.dir=/data/hive/logs

(6) Start the metastore and hiveserver2 services

# Start the metastore service
nohup hive --service metastore &

# Start the hiveserver2 service (needed only if clients will connect to Hive over JDBC)
nohup hive --service hiveserver2 &

(7) Verify Hive

# Running the hive command on data1 opens the Hive command-line client
hive
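A quick non-interactive smoke test can also be run from the shell; the table name below is only an illustration:

# List databases without entering the interactive CLI
hive -e "show databases;"

# Create and list a throwaway table
hive -e "create table if not exists test_tbl (id int, name string); show tables;"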

10. Deploy HBase

(1) Install the HBase package

# Upload the HBase package to data1 and extract it to the target directory
tar -zxvf hbase-1.2.0-cdh5.15.0.tar.gz -C /usr/local/
mv /usr/local/hbase-1.2.0-cdh5.15.0 /usr/local/hbase

# Configure the HBase environment variables in /etc/profile
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin

# Source /etc/profile after changing the environment variables
source /etc/profile

(2) Edit hbase-env.sh

# Edit hbase-env.sh and set HBASE_MANAGES_ZK=false so that HBase does not use its bundled ZooKeeper
export HBASE_MANAGES_ZK=false

(3) Edit hbase-site.xml

# Edit hbase-site.xml and add the following properties
<!-- HBase data directory on HDFS -->
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://cdhbds/hbase</value>
  </property>
  <!-- Name of the HDFS HA nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>cdhbds</value>
  </property>
  <!-- Enable fully distributed (cluster) mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property >
    <name>hbase.tmp.dir</name>
    <value>/data/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
  <!-- ZooKeeper quorum addresses that HBase registers with -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>data1,data2,data3</value>
  </property>
  <!-- Client port of the ZooKeeper quorum -->
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
 </property>

(4) Copy the HDFS configuration files

Copy core-site.xml and hdfs-site.xml from the Hadoop configuration directory into HBase's conf directory (see the sketch below).
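A sketch of the copy, using the installation paths from the earlier steps:

# Copy the HDFS client configuration into HBase's conf directory
cp /usr/local/hadoop/etc/hadoop/core-site.xml /usr/local/hbase/conf/
cp /usr/local/hadoop/etc/hadoop/hdfs-site.xml /usr/local/hbase/conf/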

(5) Configure the regionservers file

# Configure the HRegionServer hostnames by editing the regionservers file and adding the following
data1
data2
data3

(6) Configure HMaster high availability

# In HBase's conf directory, create a backup-masters file containing the hostname of the standby HMaster node
data2

(7) Distribute the HBase package

# Distribute the configured HBase package from data1 to the other two machines (data2 and data3)
scp -rp /usr/local/hbase hadoop@data2:/usr/local/hbase
scp -rp /usr/local/hbase hadoop@data3:/usr/local/hbase

(8) Start the HBase cluster

# Start the HBase cluster from data1
start-hbase.sh

# After startup, run jps on each of the three machines to see the HBase processes
# On data1: HMaster and HRegionServer
# On data2: HMaster and HRegionServer
# On data3: HRegionServer
jps

(9) Verify HBase

# Running hbase shell on data1 opens the HBase command-line client
hbase shell

In addition, the HBase web UI is available at http://data1:60010. A couple of quick checks are sketched below.
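The shell commands can also be piped in non-interactively; status should report three live region servers:

echo "status" | hbase shell
echo "list" | hbase shell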

11. Deploy Kafka

(1) Install the Kafka package

# Upload the Kafka package to data1 and extract it to the target directory
tar -zxvf kafka_2.12-0.11.0.3.tgz -C /usr/local/
mv /usr/local/kafka_2.12-0.11.0.3 /usr/local/kafka

(2) Edit server.properties

# Edit server.properties and change the following settings
# Unique broker id
broker.id=0
# Set to the current machine's hostname
listeners=PLAINTEXT://data1:9092
# Kafka log directory (also where the message data is stored)
log.dirs=/data/kafka-logs
# ZooKeeper connection string
zookeeper.connect=data1:2181,data2:2181,data3:2181

(3) Distribute the Kafka package

# Distribute the Kafka package from data1 to the other two machines (data2 and data3)
scp -rp /usr/local/kafka hadoop@data2:/usr/local/kafka
scp -rp /usr/local/kafka hadoop@data3:/usr/local/kafka

# Change the following two settings in server.properties on data2
# Unique broker id
broker.id=1
# Set to the current machine's hostname
listeners=PLAINTEXT://data2:9092

# Likewise, change the corresponding settings on data3
# Unique broker id
broker.id=2
# Set to the current machine's hostname
listeners=PLAINTEXT://data3:9092

(4) Start the Kafka cluster

# Start Kafka on each of the three machines
# Go to Kafka's bin directory and run the start command
cd /usr/local/kafka/bin
./kafka-server-start.sh -daemon ../config/server.properties
# The -daemon flag starts Kafka in the background

(5) Verify Kafka

# Run jps on each of the three machines to check that the Kafka process started
jps

# Create a topic
bin/kafka-topics.sh --create --zookeeper data1:2181,data2:2181,data3:2181 --replication-factor 3 --partitions 1 --topic test

# Start a console producer
bin/kafka-console-producer.sh --broker-list data1:9092,data2:9092,data3:9092 --topic test

# Start a console consumer
bin/kafka-console-consumer.sh --bootstrap-server data1:9092,data2:9092,data3:9092 --from-beginning --topic test

12. Deploy Flink on YARN

This Flink deployment uses the on-YARN mode with high availability (HA).

(1) Install the Flink package

# Upload the Flink package to data1 and extract it to the target directory
tar -zxvf flink-1.10.1-bin-scala_2.12.tgz -C /usr/local/
mv /usr/local/flink-1.10.1 /usr/local/flink

(2) Edit flink-conf.yaml

# Go to Flink's conf directory and edit flink-conf.yaml
vi flink-conf.yaml

# Change the following settings
taskmanager.numberOfTaskSlots: 4

high-availability: zookeeper
high-availability.storageDir: hdfs://cdhbds/flink/ha/
high-availability.zookeeper.quorum: data1:2181,data2:2181,data3:2181
high-availability.zookeeper.path.root: /flink

state.backend: filesystem
state.checkpoints.dir: hdfs://cdhbds/flink/flink-checkpoints
state.savepoints.dir: hdfs://cdhbds/flink/flink-checkpoints

jobmanager.archive.fs.dir: hdfs://cdhbds/flink/completed-jobs/
historyserver.archive.fs.dir: hdfs://cdhbds/flink/completed-jobs/

yarn.application-attempts: 10

(3) Adjust the logging configuration

Because Flink's conf directory contains both log4j and logback configuration files, starting the Flink cluster produces a warning:
org.apache.flink.yarn.AbstractYarnClusterDescriptor           - The configuration directory ('/root/flink-1.7.1/conf') contains both LOG4J and Logback configuration files. Please delete or rename one of them.

So one of the logging configuration files has to be removed; simply renaming log4j.properties to log4j.properties.bak is enough, as sketched below.
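A sketch of the rename, using the installation path from step (1):

# Keep only one logging backend by renaming the log4j configuration
mv /usr/local/flink/conf/log4j.properties /usr/local/flink/conf/log4j.properties.bak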

(4) Configure the Hadoop classpath

# This Flink version does not bundle Hadoop; per the official documentation, the Hadoop integration has to be set up manually
# The documentation offers two options: add a hadoop classpath setting,
# or copy flink-shaded-hadoop-2-uber-xx.jar into Flink's lib directory
# Here we take the first option and configure the hadoop classpath
# Edit /etc/profile and add the following
export HADOOP_CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath)

# Source /etc/profile after changing the environment variables
source /etc/profile

(5) Start the Flink cluster in yarn-session mode

# Start the cluster in yarn-session mode; from the bin directory, the start command is:
./yarn-session.sh -s 4 -jm 1024m -tm 4096m -nm flink-test -d
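Once the session is up, jobs can be submitted against it. A minimal sketch, assuming the bundled WordCount example jar sits at examples/streaming/WordCount.jar under the Flink installation directory:

# The detached yarn-session records its YARN application in /tmp/.yarn-properties-<user>,
# so flink run submits to that session automatically
./flink run /usr/local/flink/examples/streaming/WordCount.jar

# The session can also be checked on the YARN side
yarn application -list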

(6) Start the history server

# Start the history server
./historyserver.sh start