Hands-On Project from 0 to 1 with Hive (35): E-Commerce Data Warehouse Big Data Project (User Behavior Data Collection), Part 3

4.4 Log Collection with Flume


4.4.1 Installing Flume for Log Collection

Cluster plan:

[figure: cluster plan]

4.4.2 Project Experience: Flume Components

1) Source
(1) Advantages of Taildir Source over Exec Source and Spooling Directory Source:
Taildir Source supports resumable reads (it checkpoints the read position) and multiple directories. In Flume 1.6 and earlier you had to write a custom Source that recorded each file's read position to get resumable reads.
Exec Source can collect data in real time, but data is lost if the Flume agent is not running or the shell command fails.
Spooling Directory Source monitors a directory but does not support resuming from a saved position.
(2) How large should batchSize be?
Answer: with events around 1 KB, 500-1000 is appropriate (the default is 100); see the configuration sketch below.
2) Channel
Kafka Channel is used, which removes the need for a Sink and improves efficiency.
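A minimal sketch of setting batchSize, assuming the agent and source names a1/r1 used in the configuration in 4.4.3:

# Hypothetical snippet: raise the Taildir Source batch size from the default 100
a1.sources.r1.type = TAILDIR
a1.sources.r1.batchSize = 500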

4.4.3 Log Collection Flume Configuration

1) Flume configuration analysis

[figure: Flume configuration analysis]

Flume reads the log data directly from the log files, which are named app-yyyy-mm-dd.log.
2) The concrete Flume configuration is as follows:
(1) Create the file file-flume-kafka.conf in the /opt/module/flume/conf directory

[kgg@hadoop101 conf]$ vim file-flume-kafka.conf
Add the following content to the file:
a1.sources = r1
a1.channels = c1 c2

# configure source
a1.sources.r1.type = TAILDIR
a1.sources.r1.positionFile = /opt/module/flume/test/log_position.json
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /tmp/logs/app.+
a1.sources.r1.fileHeader = true
a1.sources.r1.channels = c1 c2

#interceptor
a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = com.kgg.flume.interceptor.LogETLInterceptor$Builder
a1.sources.r1.interceptors.i2.type = com.kgg.flume.interceptor.LogTypeInterceptor$Builder

a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = topic
a1.sources.r1.selector.mapping.topic_start = c1
a1.sources.r1.selector.mapping.topic_event = c2

# configure channel
a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.kafka.bootstrap.servers = hadoop101:9092,hadoop102:9092,hadoop103:9092
a1.channels.c1.kafka.topic = topic_start
a1.channels.c1.parseAsFlumeEvent = false
a1.channels.c1.kafka.consumer.group.id = flume-consumer

a1.channels.c2.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c2.kafka.bootstrap.servers = hadoop101:9092,hadoop102:9092,hadoop103:9092
a1.channels.c2.kafka.topic = topic_event
a1.channels.c2.parseAsFlumeEvent = false
a1.channels.c2.kafka.consumer.group.id = flume-consumer

Note: com.kgg.flume.interceptor.LogETLInterceptor and com.kgg.flume.interceptor.LogTypeInterceptor are the fully qualified class names of the custom interceptors. Change them to match your own interceptors.


4.4.4 Flume ETL and Log-Type Interceptors

This project defines two custom interceptors: an ETL interceptor and a log-type interceptor. The ETL interceptor filters out logs whose timestamp is invalid or whose JSON payload is incomplete.

The log-type interceptor separates startup logs from event logs so that they can be sent to different Kafka topics.
1) Create a Maven project named flume-interceptor
2) Create the package com.kgg.flume.interceptor
3) Add the following configuration to pom.xml

<dependencies>
   <dependency>
       <groupId>org.apache.flume</groupId>
       <artifactId>flume-ng-core</artifactId>
       <version>1.7.0</version>
   </dependency>
</dependencies>

<build>
   <plugins>
       <plugin>
           <artifactId>maven-compiler-plugin</artifactId>
           <version>2.3.2</version>
           <configuration>
               <source>1.8</source>
               <target>1.8</target>
           </configuration>
       </plugin>
       <plugin>
           <artifactId>maven-assembly-plugin</artifactId>
           <configuration>
               <descriptorRefs>
                   <descriptorRef>jar-with-dependencies</descriptorRef>
               </descriptorRefs>
           </configuration>
           <executions>
               <execution>
                   <id>make-assembly</id>
                   <phase>package</phase>
                   <goals>
                       <goal>single</goal>
                   </goals>
               </execution>
           </executions>
       </plugin>
   </plugins>
</build>

4) Create the class LogETLInterceptor in the com.kgg.flume.interceptor package

Flume ETL interceptor LogETLInterceptor:
package com.kgg.flume.interceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;

public class LogETLInterceptor implements Interceptor {

    @Override
    public void initialize() {

    }

    @Override
    public Event intercept(Event event) {

        // 1 Get the event body as a UTF-8 string
        byte[] body = event.getBody();
        String log = new String(body, Charset.forName("UTF-8"));

        // 2 Validate according to the log type (startup log vs. event log)
        if (log.contains("start")) {
            if (LogUtils.validateStart(log)) {
                return event;
            }
        } else {
            if (LogUtils.validateEvent(log)) {
                return event;
            }
        }

        // 3 Drop the event if validation failed
        return null;
    }

    @Override
    public List<Event> intercept(List<Event> events) {

        ArrayList<Event> interceptors = new ArrayList<>();

        // Keep only the events that pass single-event validation
        for (Event event : events) {
            Event intercept1 = intercept(event);

            if (intercept1 != null) {
                interceptors.add(intercept1);
            }
        }

        return interceptors;
    }

    @Override
    public void close() {

    }

    public static class Builder implements Interceptor.Builder {

        @Override
        public Interceptor build() {
            return new LogETLInterceptor();
        }

        @Override
        public void configure(Context context) {

        }
    }
}

5) Flume log validation utility class LogUtils

package com.kgg.flume.interceptor;

import org.apache.commons.lang.math.NumberUtils;

public class LogUtils {

    public static boolean validateEvent(String log) {
        // Event log format: server timestamp | json
        // 1549696569054 | {"cm":{"ln":"-89.2","sv":"V2.0.4","os":"8.2.0","g":"M67B4QYU@gmail.com","nw":"4G","l":"en","vc":"18","hw":"1080*1920","ar":"MX","uid":"u8678","t":"1549679122062","la":"-27.4","md":"sumsung-12","vn":"1.1.3","ba":"Sumsung","sr":"Y"},"ap":"weather","et":[]}

        // 1 Split on '|'
        String[] logContents = log.split("\\|");

        // 2 There must be exactly two parts
        if (logContents.length != 2) {
            return false;
        }

        // 3 Validate the server timestamp (13 digits)
        if (logContents[0].length() != 13 || !NumberUtils.isDigits(logContents[0])) {
            return false;
        }

        // 4 Validate the JSON part (must start with '{' and end with '}')
        if (!logContents[1].trim().startsWith("{") || !logContents[1].trim().endsWith("}")) {
            return false;
        }

        return true;
    }

    public static boolean validateStart(String log) {
        // Startup log format: a bare json object, e.g.
        // {"action":"1","ar":"MX","ba":"HTC","detail":"542","en":"start","entry":"2","extend1":"","g":"S3HQ7LKM@gmail.com","hw":"640*960","l":"en","la":"-43.4","ln":"-98.3","loading_time":"10","md":"HTC-5","mid":"993","nw":"WIFI","open_ad_type":"1","os":"8.2.1","sr":"D","sv":"V2.9.0","t":"1559551922019","uid":"993","vc":"0","vn":"1.1.5"}

        if (log == null) {
            return false;
        }

        // Validate the JSON (must start with '{' and end with '}')
        if (!log.trim().startsWith("{") || !log.trim().endsWith("}")) {
            return false;
        }

        return true;
    }
}
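As a quick sanity check, a hypothetical test class (not part of the original project) can run sample lines through these validators:

package com.kgg.flume.interceptor;

// Hypothetical helper, for illustration only: exercises LogUtils on sample lines
public class LogUtilsTest {

    public static void main(String[] args) {
        // An event log: 13-digit epoch-millis timestamp, '|' separator, JSON body
        String event = "1549696569054|{\"ap\":\"weather\",\"et\":[]}";
        // A startup log: a bare JSON object
        String start = "{\"en\":\"start\",\"mid\":\"993\"}";

        System.out.println(LogUtils.validateEvent(event));       // true
        System.out.println(LogUtils.validateStart(start));       // true
        System.out.println(LogUtils.validateEvent("not-a-log")); // false: no '|' separator
    }
}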

6) Flume log-type interceptor LogTypeInterceptor

package com.kgg.flume.interceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class LogTypeInterceptor implements Interceptor {

    @Override
    public void initialize() {

    }

    @Override
    public Event intercept(Event event) {

        // Distinguish the log type: inspect the body, tag the header
        // 1 Get the body as a UTF-8 string
        byte[] body = event.getBody();
        String log = new String(body, Charset.forName("UTF-8"));

        // 2 Get the headers
        Map<String, String> headers = event.getHeaders();

        // 3 Determine the log type and set the topic header,
        //   which the multiplexing channel selector uses for routing
        if (log.contains("start")) {
            headers.put("topic", "topic_start");
        } else {
            headers.put("topic", "topic_event");
        }

        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {

        ArrayList<Event> interceptors = new ArrayList<>();

        for (Event event : events) {
            Event intercept1 = intercept(event);

            interceptors.add(intercept1);
        }

        return interceptors;
    }

    @Override
    public void close() {

    }

    public static class Builder implements Interceptor.Builder {

        @Override
        public Interceptor build() {
            return new LogTypeInterceptor();
        }

        @Override
        public void configure(Context context) {

        }
    }
}

7) Packaging
After packaging, only the interceptor jar itself needs to be uploaded, without its dependencies. The packaged jar goes into Flume's lib directory.


Note: why are the dependency jars not needed? Because they already exist in Flume's lib directory.
8) First place the packaged jar into the /opt/module/flume/lib directory on hadoop101.
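For example, a sketch of the build-and-copy steps (the prompt and project directory are assumptions; the jar name and target path come from this section):

[kgg@hadoop101 flume-interceptor]$ mvn clean package
[kgg@hadoop101 flume-interceptor]$ cp target/flume-interceptor-1.0-SNAPSHOT.jar /opt/module/flume/lib/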

[kgg@hadoop101 lib]$ ls | grep interceptor
flume-interceptor-1.0-SNAPSHOT.jar

4.4.5 Log Collection Flume Start/Stop Script

1) Create the script f1.sh in the /home/kgg/bin directory

[kgg@hadoop101 bin]$ vim f1.sh
Add the following content to the script:
#! /bin/bash

case $1 in
"start"){
       for i in hadoop101 hadoop102
       do
               echo " --------啓動 $i 採集flume-------"
               ssh $i "nohup /opt/module/flume/bin/flume-ng agent --conf-file /opt/module/flume/conf/file-flume-kafka.conf --name a1 -Dflume.root.logger=INFO,LOGFILE > /dev/null 2>&1 &"
       done
};;    
"stop"){
       for i in hadoop101 hadoop102
       do
               echo " --------中止 $i 採集flume-------"
               ssh $i "ps -ef | grep file-flume-kafka | grep -v grep |awk '{print \$2}' | xargs kill"
       done

};;
esac

Note 1: nohup keeps a process running after you log out or close the terminal; the name means "no hang up", i.e., the command keeps running without being hung up.
Note 2: /dev/null is the Linux null device; everything written to it is discarded, so it is commonly called the "black hole".
Standard input 0: input from the keyboard, /proc/self/fd/0
Standard output 1: output to the screen (console), /proc/self/fd/1
Standard error 2: output to the screen (console), /proc/self/fd/2
2) Add execute permission to the script

[kgg@hadoop101 bin]$ chmod 777 f1.sh

3) Start the cluster with the f1 script

[kgg@hadoop101 module]$ f1.sh start

4) Stop the cluster with the f1 script

[kgg@hadoop101 module]$ f1.sh stop