Building a Centralized Log Analysis Platform with ELK (Elasticsearch + Logstash + Kibana)

Preface

Elasticsearch + Logstash + Kibana (ELK) is an open-source log management stack. To analyze website traffic we usually embed JavaScript trackers such as Google Analytics, Baidu Tongji, or CNZZ, but when the site misbehaves or comes under attack we need to analyze the raw backend logs, such as Nginx access logs. Nginx log rotation, GoAccess, and AWStats are relatively simple single-node solutions and fall short for distributed clusters or large data volumes; ELK lets us face those new challenges with confidence.

  • Logstash: collects, processes, and stores logs
  • Elasticsearch: indexes, searches, and analyzes logs
  • Kibana: visualizes logs
ELK(Elasticsearch + Logstash + Kibana)

Changelog

2019-07-02 - Restructured from a colleague's ELK Stack write-up
2015-08-31 - First draft

Read the original - https://wsgzao.github.io/post...

Further reading

elastic - https://www.elastic.co/cn/
ELK - https://fainyang.github.io/po...


ELK Overview

As described in the official ELK documentation, Elasticsearch is a distributed, scalable, real-time search and analytics engine. At the moment I only use it at work to collect server logs; it is a great helper for developers when debugging.

Installing and Configuring a Single-Node ELK

If you want to spin up a single-node ELK quickly, using Docker is definitely your best option. Use the three-in-one image; see its documentation for details.
Note: after installing Docker, remember to raise the mmap count (vm.max_map_count) to at least 262144.
What is mmap

# Commands to set vm.max_map_count
# Temporary method (takes effect immediately, lost after a reboot)
sysctl -w vm.max_map_count=262144  

# Persistent method: write it into /etc/sysctl.conf
vim /etc/sysctl.conf

vm.max_map_count=262144  
# Save the file, then apply it with:

sysctl -p
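
# (Optional) verify that the setting took effect; it should print vm.max_map_count = 262144
sysctl vm.max_map_count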

# Install Docker
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

sudo yum install -y docker-ce

sudo systemctl start docker

On a single-node machine there is no need to expose port 9200 (Elasticsearch JSON interface) or 9300 (Elasticsearch transport interface).
If you do want Docker to expose a port with -p, note that omitting the listen address makes it bind to 0.0.0.0, i.e. every interface. It is safer to state the listen address explicitly.

-p listen_ip:host_port:container_port
-p 192.168.10.10:9300:9300

Starting ELK from the command line

sudo docker run -p 5601:5601 -p 5044:5044 \
-v /data/elk-data:/var/lib/elasticsearch  \
-v /data/elk/logstash:/etc/logstash/conf.d  \
-it -e TZ="Asia/Singapore" -e ES_HEAP_SIZE="20g"  \
-e LS_HEAP_SIZE="10g" --name elk-ubuntu sebp/elk
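
# Once the container is up, a quick sanity check (a sketch; adjust the container name if you changed it,
# and note that the health query assumes curl is available inside the sebp/elk image):
docker logs -f elk-ubuntu
docker exec -it elk-ubuntu curl -s http://localhost:9200/_cluster/health?pretty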

Mount the configuration and data outside the container so that even if the Docker container runs into trouble you can destroy it and start a new one immediately, keeping the service disruption very short.

# Pay attention to the permissions of the mounted folders
chmod 755 /data/elk-data 
chmod 755 /data/elk/logstash
chown -R root:root /data 
-v /data/elk-data:/var/lib/elasticsearch   # mount Elasticsearch's data directory so the data is persisted
-v /data/elk/logstash:/etc/logstash/conf.d # mount Logstash's configuration directory so it can be edited from the host

Important Elasticsearch tuning parameters

  1. ES_HEAP_SIZE: Elasticsearch assigns the entire heap specified in jvm.options via the Xms (minimum heap size) and Xmx (maximum heap size) settings. Set these two values equal to each other, and to no more than 50% of your physical RAM. Also keep the heap below the compressed-oops cutoff: the exact threshold varies but is near 32 GB; 26 GB is safe on most systems, and it can be as large as 30 GB on some.

Trade-off: the more heap available to Elasticsearch, the more memory it can use for its internal caches, but the less memory it leaves available for the operating system's filesystem cache. Larger heaps also cause longer garbage-collection pauses.

  2. LS_HEAP_SIZE: if the heap size is too low, CPU utilization hits a bottleneck because the JVM keeps running garbage collection. Do not set the heap size beyond physical memory, and leave at least 1 GB for the operating system and other processes. (A quick way to check the effective heap follows this list.)
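
Once the node is running, the heap actually in use can be read back through the standard _cat/nodes API; a small check, assuming Elasticsearch answers on localhost:9200 (run it inside the container, or expose 9200, if you kept that port unpublished):

# Show each node's configured maximum heap and its current usage
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.max,heap.current,heap.percent'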

Only Logstash needs configuration

Next, let's look at logstash.conf; remember to read the comments.
References:

  1. SSL details
  2. grok regex capture
  3. grok plugin syntax
  4. Logstash configuration syntax
  5. grok built-in patterns
  6. Detailed Logstash notes
input {
  beats {
    port => 5044
    #ssl => true
    #ssl_certificate => "/etc/logstash/logstash.crt"
    #ssl_key => "/etc/logstash/logstash.key"
# 1. See the SSL reference above
  }
}
# The filter section does data preprocessing: it extracts fields so Elasticsearch can classify and store the documents more easily.
# 2. grok regex capture
# 3. grok plugin syntax
# 4. Logstash configuration syntax
# 5. grok built-in patterns
filter {
    grok {  
      match => {"message" => "%{EXIM_DATE:timestamp}\|%{LOGLEVEL:log_level}\|%{INT:pid}\|%{GREEDYDATA}"}
# The message field holds the log line, e.g. 2018-12-11 23:46:47.051|DEBUG|3491|helper.py:85|helper._save_to_cache|shop_session
# Here we extract timestamp, log_level, and pid; grok ships these patterns built in: EXIM_DATE, LOGLEVEL, INT
# GREEDYDATA is the greedy pattern: it matches whatever characters remain
    }
# If filebeat added the [fields][function] field, the corresponding match rule below is applied to extract path
# The source field is the path the log came from, e.g. /var/log/nginx/feiyang233.club.access.log
# After the match we get path=feiyang233.club.access
    if [fields][function]=="nginx" {
        grok {         
        match => {"source" => "/var/log/nginx/%{GREEDYDATA:path}.log%{GREEDYDATA}"}  
            }
        } 
# e.g. the ims logs come from /var/log/ims_logic/debug.log
# After the match we get path=ims_logic
    else if [fields][function]=="ims" {
        grok {
        match => {"source" => "/var/log/%{GREEDYDATA:path}/%{GREEDYDATA}"}
            }
        }  

    else {
        grok {
        match => {"source" => "/var/log/app/%{GREEDYDATA:path}/%{GREEDYDATA}"}
            }         
        }
# When filebeat defines [fields][function], copy it into a top-level function field, e.g. QA
    if [fields][function] {
          mutate {
              add_field => {
                  "function" => "%{[fields][function]}"
                }
            }
        } 
# Production machines are far more numerous and by default do not set function in filebeat, so the else branch tags them as live
    else {
          mutate {
              add_field => {
                  "function" => "live"
                }
            }
        }
# The earlier grok on message gave us timestamp; here we parse its format and attach the time zone.
    date {
      match => ["timestamp" , "yyyy-MM-dd HH:mm:ss Z"]
      target => "@timestamp"
      timezone => "Asia/Singapore"
    }
# Replace / with - in the path obtained above, because Elasticsearch index names have restrictions
# e.g. feiyang/test becomes feiyang-test
    mutate {
     gsub => ["path","/","-"]
      add_field => {"host_ip" => "%{[fields][host]}"}
      remove_field => ["tags","@version","offset","beat","fields","exim_year","exim_month","exim_day","exim_time","timestamp"]
    }
# remove_field drops redundant fields
}
# On a single node the output is on the same host and SSL is not needed, but the index naming rules still deserve close attention
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "sg-%{function}-%{path}-%{+xxxx.ww}"
# e.g. sg-nginx-feiyang233.club.access-2019.13 ; ww is the week number
  }
}
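
Before restarting Logstash it can save time to syntax-check the pipeline first. A hedged sketch, assuming the sebp/elk image's /opt/logstash install path and the /etc/logstash/conf.d directory mounted above:

# Validate the pipeline configuration and exit without starting the pipeline
/opt/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/ --config.test_and_exit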

The resulting pipeline is shown in the flow diagram below.

Index naming rules (reference link)

  • Lowercase only
  • Cannot include \, /, *, ?, ", <, >, |, ` ` (space character), ,, #
  • Indices prior to 7.0 could contain a colon (:), but that’s been deprecated and won’t be supported in 7.0+
  • Cannot start with -, _, +
  • Cannot be . or ..
  • Cannot be longer than 255 bytes (note it is bytes, so multi-byte characters will count towards the 255 limit faster)

Filebeat configuration

On the client side we need to install and configure filebeat; see
Filebeat modules and configuration
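
For a CentOS client, a minimal install sketch (assuming the 6.7.0 RPM from Elastic's artifact repository; pick the version that matches your ELK server):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-x86_64.rpm
sudo rpm -vi filebeat-6.7.0-x86_64.rpm
sudo systemctl enable filebeat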
Configuration file filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths: # logs to collect
    - /var/log/app/**  ## ** recursive globs need a recent filebeat version

  fields: # extra fields to add
    host: "{{inventory_hostname}}" 
    function: "xxx"
  multiline:  # multi-line matching
    match: after
    negate: true  # pay attention to the format
    pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'   #\[
  ignore_older: 24h
  clean_inactive: 72h

output.logstash:
  hosts: ["{{elk_server}}:25044"]
  # ssl:
  #   certificate_authorities: ["/etc/filebeat/logstash.crt"]

For bulk deployment of filebeat.yml, Ansible is the best choice.

---
- hosts: all
  become: yes
  gather_facts: yes
  tasks:
  - name: stop filebeat
    service: 
      name: filebeat
      state: stopped
      enabled: yes
      
  - name: upload filebeat.yml 
    template:
     src: filebeat.yml
     dest: /etc/filebeat/filebeat.yml
     owner: root
     group: root
     mode: 0644      

  - name: remove
    file: #delete all files in this directory
      path: /var/lib/filebeat/registry    
      state: absent

  - name: restart filebeat
    service: 
      name: filebeat
      state: restarted
      enabled: yes
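
A hedged usage example, assuming the playbook above is saved as deploy_filebeat.yml and your inventory file is named hosts:

ansible-playbook -i hosts deploy_filebeat.yml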

Inspecting the filebeat output

First, modify the configuration so filebeat writes its output to a local file in JSON format.

filebeat.inputs:
- type: log
  enabled: true
  paths:
     - /var/log/app/**
  fields:
    host: "x.x.x.x"
    region: "sg"
  multiline:
    match: after
    negate: true
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  ignore_older: 24h
  clean_inactive: 72h

output.file:
  path: "/home/feiyang"
  filename: feiyang.json

With the configuration above we get the output file feiyang.json under /home/feiyang. Note that different filebeat versions format their output differently, which makes parsing and filtering in Logstash slightly harder. The examples below show how the 6.x and 7.x outputs differ.

{
  "@timestamp": "2019-06-27T15:53:27.682Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.4.2"
  },
  "fields": {
    "host": "x.x.x.x",
    "region": "sg"
  },
  "host": {
    "name": "x.x.x.x"
  },
  "beat": {
    "name": "x.x.x.x",
    "hostname": "feiyang-localhost",
    "version": "6.4.2"
  },
  "offset": 1567983499,
  "message": "[2019-06-27T22:53:25.756327232][Info][@http.go.177] [48552188]request",
  "source": "/var/log/feiyang/scripts/all.log"
}

Structurally, 6.4 and 7.2 differ quite a lot.

{
  "@timestamp": "2019-06-27T15:41:42.991Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.2.0"
  },
  "agent": {
    "id": "3a38567b-e6c3-4b5a-a420-f0dee3a3bec8",
    "version": "7.2.0",
    "type": "filebeat",
    "ephemeral_id": "b7e3c0b7-b460-4e43-a9af-6d36c25eece7",
    "hostname": "feiyang-localhost"
  },
  "log": {
    "offset": 69132192,
    "file": {
      "path": "/var/log/app/feiyang/scripts/info.log"
    }
  },
  "message": "2019-06-27 22:41:25.312|WARNING|14186|Option|data|unrecognized|fields=set([u'id'])",
  "input": {
    "type": "log"
  },
  "fields": {
    "region": "sg",
    "host": "x.x.x.x"
  },
  "ecs": {
    "version": "1.0.0"
  },
  "host": {
    "name": "feiyang-localhost"
  }
}

Basic Kibana Usage

When ELK was set up, the exposed port 5601 is the Kibana service.
Visit http://your_elk_ip:5601

Installing and Configuring an ELK Cluster (version 6.7)

See the ELK installation docs. A cluster is mainly about high availability, and a multi-node Elasticsearch can also be scaled out. This article uses the official images; the base image is centos:7.

Multi-node Elasticsearch Setup

Official installation docs: Elasticsearch

# The permissions of the mounted folders matter a lot
mkdir -p /data/elk-data && chmod 755 /data/elk-data
chown -R root:root /data 
docker run -p WAN_IP:9200:9200 -p 10.66.236.116:9300:9300 \
-v /data/elk-data:/usr/share/elasticsearch/data \
--name feiy_elk \
docker.elastic.co/elasticsearch/elasticsearch:6.7.0

Next, edit the configuration file elasticsearch.yml.

# Master node: node-1
# Enter the container: docker exec -it [container_id] bash
# docker exec -it 70ada825aae1 bash
# vi /usr/share/elasticsearch/config/elasticsearch.yml
cluster.name: "feiy_elk"
network.host: 0.0.0.0
node.master: true
node.data: true
node.name: node-1
network.publish_host: 10.66.236.116
discovery.zen.ping.unicast.hosts: ["10.66.236.116:9300","10.66.236.118:9300","10.66.236.115:9300"]

# exit
# docker restart  70ada825aae1
# Slave node: node-2
# Enter the container: docker exec -it [container_id] bash
# vi /usr/share/elasticsearch/config/elasticsearch.yml
cluster.name: "feiy_elk"
network.host: "0.0.0.0"
node.name: node-2
node.data: true
network.publish_host: 10.66.236.118
discovery.zen.ping.unicast.hosts: ["10.66.236.116:9300","10.66.236.118:9300","10.66.236.115:9300"]

# exit
# docker restart  70ada825aae1
# Slave node: node-3
# Enter the container: docker exec -it [container_id] bash
# vi /usr/share/elasticsearch/config/elasticsearch.yml
cluster.name: "feiy_elk"
network.host: "0.0.0.0"
node.name: node-3
node.data: true
network.publish_host: 10.66.236.115
discovery.zen.ping.unicast.hosts: ["10.66.236.116:9300","10.66.236.118:9300","10.66.236.115:9300"]

# exit
# docker restart  70ada825aae1

Check the number of cluster nodes, their status, and so on.

# curl http://wan_ip:9200/_cluster/health?pretty
{
  "cluster_name" : "feiy_elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 9,
  "active_shards" : 18,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Final result: the cluster status can be seen in Kibana.

Kibana Setup

Official installation docs: Kibana

# docker run --link YOUR_ELASTICSEARCH_CONTAINER_NAME_OR_ID:elasticsearch -p 5601:5601 {docker-repo}:{version}
docker run -p WAN_IP:5601:5601 --link elasticsearch_container_ID:elasticsearch docker.elastic.co/kibana/kibana:6.7.0

# Note that Docker does not actually recommend --link; user-defined networks are recommended instead: https://docs.docker.com/network/links/
# Tested: it also works without --link, using the container's IP directly
docker run -p WAN_IP:5601:5601  docker.elastic.co/kibana/kibana:6.7.0

we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link
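
A minimal sketch of that user-defined network approach instead (the network name elk-net and the container names are illustrative assumptions; on the same user-defined network the containers resolve each other by name, and the Kibana image's default configuration already points at http://elasticsearch:9200):

# Create a user-defined bridge network and attach both containers to it
docker network create elk-net
docker run -d --name elasticsearch --network elk-net -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.7.0
docker run -d --name kibana --network elk-net -p 5601:5601 docker.elastic.co/kibana/kibana:6.7.0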

# vi /usr/share/kibana/config/kibana.yml
# Change the hosts IP to the Elasticsearch container's IP
# Here my Elasticsearch container's IP is 172.17.0.2
# To find it: docker inspect elasticsearch_container_ID
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://172.17.0.2:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true

# Exit the container, then restart it
docker restart [container_ID]

Logstash Setup

Official installation docs: Logstash

# docker run -d starts the container in the background; --name gives the container an explicit name
docker run -p 5044:5044 -d --name test_logstash  docker.elastic.co/logstash/logstash:6.7.0
# You can also bind to a specific interface, internal or external; here it listens on the internal IP 192.168.1.2
docker run -p 192.168.1.2:5044:5044 -d --name test_logstash  docker.elastic.co/logstash/logstash:6.7.0
# vi /usr/share/logstash/pipeline/logstash.conf
# See the links below for configuration details; remember the output hosts must point to the Elasticsearch IPs
# Elasticsearch's default port is 9200, so it can be omitted in the configuration below.
hosts => ["IP Address 1:port1", "IP Address 2:port2", "IP Address 3"]
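
Putting that together, a minimal output block for the three-node cluster built above might look like this (a sketch that reuses the index naming from the single-node example; point the hosts at whichever addresses actually publish port 9200 on your nodes):

output {
  elasticsearch {
    # the wan_ip_* placeholders are illustrative; use the addresses that expose 9200
    hosts => ["wan_ip_1:9200", "wan_ip_2:9200", "wan_ip_3:9200"]
    index => "sg-%{function}-%{path}-%{+xxxx.ww}"
  }
}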

For the Logstash filter rules, see the configuration above and the grok syntax references.

# vi /usr/share/logstash/config/logstash.yml
# Change the url to the Elasticsearch master node's IP
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://elasticsearch_master_IP:9200
node.name: "feiy"
pipeline.workers: 24 # match the number of CPU cores

After changing the configuration, exit the container back to the host and then restart the container. For more configuration details, see the official documentation.

# How to find the container_ID
docker ps -a

docker restart [container_ID]

Failover Test

We shut down the current master node, node-1, and watch in Kibana how the cluster status changes.

The cluster status turns yellow because there are still 3 unassigned shards. See the official documentation for what the colors mean; after a while the cluster status turns green again.
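
Besides Kibana, the recovery can also be watched from the command line; a small sketch that polls the health API on a surviving node, using the same wan_ip placeholder as the health check above:

watch -n 5 'curl -s http://wan_ip:9200/_cluster/health?pretty'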

Kibana Console

Quick intro to the UI
The Console UI is split into two panes: an editor pane (left) and a response pane (right). Use the editor to type requests and submit them to Elasticsearch. The results will be displayed in the response pane on the right side.

Console understands requests in a compact format, similar to cURL:

# index a doc
PUT index/type/1
{
  "body": "here"
}

# and get it ...
GET index/type/1

While typing a request, Console will make suggestions which you can then accept by hitting Enter/Tab. These suggestions are made based on the request structure as well as your indices and types.

A few quick tips, while I have your attention

  • Submit requests to ES using the green triangle button.
  • Use the wrench menu for other useful things.
  • You can paste requests in cURL format and they will be translated to the Console syntax.
  • You can resize the editor and output panes by dragging the separator between them.
  • Study the keyboard shortcuts under the Help button. Good stuff in there!

Commonly Used Console Commands

Kibana Console
Query syntax in the ELK stack

GET _search
{
  "query": {
    "match_all": {}
  }
}

GET /_cat/health?v

GET /_cat/nodes?v

GET /_cluster/allocation/explain

GET /_cluster/state

GET /_cat/thread_pool?v

GET /_cat/indices?health=red&v

GET /_cat/indices?v

# Set replicas to 0 for all current indices

PUT /*/_settings
{
   "index" : {
       "number_of_replicas" : 0,
       "refresh_interval": "30s"
   }
}

GET /_template


# On a single node there is no need for replicas, so set number_of_replicas to 0
PUT _template/app-logstash
{
 "index_patterns": ["app-*"],
 "settings": {
   "number_of_shards": 3,
   "number_of_replicas": 0,
   "refresh_interval": "30s"
  }
}

Elasticsearch Data Migration

The official Elasticsearch documentation on data migration does not feel very detailed. For migrating data between containerized clusters I failed with reindex, and snapshot did not work out for me either.
In the end I migrated the data with an open-source tool, An Elasticsearch Migration Tool (esm).

wget https://github.com/medcl/esm-abandoned/releases/download/v0.4.2/linux64.tar.gz
tar -xzvf linux64.tar.gz
./esm  -s http://127.0.0.1:9200   -d http://192.168.21.55:9200 -x index_name  -w=5 -b=10 -c 10000 --copy_settings --copy_mappings --force  --refresh

Nginx Reverse Proxy

Because a Docker restart (or an iptables restart) re-applies the firewall rules, our own restriction rules can get overwritten, which creates a security problem. This is a side effect of Docker building its network isolation on top of iptables. To avoid it, we can make the container listen only on the internal network, or on 127.0.0.1, and then forward traffic through Nginx.
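
For example, a hedged re-run of the Kibana container bound only to the loopback interface, so that only the local Nginx below can reach it:

docker run -d --name kibana -p 127.0.0.1:5601:5601 docker.elastic.co/kibana/kibana:6.7.0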

# cat kibana.conf
server {

    listen 25601;
    server_name x.x.x.x;
    access_log /var/log/nginx/kibana.access.log;
    error_log /var/log/nginx/kibana.error.log;

    location / {
        allow x.x.x.x;
        allow x.x.x.x;
        deny all;

        proxy_http_version 1.1;
        proxy_buffer_size 64k;
        proxy_buffers   32 32k;
        proxy_busy_buffers_size 128k;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;

        proxy_pass    http://127.0.0.1:5601;

    }
}

! Note: check whether the INPUT chain of the iptables filter table is blocking 172.17.0.0/16 (Docker's default subnet), and whether it is blocking port 25601.
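
A quick way to inspect those rules (standard iptables commands; adjust the subnet and port to your setup):

# List INPUT chain rules with line numbers, then look for the Docker subnet and the proxy port
iptables -L INPUT -n -v --line-numbers
iptables -L INPUT -n | grep -E '172\.17\.|25601'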

Pitfalls Encountered

  • iptables alone will not protect you. See the iptables issue in the previous post, or listen on the internal network and forward through an Nginx proxy.
  • ELK networking issues
  • ELK node issues
  • discovery.type=single-node works for single-node testing, but this environment variable must not be set when building a cluster; see the official documentation for details (a sketch follows this list).
  • An ELK throughput optimization exercise
  • A filebeat version that is too old makes recursive glob patterns (**) unavailable.
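
A sketch of the single-node shortcut mentioned above, for local testing only (do not combine it with the cluster discovery settings):

docker run -d --name es-single -p 127.0.0.1:9200:9200 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.7.0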

Upgrading filebeat with Ansible

---
- hosts: all
  become: yes
  gather_facts: yes
  tasks:
  - name: upload elasticsearch.repo
    copy:
     src: elasticsearch.repo
     dest: /etc/yum.repos.d/elasticsearch.repo
     owner: root
     group: root
     mode: 0644

  - name: install the latest version of filebeat
    yum:
      name: filebeat
      state: latest

  - name: restart filebeat
    service: 
      name: filebeat
      state: restarted
      enabled: yes
      
# elasticsearch.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
  • filebeat 7.x is incompatible with 6.x: key field names changed a great deal; for example, source became [log][file][path] (a workaround sketch follows below).
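
If both filebeat 6.x and 7.x clients feed the same Logstash, one hedged way to smooth over the rename is to normalize the field at the top of the filter block (a sketch only, using the field names from the sample outputs above):

filter {
  # filebeat 7.x puts the file path in [log][file][path]; copy it into source
  # so the grok rules written for the 6.x layout keep working
  if ![source] and [log][file][path] {
    mutate {
      add_field => { "source" => "%{[log][file][path]}" }
    }
  }
}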

References

  1. Tencent Cloud Elasticsearch Service - this Tencent Cloud column is excellent; do take a look, there is always something useful in it.
  2. ELK key points, pitfalls, and overall optimization configuration