Filebeat + ELK Deployment Guide

ELK Stack

  • Elasticsearch: a distributed search and analytics engine that is highly scalable, reliable, and easy to manage. Built on Apache Lucene, it can store, search, and analyze large volumes of data in near real time. It is often used as the underlying search engine for applications that need complex search features;
  • Logstash: a data collection engine. It dynamically ingests data from a variety of sources, filters, parses, enriches, and normalizes it, and then ships it to a destination of your choice;
  • Kibana: a data analytics and visualization platform, usually paired with Elasticsearch to search and analyze its data and present the results as charts and dashboards;
  • Filebeat: a newer member of the ELK stack, a lightweight open-source log file shipper developed from the Logstash-Forwarder code base as its replacement. Install Filebeat on each server whose logs you want to collect, point it at log directories or files, and it reads the data and quickly ships it to Logstash for parsing, or directly to Elasticsearch for centralized storage and analysis.

A mature architecture for high-volume setups (hundreds of millions of events): Filebeat × n + Redis + Logstash + Elasticsearch + Kibana. For small to medium setups (what this article deploys): Filebeat × n + Logstash + Elasticsearch + Kibana.

Deploying Filebeat with Docker

docker-compose.yml

version: '3'

services:
  filebeat:
    build:
      context: .
      args:
        ELK_VERSION: 7.1.1
    user: root
    container_name: 'filebeat'
    volumes:
      - ./config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro # config file, read-only
      - /var/lib/docker/containers:/var/lib/docker/containers:ro # Docker container log data to collect
      - /var/run/docker.sock:/var/run/docker.sock:ro
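If the Filebeat container exits on startup complaining about the ownership or permissions of the bind-mounted config file (a common issue when filebeat.yml is mounted from the host), the permission check can be relaxed. A sketch of one extra line for the service above:

```yaml
    # optional: relax Filebeat's config-file permission check when
    # filebeat.yml is bind-mounted from the host
    command: filebeat -e --strict.perms=false
```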

Dockerfile

ARG ELK_VERSION

FROM docker.elastic.co/beats/filebeat:${ELK_VERSION}

config/filebeat.yml

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
- add_cloud_metadata: ~

filebeat.inputs:
- type: docker # Filebeat has no "json-file" input type; the "docker" input (Filebeat 7.1) parses Docker JSON log files under /var/lib/docker/containers
  containers.ids:
    - '*'

output.logstash:
  hosts: ["192.168.31.45:5000"] # change this to your Logstash listen address
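With `hints.enabled: true` in the autodiscover section above, individual containers can steer how Filebeat handles their logs through `co.elastic.logs/*` labels. A sketch using the nginx Filebeat module as an example (adjust module and filesets to your services):

```yaml
# labels set on a container in its docker-compose file; Filebeat
# autodiscover reads these hints and applies the nginx module
# to that container's logs
labels:
  co.elastic.logs/module: nginx
  co.elastic.logs/fileset.stdout: access  # treat stdout as access logs
  co.elastic.logs/fileset.stderr: error   # treat stderr as error logs
```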

Note, however, that the log files are named after the container ID, so there is no way to tell which image a given log belongs to. We therefore add labels in each container's docker-compose file.

version: "3"
services:
  nginx:
    image: nginx
    container_name: nginx
    labels:
      service: nginx
    ports:
      - 80:80
    logging:
      driver: json-file
      options:
        labels: "service"

The log output then looks like the following, which makes it possible to tell containers apart and route each one to its own Elasticsearch index:

{"log":"172.18.0.1 - - [05/Jul/2019:06:33:55 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.29.0\" \"-\"\n","stream":"stdout","attrs":{"service":"nginx"},"time":"2019-07-05T06:33:55.973727477Z"}

Deploying ELK with Docker

Honestly, pasted configuration files can look tedious, and it is impossible to explain every option here; for any config option or docker-compose concept you don't understand, you will need to read up on your own. Let's start with the directory layout:

├── docker-compose.yml
├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml
│   └── Dockerfile
├── kibana
│   ├── config
│   │   └── kibana.yml
│   └── Dockerfile
└── logstash
    ├── config
    │   └── logstash.yml
    ├── Dockerfile
    └── pipeline
        └── logstash.conf

docker-compose.yml

version: '2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
    networks:
      - elk

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms512m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge
volumes:
  esdata:

elasticsearch

elasticsearch/config/elasticsearch.yml

cluster.name: docker-cluster

node.name: master
node.master: true
node.data: true
network.host: 0.0.0.0
network.publish_host: 192.168.31.45 # my LAN IP; change it to yours
cluster.initial_master_nodes:
  - master

http.cors.enabled: true
http.cors.allow-origin: "*"

elasticsearch/Dockerfile

ARG ELK_VERSION

FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}

kibana

kibana/config/kibana.yml

server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]

kibana/Dockerfile

ARG ELK_VERSION

FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}

logstash

logstash/config/logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]

logstash/pipeline/logstash.conf

input {
	beats {
		port => 5000
	}
}

## Add your filters / logstash plugins configuration here

output {
	elasticsearch {
		hosts => "elasticsearch:9200"
	}
}
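As a sketch of the filter mentioned in the conclusion, the `service` label attached earlier could be used to route events into per-service indices, replacing the output block above. The field path `[docker][attrs][service]` is an assumption; inspect a real event in Kibana first, since the exact path varies with the Filebeat version:

```conf
filter {
	# assumed field path for the docker label added via the json-file
	# driver's "labels" option; verify against a real event first
	if [docker][attrs][service] {
		mutate {
			add_field => { "[@metadata][target_index]" => "%{[docker][attrs][service]}" }
		}
	} else {
		mutate {
			add_field => { "[@metadata][target_index]" => "logs" }
		}
	}
}

output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		index => "%{[@metadata][target_index]}-%{+YYYY.MM.dd}"
	}
}
```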

logstash/Dockerfile

ARG ELK_VERSION

FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}

At this point the deployment is complete. The places you still need to adapt: the LAN IP in elasticsearch.yml, and your own filters added to logstash.conf.
