A mature production architecture (for log volumes in the hundreds of millions): Filebeat × n + Redis + Logstash + Elasticsearch + Kibana. A small-to-medium setup (the one deployed in this article): Filebeat × n + Logstash + Elasticsearch + Kibana.
docker-compose.yml
```yaml
version: '3'
services:
  filebeat:
    build:
      context: .
      args:
        ELK_VERSION: 7.1.1
    user: root
    container_name: 'filebeat'
    volumes:
      - ./config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro  # config file, read-only
      - /var/lib/docker/containers:/var/lib/docker/containers:ro   # collect Docker container logs
      - /var/run/docker.sock:/var/run/docker.sock:ro
```
Dockerfile
```dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/beats/filebeat:${ELK_VERSION}
```
config/filebeat.yml
```yaml
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - add_cloud_metadata: ~

filebeat.inputs:
  - type: json-file
    paths:
      - /var/lib/docker/containers/*/*.log

output.logstash:
  hosts: ["192.168.31.45:5000"]  # change this to your Logstash listen address
```
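The `paths` glob above picks up one JSON log file per container directory. As a quick illustration (the container ID and file names here are made up), Python's `pathlib` follows the same per-segment wildcard semantics, so it can show which paths the glob covers:

```python
from pathlib import PurePosixPath

# The glob used in the Filebeat config above.
PATTERN = "/var/lib/docker/containers/*/*.log"

# Hypothetical log paths; Docker names each log file after its container ID.
direct = PurePosixPath("/var/lib/docker/containers/3cf1a2b/3cf1a2b-json.log")
nested = PurePosixPath("/var/lib/docker/containers/3cf1a2b/checkpoints/old.log")

# One wildcard per path segment: the container dir and file name match...
print(direct.match(PATTERN))  # True
# ...but files buried in deeper subdirectories do not.
print(nested.match(PATTERN))  # False
```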
However, we notice here that the log files are named after the container ID, so there is no way to tell which image a given log's container was created from. We therefore need to add `labels` to each container's docker-compose file.
```yaml
version: "3"
services:
  nginx:
    image: nginx
    container_name: nginx
    labels:
      service: nginx
    ports:
      - 80:80
    logging:
      driver: json-file
      options:
        labels: "service"
```
The log output then looks like this, which makes it possible to tell the containers apart and create a separate ES index for each:
```json
{"log":"172.18.0.1 - - [05/Jul/2019:06:33:55 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.29.0\" \"-\"\n","stream":"stdout","attrs":{"service":"nginx"},"time":"2019-07-05T06:33:55.973727477Z"}
```
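Each line in the container's `*-json.log` file is one JSON document, and the label we attached surfaces under `attrs`. A minimal Python sketch of pulling the service label back out (the `logs-<service>` index-name scheme is my own illustration, not part of the stack above):

```python
import json

# The sample log line produced by the labelled nginx container above.
raw = ('{"log":"172.18.0.1 - - [05/Jul/2019:06:33:55 +0000] '
       '\\"GET / HTTP/1.1\\" 200 612 \\"-\\" \\"curl/7.29.0\\" \\"-\\"\\n",'
       '"stream":"stdout","attrs":{"service":"nginx"},'
       '"time":"2019-07-05T06:33:55.973727477Z"}')

record = json.loads(raw)
service = record["attrs"]["service"]   # the label set in docker-compose
index = f"logs-{service}"              # hypothetical per-service index name

print(service)  # nginx
print(index)    # logs-nginx
```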
Honestly, pasting configuration like this looks fairly tedious, and it is not possible to explain every single option; for any option or docker-compose concept you don't understand, you will need to read up on it yourself. Let's first look at the directory structure.
```
├── docker-compose.yml
├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml
│   └── Dockerfile
├── kibana
│   ├── config
│   │   └── kibana.yml
│   └── Dockerfile
└── logstash
    ├── config
    │   └── logstash.yml
    ├── Dockerfile
    └── pipeline
        └── logstash.conf
```
docker-compose.yml
```yaml
version: '2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms512m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: 7.1.1
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  esdata:
```
elasticsearch/config/elasticsearch.yml
```yaml
cluster.name: docker-cluster
node.name: master
node.master: true
node.data: true
network.host: 0.0.0.0
network.publish_host: 192.168.31.45  # this is my LAN IP; change it to yours
cluster.initial_master_nodes:
  - master
http.cors.enabled: true
http.cors.allow-origin: "*"
```
elasticsearch/Dockerfile
```dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
```
kibana/config/kibana.yml
```yaml
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
```
kibana/Dockerfile
```dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
```
logstash/config/logstash.yml
```yaml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
```
logstash/pipeline/logstash.conf
```
input {
  beats {
    port => 5000
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
```
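The pipeline above forwards events to Elasticsearch unmodified. As one hedged example of a filter you might drop into the placeholder (the grok pattern and per-service index naming are illustrative and not part of the original setup, and the `[attrs][service]` field path assumes the Docker `json-file` labels shown earlier survive into the event):

```
filter {
  # Parse the nginx access-log line carried in the event's "message" field.
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    # Route each service's logs to its own daily index, e.g. logs-nginx-2019.07.05
    index => "logs-%{[attrs][service]}-%{+YYYY.MM.dd}"
  }
}
```

Verify the exact field path in your own events first (e.g. in Kibana's Discover view) before keying the index name on it.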
logstash/Dockerfile
```dockerfile
ARG ELK_VERSION
FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}
```
At this point the deployment is complete. The parts you need to modify are the LAN IP in elasticsearch.yml, and adding a filter to logstash.conf.