docker-compose ELK + Filebeat: viewing Docker and container logs

The development team at my current company is small. We built a small system for the factories under our group; it runs on a single CentOS server with Docker and docker-compose installed. For logging, though, we never had a good way to collect everything in one place: the only option was to log in to the server and inspect containers one by one with docker logs containerID, which is very inconvenient. I had been following ELK for a while, but was fully occupied implementing features; today I finally found time to pick the ELK log-viewing task back up.

 

Kibana documentation: https://www.elastic.co/guide/cn/kibana/current/index.html

Elastic getting-started videos: https://www.elastic.co/guide/cn/index.html

Project folder:
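The project consists of the compose file plus one build context per custom service; this layout is reconstructed from the files discussed in the rest of this post:

```
.
├── docker-compose.yml
├── elasticsearch
│   ├── Dockerfile
│   └── elasticsearch.yml
├── filebeat
│   ├── Dockerfile
│   └── filebeat.yml
└── logstash
    ├── Dockerfile
    └── pipeline
        └── logstash.conf
```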

Its docker-compose.yml:

version: '3'
services:
  filebeat:
    hostname: filebeat
    image: weschen/filebeat
    build:
      context: filebeat
      dockerfile: Dockerfile
    volumes:
      # needed to access all docker logs (read only):
      - "/var/lib/docker/containers:/usr/share/dockerlogs/data:ro"
      # needed to access additional information about containers
      - "/var/run/docker.sock:/var/run/docker.sock"
    links:
      - logstash
  kibana:
    image: docker.elastic.co/kibana/kibana:6.5.2
    environment:
      - "LOGGING_QUIET=true"
    links:
      - elasticsearch
    ports:
      - 5601:5601
  logstash:
    hostname: logstash
    image: weschen/logstash
    build:
      context: logstash
      dockerfile: Dockerfile
    ports:
      - 5044:5044
    environment:
      LOG_LEVEL: error
    links:
      - elasticsearch
  elasticsearch:
    hostname: elasticsearch
    image: weschen/elasticsearch
    build:
      context: elasticsearch
      dockerfile: Dockerfile
    environment:
      - cluster.name=docker-elk-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
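Note that application containers need no logging configuration of their own: Filebeat tails /var/lib/docker/containers on the host, so any container that writes to stdout/stderr with Docker's default json-file logging driver is collected automatically. As a sketch, a hypothetical app service (name and image are illustrative, not from the original project) added to the same compose file would be picked up as-is:

```yaml
services:
  # hypothetical application container; its stdout/stderr lands in
  # /var/lib/docker/containers/<id>/<id>-json.log, which is exactly
  # the directory the filebeat service mounts read-only
  myapp:
    image: nginx:alpine
    ports:
      - 8080:80
```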


1. Elasticsearch

File elasticsearch/Dockerfile:

FROM docker.elastic.co/elasticsearch/elasticsearch:6.5.2
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
CMD ["elasticsearch", "-Elogger.level=INFO"]

 

File elasticsearch/elasticsearch.yml:

cluster.name: ${cluster.name}
network.host: 0.0.0.0

# minimum_master_nodes needs to be explicitly set when bound on a public IP
# set to 1 to allow single-node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1


2. Logstash

File logstash/Dockerfile:

FROM docker.elastic.co/logstash/logstash:6.5.2
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
COPY pipeline /usr/share/logstash/pipeline/

 

File logstash/pipeline/logstash.conf:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
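The `index` option names one Elasticsearch index per Beat, Beat version, and day. As a rough sketch of what Logstash's sprintf-style substitution produces (the metadata values and date below are hypothetical examples, not output from the real pipeline):

```python
from datetime import date

# hypothetical @metadata fields as a 6.5.2 Filebeat would attach them
metadata = {"beat": "filebeat", "version": "6.5.2"}

# mimic "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
index_name = "{}-{}-{}".format(
    metadata["beat"],
    metadata["version"],
    date(2018, 12, 11).strftime("%Y.%m.%d"),  # event date, not wall clock
)
print(index_name)  # filebeat-6.5.2-2018.12.11
```

Daily indices keep cleanup cheap: expiring old logs means deleting whole indices rather than individual documents.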


3. Filebeat

File filebeat/Dockerfile:

FROM docker.elastic.co/beats/filebeat:6.5.2

# Copy our custom configuration file
COPY filebeat.yml /usr/share/filebeat/filebeat.yml

USER root

# Create a directory to map the volume with all docker log files
RUN mkdir /usr/share/filebeat/dockerlogs
RUN chown -R root /usr/share/filebeat/
RUN chmod -R go-w /usr/share/filebeat/

 

File filebeat/filebeat.yml:

filebeat.inputs:
- type: docker
  combine_partial: true
  containers:
    path: "/usr/share/dockerlogs/data"
    stream: "stdout"
    ids:
      - "*"
  exclude_files: ['\.gz$']
  ignore_older: 10m

processors:
# decode the log field (a sub JSON document) if it is JSON encoded,
# then map its fields to Elasticsearch fields
- decode_json_fields:
    fields: ["log", "message"]
    target: ""
    # overwrite existing target elasticsearch fields while decoding json fields
    overwrite_keys: true
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# set up Filebeat to send output to Logstash
output.logstash:
  hosts: ["logstash:5044"]

# write Filebeat's own logs only to file, so it does not pick them
# back up from the docker log files
logging.level: error
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
ssl.verification_mode: none
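To see what `decode_json_fields` is for: Docker's json-file driver wraps each line a container writes in a JSON envelope, and if the application itself logs JSON, that document sits inside the `log` field as an escaped string. A sketch with a made-up log line:

```python
import json

# made-up line in the format Docker's json-file driver writes to
# /var/lib/docker/containers/<id>/<id>-json.log
raw = ('{"log":"{\\"level\\":\\"error\\",\\"msg\\":\\"db timeout\\"}\\n",'
       '"stream":"stdout","time":"2018-12-11T08:00:00Z"}')

envelope = json.loads(raw)           # outer document added by Docker
inner = json.loads(envelope["log"])  # the application's own JSON log line

# decode_json_fields performs this second parse inside Filebeat, so
# "level" and "msg" become real fields instead of text inside "log"
print(inner)  # {'level': 'error', 'msg': 'db timeout'}
```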


Bring it all up with docker-compose up -d.
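Once the containers are up, a quick reachability check can replace clicking around in a browser. A minimal sketch, assuming the stack runs on localhost with the ports published in the compose file above:

```python
import urllib.request
from urllib.error import URLError

def is_up(url, timeout=2):
    """Return True if the service answers HTTP 200 at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

# ports published in docker-compose.yml (host assumed to be localhost)
for name, url in [("elasticsearch", "http://localhost:9200"),
                  ("kibana", "http://localhost:5601")]:
    print(name, "is up" if is_up(url) else "is NOT reachable")
```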
Open [Host-IP]:9200 in a browser. If the following page appears, the elasticsearch service is up.

Next, open [Host-IP]:5601 in the browser; this is the Kibana log-viewing platform.

Go to [index-pattern] under the [Management] menu.

The first time you use Kibana you need to create an index pattern; the steps are as follows. If you try to create the index pattern from the Discover menu instead, you will see the following.

Once the index pattern is created, you should be able to see the log entries under Logs.

Viewing the logs from the home page.

Source code: https://github.com/ChenWes/docker-elk
