Core idea:

Elasticsearch and Kibana are deployed on a separate server, while Fluentd runs on the server hosting the Docker containers and collects logs through each container's built-in logging driver option.
docker-elastickibana.yml (on the Elasticsearch/Kibana server)
```yaml
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.4
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    volumes:
      - ./esdata:/usr/share/elasticsearch/data
    networks:
      - efknet
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.1.4
    container_name: kibana
    networks:
      - efknet
    ports:
      - "5601:5601"
networks:
  efknet:
```
docker-fluentd.yml
```yaml
version: '2'
services:
  fluentd:
    build: .
    volumes:
      - ./conf:/fluentd/etc
      - ./log:/fluentd/log
    container_name: fluentd
    networks:
      - fluent_net
    ports:
      - "24224:24224"
      - "24224:24224/udp"
networks:
  fluent_net:
```
fluent.conf
```
# fluentd/conf/fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
```
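With `logstash_format true`, the Elasticsearch output plugin names indices as `<logstash_prefix>-<date formatted by logstash_dateformat>`. A minimal sketch of that naming scheme, using the settings above (the helper `logstash_index` is hypothetical, written here just for illustration):

```python
from datetime import datetime, timezone

def logstash_index(prefix: str, dateformat: str, when: datetime) -> str:
    """Mimic the index naming used by logstash_format:
    <logstash_prefix>-<strftime(logstash_dateformat)>."""
    return f"{prefix}-{when.strftime(dateformat)}"

# With logstash_prefix "fluentd" and logstash_dateformat "%Y%m%d":
print(logstash_index("fluentd", "%Y%m%d", datetime(2018, 3, 1, tzinfo=timezone.utc)))
# → fluentd-20180301
```

So each day's logs land in a fresh index (e.g. `fluentd-20180301`), which is what makes the Kibana index pattern `fluentd-*` work.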
Test container http_server — the key part is the logging section:
```yaml
version: '3'
services:
  http_server:
    image: httpd
    container_name: http_server
    ports:
      - "80:80"
    networks:
      - efknet
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd_server
networks:
  efknet:
```
Points to note:
- On the Elasticsearch host, add a line at the bottom of /etc/sysctl.conf: `vm.max_map_count=655360`, then apply it:

```
[root@localhost efk]# sysctl -p
vm.max_map_count = 655360
```
- Increase the system ulimit.
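Instead of changing the host-wide ulimit, Compose can raise the limits per service. A sketch of how this could look in docker-elastickibana.yml; the specific values (unlimited memlock, 65536 open files) are the ones commonly recommended for Elasticsearch and are an assumption, not from the original post:

```yaml
services:
  elasticsearch:
    # hypothetical addition: raise resource limits for the elasticsearch service
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
```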
- Elasticsearch starts in cluster mode by default; the compose file must tell it to start as a single node (`discovery.type=single-node`), which is typically used for development environments.
- If Fluentd in Docker cannot connect to Elasticsearch, firewalld may be the cause: `systemctl stop firewalld && systemctl restart docker`
- Elasticsearch runs inside the container as the user with UID 1000. If you bind-mount (`-v`) a log/data directory or file from the host, make sure the host directory or file is writable by that user.
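A minimal sketch of preparing the bind-mounted directory, assuming the `./esdata` path from the compose file above:

```shell
# Elasticsearch runs as UID 1000 inside the container, so the host
# directory backing the bind mount must be writable by that UID.
mkdir -p ./esdata
chown -R 1000:1000 ./esdata   # needs root (or prefix with sudo)
```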
- If the Fluentd container has time-zone problems, you can mount /etc/localtime into the container.
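One way to do that in docker-fluentd.yml is an extra read-only volume entry for the fluentd service (a sketch; the first two mounts are the ones already in the file above):

```yaml
services:
  fluentd:
    volumes:
      - ./conf:/fluentd/etc
      - ./log:/fluentd/log
      - /etc/localtime:/etc/localtime:ro   # sync the container's time zone with the host
```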
Nothing else comes to mind for now; this list will be updated over time...