Deploying ELK with docker-compose

This chapter builds on

http://www.javashuo.com/article/p-ubnecgbn-bc.html

and converts that ELK setup into docker-compose.yml files.

Things to watch out for with docker-compose:

1. Don't use a Docker container as a data container; data must live outside the container in volumes.

2. Don't share your docker-compose files with others, because they contain your server details.

3. Operate the stack through docker-compose commands; don't mix manual docker commands with docker-compose on the same containers.

4. Write a script that automatically backs up the data Docker maps out to the host (a sketch follows after this list).

5. Don't cram every service into a single Docker container.
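
For point 4, a minimal sketch of such a backup script, run from cron; the SRC_DIR and BACKUP_DIR paths are assumptions, so point them at wherever your compose projects and mapped volumes actually live:

#!/bin/sh
# Archive the host directories that docker-compose maps into the containers.
SRC_DIR=/opt/elk          # assumed location of the compose projects and their ./config, ./data dirs
BACKUP_DIR=/backup        # assumed backup destination
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"
# One dated tarball per day; prune old archives as needed.
tar czf "$BACKUP_DIR/elk-data-$DATE.tar.gz" -C "$SRC_DIR" .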

 

Environment:

Management node: 10.191.51.44

Data nodes: 10.191.51.45/46/47

 

Configuration files:

Elasticsearch docker-compose.yml

version: '2'
services:
  elasticsearch:
    container_name: ES
    environment:
      - ES_JAVA_OPTS=-Xms4G -Xmx4G
    image: 10.191.51.5/elk/elasticsearch:6.5.4
    volumes:
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
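
The host needs vm.max_map_count raised to at least 262144 for Elasticsearch to start, and per caveat 3 above the stack should be driven through docker-compose. A typical startup on each node:

sysctl -w vm.max_map_count=262144        # kernel setting required by Elasticsearch
docker-compose up -d                     # start the service in the background
docker-compose logs -f elasticsearch     # follow the startup logs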

Management node elasticsearch.yml

cluster.name: elasticsearch-cluster
node.name: es-node1
network.bind_host: 0.0.0.0
network.publish_host: 10.191.51.44
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.data: false
node.ingest: true
discovery.zen.ping.unicast.hosts: ["10.191.51.44:9300","10.191.51.45:9300","10.191.51.46:9300","10.191.51.47:9300"]
discovery.zen.minimum_master_nodes: 2

Data node elasticsearch.yml (shown for 10.191.51.45; adjust node.name and network.publish_host on the other data nodes)

cluster.name: elasticsearch-cluster
node.name: es-node2
network.bind_host: 0.0.0.0
network.publish_host: 10.191.51.45
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.191.51.44:9300","10.191.51.45:9300","10.191.51.46:9300","10.191.51.47:9300"]
discovery.zen.minimum_master_nodes: 2
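
Once all four nodes are up, cluster formation can be verified from any node over port 9200; with one coordinating node and three master/data nodes, _cat/nodes should list all four:

curl http://10.191.51.44:9200/_cat/nodes?v
curl http://10.191.51.44:9200/_cluster/health?pretty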

 

Kafka docker-compose.yml (the environment section must be configured per node: KAFKA_BROKER_ID, KAFKA_ADVERTISED_HOST_NAME and KAFKA_ADVERTISED_LISTENERS differ on each broker)

version: '2'
services:
  kafka:
    container_name: kafka0
    environment:
      - KAFKA_BROKER_ID=0
      - KAFKA_ZOOKEEPER_CONNECT=10.191.51.44:2181
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_ADVERTISED_HOST_NAME=kafka1
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.191.51.45:9092
      - KAFKA_delete_topic_enable=true
    image: 10.191.51.5/elk/wurstmeister/kafka:2.1.1
    ports:
      - "9092:9092"
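
If topics need to be checked or created by hand, the wurstmeister image ships the standard Kafka CLI tools; a sketch, where the partition count of 8 is an assumption matching consumer_threads in the Logstash config and replication factor 3 matches KAFKA_DEFAULT_REPLICATION_FACTOR:

# list topics known to the cluster
docker-compose exec kafka kafka-topics.sh --zookeeper 10.191.51.44:2181 --list
# create the topic that Logstash consumes, if it does not exist yet
docker-compose exec kafka kafka-topics.sh --zookeeper 10.191.51.44:2181 \
  --create --topic garnet_garnetAll_log --partitions 8 --replication-factor 3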

 

Logstash docker-compose.yml

version: '2'
services:
  logstash:
    container_name: logstash
    image: 10.191.51.5/elk/logstash:6.5.4
    volumes:
      - ./config/:/usr/share/logstash/config/
      - ./pipeline/:/usr/share/logstash/pipeline/
    ports:
      - "5044:5044"
      - "9600:9600"

pipeline/logstash.conf 

input {
  kafka {
    bootstrap_servers => ["10.191.51.45:9092,10.191.51.46:9092,10.191.51.47:9092"]
    client_id => "logstash-garnet"
    group_id => "logstash-garnet"
    consumer_threads => 8
    decorate_events => true
    topics => ["garnet_garnetAll_log"]
    type => "garnet_all_log"
  }
}
filter {
  if [type] == "garnet_all_log" {
    mutate {
      gsub => ["message", "@timestamp", "sampling_time"]
    }
    json {
      source => "message"
    }
    grok {
      match => {
        "message" => [
          "%{TIMESTAMP_ISO8601:log_time}\s+\[(?<thread_name>[a-zA-Z0-9\-\s]*)\]\s+%{LOGLEVEL:log_level}\s+\[(?<class_name>[a-zA-Z0-9.]*)\]\s+%{GREEDYDATA:msg_info}parameters=%{GREEDYDATA:msg_json_info}",
          "%{TIMESTAMP_ISO8601:log_time}\s+\[(?<thread_name>[a-zA-Z0-9\-\s]*)\]\s+%{LOGLEVEL:log_level}\s+\[(?<class_name>[a-zA-Z0-9.]*)\]\s+%{GREEDYDATA:msg_info}"
        ]
      }
    }
    if [msg_json_info] {
      json {
        source => "msg_json_info"
      }
    }
  }
}
output {
  if [type] == "garnet_all_log" {
    elasticsearch {
      hosts => ["10.191.51.45:9200","10.191.51.46:9200","10.191.51.47:9200"]
      index => "garnet_all-%{+YYYY.MM.dd}"
    }
  }
}
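
For reference, here is a hypothetical application log line (the class name and payload are invented) that the first grok pattern above would split into log_time, thread_name, log_level, class_name, msg_info and msg_json_info:

2019-03-28 10:15:30.123 [http-nio-8080] INFO [com.garnet.OrderService] create order parameters={"orderId":"1001","amount":25}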

config/pipelines.yml (note the plural; Logstash only reads pipelines.yml)

- pipeline.id: pipeline_1
  pipeline.batch.size: 200
  pipeline.batch.delay: 1
  path.config: /usr/share/logstash/pipeline

config/logstash.yml

log.level: warn
xpack.license.self_generated.type: basic
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: "http://10.191.51.44:9200"
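
Since port 9600 is published in the Logstash compose file above, the node stats API gives a quick way to confirm the pipeline is running and consuming:

curl http://localhost:9600/_node/stats?pretty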

config/jvm.options changes:

-Xms4g
-Xmx4g