Deploying ELK Log Analysis with Docker




The all-in-one ELK image is sebp/elk.
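
The image can be pulled ahead of time (docker run will otherwise pull it on first use):

docker pull sebp/elk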


Install Docker and start it:

yum install docker

service docker start


Docker must allocate at least 3 GB of memory to the container; otherwise, add the parameters -e ES_MIN_MEM=128m -e ES_MAX_MEM=1024m to limit the minimum and maximum memory used.
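
For example, on a memory-constrained host the run command might look like this sketch (same ports and image name as the commands used later in this article):

docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it \
  -e ES_MIN_MEM=128m -e ES_MAX_MEM=1024m \
  --name elk sebp/elk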

Elasticsearch alone needs at least 2 GB of memory.

vm.max_map_count must be at least 262144, so adjust the vm.max_map_count kernel parameter.

Fix:

      # vi /etc/sysctl.conf

      Append this line at the end:

      vm.max_map_count=262144

      Check the result:

        # sysctl -p

         vm.max_map_count = 262144
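
To apply the change to the running kernel immediately (equivalent to editing the file and running sysctl -p), the parameter can also be set directly:

sysctl -w vm.max_map_count=262144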

First, create the corresponding log storage directories and files:

/var/log/elasticsearch/elasticsearch.log

/var/log/logstash/logstash-plain.log

/var/log/kibana/kibana5.log
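
A sketch of the host-side commands to create them (assuming the three paths above):

mkdir -p /var/log/elasticsearch /var/log/logstash /var/log/kibana
touch /var/log/elasticsearch/elasticsearch.log /var/log/logstash/logstash-plain.log /var/log/kibana/kibana5.log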

Otherwise, starting the elk container in Docker fails with errors like the following:

#############################################
[root@node01 ~]# docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it -v /etc/localtime:/etc/localtime --name elk sebp/elk
 * Starting periodic command scheduler cron                              [ OK ] 
 * Starting Elasticsearch Server                                         [fail] 
###################################################



[root@node01 ~]# docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk

 * Starting periodic command scheduler cron                              [ OK ] 
 * Starting Elasticsearch Server                                         [ OK ] 
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
waiting for Elasticsearch to be up (3/30)
waiting for Elasticsearch to be up (4/30)
waiting for Elasticsearch to be up (5/30)
waiting for Elasticsearch to be up (6/30)
waiting for Elasticsearch to be up (7/30)
waiting for Elasticsearch to be up (8/30)
waiting for Elasticsearch to be up (9/30)
waiting for Elasticsearch to be up (10/30)
waiting for Elasticsearch to be up (11/30)
waiting for Elasticsearch to be up (12/30)
waiting for Elasticsearch to be up (13/30)
waiting for Elasticsearch to be up (14/30)
waiting for Elasticsearch to be up (15/30)
waiting for Elasticsearch to be up (16/30)
waiting for Elasticsearch to be up (17/30)
waiting for Elasticsearch to be up (18/30)
waiting for Elasticsearch to be up (19/30)
Waiting for Elasticsearch cluster to respond (1/30)
logstash started.
 * Starting Kibana5                                                                                                                                                                                                                                                    [ OK ] 
==> /var/log/elasticsearch/elasticsearch.log <==
[2018-06-08T05:58:05,018][INFO ][o.e.n.Node               ] [cJhUW1e] starting ...
[2018-06-08T05:58:05,780][INFO ][o.e.t.TransportService   ] [cJhUW1e] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2018-06-08T05:58:06,001][INFO ][o.e.b.BootstrapChecks    ] [cJhUW1e] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-06-08T05:58:08,219][WARN ][o.e.m.j.JvmGcMonitorService] [cJhUW1e] [gc][young][2][11] duration [1.1s], collections [1]/[2.1s], total [1.1s]/[3s], memory [73.1mb]->[72.5mb]/[1015.6mb], all_pools {[young] [31.4mb]->[17.5mb]/[66.5mb]}{[survivor] [8.3mb]->[8.3mb]/[8.3mb]}{[old] [33.7mb]->[46.7mb]/[940.8mb]}
[2018-06-08T05:58:08,220][WARN ][o.e.m.j.JvmGcMonitorService] [cJhUW1e] [gc][2] overhead, spent [1.1s] collecting in the last [2.1s]
[2018-06-08T05:58:09,842][INFO ][o.e.c.s.MasterService    ] [cJhUW1e] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {cJhUW1e}{cJhUW1ejQ7e19a4PNTpn1A}{CGmd3ROoTiKeGqXg7ldcNA}{172.17.0.2}{172.17.0.2:9300}
[2018-06-08T05:58:10,708][INFO ][o.e.c.s.ClusterApplierService] [cJhUW1e] new_master {cJhUW1e}{cJhUW1ejQ7e19a4PNTpn1A}{CGmd3ROoTiKeGqXg7ldcNA}{172.17.0.2}{172.17.0.2:9300}, reason: apply cluster state (from master [master {cJhUW1e}{cJhUW1ejQ7e19a4PNTpn1A}{CGmd3ROoTiKeGqXg7ldcNA}{172.17.0.2}{172.17.0.2:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-06-08T05:58:10,854][INFO ][o.e.h.n.Netty4HttpServerTransport] [cJhUW1e] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2018-06-08T05:58:10,855][INFO ][o.e.n.Node               ] [cJhUW1e] started
[2018-06-08T05:58:13,451][INFO ][o.e.g.GatewayService     ] [cJhUW1e] recovered [0] indices into cluster_state
==> /var/log/logstash/logstash-plain.log <==
==> /var/log/kibana/kibana5.log <==
{"type":"log","@timestamp":"2018-06-08T05:58:22Z","tags":["status","plugin:kibana@6.2.4","info"],"pid":269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-06-08T05:58:22Z","tags":["status","plugin:elasticsearch@6.2.4","info"],"pid":269,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-06-08T05:58:22Z","tags":["status","plugin:console@6.2.4","info"],"pid":269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-06-08T05:58:24Z","tags":["status","plugin:timelion@6.2.4","info"],"pid":269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-06-08T05:58:24Z","tags":["status","plugin:metrics@6.2.4","info"],"pid":269,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-06-08T05:58:24Z","tags":["listening","info"],"pid":269,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2018-06-08T05:58:25Z","tags":["status","plugin:elasticsearch@6.2.4","info"],"pid":269,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

After startup, enter the container:

[root@node01 ~]# docker exec -it elk /bin/bash

Run /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }':

root@e91fcfcc3bc0:/# /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-06-08T07:12:33,649][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/opt/logstash/modules/fb_apache/configuration"}
[2018-06-08T07:12:33,698][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/opt/logstash/modules/netflow/configuration"}
[2018-06-08T07:12:34,693][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-06-08T07:12:34,719][FATAL][logstash.runner          ] Logstash could not be started because there is already another instance using the configured data directory.  If you wish to run multiple instances, you must change the "path.data" setting.
[2018-06-08T07:12:34,743][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (SystemExit) exit
The error above appears: [2018-06-08T07:12:34,719][FATAL][logstash.runner          ] Logstash could not be started because there is already another instance using the configured data directory.  If you wish to run multiple instances, you must change the "path.data" setting.
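
As the message suggests, an alternative to stopping the service is to give the ad-hoc instance its own data directory via --path.data (a sketch; /tmp/logstash-test is just an example directory):

/opt/logstash/bin/logstash --path.data /tmp/logstash-test -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'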

The fix used here: stop the Logstash service first.

root@e91fcfcc3bc0:/# service logstash stop

Killing logstash (pid 204) with SIGTERM
Waiting for logstash (pid 204) to die...
Waiting for logstash (pid 204) to die...
Waiting for logstash (pid 204) to die...
Waiting for logstash (pid 204) to die...
Waiting for logstash (pid 204) to die...
logstash stop failed; still running.
Re-run /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'. When "Successfully started Logstash API endpoint {:port=>9600}" appears, it is working normally. Then type the string "this is a test" to test.
root@e91fcfcc3bc0:/# /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-06-08T07:14:10,915][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/opt/logstash/modules/fb_apache/configuration"}
[2018-06-08T07:14:10,955][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/opt/logstash/modules/netflow/configuration"}
[2018-06-08T07:14:11,939][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-06-08T07:14:13,166][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-06-08T07:14:13,924][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-06-08T07:14:17,237][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-06-08T07:14:18,289][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-06-08T07:14:18,309][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-06-08T07:14:18,641][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-06-08T07:14:18,742][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-06-08T07:14:18,748][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-06-08T07:14:18,779][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-06-08T07:14:18,820][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-06-08T07:14:18,891][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-06-08T07:14:19,225][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2018-06-08T07:14:19,444][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2f536475 run>"}
The stdin plugin is now waiting for input:
[2018-06-08T07:14:19,590][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
this is a test



Open http://192.168.1.111:9200/_search?pretty in a browser; a normal response like the following appears:

{
  "took" : 3,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "logstash-2018.06.08",
        "_type" : "doc",
        "_id" : "n6dB3mMBsnl4BEURH4Zb",
        "_score" : 1.0,
        "_source" : {
          "message" : "this is a test",
          "host" : "e91fcfcc3bc0",
          "@timestamp" : "2018-06-08T07:16:39.142Z",
          "@version" : "1"
        }
      }
    ]
  }
}
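
The same check can be done from the command line with curl:

curl 'http://192.168.1.111:9200/_search?pretty'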



Install Filebeat

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-amd64.deb

 dpkg -i filebeat-6.2.4-amd64.deb


Filebeat's configuration file is /etc/filebeat/filebeat.yml. We need to tell Filebeat two things:

Which log files should it monitor?

Where should it send the logs?


 paths:
    - /var/log/*.log
    - /var/lib/docker/containers/*/*.log
    - /var/log/syslog
    #- c:\programdata\elasticsearch\logs\*


Of the entries configured under paths, two matter here:

/var/lib/docker/containers/*/*.log covers the log files of all containers.

/var/log/syslog is the host operating system's syslog.


Next, tell Filebeat where to send these logs.

Filebeat can send logs directly to Elasticsearch for indexing and storage, or it can first send them to Logstash for parsing and filtering, after which Logstash forwards them to Elasticsearch.

To avoid introducing too much complexity, here we send the logs directly to Elasticsearch.

 

 

[root@node01 ~]# cat /etc/filebeat/filebeat.yml 

###################### Filebeat Configuration Example #########################
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- type: log
  # Change to true to enable this prospector configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /var/lib/docker/containers/*/*.log
    - /var/log/syslog
    #- c:\programdata\elasticsearch\logs\*
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']
  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']
  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']
  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
  ### Multiline options
  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation
  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false
  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
#============================= Elastic Cloud ==================================
# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.111:9200"]
  template.name: "filebeat"  
  template.path: "filebeat.template.json"  
  template.overwrite: false 
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
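
Before starting, the configuration and the Elasticsearch output can be sanity-checked with Filebeat's test subcommands (these should be available in Filebeat 6.x):

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml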

Start Filebeat:

systemctl start filebeat.service
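
To have Filebeat start on boot as well, and to check that it is running:

systemctl enable filebeat.service
systemctl status filebeat.service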


Once Filebeat is running, it should ship the monitored logs to Elasticsearch. Refresh Elasticsearch's JSON endpoint at http://192.168.1.111:9200/_search?pretty to confirm: a filebeat-* index appears, containing logs from the paths Filebeat monitors.
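
A quicker way to confirm the index exists is the cat indices API:

curl 'http://192.168.1.111:9200/_cat/indices?v'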



On the Kibana page at http://192.168.1.111:5601, add the corresponding index: click Create index pattern and specify the index pattern as filebeat-*, matching the index in Elasticsearch.

For Time-field name, choose @timestamp.

Click Create to create the index pattern.

Click the Discover menu on the left side of Kibana to see the container and syslog log entries.


Start a new container that prints messages to the console, simulating log output:

docker run busybox sh -c 'while true; do echo "This is a log message from container busybox!"; sleep 10; done;'
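
To confirm the message makes it into Elasticsearch, a query against the filebeat-* index (standard _search API; the message field holds the raw log line) should return hits after a short delay:

curl 'http://192.168.1.111:9200/filebeat-*/_search?q=message:busybox&pretty'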



Note: ELK can also classify and summarize logs, run analyses and aggregations, build impressive dashboards, and much more; there is plenty to explore.
