Manage Spring Boot Logs with Elasticsearch, Logstash and Kibana

Download page: https://www.elastic.co/downloads

 

When the time comes to deploy a new project, one often overlooked aspect is log management. The ELK stack (Elasticsearch, Logstash, Kibana) is, among other things, a powerful and freely available log management solution. In this article I will show you how to install and set up ELK and use it with the default log format of a Spring Boot application.

For this guide, I've set up a demo Spring Boot application with logging enabled and with a Logstash configuration that will send log entries to Elasticsearch. The demo application is a simple todo list, available here.

ELK setup overview

The application will store logs in a log file. Logstash will read and parse the log file and ship log entries to an Elasticsearch instance. Finally, we will use Kibana 4 (Elasticsearch's web frontend) to search and analyze the logs.

Step 1) Install Elasticsearch

  • Download elasticsearch zip file from https://www.elastic.co/downloads/elasticsearch
  • Extract it to a directory (unzip it)
  • Run it (bin/elasticsearch or bin/elasticsearch.bat on Windows)
  • Check that it runs using curl -XGET http://localhost:9200

Here's how to do it (steps are written for OS X but should be similar on other systems):

wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.zip
unzip elasticsearch-1.7.1.zip
cd elasticsearch-1.7.1
bin/elasticsearch

Elasticsearch should be running now. You can verify it's running using curl. In a separate terminal window execute a GET request to Elasticsearch's status page:

curl -XGET http://localhost:9200 

If all is well, you should get the following result:

{
  "status" : 200,
  "name" : "Tartarus",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

Step 2) Install Kibana 4

  • Download Kibana archive from https://www.elastic.co/downloads/kibana
    • Please note that you need to download the appropriate distribution for your OS; the URL given in the example below is for OS X
  • Extract the archive
  • Run it (bin/kibana)
  • Check that it runs by pointing your browser to Kibana's web UI

Here's how to do it:

wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-darwin-x64.tar.gz
tar xvzf kibana-4.1.1-darwin-x64.tar.gz
cd kibana-4.1.1-darwin-x64
bin/kibana

Point your browser to http://localhost:5601 (if the Kibana page shows up, we're good - we'll configure it later)

Step 3) Install Logstash

wget https://download.elastic.co/logstash/logstash/logstash-1.5.3.zip
unzip logstash-1.5.3.zip

Step 4) Configure Spring Boot's Log File

In order to have Logstash ship log files to Elasticsearch, we must first configure Spring Boot to store log entries in a file. We will establish the following pipeline: Spring Boot App → Log File → Logstash → Elasticsearch. There are other ways of accomplishing the same thing, such as configuring Logback to use a TCP appender to send logs to a remote Logstash instance, among other configurations. I prefer the file approach because it's simple, unobtrusive (you can easily add it to existing systems) and nothing will be lost or broken if for some reason Logstash stops working or Elasticsearch dies.

Anyhow, let's configure Spring Boot's log file. The simplest way to do this is to configure the log file name in application.properties. It's enough to add the following line:

logging.file=application.log 

Spring Boot will now log ERROR, WARN and INFO level messages to the application.log log file and will rotate it when it reaches 10 MB.
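For reference, entries in application.log will follow Spring Boot's default log pattern and look roughly like the line below (an illustrative, made-up entry) - this is the format that the Logstash filter in step 5 has to understand:

2015-08-16 10:15:32.123 INFO 1234 --- [main] com.example.todo.TodoController : Listing all todo items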

Step 5) Configure Logstash to Understand Spring Boot's Log File Format

Now comes the tricky part. We need to create a Logstash config file. A typical Logstash config file consists of three main sections: input, filter and output. Each section contains plugins that do the relevant part of the processing (such as the file input plugin that reads log events from a file, or the elasticsearch output plugin that sends log events to Elasticsearch).

Logstash config pipeline

The input section defines where Logstash will read input data from - in our case it will be a file, so we will use the file plugin with the multiline codec, which basically means that our input file may have multiple lines per log entry.

Input Section

Here's the input section:

input {
  file {
    type => "java"
    path => "/path/to/application.log"
    codec => multiline {
      pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
      negate => "true"
      what => "previous"
    }
  }
}
  • We're using the file plugin.
  • type is set to java - it's just an additional piece of metadata in case you use multiple types of log files in the future.
  • path is the absolute path to the log file. It must be absolute - Logstash is picky about this.
  • We're using the multiline codec, which means that multiple lines may correspond to a single log event.
  • In order to detect lines that should logically be grouped with a previous line, we use a detection pattern:
    • pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*" → Each new log event needs to start with a date.
    • negate => "true" → if it doesn't start with a date ...
    • what => "previous" → ... then it should be grouped with a previous line.

The file input plugin, as configured, will tail the log file (i.e. only read new entries appended to the end of the file). Therefore, when testing, you will need to generate new log entries in order for Logstash to read something.
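If you would rather have Logstash read the existing file from the top while testing, the file input also has a start_position option. A minimal sketch of that variation (note that positions already recorded in Logstash's sincedb take precedence, so this only affects files Logstash hasn't seen before):

input {
  file {
    type => "java"
    path => "/path/to/application.log"
    # Read from the beginning of the file instead of tailing it (useful while testing)
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
      negate => "true"
      what => "previous"
    }
  }
}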

Filter Section

The filter section contains plugins that perform intermediary processing on a log event. In our case, an event will either be a single log line or a multiline log event grouped according to the rules described above. In the filter section we will do several things:

  • Tag a log event if it contains a stacktrace. This will be useful when searching for exceptions later on.
  • Parse out (or grok, in logstash terminology) timestamp, log level, pid, thread, class name (logger actually) and log message.
  • Specify the timestamp field and its format - Kibana will use that later for time-based searches.

The filter section that does the aforementioned things for Spring Boot's log format looks like this:

filter {
  # If log line contains tab character followed by 'at' then we will tag that entry as stacktrace
  if [message] =~ "\tat" {
    grok {
      match => ["message", "^(\tat)"]
      add_tag => ["stacktrace"]
    }
  }

  # Grokking Spring Boot's default log format
  grok {
    match => [ "message",
               "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- \[(?<thread>[A-Za-z0-9-]+)\] [A-Za-z0-9.]*\.(?<class>[A-Za-z0-9#_]+)\s*:\s+(?<logmessage>.*)",
               "message",
               "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) %{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?<logmessage>.*)"
             ]
  }

  # Parsing out timestamps which are in timestamp field thanks to previous grok section
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}

Explanation:

  • if [message] =~ "\tat" → If the message contains a tab character followed by at (this is Ruby syntax) then...
  • ... use the grok plugin to tag stacktraces:
    • match => ["message", "^(\tat)"] → when the message matches the beginning of a line, followed by a tab, followed by at, then...
    • add_tag => ["stacktrace"] → ... tag the event with the stacktrace tag.
  • Use the grok plugin for regular Spring Boot log message parsing:
    • First pattern extracts timestamp, level, pid, thread, class name (this is actually logger name) and the log message.
    • Unfortunately, some log messages don't have a logger name that resembles a class name (for example, Tomcat logs), hence the second pattern, which skips the logger/class field and parses out the timestamp, level, pid, thread and the log message.
  • Use date plugin to parse and set the event date:
    • match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ] → timestamp field (grokked earlier) contains the timestamp in the specified format

Output Section

The output section contains output plugins that send event data to a particular destination. Outputs are the final stage in the event pipeline. We will be sending our log events to stdout (console output, for debugging) and to Elasticsearch.

Compared to the filter section, the output section is rather straightforward:

output {
  # Print each event to stdout, useful for debugging. Should be commented out in production.
  # Enabling 'rubydebug' codec on the stdout output will make logstash
  # pretty-print the entire event as something similar to a JSON representation.
  stdout {
    codec => rubydebug
  }

  # Sending properly parsed log events to elasticsearch
  elasticsearch {
    host => "127.0.0.1"
  }
}

Explanation:

  • We are using multiple outputs: stdout and elasticsearch.
  • stdout { ... } → stdout plugin prints log events to standard output (console).
    • codec => rubydebug → Pretty print events using JSON-like format
  • elasticsearch { ... } → elasticsearch plugin sends log events to Elasticsearch server.
    • host => "127.0.0.1" → Hostname where Elasticsearch is located - in our case, localhost.

Update 5/9/2016: At the time of writing this update, the latest versions of Logstash's elasticsearch output plugin use the hosts configuration parameter instead of host, which is shown in the example above. The new parameter takes an array of hosts (e.g. an Elasticsearch cluster) as its value. In other words, if you are using the latest Logstash version, configure the elasticsearch output plugin as follows:

elasticsearch {
  hosts => ["127.0.0.1"]
}

Putting it all together

Finally, the three parts - input, filter and output - need to be copied together and saved into a logstash.conf config file. Once the config file is in place and Elasticsearch is running, we can run Logstash:

/path/to/logstash/bin/logstash -f logstash.conf 

If everything went well, Logstash is now shipping log events to Elasticsearch.
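To double-check that events are really arriving, you can ask Elasticsearch to list its indices - after a few log entries have been shipped you should see an index following the logstash-YYYY.MM.DD naming pattern (assuming the default index name):

curl -XGET 'http://localhost:9200/_cat/indices?v'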

Step 6) Configure Kibana

Ok, now it's time to visit the Kibana web UI again. We started it in step 2 and it should be running at http://localhost:5601.

First, you need to point Kibana to the Elasticsearch index (or indices) of your choice. Logstash creates indices with the name pattern logstash-YYYY.MM.DD. In Kibana's Settings → Indices, configure the indices:

  • Index contains time-based events (select this option)
  • Use event times to create index names (select this option)
  • Index pattern interval: Daily
  • Index name or pattern: [logstash-]YYYY.MM.DD
  • Click on "Create Index"

Now click on "Discover" tab. In my opinion, "Discover" tab is really named incorrectly in Kibana - it should be labeled as "Search" instead of "Discover" because it allows you to perform new searches and also to save/manage them. Log events should be showing up now in the main window. If they're not, then double check the time period filter in to right corner of the screen. Default table will have 2 columns by default: Time and _source. In order to make the listing more useful, we can configure the displayed columns. From the menu on the left select level, class and logmessage.

Kibana 4 Discover Tab
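For example, to show only error events that carry a stack trace, you can type a Lucene-style query into the Discover search bar, using the fields and tags grokked earlier (an illustrative query):

level:ERROR AND tags:stacktrace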

Alright! You're now ready to take control of your logs using the ELK stack and start customizing and tweaking your log management configuration. You can download the sample application used when writing this article from here: https://github.com/knes1/todo. It's already configured to write logs to a file and has the Logstash config as described above (although absolute paths will need to be tweaked in logstash.conf).

If you would like to search or follow your logs from the command line, check out Elktail - a command-line utility I've created for accessing and tailing logs stored in Elasticsearch.

As always, let me know if you have any questions, comments or ideas in the comments section below.

http://knes1.github.io/blog/2015/2015-08-16-manage-spring-boot-logs-with-elasticsearch-kibana-and-logstash.html

Building a Log Platform from Scratch with Spring MVC + ELK

Recently, because I helped my former company with a few small things, I received a precious Dragon Boat Festival gift - a Lego set for my daughter, seven big bags of pieces in total. I spent nearly a day putting it together brick by brick; it was tiring, but the process was fun. This time I'll share how to build a distributed logging platform from scratch, mainly by combining the ELK stack with Spring MVC (some of this work was previously done by other colleagues due to the division of labour, and I only developed against an already configured environment). It covers the following technical points:

  • spring mvc
  • logback
  • logstash
  • elasticsearch
  • kibana
  • redis

Let's look at the overall architecture diagram. This kind of architecture makes it easy to solve the problems of log recording, querying and analysis that today's distributed systems struggle with.


Operating system and IDE environment:

  • eclipse
  • windows


1: Set up the Spring MVC project
The dynamic web project that Eclipse creates is an empty skeleton without any configuration. To get a hello world project running we still need to do some setup, such as creating the source files, the views, the controllers, and so on.
Spring Tool Suite can help us solve this problem: it provides a Spring MVC project template that comes with a runnable hello world page. Spring Tool Suite can be conveniently installed into Eclipse as a plugin, and once installed you can create the project.
Note that different versions of Spring Tool Suite place the creation wizard in different menus; in my version it is located as follows:

First, select the Spring tab:

Then look under the File menu:


Once the project is created we can run it directly on Tomcat without any other steps, which is much more convenient than a dynamic web project created from scratch. Projects created from this template do have drawbacks, though: if you want newer dependencies you have to update the version numbers in the pom file by hand, and it may pull in some third-party packages you don't need for now. The picture below shows the slightly modified finished project, a standard Maven-structured Spring MVC application.


2: Install Redis
Since my environment is Windows, I need to download the Windows build of Redis:
Windows build: https://github.com/mythz/redis-windows
Download it, extract it, and pick a version:



I only changed one setting in the config file: bind, which binds the server to a fixed IP. Why bind an IP explicitly? I'll describe the problem I ran into later.
Start the server: in the redis/bin directory run redis-server.exe redis.windows.conf

Start the client: in the redis/bin directory run redis-cli.exe -h 127.0.0.1 -p 6379. In this window you can use Redis commands such as get, set and keys * to check that Redis is working properly.
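For example, a quick sanity check from the Redis CLI might look like this (a minimal sketch; the key name demo is arbitrary):

127.0.0.1:6379> set demo "hello"
OK
127.0.0.1:6379> get demo
"hello"
127.0.0.1:6379> keys *
1) "demo"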


3: Install ELK

The latest ELK versions can be found on this site: https://www.elastic.co/downloads. Download all three of elasticsearch, logstash and kibana.

  • Configure elasticsearch

Most of the configuration uses the defaults; I only changed cluster.name and node.name to make the instance easier to identify (see the documentation for the detailed parameters). Then just run elasticsearch.bat in the bin directory to start it.
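For reference, the relevant lines in config/elasticsearch.yml would look something like this (a minimal sketch; the names are arbitrary):

cluster.name: my-log-cluster
node.name: log-node-1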

Open http://127.0.0.1:9200/ - if you see information like the following, it has started correctly.


There are also many plugins that help us inspect and monitor Elasticsearch. Here we install head first: open a command line in the elasticsearch directory and run plugin install mobz/elasticsearch-head to install it.

After the installation succeeds, open http://127.0.0.1:9200/_plugin/head/

  • Configure logstash

First, look at Logstash's architecture and how it works together with the rest of ELK. In this article the data source is Redis, no filter is involved, and the logs are finally output to Elasticsearch.

Here we only configure input and output. Note that different versions of Logstash differ slightly in their configuration; you can compare them further if you're interested.

input {

    redis {
        data_type => "list"
        key => "logstash"
        host => "127.0.0.1"
        port => 6379
        threads => 5
        codec => "json"
    }
}
filter {

}
output {

    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "logstash-%{type}-%{+YYYY.MM.dd}"
        document_type => "%{type}"
        workers => 1
        flush_size => 20
        idle_flush_time => 1
        template_overwrite => true
    }
    stdout{}
}


Then, in the logstash directory, run logstash -f etc/logstash.d/ to start it.

  • Configure kibana (see the sketch below)
    • elasticsearch.url points to the Elasticsearch address configured earlier.
    • kibana.index is the index Kibana uses to store its own internal data.
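In kibana.yml these two settings would look roughly like this (a minimal sketch; note that older Kibana 4.x releases spell them elasticsearch_url and kibana_index):

elasticsearch.url: "http://127.0.0.1:9200"
kibana.index: ".kibana"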



  • Integrate logback

We need an entry point for writing logs, so bring in logback-classic. To pass the logs on to Redis we also need to configure a logback-redis-appender. The dependencies are as follows:

<!-- Logging -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${org.slf4j-version}</version>
        </dependency>
         <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>${logback.version}</version>
        </dependency>


        <!--logstash begin -->
        <dependency>
            <groupId>com.cwbase</groupId>
            <artifactId>logback-redis-appender</artifactId>
            <version>1.1.3</version>
            <exclusions>
                <exclusion>
                    <groupId>redis.clients</groupId>
                    <artifactId>jedis</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

 

Configure logback.xml; the key must match the key configured in the Logstash config file.

<appender name="LOGSTASH" class="com.cwbase.logback.RedisAppender">
        <source>logstashdemo</source>
        <type>dev</type>
        <host>127.0.0.1</host>
        <key>logstash</key>
        <tags>dev</tags>
        <mdc>true</mdc>
        <location>true</location>
        <callerStackIndex>0</callerStackIndex>
    </appender>

Log from HomeController. Since slf4j integrates seamlessly with logback, we don't need any bridging configuration to get log records flowing.
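A minimal sketch of such a controller, assuming a standard Spring MVC setup (class and message names are illustrative):

package com.example.web;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class HomeController {

    // slf4j logger; logback routes these events to Redis via the appender configured in logback.xml
    private static final Logger logger = LoggerFactory.getLogger(HomeController.class);

    @RequestMapping("/")
    public String home() {
        logger.info("Welcome home! Handling a request for /");
        return "home";
    }
}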


As mentioned earlier, when configuring Redis I set the bind property to point it at a fixed IP. Without it, Logstash has problems connecting to Redis; the exact cause remains to be confirmed.

4: Run the site and view the logs

Once the Redis, Elasticsearch and Logstash services are running normally, start the Spring MVC application, and the logs written through the logger can be conveniently viewed in Kibana.

To test whether logback has already sent logs to Redis, you can use Redis commands to check whether the configured logstash key exists, and use llen to see whether the number of log entries is growing normally.
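For example (a minimal sketch from redis-cli; the key matches the logstash key configured above and the count is illustrative - Logstash consumes the list, so the number will fluctuate):

127.0.0.1:6379> keys *
1) "logstash"
127.0.0.1:6379> llen logstash
(integer) 42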

If everything above is normal, open the Kibana page. The first time it opens it will prompt you to create an index pattern; once that is created you can see that the logs have been collected into Elasticsearch.


 

After almost two days of work, I finally built the Spring MVC + ELK distributed log management platform from scratch. The advantage of the Java platform is the abundance of open-source products and excellent plugins; if you're good at digging them up, you can build fairly good projects with little effort. Although this is just an introductory practice article, having made a start already brings rewards.

 

References for this article:

  • http://os.51cto.com/art/201403/431103.htm
  • http://kibana.logstash.es
  • http://blog.csdn.net/kmtong/article/details/38920327
  • http://www.cnblogs.com/xing901022/p/4802822.html
  • http://blog.csdn.net/july_2/article/details/24481935
  • https://www.elastic.co/guide/en/kibana/current/getting-started.html

 

http://www.cnblogs.com/ASPNET2008/p/5594479.html

 

http://www.oschina.net/translate/elasticsearch-getting-started
