ELK is arguably one of the best stacks available today for aggregating, analyzing, summarizing, and searching logs from a distributed server cluster. Spring Boot, as a framework built for microservices, inevitably has to deal with distributed logging as well, so handling its logs through an ELK stack makes a lot of sense. In this stack, E stands for Elasticsearch, which stores the logs; L is Logstash, which collects logs and writes them into Elasticsearch; and K is Kibana, which visualizes, analyzes, and searches the log data held in Elasticsearch. Integrating Spring Boot with ELK therefore largely boils down to integrating Spring Boot with Logstash. So how do we wire Spring Boot and Logstash together?
In Spring Boot, logback is the default logging implementation. Like other logging frameworks such as log4j, logback can ship log data to a remote endpoint over a socket, given an IP address and port. In Spring Boot, logback is usually configured through logback-spring.xml.
To integrate logback with Logstash, you need the logstash-logback-encoder library. The dependency is as follows:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
With the dependency added, we can configure logback. The logback-spring.xml configuration looks like this:
<?xml version="1.0" encoding="utf-8" ?>
<!-- This configuration writes log messages of different levels to different outputs -->
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <!-- Log file location within the project -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}"/>
    <!-- Console log pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
    <!-- Console output -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <!-- Log output encoding -->
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>
    <!-- Appender that ships logs to Logstash -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.1.111:8081</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <!-- Root log level -->
    <root level="INFO">
        <appender-ref ref="console"/>
        <appender-ref ref="logstash"/>
    </root>
</configuration>
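Note that the springProperty element above reads spring.application.name and uses it to build the LOG_FILE path, so the application name should be set in your Spring Boot configuration. As a minimal sketch (the service name here is just a placeholder), application.properties might contain:

# hypothetical entry; use your own service name
spring.application.name=hibeigame-client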
I won't dwell on basic logback configuration here, since there is plenty of material about it online; we'll only look at the Logstash-related part, which is the logstash appender near the bottom of the XML (the LogstashTcpSocketAppender block). destination is the address the logs are sent to; when configuring Logstash, you simply listen on this address to collect them. I deployed Logstash on a local VM (address 192.168.1.111) on port 8081; you will of course need to change this to your own address. The encoder below it is required. The console appender is kept so that Spring Boot's original console logging is not lost. That completes the logback configuration.
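Nothing special is needed on the application side: anything written through the standard SLF4J API is routed by logback to both appenders. A minimal sketch, with a made-up package, class, and message purely for illustration:

// Hypothetical Spring Boot entry class; names are placeholders.
package com.example.logdemo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class LogDemoApplication implements CommandLineRunner {

    private static final Logger log = LoggerFactory.getLogger(LogDemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(LogDemoApplication.class, args);
    }

    @Override
    public void run(String... args) {
        // Routed by logback to the console appender and, via the
        // LogstashTcpSocketAppender, to Logstash at 192.168.1.111:8081 as JSON.
        log.info("Hello from Spring Boot, shipped to Logstash");
    }
}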
On the Logstash side, create a folder named conf under the installation directory, and inside it create a file logstash.conf with the following content:
# Logstash configuration
# TCP -> Logstash -> Elasticsearch pipeline.
input {
    tcp {
        mode => "server"
        host => "192.168.1.111"    # prefer an explicit IP
        port => 8081               # read logs from port 8081
        codec => json_lines        # requires the logstash-codec-json_lines plugin
    }
}
output {
    elasticsearch {
        hosts => ["http://192.168.1.111:9200"]   # write to Elasticsearch
        index => "logstash-%{+YYYY.MM.dd}"
    }
    stdout {                       # remove this block if you don't need console output
        codec => rubydebug
    }
}
If your Logstash installation does not have the logstash-codec-json_lines plugin, install it with the following commands:
[root@ecs-55e5 ~]# cd /usr/share/logstash/
[root@ecs-55e5 logstash]# ls
bin  CONTRIBUTORS  data  Gemfile  Gemfile.lock  lib  LICENSE.txt  logstash-core  logstash-core-plugin-api  modules  NOTICE.TXT  tools  vendor  x-pack
[root@ecs-55e5 logstash]# cd bin
[root@ecs-55e5 bin]# ./logstash-plugin install logstash-codec-json_lines
Validating logstash-codec-json_lines
Installing logstash-codec-json_lines
Installation successful
[root@ecs-55e5 bin]#
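With the plugin in place, you can optionally have Logstash validate the configuration file and exit before running it for real, using the --config.test_and_exit flag. A quick sanity check, assuming the conf folder sits under /usr/share/logstash:

[root@ecs-55e5 logstash]# bin/logstash -f conf/logstash.conf --config.test_and_exit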
Now start Logstash, exposing port 8081 to receive logs:
[root@ecs-55e5 logstash]# logstash -f logstash.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-03-06 14:38:50.990 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-03-06 14:38:51.007 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.5.4"}
[INFO ] 2019-03-06 14:38:54.639 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-03-06 14:38:55.095 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.1.111:9200/]}}
[WARN ] 2019-03-06 14:38:55.284 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://192.168.1.111:9200/"}
[INFO ] 2019-03-06 14:38:55.549 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2019-03-06 14:38:55.553 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2019-03-06 14:38:55.577 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.1.111:9200"]}
[INFO ] 2019-03-06 14:38:55.591 [Ruby-0-Thread-5: :1] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2019-03-06 14:38:55.608 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2019-03-06 14:38:55.644 [[main]>worker7] tcp - Starting tcp input listener {:address=>"localhost:8081", :ssl_enable=>"false"}
[INFO ] 2019-03-06 14:38:55.917 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x67d08165 run>"}
[INFO ] 2019-03-06 14:38:55.952 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-03-06 14:38:56.152 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
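The last line shows that Logstash's monitoring API has come up on port 9600. If you want to confirm the process is alive, the standard node-info endpoint can be queried from the Logstash host, for example:

[root@ecs-55e5 logstash]# curl -XGET 'http://127.0.0.1:9600/?pretty'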
And that's it! Start the Spring Boot application, and the final result looks like this:
{ "logger_name" => "com.amt.hibei.sysframework.config.SysConfig", "thread_name" => "main", "@timestamp" => 2019-03-06T07:27:26.348Z, "level_value" => 20000, "host" => "182.148.112.187", "port" => 58138, "level" => "INFO", "@version" => "1", "message" => "============系統參數加載完成!==============" } { "logger_name" => "com.amt.hibei.client.HibeiGameClientHiApplication", "thread_name" => "main", "@timestamp" => 2019-03-06T07:27:26.259Z, "level_value" => 20000, "host" => "182.148.112.187", "port" => 58138, "level" => "INFO", "@version" => "1", "message" => "Starting HibeiGameClientHiApplication on Amt-PC with PID 4256 (E:\\IdeWorkspace\\hibeigame\\hibeigame-client-HI\\target\\classes started by amt in E:\\IdeWorkspace\\hibeigame)" }