Building a log platform with ELK 5.x + logback

1. Background

  Once the production web servers sit behind a load-balanced cluster, hunting through the logs under each individual tomcat becomes a chore. ELK gives us a fast way to look logs up.

2. Environment

  CentOS 7, JDK 8. This walkthrough uses ELK 5.0.0 (that is, Elasticsearch-5.0.0, Logstash-5.0.0, kibana-5.0.0); installing ElasticSearch 6.2.0 follows exactly the same steps, personally verified.

3. Notes

  1. ELK requires JDK 8 or newer.

  2. Stop the firewall: systemctl stop firewalld.service

  3. Never start elasticsearch as root; create a dedicated user, because elasticsearch refuses to run as root for security reasons.

  4. Past releases are on the official site: https://www.elastic.co/downloads/past-releases

4. Installing Elasticsearch

1. Log in to centos as root, create an elk group, add an elk user to that group, then set the elk user's password

  [root@iZuf6a50pk1lwxxkn0qr6tZ ~]# groupadd elk
  [root@iZuf6a50pk1lwxxkn0qr6tZ ~]# useradd -g elk elk
  [root@iZuf6a50pk1lwxxkn0qr6tZ ~]# passwd elk
  Changing password for user elk.
  New password:
  Retype new password:
  passwd: all authentication tokens updated successfully.

2. Go to /usr/local, create an elk directory, and hand its ownership to the elk user. Download elasticsearch, either from the official site or directly with wget; wget is used here

  cd /usr/local
  mkdir elk
  chown -R elk:elk /usr/local/elk   # grant ownership
  cd elk
  wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.0.0.tar.gz
  tar -zxvf elasticsearch-5.0.0.tar.gz   # unpack elasticsearch

3. As root, go to /usr/local/elk/elasticsearch-5.0.0/config and edit elasticsearch.yml, changing the following settings:

  cluster.name: nmtx-cluster                     # name within the cluster
  node.name: node-1                              # node name
  path.data: /data/elk/elasticsearch-data        # where es keeps its data
  path.logs: /data/elk/elasticsearch-logs        # where es writes its logs
  network.host: 0.0.0.0
  http.port: 9200                                # port to expose

  Grant the /data/elk directory to the elk user:

  chown -R elk:elk /data/elk

4. As root, edit /etc/sysctl.conf and append vm.max_map_count=262144 as the last line, then apply it with sysctl -p /etc/sysctl.conf. Without this setting, es fails at startup with:

   max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
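A quick, read-only way to confirm the kernel picked up the new value before restarting es (262144 is the value set above):

```shell
# Print the live setting; after `sysctl -p` this should read 262144.
cat /proc/sys/vm/max_map_count
```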

5. As root, edit /etc/security/limits.conf, adding or adjusting these lines:

  * soft nproc 65536
  * hard nproc 65536
  * soft nofile 65536
  * hard nofile 65536

   Without them, es fails with:

  max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
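These limits only apply to new sessions, so after logging back in as the elk user you can sanity-check them; both numbers should match the 65536 configured above:

```shell
# nofile: max open file descriptors for this shell session
ulimit -n
# nproc: max user processes
ulimit -u
```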

6. Switch to the elk user and start es (straight into the background here; the -d flag daemonizes it, and with steps 4 and 5 done little should go wrong),

   /usr/local/elk/elasticsearch-5.0.0/bin/elasticsearch -d

7. Verify the startup with the following command:

   curl -XGET 'localhost:9200/?pretty'

     Startup succeeded when you see something like:

 {
   "name" : "node-1",
   "cluster_name" : "nmtx-cluster",
   "cluster_uuid" : "WdX1nqBPQJCPQniObzbUiQ",
   "version" : {
     "number" : "5.0.0",
     "build_hash" : "253032b",
     "build_date" : "2016-10-26T04:37:51.531Z",
     "build_snapshot" : false,
     "lucene_version" : "6.2.0"
   },
   "tagline" : "You Know, for Search"
 }

  Or visit 192.168.1.66:9200 in a browser.

  

Version 6, however, bundles the x-pack plugin, so the same request may return:

[elk@localhost bin]$ curl -XGET 'localhost:9200/?pretty'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "missing authentication token for REST request [/?pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "missing authentication token for REST request [/?pretty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}

That request needs a username and password, like so:

[elk@localhost bin]$ curl --user elastic:changeme -XGET 'localhost:9200/_cat/health?v&pretty'
epoch      timestamp cluster        status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1556865276 06:34:36  my-application green           1         1      1   1    0    0        0             0                  -                100.0%

5. Installing Logstash

1. Download logstash 5.0.0, again via wget, and unpack it

  cd /usr/local/elk
  wget https://artifacts.elastic.co/downloads/logstash/logstash-5.0.0.tar.gz
  tar -zxvf logstash-5.0.0.tar.gz

2. Edit /usr/local/elk/logstash-5.0.0/config/logstash.conf, a newly created file, and add the following:

   input {
      tcp {
            port => 4567
            mode => "server"
            codec => json_lines
       }
    }

    filter {
    }

  output {
    elasticsearch { 
      hosts => ["192.168.1.66:9200"] 
      index => "operation-%{+YYYY.MM.dd}"
    }  
    stdout {
          codec => rubydebug
     }
  }

  A logstash config file has three sections:

  • input{}: collects the logs; it can read from files, read from redis, or open a port that the log-producing applications write to directly. The last option is what this guide uses.
  • filter{}: filters the collected logs and defines the fields to surface after filtering
  • output{}: ships the filtered logs to elasticsearch, files, redis, and so on; in production the stdout block can be removed to keep logs off the console.
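As a quick smoke test of the tcp input above, you can hand-craft a single json_lines event and send it with netcat; the host and port mirror the config, and the event fields are made up purely for illustration:

```shell
# json_lines framing: one JSON object per line, newline-terminated.
EVENT='{"message":"hello from nc","level":"INFO"}'
printf '%s\n' "$EVENT"
# Uncomment to actually deliver it (needs nc and a running logstash):
# printf '%s\n' "$EVENT" | nc 192.168.1.66 4567
```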

3. Start logstash, in the foreground for now, to confirm it starts cleanly

   /usr/local/elk/logstash-5.0.0/bin/logstash -f /usr/local/elk/logstash-5.0.0/config/logstash.conf 

  Startup succeeded when the console shows output like:

 Sending Logstash logs to /elk/logstash-5.0.0/logs which is now configured via log4j2.properties.
 [2018-03-11T12:12:14,588][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:4567"}
 [2018-03-11T12:12:14,892][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200"]}}
 [2018-03-11T12:12:14,894][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
 [2018-03-11T12:12:15,425][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
 [2018-03-11T12:12:15,445][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
 [2018-03-11T12:12:15,729][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200"]}
 [2018-03-11T12:12:15,732][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
 [2018-03-11T12:12:15,735][INFO ][logstash.pipeline        ] Pipeline main started
 [2018-03-11T12:12:15,768][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

   Ctrl+C to stop logstash, then restart it in the background:

  nohup /usr/local/elk/logstash-5.0.0/bin/logstash -f /usr/local/elk/logstash-5.0.0/config/logstash.conf > /data/elk/logstash-log.file 2>&1 &

 6. Installing Kibana

1. Download and unpack Kibana, via wget:

  cd /usr/local/elk
  wget https://artifacts.elastic.co/downloads/kibana/kibana-5.0.0-linux-x86_64.tar.gz
  tar zxvf kibana-5.0.0-linux-x86_64.tar.gz

2. Edit /usr/local/elk/kibana-5.0.0-linux-x86_64/config/kibana.yml, changing the following:

  server.port: 5601
  server.host: "192.168.1.66"
  elasticsearch.url: "http://192.168.1.66:9200"
  kibana.index: ".kibana"
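Kibana will not come up unless elasticsearch.url is reachable from this host, so a pre-flight check saves a failed launch; the URL is taken from the yml above:

```shell
ES_URL="http://192.168.1.66:9200"
echo "checking $ES_URL"
# Uncomment on the real host:
# curl -s --max-time 5 "$ES_URL" >/dev/null && echo reachable || echo unreachable
```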

3. Start Kibana, in the background:

  nohup /usr/local/elk/kibana-5.0.0-linux-x86_64/bin/kibana > /data/elk/kibana-log.file 2>&1 &

4. Visit x.x.x.x:5601 in a browser, then configure an index pattern that matches the logstash index:

  

7. Writing logs to logstash with logback

1. Create a new SpringBoot project and add the jar dependency logstash needs:

    <!-- logstash -->
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>4.11</version>
    </dependency>

2. Add a logback.xml file with the following content:

<!-- Logback configuration. See http://logback.qos.ch/manual/index.html -->
<configuration scan="true" scanPeriod="10 seconds">
    <include resource="org/springframework/boot/logging/logback/base.xml" />

    <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${LOG_PATH}/info.log</File>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_PATH}/info-%d{yyyyMMdd}.log.%i</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>500MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>2</maxHistory>
        </rollingPolicy>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} -%msg%n
            </Pattern>
        </layout>
    </appender>

    <appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
        <File>${LOG_PATH}/error.log</File>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_PATH}/error-%d{yyyyMMdd}.log.%i
            </fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>500MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>2</maxHistory>
        </rollingPolicy>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>
                %d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} -%msg%n
            </Pattern>
        </layout>
    </appender>
    
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">  
        <encoder>  
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>  
        </encoder>  
    </appender>  
   
    <!-- logstash host and port -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">  
         <destination>192.168.1.66:4567</destination>
         <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>  

    <root level="INFO">
        <appender-ref ref="INFO_FILE" />
        <appender-ref ref="ERROR_FILE" />
        <appender-ref ref="STDOUT" />
       <!-- ship logs to logstash -->
        <appender-ref ref="LOGSTASH" />
    </root>
    
    <logger name="org.springframework.boot" level="INFO"/>
</configuration>

3. In application.properties, configure the app to use the logback.xml we just added

  # logging
  logging.config=classpath:logback.xml
  logging.path=/data/springboot-log

4. Create a test class like this:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class LogTest {

    public static final Logger logger = LoggerFactory.getLogger(LogTest.class);

    @Test
    public void testLog() {
        logger.info("==66666=={}", "an info-level log line");
        logger.error("==66666=={}", "an error-level log line");
    }

}

   Visit 192.168.1.66:5601 in a browser, open Discover in the left menu, type '66666' into the search box (note: single quotes do a fuzzy match, double quotes an exact one), and press Enter. The log lines just written by LogTest come back, which shows logback is now shipping logs into logstash.
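You can also confirm the events landed by querying elasticsearch directly, bypassing kibana; the index name follows the operation-%{+YYYY.MM.dd} pattern from the logstash config, so today's index would be:

```shell
# Today's index, named the way logstash's date pattern expands it.
INDEX="operation-$(date +%Y.%m.%d)"
echo "$INDEX"
# Uncomment on the es host to search it for the test marker:
# curl -s "http://192.168.1.66:9200/$INDEX/_search?q=66666&pretty"
```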

  
