Microservice Logging: Implementing Log Collection with Spring Boot and Kafka

Preface

Following the previous article (Microservice Logging: Using NLog with Kafka for Log Collection in .NET Core, http://www.javashuo.com/article/p-vosnsxre-hc.html) on the .NET/Core side, our goal is to collect logs uniformly from both the .NET and Java services in a microservice environment.
In the Java ecosystem, Spring Boot + Logback connects to Kafka for log collection with very little effort.

Spring Boot Integration

Maven Dependency Management

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-core</artifactId>
      <version>1.2.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>

Dependency declarations:

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC1</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.0</version>
</dependency>

logback-spring.xml

Add a logback-spring.xml configuration file under the resources directory of the Spring Boot project. Note: be sure to change {"appname":"webdemo"}; this value can also be defined as a variable in the configuration. STDOUT is the fallback output used when the Kafka connection fails, so each project should adjust it to its own situation. The normal log appender uses an asynchronous delivery strategy to improve performance:

<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"appname":"webdemo"}</customFields>
        <includeMdc>true</includeMdc>
        <includeContext>true</includeContext>
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
    </encoder>
    <topic>loges</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
    <producerConfig>bootstrap.servers=127.0.0.1:9092</producerConfig>
    <!-- don't wait for a broker to ack the reception of a batch -->
    <producerConfig>acks=0</producerConfig>
    <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
    <producerConfig>linger.ms=1000</producerConfig>
    <!-- even if the producer buffer runs full, do not block the application but start to drop messages
         (block.on.buffer.full is deprecated in newer Kafka clients; max.block.ms=0 is the replacement) -->
    <producerConfig>max.block.ms=0</producerConfig>
    <!-- fallback appender used when the Kafka connection fails -->
    <appender-ref ref="STDOUT" />
</appender>
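Note that the Kafka appenders above reference a STDOUT appender by name, but its definition is not shown in this post; it must exist in the same logback-spring.xml. A minimal console appender sketch that satisfies the reference (the pattern is only an example, adjust to taste):

```xml
<!-- fallback console appender referenced by the Kafka appenders -->
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder charset="UTF-8">
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>
```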

Again, be sure to change {"appname":"webdemo"}; this value can also be defined as a variable in the configuration. For errors and exceptions from third-party frameworks or libraries that need to be written to the log, configure the error appender as follows:

<appender name="kafkaAppenderERROR" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"appname":"webdemo"}</customFields>
        <includeMdc>true</includeMdc>
        <includeContext>true</includeContext>
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
    </encoder>
    <topic>ep_component_log</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
    <!-- only one deliveryStrategy may be configured; use the blocking one here for reliability -->
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy">
        <!-- wait indefinitely until the kafka producer was able to send the message -->
        <timeout>0</timeout>
    </deliveryStrategy>
    <producerConfig>bootstrap.servers=127.0.0.1:9092</producerConfig>
    <!-- wait for the leader broker to ack so error logs are not silently dropped -->
    <producerConfig>acks=1</producerConfig>
    <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
    <producerConfig>linger.ms=1000</producerConfig>
    <!-- fallback appender used when the Kafka connection fails -->
    <appender-ref ref="STDOUT" />
    <!-- only pass ERROR-level events to this appender -->
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
        <level>ERROR</level>
        <onMatch>ACCEPT</onMatch>
        <onMismatch>DENY</onMismatch>
    </filter>
</appender>

The error appender uses a synchronous (blocking) delivery strategy to guarantee that error logs are collected reliably; of course, this can be adjusted to the needs of the actual project.

Logging configuration recommendations:

Pointing the root logger at the error appender is enough to capture exception logs from third-party frameworks:

<root level="INFO">
    <appender-ref ref="kafkaAppenderERROR" />
</root>

It is recommended to ship only your own application's logs at the normal level; a reference configuration (for reference only):

<logger name="your.project.package" additivity="false">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="kafkaAppender" />
</logger>
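Combining the two recommendations, the complete wiring in logback-spring.xml might look like the sketch below (the logger name is a placeholder for your application's base package). With additivity="false", your own INFO-level logs go to the console and the normal Kafka topic, while everything else that reaches the root at ERROR level goes through the error appender:

```xml
<!-- application logs: console plus the normal Kafka appender -->
<logger name="your.project.package" additivity="false">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="kafkaAppender" />
</logger>

<!-- third-party ERROR logs go to the error appender (its LevelFilter drops the rest) -->
<root level="INFO">
    <appender-ref ref="kafkaAppenderERROR" />
</root>
```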

Finally

GitHub: https://github.com/maxzhang1985/YOYOFx. If you find it useful, please give it a Star; discussion is welcome.

.NET Core open-source learning group: 214741894
