Tags: container log collection in a Kubernetes environment
How to Collect Application Logs in a Kubernetes Environment
===
This article focuses on how to collect container logs in a Kubernetes environment.
In a Kubernetes cluster there are generally three ways to collect container logs. The first is to install a log-collection agent, such as Fluentd, on every node. The drawback of this approach is that application logs must be written to standard output and are then picked up from the log files under /var/log/containers on each worker node. Those files have names like user-center-765885677f-j68zt_default_user-center-0867b9c2f8ede64cebeb359dd08a6b05f690d50427aa89f7498597db8944cccc.log, full of random strings, which makes it hard to map a file back to the application running inside the container. I have also seen others report that multi-line Java stack traces in these files are not merged, though I have not tested this approach myself.
The second approach is to run a sidecar container inside the application's pod that mounts the same log volume as the application container; the sidecar can be Filebeat or Fluentd, for example. The drawback here is the extra resource cost: every pod has to run its own log-collection container.
The third approach is to ship application logs directly to Kafka, have Logstash consume them from Kafka, process them into JSON, and send them on to the Elasticsearch cluster, with Kibana for display. This is the approach I experimented with: by modifying the logback configuration, logs are sent straight to Kafka, which acts as a buffer. The configuration is shown below.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <jmxConfigurator/> <!-- dynamic reloading via JMX -->

    <property name="log-path" value="/apptestlogs" />  <!-- unified under /applogs -->
    <property name="app-name" value="test" />          <!-- application name -->
    <property name="filename" value="test-test" />     <!-- log file name, defaults to the component name -->
    <property name="dev-group-name" value="test" />    <!-- development team name -->
    <conversionRule conversionWord="traceId" converterClass="org.lsqt.components.log.logback.TraceIdConvert"/>
    <!-- adjust the variables above to match your environment -->

    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <!-- typical log pattern -->
        <!--
        <encoder>
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%traceId] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
        </encoder>
        -->
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
            </layout>
        </encoder>
    </appender>

    <appender name="fileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log-path}/${app-name}/${filename}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>/${log-path}/${app-name}/${filename}.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxHistory>15</maxHistory>
            <!-- caps the size of each log file; once a file reaches 300MB it rolls over and old logs are removed -->
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>300MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <!--
        <encoder>
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%traceId] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
        </encoder>
        -->
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
            </layout>
        </encoder>
    </appender>

    <appender name="errorAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log-path}/${app-name}/${filename}-error.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>/${log-path}/${app-name}/${filename}-error.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>300MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>15</maxHistory>
        </rollingPolicy>
        <!--
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%traceId] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
        </encoder>
        -->
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
            </layout>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- This example configuration is probably most unreliable under failure conditions but won't block your application at all -->
    <appender name="very-relaxed-and-fast-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
        </encoder>
        <topic>elk-stand-sit-fkp-eureka</topic>
        <!-- we don't care how the log messages will be partitioned -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
        <!-- use async delivery: the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
        <!-- each <producerConfig> translates to a regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=192.168.1.12:9092,192.168.1.14:9092,192.168.1.15:9092</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
        <!-- all log messages that cannot be delivered fast enough will immediately go to the fallback appenders -->
        <producerConfig>block.on.buffer.full=false</producerConfig>
        <!-- this is the fallback appender if kafka is not available -->
        <appender-ref ref="consoleAppender" />
    </appender>

    <root level="debug">
        <appender-ref ref="very-relaxed-and-fast-kafka-appender" />
        <appender-ref ref="fileAppender"/>
        <appender-ref ref="consoleAppender"/>
        <appender-ref ref="errorAppender"/>
    </root>
</configuration>
```
### 2. Notes on the logback configuration ###

For comparison, the logback-kafka-appender project also documents a more restrictive variant, which blocks the application thread instead of dropping messages when Kafka cannot keep up:
```xml
<!-- This example configuration is more restrictive and will try to ensure that every message
     is eventually delivered in an ordered fashion (as long as the logging application stays alive) -->
<appender name="very-restrictive-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <topic>important-logs</topic>
    <!-- ensure that every message sent by the executing host goes to the same partition -->
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
    <!-- block the logging application thread if the kafka appender cannot keep up with sending the log messages -->
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy">
        <!-- wait indefinitely until the kafka producer was able to send the message -->
        <timeout>0</timeout>
    </deliveryStrategy>
    <!-- each <producerConfig> translates to a regular kafka-client config (format: key=value) -->
    <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
    <!-- bootstrap.servers is the only mandatory producerConfig -->
    <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
    <!-- restrict the size of the buffered batches to 8MB (default is 32MB) -->
    <producerConfig>buffer.memory=8388608</producerConfig>
    <!-- if the kafka broker is not online when we try to log, just block until it becomes available -->
    <producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
    <!-- define a client-id that you use to identify yourself against the kafka broker -->
    <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-restrictive</producerConfig>
    <!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy -->
    <producerConfig>compression.type=gzip</producerConfig>
    <!-- log every log message that could not be sent to kafka to STDERR -->
    <appender-ref ref="STDERR"/>
</appender>
```
With logback configured to write directly to Kafka using asynchronous delivery, the container logs now show up in Kibana.
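If logs do not appear in Kibana, a quick way to narrow things down is to check whether the events are reaching the Kafka topic at all, before involving Logstash. The following is a minimal sketch using the plain kafka-clients consumer (2.x or later on the classpath is assumed); the group id is arbitrary, while the broker addresses and topic name are taken from the configuration above.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Small verification tool: prints whatever the logback KafkaAppender has
// written to the log topic, so you can confirm the Kafka leg works.
public class LogTopicChecker {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker list matches the bootstrap.servers in the logback config above
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.1.12:9092,192.168.1.14:9092,192.168.1.15:9092");
        // Arbitrary consumer group, used only for this check
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "log-topic-checker");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Topic name taken from the <topic> element of the KafkaAppender
            consumer.subscribe(Collections.singletonList("elk-stand-sit-fkp-eureka"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

Note that the messages on the topic are still the plain pattern-formatted lines produced by the encoder; turning them into JSON documents for Elasticsearch is the job of the Logstash stage described earlier.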