4. Installing fluentd to collect application logs inside the cluster

Author

WeChat: tangy8080
Email: 914661180@qq.com
Last updated: 2019-06-13 11:02:14, Thursday

You are welcome to subscribe to and share my subscription feed; I occasionally post articles there that I write while studying.
If you find an error in an article, you can add me on WeChat (tangy8080) to report it. Thank you for your support.

Topics

  • In most cases we need to manage application logs centrally, but we cannot force developers to emit logs in a unified format themselves.
    For developers that would be intrusive, and unifying log output could affect the business logic.
    Instead, we collect the logs ourselves. There are many collection tools; here we choose fluentd to collect logs inside the cluster.
  • Collecting logs from outside the cluster is covered in the next section.
  • Collection principle: at any moment (backed by Elasticsearch's high-performance indexing) you can view, in real time, the logs of a given service (identified by tag) on a given node (identified by physics.hostname).

Prerequisites

  • You have completed the first and second sections of this chapter.

Installation

Installing fluentd-elasticsearch

Because we customize fluentd-elasticsearch quite heavily, we install it from source.

#Clone the source repository
cd /usr/local/src/
git clone https://github.com/kiwigrid/helm-charts
cd helm-charts/charts/fluentd-elasticsearch/

#Add hostname, physics.hostname, and tag fields to all output records, to make logs easier to search
vim /usr/local/src/helm-charts/charts/fluentd-elasticsearch/templates/configmaps.yaml
#Add the following configuration under the containers.input.conf key
    <filter **>
      @id filter_hostname
      @type record_transformer
      <record>
        hostname "#{Socket.gethostname}"
        physics.hostname "#{ENV['K8S_NODE_NAME']}"
        tag ${tag}
      </record>
    </filter>
  • The filter adds a hostname field; its value is the pod name in k8s.
  • physics.hostname identifies the node; its value is the k8s node name. The K8S_NODE_NAME environment variable is defined by default in daemonset.yaml.
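For reference, the K8S_NODE_NAME variable that physics.hostname reads is typically injected through the Kubernetes downward API. A sketch of what that container env entry looks like in daemonset.yaml (standard DaemonSet spec fields, not quoted verbatim from this chart):

```yaml
env:
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        # Resolves to the name of the node this pod is scheduled on
        fieldPath: spec.nodeName
```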
#Run the installation
cd /usr/local/src/helm-charts/charts/fluentd-elasticsearch

helm install --name fluentd-elasticsearch --set elasticsearch.host=elasticsearch-client,hostLogDir.dockerContainers=/data/k8s/docker/data/containers .
  • hostLogDir.dockerContainers is the directory where Docker writes container logs; adjust it to your environment.
  • stable/fluentd-elasticsearch is deprecated as we move to our own repo (https://kiwigrid.github.io) which will be published on hub.helm.sh soon. The chart source can be found here: https://github.com/kiwigrid/helm-charts/tree/master/charts/fluentd-elasticsearch

[Optional] Uninstalling fluentd-elasticsearch

helm del --purge fluentd-elasticsearch

How collection works

  • We run one fluentd agent on every node (as a DaemonSet) to collect logs from programs deployed in k8s containers as well as programs running directly on the physical host.
  • Containers have unpredictable lifetimes, so we write application logs to the host. (Sidecar containers are also an option, see https://kubernetes.io/docs/concepts/cluster-administration/logging/, but for performance we run only a single fluentd agent per node.)
  • Where should pos_file live? In a stable location on the host (usually the same directory as path). It records collection progress, so if the container is destroyed and restarted, collection resumes from where it left off.
  • Log rotation: if the application does not rotate its own logs, we must handle rotation ourselves to keep log files from growing without bound.
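If an application cannot rotate its own logs, one host-level option is a logrotate rule. A minimal sketch, assuming a hypothetical log path; copytruncate matters here because it rotates in place, so the file fluentd's tail plugin is following keeps the same inode:

```
/var/log/businesslogs/myapp/Log.txt {
    daily
    rotate 7
    compress
    missingok
    # copy the file and truncate the original instead of renaming it,
    # so the tail position recorded in pos_file stays valid
    copytruncate
}
```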

Collection examples

Example 1: Nginx logs from a program running on the physical host

Configure the volume mounts
vim /usr/local/src/helm-charts/charts/fluentd-elasticsearch/values.yaml

#The nginx logs are not under /var/log, so append the following Nginx log mount at the end of the file
extraVolumes:
   - name: nginxlog
     hostPath:
      path: /usr/local/nginx/logs
          
extraVolumeMounts:
   - name: nginxlog
     mountPath: /var/log/nginx
     readOnly: true
Configure the log sources
vim /usr/local/src/helm-charts/charts/fluentd-elasticsearch/templates/configmaps.yaml

    # service.honeysuckle-log-consumer
    <source>
      @id honeysuckle-log-consumer.log
      @type tail
      <parse>
        @type regexp
        expression /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) (?<thread>.*) (?<level>[a-zA-Z]+) (?<logger>.*) (?<property>\[.*\]) - (?<msg>[\s\S]*)$/ 
        time_key time
      </parse>
      path /var/log/businesslogs/honeysuckle-log-consumer/Log.txt
      pos_file /var/log/businesslogs/honeysuckle-log-consumer/Log.txt.pos
      tag service.honeysuckle-log-consumer
    </source>
    
    
    # service.honeysuckle-configmanager-service
    <source>
      @id honeysuckle-configmanager-service.log
      @type tail
      <parse>
        @type regexp
        expression /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) (?<thread>.*) (?<level>[a-zA-Z]+) (?<logger>.*) (?<property>\[.*\]) - (?<msg>[\s\S]*)$/ 
        time_key time
      </parse>
      path /var/log/businesslogs/honeysuckle-configmanager-service/Log.txt
      pos_file /var/log/businesslogs/honeysuckle-configmanager-service/Log.txt.pos
      tag service.honeysuckle-configmanager-service
    </source>
    
    # Nginx Access Log Source
    <source>
      @id nginx.accesslog
      @type tail
      path /var/log/nginx/access.log
      pos_file /var/log/access.log.pos
      format /^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)"(?:\s+(?<http_x_forwarded_for>[^ ]+))?)?$/
      time_format %d/%b/%Y:%H:%M:%S %z
      tag nginx
    </source>
  • The parser format for nginx's default log format can be found here:
    https://docs.fluentd.org/parser/nginx
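As an alternative to the hand-written format string above, fluentd also ships a built-in nginx parser (documented at the link above). A sketch of the same source rewritten to use it, keeping the path, pos_file, and tag unchanged; this only applies if your nginx uses its default log format:

```
    <source>
      @id nginx.accesslog
      @type tail
      <parse>
        @type nginx
      </parse>
      path /var/log/nginx/access.log
      pos_file /var/log/access.log.pos
      tag nginx
    </source>
```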

Example 2: A Helloworld program running in a container

This project simulates a service: every 3 seconds it writes a log entry to /app/App_Data/Logs/Log.txt.
It writes logs with Log4net; the application handles log rotation itself, and the newest entries are always in Log.txt.
Its log4net.config looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<log4net>
  <!--Rolling text file appender-->
  <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender" >
    <file value="App_Data/Logs/Log.txt" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maxSizeRollBackups value="10" />
    <maximumFileSize value="1024KB" />
    <staticLogFileName value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
    </layout>
  </appender>

  <root>
    <appender-ref ref="RollingFileAppender" />
    <level value="ALL" />
  </root>
</log4net>

A sample of the log output looks like this:

2019-06-17 07:29:52,038 [5] INFO  honeysuckle.log.consumer.HoneysuckleWebModule [(null)] - electronicinvoice_log_queue.BasicConsume 已建立.
2019-06-17 07:31:51,510 [WorkPool-Session#1:Connection(21956434-0138-45f3-be65-848ca544cad3,amqp://honeysuckle.site:5672)] ERROR honeysuckle.log.consumer.HoneysuckleWebModule [(null)] - Error in EventingBasicConsumer.Received
Elasticsearch.Net.UnexpectedElasticsearchClientException: The operation was canceled. ---> System.OperationCanceledException: The operation was canceled.
   at System.Net.Http.HttpClient.HandleFinishSendAsyncError(Exception e, CancellationTokenSource cts)
   at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
   at Elasticsearch.Net.HttpConnection.Request[TResponse](RequestData requestData)
   at Elasticsearch.Net.RequestPipeline.CallElasticsearch[TResponse](RequestData requestData)
   at Elasticsearch.Net.Transport`1.Request[TResponse](HttpMethod method, String path, PostData data, IRequestParameters requestParameters)
   --- End of inner exception stack trace ---
   at Elasticsearch.Net.Transport`1.Request[TResponse](HttpMethod method, String path, PostData data, IRequestParameters requestParameters)
   at Nest.LowLevelDispatch.BulkDispatch[TResponse](IRequest`1 p, SerializableData`1 body)
   at Nest.ElasticClient.Nest.IHighLevelToLowLevelDispatcher.Dispatch[TRequest,TQueryString,TResponse](TRequest request, Func`3 responseGenerator, Func`3 dispatch)
   at honeysuckle.log.consumer.HoneysuckleWebModule.<>c__DisplayClass10_0.<CreateNewChannel>b__0(Object sender, BasicDeliverEventArgs e) in /src/src/honeysuckle.log.consumer/HoneysuckleWebModule.cs:line 141

Based on this output format, we parse the logs with the regexp parser plugin (the expression can be seen in the source below).
You can use the fluentular tool to verify that the expression is correct.

The project is hosted at:

http://admin@gitblit.honeysuckle.site/r/public/helloworld.git

Feel free to clone it if you want to test.

Configure the log source
vim /usr/local/src/helm-charts/charts/fluentd-elasticsearch/templates/configmaps.yaml

    # service.helloworld.log Log Source
    <source>
      @id helloworld.log
      @type tail
      <parse>
        @type regexp
        expression /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) (?<thread>\[\d+\]) (?<level>[a-zA-Z]+) (?<logger>.*) (?<property>\[.*\]) - (?<msg>.*)$/
        time_key time
      </parse>
      path /var/log/businesslogs/helloworld/Log.txt
      pos_file /var/log/businesslogs/helloworld/Log.txt.pos
      tag service.helloworld
    </source>
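The tail-source expression above can also be checked locally with plain Ruby, the regexp engine fluentd itself uses. A minimal sketch, using a shortened sample line (the message text is a placeholder, not real output):

```ruby
# The expression from the source above, verbatim
PATTERN = /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) (?<thread>\[\d+\]) (?<level>[a-zA-Z]+) (?<logger>.*) (?<property>\[.*\]) - (?<msg>.*)$/

line = '2019-06-17 07:29:52,038 [5] INFO  honeysuckle.log.consumer.HoneysuckleWebModule [(null)] - queue created'

# Each named capture group becomes a field in the record fluentd emits
m = PATTERN.match(line)
puts m[:level]  # => INFO
puts m[:msg]    # => queue created
```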

Verifying that fluentd is collecting data

Once all components are Running, use curl to check whether fluentd has collected any data:

curl 'http://10.254.193.78:9200/_cat/indices?v'
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-2019.06.11 _v7q6b4DSt6mLJ_GLDyFUQ   5   1       4112            0      4.6mb          2.2mb
yellow open   logstash-2019.06.12 mz3sF15LTbqQwZiXMubdgg   5   1      29394            0     27.9mb         15.2mb
green  open   logstash-2019.06.13 rtjvfGVWSM-5rLvq72jxlA   5   1        971            0      2.4mb          1.1mb

Checking log collection in Kibana

  • Check one: whether logs are correctly categorized by tag

  • Check two: whether logs have the hostname and physics.hostname fields attached

Handling upgrades

As business needs grow, we may add more services later. When that happens, add the new collection rules to the configuration file (configmaps.yaml) and then upgrade:

cd /usr/local/src/helm-charts/charts/fluentd-elasticsearch
helm upgrade fluentd-elasticsearch .

Notes on some problems and their fixes

  • failed to read data from plugin storage file path="/var/log/kernel.pos/worker0/storage.json" error_class=Fluent::ConfigError error="Invalid contents (not object) in plugin storage file: '/var/log/kernel.pos/worker0/storage.json'"
    Fix: delete the corrupted storage file and let fluentd recreate it:
rm /var/log/kernel.pos/worker0/storage.json

Reference links

https://github.com/helm/charts/tree/master/stable/kibana#configuration
