EFK (Elasticsearch + Fluentd + Kibana) is the logging stack recommended in the Kubernetes documentation. Let's take a look at how fluentd collects logs from a Kubernetes cluster, and celebrate fluentd's graduation from the CNCF while we're at it. Before starting, it helps to have read Analyzing Docker Container Logs; this article is the second in that series.
Note: this stack should not be confused with ELK (Elasticsearch + Logstash + Kibana) or the other EFK (Elasticsearch + Filebeat + Kibana); that latter EFK is usually deployed outside Kubernetes, directly on hosts.
CNCF stands for Cloud Native Computing Foundation. Kubernetes is one of its projects; in fact, most container and cloud-native projects are hosted under it.
The yaml files for deploying EFK in Kubernetes live at github.com/kubernetes/… ; you can download them with the script in the appendix of this article.
Once the download completes, deploy with `cd fluentd-elasticsearch && kubectl apply -f .`.
Check the elasticsearch and kibana Services:

```shell
$ kubectl get svc -n kube-system
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
elasticsearch-logging   NodePort    10.97.248.209    <none>        9200:32126/TCP   23d
kibana-logging          ClusterIP   10.103.126.183   <none>        5601/TCP         23d
```
Check the fluentd DaemonSet:

```shell
$ kubectl get ds -n kube-system
NAME                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-es-v2.4.0   2         2         2       2            2           <none>          23d
```
From this we learn that fluentd runs as a DaemonSet (one pod per node), while elasticsearch and kibana run behind Services.
Note: the default elasticsearch manifest has no persistence. If you need the data to survive pod restarts, adjust its PVC settings.
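A sketch of one way to do that, assuming the upstream es-statefulset.yaml and swapping its data volume for a volumeClaimTemplate; the storage class and size below are placeholders you would adapt to your cluster:

```yaml
# Hypothetical volumeClaimTemplates stanza for the elasticsearch
# StatefulSet. The claim name must match the data volumeMount name in
# the manifest you deploy; storageClassName and storage size are
# assumptions, not values from the upstream file.
volumeClaimTemplates:
- metadata:
    name: elasticsearch-logging
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: standard   # placeholder: use your cluster's class
    resources:
      requests:
        storage: 20Gi            # placeholder: size to taste
```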
Look at fluentd's kind; nothing surprising here:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.2.1
  namespace: kube-system
```
Look at how fluentd mounts the log directories:

```yaml
containers:
- name: fluentd-es
  image: k8s.gcr.io/fluentd-elasticsearch:v2.2.0
  ...
  volumeMounts:
  - name: varlog
    mountPath: /var/log
  - name: varlibdockercontainers
    mountPath: /var/lib/docker/containers
    readOnly: true
  - name: config-volume
    mountPath: /etc/fluent/config.d
...
volumes:
- name: varlog
  hostPath:
    path: /var/log
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers
- name: config-volume
  configMap:
    name: fluentd-es-config-v0.1.6
```
Here you can see clearly that fluentd, running as a DaemonSet, mounts the host's /var/lib/docker/containers directory. We covered this directory in Analyzing Docker Container Logs: it is where Docker stores container log files, so mounting it gives fluentd read access to the default container logs.
fluentd's own configuration is loaded from a ConfigMap; let's keep reading.
Configuring container log collection
Container log collection is configured mainly in containers.input.conf, as follows:
```
<source>
  @id fluentd-containers.log
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  tag raw.kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>
```
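The multi_format parser tries JSON first (Docker's json-file driver output) and falls back to the regex for non-JSON lines. To see what that fallback pattern captures, here is a rough shell sketch applying an approximation of the regex to a sample line; the sample line and the simplified `[^ ]+` time group are my own, and the real parsing is of course done by fluentd, not sed:

```shell
# Apply (an approximation of) the fallback regex to a sample log line:
#   time  stream  <flag>  log
line='2019-02-20T09:00:00.000000000Z stdout F hello world'
re='^([^ ]+) (stdout|stderr) [^ ]* (.*)$'
time=$(printf '%s\n' "$line" | sed -E "s/$re/\1/")
stream=$(printf '%s\n' "$line" | sed -E "s/$re/\2/")
log=$(printf '%s\n' "$line" | sed -E "s/$re/\3/")
echo "time=$time stream=$stream log=$log"
# → time=2019-02-20T09:00:00.000000000Z stream=stdout log=hello world
```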
If you are paying attention you will have noticed that the mounted container directory is /var/lib/docker/containers, where the logs actually live, yet the path being tailed is /var/log/containers. The official manifest thoughtfully explains this in a comment, the gist of which is:
```
# Example
# =======
# ...
#
# The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
# record & add labels to the log record if properly configured. This enables users
# to filter & search logs on any metadata.
# For example a Docker container's logs might be in the directory:
#
#  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
#
# and in the file:
#
#  997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
#
# where 997599971ee6... is the Docker ID of the running container.
# The Kubernetes kubelet makes a symbolic link to this file on the host machine
# in the /var/log/containers directory which includes the pod name and the Kubernetes
# container name:
#
#    synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
#    ->
#    /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
#
# The /var/log directory on the host is mapped to the /var/log directory in the container
# running this instance of Fluentd and we end up collecting the file:
#
#   /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
#
```
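The scheme in that comment can be reproduced in miniature with a temp directory standing in for the host filesystem. The container ID below is the one from the comment; the pod and container names are invented for illustration:

```shell
# Simulate the kubelet's symlink scheme under a temp dir.
root=$(mktemp -d)
id=997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
mkdir -p "$root/var/lib/docker/containers/$id" "$root/var/log/containers"

# Docker's json-file driver writes one JSON object per log line:
printf '%s\n' '{"log":"hello\n","stream":"stdout","time":"2019-02-20T09:00:00Z"}' \
  > "$root/var/lib/docker/containers/$id/$id-json.log"

# The kubelet links <pod>_<namespace>_<container>-<id>.log to the real file:
ln -s "$root/var/lib/docker/containers/$id/$id-json.log" \
  "$root/var/log/containers/mypod_default_app-$id.log"

# Tailing the symlink reads the underlying file -- which is why fluentd
# must also mount /var/lib/docker/containers (read-only):
cat "$root/var/log/containers/mypod_default_app-$id.log"
```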
Uploading logs to elasticsearch
```
output.conf: |-
  <match **>
    @id elasticsearch
    @type elasticsearch
    @log_level info
    type_name _doc
    include_tag_key true
    host elasticsearch-logging
    port 9200
    logstash_format true
    <buffer>
      @type file
      path /var/log/fluentd-buffers/kubernetes.system.buffer
      flush_mode interval
      retry_type exponential_backoff
      flush_thread_count 2
      flush_interval 5s
      retry_forever
      retry_max_interval 30
      chunk_limit_size 2M
      queue_limit_length 8
      overflow_action block
    </buffer>
  </match>
```
Pay attention to the host and port here: both come from the elasticsearch Service definition, so if you change the Service they must be kept in sync. fluentd can also ship logs to an external elasticsearch cluster, i.e. the host-deployed ELK/EFK setup mentioned at the top.
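With `logstash_format true`, the elasticsearch output plugin writes to daily indices named `logstash-YYYY.MM.DD` (UTC dates by default). A quick sketch of today's index name, plus a hypothetical curl check through the 32126 NodePort from the Service listing above (`<node-ip>` is a placeholder for one of your nodes):

```shell
# Index the plugin will write to today: the default "logstash" prefix
# plus a UTC %Y.%m.%d date suffix.
index="logstash-$(date -u +%Y.%m.%d)"
echo "$index"

# Hypothetical check that documents are arriving, via the NodePort:
# curl "http://<node-ip>:32126/$index/_count?pretty"
```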
Appendix: the download script download.sh

```shell
for file in es-service es-statefulset fluentd-es-configmap fluentd-es-ds kibana-deployment kibana-service; do
  curl -o $file.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/$file.yaml
done
```