Collecting Kubernetes Cluster Logs with ELK

1. What logs should be collected?

  • Kubernetes system component logs
  • Logs of the applications deployed in the Kubernetes cluster

 - Standard output
 - Log files (written to a specified file)
 - Log rotation (kept locally for 30 days)
 - Log format (JSON, key-value)

 

If the cluster was deployed with kubeadm, the component logs are collected from /var/log/messages.

If the cluster was deployed from binaries, the logs are at the paths defined in each component's configuration file.
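
As a quick sanity check, the sketch below shows how the component log location might be confirmed on a node; the systemd unit name and service file path are assumptions that depend on how the cluster was installed:

# kubeadm clusters: the kubelet runs as a systemd unit, so its output ends up in the journal/syslog
journalctl -u kubelet --since "30 min ago"
tail -f /var/log/messages

# binary installs: inspect the service file to find the log-related flags and paths
grep -E -- '--log-dir|--logtostderr' /usr/lib/systemd/system/kube-apiserver.service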


Application container logs:

/var/lib/docker/containers/*/*-json.log


The Docker configuration file defines the default logging driver as json-file.

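As an illustration, a minimal /etc/docker/daemon.json that keeps the json-file driver and adds local rotation could look like the sketch below; the size and file-count values are assumptions to be tuned to your retention policy:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}

Note that the daemon must be restarted after the change and the new log options only apply to containers created afterwards.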

 

Pod log path:

/var/lib/kubelet/pods/*/volumes/

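For logs written to a volume (for example the emptyDir used in Option 2 later), the files can also be found on the host under the pod's volumes directory. A hedged example of where such a file might live; the pod UID, volume name and file name here are placeholders:

ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/tomcat-logs/
# e.g. catalina.2019-10-01.log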

2. ELK log collection architecture

(Architecture diagram: Filebeat collects logs on each node and ships them, optionally via Logstash, into Elasticsearch, where Kibana visualizes them.)

 

Logstash is not a required component: when the log processing scenario is more complex, Logstash can be added to do richer preprocessing before the data is written into Elasticsearch.
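
As a hedged sketch of what such preprocessing could look like, the pipeline below assumes Filebeat ships to Logstash on port 5044 and that Elasticsearch is reachable at elasticsearch:9200; the index name is only an example:

input {
  beats {
    port => 5044
  }
}
filter {
  # parse the JSON body produced by the Docker json-file driver
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "k8s-%{+YYYY.MM.dd}"
  }
}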

 

3. How to collect logs from containers

Option 1:

Run a log collection agent on each Node

  • Deploy the log collector as a DaemonSet
  • Collect the logs under the node's /var/lib/kubelet/pods and /var/lib/docker/containers/ directories
  • Mount the Pod containers' log directories onto a unified directory on the host


Option 2:

Attach a dedicated log collection container to the Pod

  • Add a log collection container to every application Pod and share the log directory through an emptyDir volume so the collector can read the logs.


Option 3:

The application pushes its logs directly

  • Outside the scope of Kubernetes


Comparison of the options:

| Option | Pros | Cons |
| --- | --- | --- |
| Option 1: log collection agent on each Node | Only one collector needs to be deployed per Node; low resource consumption; no intrusion into the application | If the application writes its logs to stdout/stderr, multi-line logs are not supported |
| Option 2: dedicated log collection container attached to the Pod | Low coupling | Every Pod starts its own collection agent, increasing resource consumption and operational overhead |
| Option 3: application pushes logs directly | No extra collection tooling needed | Intrusive to the application, increases application complexity |

 

 

4. Deploying EFK on Kubernetes

mkdir efk && cd efk

 

elasticsearch.yaml

 

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: elasticsearch:7.3.2
        name: elasticsearch
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: "discovery.type"
          value: "single-node"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx2g"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch

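Assuming the manifest is saved as elasticsearch.yaml, it can be applied and verified with standard kubectl commands (note that the PVC only binds if a managed-nfs-storage StorageClass actually exists in the cluster):

kubectl apply -f elasticsearch.yaml
kubectl get pods -n kube-system -l k8s-app=elasticsearch
kubectl get pvc -n kube-system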

 

kibana.yaml

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-system
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.3.2
        resources:
          limits:
            cpu: 1
            memory: 500Mi
          requests:
            cpu: 0.5
            memory: 200Mi
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30601
  selector:
    k8s-app: kibana
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-system
spec:
  rules:
  - host: kibana.ctnrs.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601

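Apply the manifest and confirm the NodePort before opening the UI (the filename, as above, is an assumption):

kubectl apply -f kibana.yaml
kubectl get svc kibana -n kube-system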

Access Kibana at http://<node-ip>:30601; the Kibana landing page should load.

Because kibana.yaml sets the data source ELASTICSEARCH_HOSTS to http://elasticsearch:9200, Kibana automatically reads the log data stored in Elasticsearch.

 

filebeat-kubernetes.yaml

 

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.3.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---

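Apply the DaemonSet and check that one Filebeat pod is running per node; reading a pod's log is a quick way to spot shipping errors:

kubectl apply -f filebeat-kubernetes.yaml
kubectl get pods -n kube-system -l k8s-app=filebeat -o wide
kubectl logs -n kube-system -l k8s-app=filebeat --tail=20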

 

Once the Filebeat pods are running, Kibana will receive the logs that Filebeat has processed and stored in Elasticsearch.


Create an index pattern in Kibana (by default Filebeat writes to indices named filebeat-<version>-<date>, so a pattern such as filebeat-* will match them).


Viewing the data


You can adjust the Available fields list in the left sidebar to customize which fields are displayed.


 

5. Option 2: attach a dedicated log collection container to the Pod

k8s-logs.yaml

 

apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
  namespace: kube-system
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: log
      paths:
        - /var/log/messages
      fields:
        app: k8s
        type: module
      fields_under_root: true

    setup.ilm.enabled: false
    setup.template.name: "k8s-module"
    setup.template.pattern: "k8s-module-*"

    output.elasticsearch:
      hosts: ['elasticsearch.kube-system:9200']
      index: "k8s-module-%{+yyyy.MM.dd}"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: k8s-logs
  namespace: kube-system
spec:
  selector:
    matchLabels:
      project: k8s
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: elastic/filebeat:7.3.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 500Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-logs
          mountPath: /var/log/messages
      volumes:
      - name: k8s-logs
        hostPath:
          path: /var/log/messages
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config

 

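Apply it and make sure the DaemonSet pods come up; new k8s-module-* indices should then start appearing in Elasticsearch:

kubectl apply -f k8s-logs.yaml
kubectl get pods -n kube-system -l project=k8s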

View index management in Kibana.


 

Use filters to narrow down the results.


 

Java exception logs span multiple lines, so multi-line matching is required.

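In Filebeat this is handled by a multiline rule. A minimal sketch (the same rule used in the ConfigMap of the demo below), assuming each regular log line starts with a '[' timestamp: lines that do not match the pattern are appended to the previous event, so a whole stack trace stays in one document.

multiline:
  pattern: '^\['
  negate: true
  match: after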

A log collection demo for a Java application:

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-demo
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      project: www
      app: java-demo
  template:
    metadata:
      labels:
        project: www
        app: java-demo
    spec:
      imagePullSecrets:
      - name: "docker-regsitry-auth"
      containers:
      - image: 192.168.31.70/demo/java-demo:v2
        name: java
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        resources:
          requests:
            cpu: 0.5
            memory: 1Gi
          limits:
            cpu: 1
            memory: 2Gi
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 20
        volumeMounts:
        - name: tomcat-logs
          mountPath: /usr/local/tomcat/logs
      - name: filebeat
        image: elastic/filebeat:7.3.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: tomcat-logs
          mountPath: /usr/local/tomcat/logs
      volumes:
      - name: tomcat-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: test
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /usr/local/tomcat/logs/catalina.*
      # tags: ["tomcat"]
      fields:
        app: www
        type: tomcat-catalina
      fields_under_root: true
      multiline:
        pattern: '^\['
        negate: true
        match: after
    setup.ilm.enabled: false
    setup.template.name: "tomcat-catalina"
    setup.template.pattern: "tomcat-catalina-*"
    output.elasticsearch:
      hosts: ['elasticsearch.kube-system:9200']
      index: "tomcat-catalina-%{+yyyy.MM.dd}"

The multi-line matching rule above is configured for Java exception stack traces.

Once the logs are written into ES, configure the matching index pattern (tomcat-catalina-* here).
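
To wire everything up, apply the demo manifest (the filename java-demo.yaml is an assumption) and then create the index pattern in Kibana:

kubectl apply -f java-demo.yaml
kubectl get pods -n test -l app=java-demo
# in Kibana: Management -> Index Patterns -> create pattern "tomcat-catalina-*"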

 

 
