References:
https://yq.aliyun.com/articles/679721
https://www.cnblogs.com/keithtt/p/6410249.html
https://github.com/kiwigrid/helm-charts/tree/master/charts/fluentd-elasticsearch
https://github.com/kubernetes/kubernetes/tree/5d9d5bca796774a2c12d4e4443e684b619cda7ee/cluster/addons/fluentd-elasticsearch
Kubernetes logging breaks down into several kinds. For Kubernetes itself there are three:
1. Events generated while resources run. For example, after creating a pod in a k8s cluster you can view its details, including events, with kubectl describe pod.
2. Logs produced by the applications running inside containers, such as Tomcat, Nginx, or PHP logs, e.g. kubectl logs redis-master-bobr0. This is the part covered by the official documentation and most articles.
3. Logs of the Kubernetes components themselves, e.g. systemctl status kubelet.

Container logs are usually collected in one of the following ways:
1. Collect outside the container: mount a host directory as the container's log directory and collect the logs on the host.
2. Collect inside the container: run a log collection agent in the background inside the container.
3. Run a dedicated log container: a separate container exposes a shared log volume, and logs are collected from that container.
4. Collect over the network: the application in the container sends its logs directly to a log service, e.g. a Java program can use Log4j 2 to format logs and ship them to a remote endpoint.
5. Change Docker's --log-driver: different drivers send logs to different destinations; set log-driver to syslog, fluentd, splunk, or another log service and the logs are forwarded to the remote end (a minimal example follows this list).
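As a minimal sketch of option 5 (not taken from the steps below, and assuming a Fluentd forwarder is already listening on 127.0.0.1:24224), Docker can be pointed at the fluentd log driver per container:

# Hedged example of option 5: run a container with the fluentd log driver.
# Assumes a Fluentd forwarder is already listening on 127.0.0.1:24224.
docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=127.0.0.1:24224 \
  --log-opt tag="docker.{{.Name}}" \
  nginx

The same settings can be made cluster-wide in the Docker daemon configuration instead of per container.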
Fluentd is deployed as a DaemonSet, which spawns a pod on each node that reads logs generated by the kubelet, the container runtime, and the containers, and sends them to Elasticsearch.
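For orientation, a stripped-down DaemonSet roughly equivalent to what the chart renders might look like the sketch below. This is not the chart's actual template (which also sets up RBAC, probes, and the Fluentd ConfigMaps); the image is the one configured later in values.yaml.

# Hedged sketch only: the real manifest is generated from the chart's templates/ directory.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: default
spec:
  selector:
    matchLabels:
      app: fluentd-elasticsearch
  template:
    metadata:
      labels:
        app: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: registry.cn-beijing.aliyuncs.com/minminmsn/fluentd-elasticsearch:v2.5.2
        volumeMounts:
        # host log directories that Fluentd tails
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers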
1. Download the chart
[root@elasticsearch01 yaml]# git clone https://github.com/kiwigrid/helm-charts
Cloning into 'helm-charts'...
remote: Enumerating objects: 33, done.
remote: Counting objects: 100% (33/33), done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 1062 (delta 13), reused 25 (delta 10), pack-reused 1029
Receiving objects: 100% (1062/1062), 248.83 KiB | 139.00 KiB/s, done.
Resolving deltas: 100% (667/667), done.
[root@elasticsearch01 yaml]# cd helm-charts/fluentd-elasticsearch
[root@elasticsearch01 fluentd-elasticsearch]# ls
Chart.yaml  OWNERS  README.md  templates  values.yaml
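If you prefer not to clone the whole git repository, the chart can also be fetched from the kiwigrid chart repository. The repo URL below is the one the project published at the time of writing and should be verified for your setup (Helm 2 syntax, as used in the rest of this article):

# Hedged alternative to cloning the git repo
helm repo add kiwigrid https://kiwigrid.github.io
helm repo update
helm fetch kiwigrid/fluentd-elasticsearch --untar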
2. Edit the values.yaml configuration
The main changes are the fluentd image repository, the Elasticsearch address, and the index prefix.
[root@elasticsearch01 fluentd-elasticsearch]# cat values.yaml |grep -Ev "^#|^$"
image:
  repository: registry.cn-beijing.aliyuncs.com/minminmsn/fluentd-elasticsearch
  tag: v2.5.2
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistrKeySecretName
awsSigningSidecar:
  enabled: false
  image:
    repository: abutaha/aws-es-proxy
    tag: 0.9
priorityClassName: ""
hostLogDir:
  varLog: /var/log
  dockerContainers: /var/lib/docker/containers
  libSystemdDir: /usr/lib64
resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 500Mi
  # requests:
  #   cpu: 100m
  #   memory: 200Mi
elasticsearch:
  auth:
    enabled: false
    user: "yourUser"
    password: "yourPass"
  buffer_chunk_limit: 2M
  buffer_queue_limit: 8
  host: '10.2.8.44'
  logstash_prefix: 'logstash'
  port: 9200
  scheme: 'http'
  ssl_version: TLSv1_2
fluentdArgs: "--no-supervisor -q"
env:
  # OUTPUT_USER: my_user
  # LIVENESS_THRESHOLD_SECONDS: 300
  # STUCK_THRESHOLD_SECONDS: 900
secret:
rbac:
  create: true
serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
podSecurityPolicy:
  enabled: false
  annotations: {}
    ## Specify pod annotations
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
    ##
    # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
    # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
livenessProbe:
  enabled: true
annotations: {}
podAnnotations: {}
  # prometheus.io/scrape: "true"
  # prometheus.io/port: "24231"
updateStrategy:
  type: RollingUpdate
tolerations: {}
  # - key: node-role.kubernetes.io/master
  #   operator: Exists
  #   effect: NoSchedule
affinity: {}
  # nodeAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     nodeSelectorTerms:
  #     - matchExpressions:
  #       - key: node-role.kubernetes.io/master
  #         operator: DoesNotExist
nodeSelector: {}
service: {}
  # type: ClusterIP
  # ports:
  #   - name: "monitor-agent"
  #     port: 24231
serviceMonitor:
  ## If true, a ServiceMonitor CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: false
  interval: 10s
  path: /metrics
  labels: {}
prometheusRule:
  ## If true, a PrometheusRule CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: false
  prometheusNamespace: monitoring
  labels: {}
  # role: alert-rules
configMaps:
  useDefaults:
    systemConf: true
    containersInputConf: true
    systemInputConf: true
    forwardInputConf: true
    monitoringConf: true
    outputConf: true
extraConfigMaps:
  # system.conf: |-
  #   <system>
  #     root_dir /tmp/fluentd-buffers/
  #   </system>
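Instead of editing values.yaml in place, the same fields can be overridden at install time with --set. This is only a sketch using the keys shown above:

# Hedged alternative: keep the stock values.yaml and override only the fields that differ
helm install . \
  --set image.repository=registry.cn-beijing.aliyuncs.com/minminmsn/fluentd-elasticsearch \
  --set image.tag=v2.5.2 \
  --set elasticsearch.host=10.2.8.44 \
  --set elasticsearch.port=9200 \
  --set elasticsearch.logstash_prefix=logstash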
3. Install Fluentd with Helm
[root@elasticsearch01 fluentd-elasticsearch]# helm install .
NAME:   sanguine-dragonfly
LAST DEPLOYED: Thu Jun 6 16:07:55 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                                      SECRETS  AGE
sanguine-dragonfly-fluentd-elasticsearch  0        0s

==> v1/ClusterRole
NAME                                      AGE
sanguine-dragonfly-fluentd-elasticsearch  0s

==> v1/ClusterRoleBinding
NAME                                      AGE
sanguine-dragonfly-fluentd-elasticsearch  0s

==> v1/DaemonSet
NAME                                      DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
sanguine-dragonfly-fluentd-elasticsearch  0        0        0      0           0          <none>         0s

==> v1/ConfigMap
NAME                                      DATA  AGE
sanguine-dragonfly-fluentd-elasticsearch  6     0s

NOTES:
1. To verify that Fluentd has started, run:

  kubectl --namespace=default get pods -l "app.kubernetes.io/name=fluentd-elasticsearch,app.kubernetes.io/instance=sanguine-dragonfly"

THIS APPLICATION CAPTURES ALL CONSOLE OUTPUT AND FORWARDS IT TO elasticsearch . Anything that might be identifying,
including things like IP addresses, container images, and object names will NOT be anonymized.
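Helm 2 generated the random release name sanguine-dragonfly above. As a variant (a sketch, not what was actually run here), the release can be given a fixed name and its own namespace and then inspected:

# Hedged variant: pin the release name and namespace instead of using a generated name
helm install . --name fluentd-elasticsearch --namespace logging
helm list | grep fluentd-elasticsearch
helm status fluentd-elasticsearch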
4. Verify the installation
[root@elasticsearch01 fluentd-elasticsearch]# kubectl get pods |grep flu
sanguine-dragonfly-fluentd-elasticsearch-hrxbp   1/1     Running   0          26m
sanguine-dragonfly-fluentd-elasticsearch-jcznt   1/1     Running   0          26m
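A couple of additional checks can be useful; the pod name below is the one from this cluster, so substitute your own:

# Check the DaemonSet and look at a Fluentd pod's own log output
kubectl get daemonset | grep fluentd-elasticsearch
kubectl logs sanguine-dragonfly-fluentd-elasticsearch-hrxbp | tail -n 20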
1. Elasticsearch
Elasticsearch will now contain indices named like logstash-2019.06.06, one per day by default; the logstash prefix is the logstash_prefix set in values.yaml.
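To confirm the indices exist, query Elasticsearch directly at the host configured in values.yaml (10.2.8.44:9200):

# List the daily logstash-* indices created by Fluentd
curl -s 'http://10.2.8.44:9200/_cat/indices?v' | grep logstash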
2. Kibana
In Kibana, go to Management -- Create Index Pattern, enter logstash-2019*, then browse the logs in Discover.
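The index pattern can also be created through Kibana's saved objects API instead of the UI. This is a hedged example that assumes Kibana 6.x or later listening on port 5601 of the same host (both assumptions; adjust for your deployment):

# Create a logstash-* index pattern via the Kibana saved objects API
curl -s -X POST 'http://10.2.8.44:5601/api/saved_objects/index-pattern' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"attributes": {"title": "logstash-*", "timeFieldName": "@timestamp"}}'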