For an enterprise, the importance of logs goes without saying; the question is which platform to use for collecting, analyzing, and presenting them, and here are a few reasons for choosing ELK. ELK is a very mature stack whose architecture fits a Kubernetes cluster well: the official documentation itself uses Elasticsearch in its sample, and the Kubernetes release package downloaded from GitHub already ships with the corresponding .yaml files, so the case for using ELK as the log collection solution is quite strong.
For any infrastructure or backend service system, logs are extremely important. Kubernetes, a project inspired by Google's internal container management system Borg, naturally comes with logging support. In "Logging Overview", the official documentation outlines the logging options available at several levels in Kubernetes and gives a reference architecture for cluster-level logging:
Kubernetes also provides a reference implementation:
– Logging backend: Elasticsearch stack (including Kibana)
– Logging agent: fluentd
01
Introduction
Architecture
1. Fluentd is an open-source event and log collection system, used to collect and process the log data on each node.
2. ElasticSearch is an open-source search server based on Lucene. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface (see the example after this list).
3. Kibana is an open-source web UI tool for data visualization; it lets you search, visualize, and analyze the logs efficiently.
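As a quick illustration of that RESTful interface, the sketch below asks Elasticsearch for its cluster health and index list. It assumes the elasticsearch-logging Service created in step 3 below is resolvable (for example when run from a pod inside the cluster), so treat the hostname as an assumption rather than a given.

# Illustrative only: assumes the elasticsearch-logging Service from step 3 is reachable.
curl http://elasticsearch-logging:9200/                        # basic node and cluster info
curl http://elasticsearch-logging:9200/_cluster/health?pretty  # cluster health as JSON
curl http://elasticsearch-logging:9200/_cat/indices?v          # list indices (fluentd typically writes logstash-YYYY.MM.DD)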
02
Workflow
The fluentd agent on each node watches and collects that node's system and container logs, processes them, and sends the resulting records to ElasticSearch. Elasticsearch aggregates the log data from all nodes, and Kibana then provides the web UI for presenting the data.
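To make the flow concrete, the commands below (a sketch, assuming Docker's default json-file logging driver) show the raw material fluentd works with on a node: each container's stdout/stderr ends up as JSON lines under /var/lib/docker/containers, which the DaemonSet in step 2 mounts read-only.

# Run on any node; assumes the default json-file Docker logging driver.
ls /var/lib/docker/containers/*/*-json.log          # one JSON log file per container
tail -n 1 /var/lib/docker/containers/*/*-json.log   # each line looks like {"log":"...","stream":"stdout","time":"..."}
# fluentd tails these files, enriches the records with pod metadata, and forwards them to Elasticsearch.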
03
Installation and setup
1. Make sure the Kubernetes cluster itself is healthy (this is, of course, a prerequisite...).
2. Write fluentd.yaml. To have a fluentd instance running on every node, it is enough to set kind to DaemonSet.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        # mount the host's log directories so fluentd can tail the container logs
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
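Once this manifest has been applied (step 5), a quick way to confirm that fluentd is indeed running on every node is to compare the DaemonSet's counts with the number of nodes; a sketch, assuming the manifest above was saved as fluentd.yaml and applied unchanged:

# Illustrative check; assumes the DaemonSet above was created as-is.
kubectl get daemonset fluentd-elasticsearch -n kube-system             # DESIRED/CURRENT/READY should equal the node count
kubectl get pods -n kube-system -o wide -l name=fluentd-elasticsearch  # one fluentd pod per node
kubectl logs -n kube-system <one-of-the-fluentd-pods> | tail           # confirm it is shipping records without errors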
3. elasticsearch-rc.yaml & elasticsearch-svc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - image: gcr.io/google-containers/elasticsearch:v2.4.1
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
      volumes:
      - name: es-persistent-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging
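After these two objects are created, Elasticsearch can be probed through the API server's service proxy; the path below follows the same proxy form that KIBANA_BASE_URL uses in step 4, and is meant as a sketch rather than the only way in:

# Illustrative check via the API server proxy (kubectl proxy listens on 127.0.0.1:8001 by default).
kubectl proxy &
curl http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cluster/health?pretty
# Expect "status": "green" (or "yellow") once both replicas have joined the cluster.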
4. kibana-rc.yaml & kibana-svc.yaml (note that the Kibana controller here is written as a Deployment)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: gcr.io/google-containers/kibana:v4.6.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-logging:9200"
        - name: "KIBANA_BASE_URL"
          value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
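Once the resources from step 5 are up, Kibana is reachable through the same API server proxy path that KIBANA_BASE_URL points at; a sketch, assuming kubectl proxy on its default port:

# Illustrative: open Kibana through the API server proxy (matches KIBANA_BASE_URL above).
kubectl proxy &
# Then browse to:
#   http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/
# On first use, configure an index pattern such as logstash-* to start exploring the collected logs.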
5. kubectl create -f ****** — organize the files however you like.
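For reference, one possible sequence, using the file names from steps 2–4 above (adapt to your own layout):

# File names follow steps 2-4 above; adjust as needed.
kubectl create -f fluentd.yaml
kubectl create -f elasticsearch-rc.yaml -f elasticsearch-svc.yaml
kubectl create -f kibana-rc.yaml -f kibana-svc.yaml
kubectl get pods -n kube-system     # wait until everything is Running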