The ELK logging stack should be no stranger to you. Zipkin + Jaeger and Prometheus + Grafana already cover your tracing and metrics-collection needs, but for actually storing logs you still want a dedicated stack. The official solution in Istio is EFK (Fluentd + Elasticsearch + Kibana): Fluentd is an open-source log collector with a pluggable architecture that supports many output targets, Elasticsearch is a popular backend for storing logs, and Kibana is used to view them.
For reference:
喵了個咪's blog: w-blog.cn
Istio official site: https://preliminary.istio.io/zh
Istio Chinese docs: https://preliminary.istio.io/zh/docs/
PS: This walkthrough is built and demonstrated on the current latest Istio release, 1.0.3.
We will run Fluentd, Elasticsearch, and Kibana as a non-production set of Services and Deployments in a new Namespace called logging.
 
> vim logging-stack.yaml

# Logging Namespace. All below are a part of this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    app: elasticsearch
---
# Elasticsearch Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1
        name: elasticsearch
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch
          mountPath: /data
      volumes:
      - name: elasticsearch
        emptyDir: {}
---
# Fluentd Service
apiVersion: v1
kind: Service
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
spec:
  ports:
  - name: fluentd-tcp
    port: 24224
    protocol: TCP
    targetPort: 24224
  - name: fluentd-udp
    port: 24224
    protocol: UDP
    targetPort: 24224
  selector:
    app: fluentd-es
---
# Fluentd Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config-volume
        configMap:
          name: fluentd-es-config
---
# Fluentd ConfigMap, contains config files.
kind: ConfigMap
apiVersion: v1
data:
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      type forward
    </source>
  output.conf: |-
    <match **>
       type elasticsearch
       log_level info
       include_tag_key true
       host elasticsearch
       port 9200
       logstash_format true
       # Set the chunk limits.
       buffer_chunk_limit 2M
       buffer_queue_limit 8
       flush_interval 5s
       # Never wait longer than 5 minutes between retries.
       max_retry_wait 30
       # Disable the limit on the number of retries (retry forever).
       disable_retry_limit
       # Use multiple threads for processing.
       num_threads 2
    </match>
metadata:
  name: fluentd-es-config
  namespace: logging
---
# Kibana Service
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    app: kibana
---
# Kibana Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.1.1
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
Create the resources:
kubectl apply -f logging-stack.yaml
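Before wiring Istio into this stack, it is worth a quick sanity check that everything in the logging namespace actually started (a minimal check, not part of the original walkthrough; exact pod names and startup timing will vary):

kubectl -n logging get pods
# All three pods (elasticsearch-*, fluentd-es-*, kibana-*) should reach Running.
# Elasticsearch can take a minute or two to initialize, since it bursts CPU on startup.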
Now that there is a running Fluentd daemon, configure Istio with a new log type and send those logs to the listening daemon.
Create a new YAML file to hold the configuration for the log stream that Istio will generate and collect automatically.
> vim fluentd-istio.yaml

# Configuration for the logentry instance
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
  name: newlog
  namespace: istio-system
spec:
  severity: '"info"'
  timestamp: request.time
  variables:
    source: source.labels["app"] | source.workload.name | "unknown"
    user: source.user | "unknown"
    destination: destination.labels["app"] | destination.workload.name | "unknown"
    responseCode: response.code | 0
    responseSize: response.size | 0
    latency: response.duration | "0ms"
  monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for the fluentd handler
apiVersion: "config.istio.io/v1alpha2"
kind: fluentd
metadata:
  name: handler
  namespace: istio-system
spec:
  address: "fluentd-es.logging:24224"
---
# Rule to send logentry instances to the fluentd handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: newlogtofluentd
  namespace: istio-system
spec:
  match: "true" # match for all requests
  actions:
   - handler: handler.fluentd
     instances:
     - newlog.logentry
---
PS: The line address: "fluentd-es.logging:24224" in the handler configuration points at the Fluentd daemon in the example stack we set up above.
Apply it:
kubectl apply -f fluentd-istio.yaml
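In Istio 1.0 these Mixer objects are ordinary Kubernetes custom resources, so kubectl can confirm they were accepted (a quick check I'm adding here, assuming a default Istio 1.0 install where the config.istio.io CRDs exist):

kubectl -n istio-system get logentry newlog
kubectl -n istio-system get fluentd handler
kubectl -n istio-system get rule newlogtofluentd
# Each command should list the resource just created; an error here usually
# means the apply above failed or the Mixer CRDs are missing.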
First, visit our Bookinfo sample application a few times to generate some traffic (see the sketch after the command below), then access Kibana the usual way, via port forwarding:
kubectl -n logging port-forward $(kubectl -n logging get pod -l app=kibana -o jsonpath='{.items[0].metadata.name}') 5601:5601
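If Kibana shows no data, drive some requests through the mesh first so Mixer emits newlog entries. A minimal sketch, assuming Bookinfo is exposed through the ingress gateway and GATEWAY_URL is set as in the Bookinfo task:

# Send a burst of requests so Mixer produces newlog entries
for i in $(seq 1 20); do
  curl -s -o /dev/null http://${GATEWAY_URL}/productpage
done

Then open http://localhost:5601 and create an index pattern; because logstash_format is enabled in the Fluentd output config, Fluentd writes daily logstash-* indices, and the newlog entries should show up under Discover.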
PS: It is recommended to deploy Elasticsearch and Kibana separately, outside the cluster, since Elasticsearch has high storage and resource demands.
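One way to move Elasticsearch out of the cluster without touching the Fluentd output config (which resolves the host name elasticsearch) is an ExternalName Service in place of the in-cluster one. A sketch, assuming a hypothetical external endpoint es.example.com listening on 9200:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
spec:
  # CNAME the in-cluster name to the external Elasticsearch host
  # (es.example.com is a placeholder for your real endpoint)
  type: ExternalName
  externalName: es.example.com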