Quickly Deploying an ES Cluster in Kubernetes

Abstract: An ES cluster is a powerful tool for big-data storage, analysis, and fast retrieval. This article briefly describes the ES cluster architecture, provides a sample for quickly deploying an ES cluster in Kubernetes, introduces monitoring and operations tools for ES clusters, shares some troubleshooting experience, and finally summarizes commonly used ES cluster API calls.

This article is shared from the Huawei Cloud community post "Kubernetes中部署ES集羣及運維" (Deploying and Operating an ES Cluster in Kubernetes), original author: minucas.

ES Cluster Architecture:

ES can run in single-node mode or cluster mode. Single-node mode is generally not recommended for production; cluster mode is. Cluster-mode deployments can either have the master and data roles served by the same nodes, or run master nodes and data nodes on separate nodes. Separating master and data nodes gives better reliability. The diagram below shows the ES cluster deployment architecture:
[Figure: ES cluster deployment architecture]

Deploying the ES Cluster on K8s:

1. Deploy with a k8s StatefulSet so ES nodes can be scaled out and in quickly; this example uses 3 master nodes + 12 data nodes.
2. Configure the corresponding domain name and service discovery through a k8s Service, so the cluster nodes can reach each other and be monitored automatically.

kubectl -s http://ip:port create -f es-master.yaml
kubectl -s http://ip:port create -f es-data.yaml
kubectl -s http://ip:port create -f es-service.yaml
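
After the three resources are created, you can roughly verify that the pods come up and register behind the headless service, for example (the label selector below comes from the manifests that follow; use the same apiserver address as above):

# list the master/data pods and confirm they are Running
kubectl -s http://ip:port get pods -l k8s-app=es -o wide
# confirm the StatefulSets and the headless service exist
kubectl -s http://ip:port get statefulset,svc -l k8s-app=es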

es-master.yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    version: v6.2.5
  name: es-master
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: es
      version: v6.2.5
  serviceName: es
  template:
    metadata:
      labels:
        k8s-app: es
        kubernetes.io/cluster-service: "true"
        version: v6.2.5
    spec:
      containers:
      - env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: ELASTICSEARCH_SERVICE_NAME
          value: es
        - name: NODE_MASTER
          value: "true"
        - name: NODE_DATA
          value: "false"
        - name: ES_HEAP_SIZE
          value: 4g
        - name: ES_JAVA_OPTS
          value: -Xmx4g -Xms4g
        - name: cluster.name
          value: es
        image: elasticsearch:v6.2.5
        imagePullPolicy: Always
        name: es
        ports:
        - containerPort: 9200
          hostPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          hostPort: 9300
          name: transport
          protocol: TCP
        resources:
          limits:
            cpu: "6"
            memory: 12Gi
          requests:
            cpu: "4"
            memory: 8Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
        volumeMounts:
        - mountPath: /data
          name: es
      - command:
        - /bin/elasticsearch_exporter
        - -es.uri=http://localhost:9200
        - -es.all=true
        image: elasticsearch_exporter:1.0.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 9108
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: es-exporter
        ports:
        - containerPort: 9108
          hostPort: 9108
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 9108
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 25m
            memory: 64Mi
        securityContext:
          capabilities:
            drop:
            - SETPCAP
            - MKNOD
            - AUDIT_WRITE
            - CHOWN
            - NET_RAW
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - SETGID
            - SETUID
            - NET_BIND_SERVICE
            - SYS_CHROOT
            - SETFCAP
          readOnlyRootFilesystem: true
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - /sbin/sysctl
        - -w
        - vm.max_map_count=262144
        image: alpine:3.6
        imagePullPolicy: IfNotPresent
        name: elasticsearch-logging-init
        resources: {}
        securityContext:
          privileged: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      volumes:
      - hostPath:
          path: /Data/es
          type: DirectoryOrCreate
        name: es

es-data.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    version: v6.2.5
  name: es-data
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  replicas: 12
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: es
      version: v6.2.5
  serviceName: es
  template:
    metadata:
      labels:
        k8s-app: es
        kubernetes.io/cluster-service: "true"
        version: v6.2.5
    spec:
      containers:
      - env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: ELASTICSEARCH_SERVICE_NAME
          value: es
        - name: NODE_MASTER
          value: "false"
        - name: NODE_DATA
          value: "true"
        - name: ES_HEAP_SIZE
          value: 16g
        - name: ES_JAVA_OPTS
          value: -Xmx16g -Xms16g
        - name: cluster.name
          value: es
        image: elasticsearch:v6.2.5
        imagePullPolicy: Always
        name: es
        ports:
        - containerPort: 9200
          hostPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          hostPort: 9300
          name: transport
          protocol: TCP
        resources:
          limits:
            cpu: "8"
            memory: 32Gi
          requests:
            cpu: "7"
            memory: 30Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
        volumeMounts:
        - mountPath: /data
          name: es
      - command:
        - /bin/elasticsearch_exporter
        - -es.uri=http://localhost:9200
        - -es.all=true
        image: elasticsearch_exporter:1.0.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 9108
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: es-exporter
        ports:
        - containerPort: 9108
          hostPort: 9108
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 9108
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 25m
            memory: 64Mi
        securityContext:
          capabilities:
            drop:
            - SETPCAP
            - MKNOD
            - AUDIT_WRITE
            - CHOWN
            - NET_RAW
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - SETGID
            - SETUID
            - NET_BIND_SERVICE
            - SYS_CHROOT
            - SETFCAP
          readOnlyRootFilesystem: true
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - /sbin/sysctl
        - -w
        - vm.max_map_count=262144
        image: alpine:3.6
        imagePullPolicy: IfNotPresent
        name: elasticsearch-logging-init
        resources: {}
        securityContext:
          privileged: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      volumes:
      - hostPath:
          path: /Data/es
          type: DirectoryOrCreate
        name: es

es-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: Elasticsearch
  name: es
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: es
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: exporter
    port: 9108
    protocol: TCP
    targetPort: 9108
  selector:
    k8s-app: es
  sessionAffinity: None
  type: ClusterIP

ES Cluster Monitoring

To do a good job you must first sharpen your tools: operating middleware starts with adequate monitoring. Three tools are commonly used to monitor ES clusters: exporter, ES-head, and kopf. Since this ES cluster is deployed on k8s, many of these capabilities are built on top of k8s.

Grafana Monitoring

Deploy es-exporter on k8s to expose monitoring metrics, have Prometheus scrape them, and build a custom Grafana dashboard for visualization.
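
If you run Prometheus yourself rather than through an operator, a minimal scrape-job sketch against the exporter sidecar might look like the following (the job name is illustrative; it assumes in-cluster Prometheus with endpoints discovery, keeping only the exporter port of the es Service defined above):

scrape_configs:
- job_name: es-exporter            # illustrative name
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  # keep only the 9108 "exporter" port of the headless "es" service
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    regex: es;exporter
    action: keep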

The ES-head Plugin

GitHub: https://github.com/mobz/elast...
ES-head can be installed from the Chrome Web Store; the Chrome extension lets you inspect the state of the ES cluster. [Figure: ES-head cluster view]

The Cerebro (kopf) Tool

GitHub: https://github.com/lmenezes/c...
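Cerebro can also be run inside the same Kubernetes cluster instead of as a local tool. A minimal sketch, assuming the public lmenezes/cerebro image (the Deployment name and labels are made up for illustration; in the Cerebro UI, connect to http://es.default.svc.cluster.local:9200, i.e. the headless service defined above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cerebro            # illustrative name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: cerebro
  template:
    metadata:
      labels:
        k8s-app: cerebro
    spec:
      containers:
      - name: cerebro
        image: lmenezes/cerebro   # public image; assumes it is reachable from your registry/network
        ports:
        - containerPort: 9000     # Cerebro web UI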

ES Cluster Troubleshooting

ES Configuration

Resource configuration: pay attention to ES CPU, memory, heap size, and the Xms/Xmx settings. On a machine with 8 vCPUs and 32 GB of RAM, for example, set the heap (Xms/Xmx) to 50% of memory; the official docs also recommend that a single node not use more than 64 GB of RAM.
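
To verify that the heap really ends up where you expect, the _cat/nodes API can show heap and RAM per node, for example:

GET _cat/nodes?v&h=name,heap.max,heap.percent,ram.max,ram.percent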

Index configuration: ES locates data through its indices and loads the relevant index data into memory to speed up retrieval, so sensible index settings have a large impact on ES performance. We currently create indices by date (indices with small data volumes do not need to be split).
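
With date-based naming you can quickly list (and later drop) a day's or a month's worth of data; for example, using the daily ingress index naming that appears later in this article:

GET _cat/indices/szv-prod-ingress-nginx-2021.05.*?v&s=index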

ES Load

Pay close attention to nodes with high CPU and load. A common cause is uneven shard allocation, in which case you can manually relocate the unbalanced shards, as in the sketch below.
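
A hedged example of such a manual relocation via the reroute API (the index name reuses the daily index example from the delete section below; the shard number and node names are placeholders for your own overloaded and idle nodes):

POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "szv-prod-ingress-nginx-2021.05.01",
        "shard": 3,
        "from_node": "node-hot-1",
        "to_node": "node-idle-2"
      }
    }
  ]
}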

Shard Configuration

The shard count is ideally an integer multiple of the number of data nodes. More shards are not better: size them according to the index's data volume and make sure a single shard does not exceed the heap allocated to a single data node. For example, our largest index holds about 150 GB per day and is split into 24 shards, which works out to roughly 6-7 GB per shard.

A replica count of 1 is recommended; too many replicas easily cause frequent shard relocation and increase cluster load.
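
One way to bake the 24-shard / 1-replica sizing into new daily indices is an index template. A sketch for ES 6.x (the template name and pattern are illustrative, matching the daily ingress index naming used below):

PUT _template/szv-prod-ingress-nginx
{
  "index_patterns": ["szv-prod-ingress-nginx-*"],
  "settings": {
    "number_of_shards": 24,
    "number_of_replicas": 1
  }
}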

Deleting Abnormal Indices

curl -X DELETE "10.64.xxx.xx:9200/szv-prod-ingress-nginx-2021.05.01"

Index names can also be matched with a wildcard for batch deletion, e.g. -2021.05.*, for example:
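
curl -X DELETE "10.64.xxx.xx:9200/szv-prod-ingress-nginx-2021.05.*"

This drops all of that index's daily slices for May 2021 in one call (same host placeholder as above), provided wildcard deletes have not been disabled via action.destructive_requires_name.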

Another Cause of High Node Load

While troubleshooting we found a node whose load stayed high even though its shards had already been moved away. Logging into the node and running top showed very high CPU usage from kubelet; restarting kubelet did not help, and the load only came down after the node itself was rebooted.

Summary of Routine ES Cluster Operations (based on the official docs)

查看集羣健康狀態

The health of an ES cluster is one of three states: Green, Yellow, or Red.

  • Green: the cluster is healthy;
  • Yellow: the cluster is degraded (some replica shards are unassigned), but it can usually recover by rebalancing automatically as long as the load allows;
  • Red: the cluster has a problem and some data is unavailable: at least one primary shard is unassigned.

You can query the cluster's health status and its unassigned shards through the API:

GET _cluster/health
{
  "cluster_name": "camp-es",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 15,
  "number_of_data_nodes": 12,
  "active_primary_shards": 2176,
  "active_shards": 4347,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}

Checking pending tasks:

GET /_cat/pending_tasks
The priority field indicates the priority of each task.

Checking why a shard is unassigned

GET _cluster/allocation/explain
The reason field shows which kind of event left the shard unassigned, and detail gives the detailed explanation.
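
By default the explain API picks an unassigned shard on its own; to ask about a specific shard you can pass it explicitly (the index name and shard number below are placeholders):

GET _cluster/allocation/explain
{
  "index": "szv-prod-ingress-nginx-2021.05.01",
  "shard": 0,
  "primary": true
}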

Viewing all red indices, i.e. indices with unassigned primary shards:

GET /_cat/indices?v&health=red

Checking which shards are abnormal

curl -s http://ip:port/_cat/shards | grep UNASSIGNED
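
To also see why each shard is stuck, you can add the unassigned.reason column supported by the _cat/shards API:

curl -s "http://ip:port/_cat/shards?h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED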

Reallocating a primary shard:

POST _cluster/reroute
{
  "commands": [
    {
      "allocate_stale_primary": {
        "index": "xxx",
        "shard": 1,
        "node": "12345...",
        "accept_data_loss": true
      }
    }
  ]
}

Here node is the ID of the ES cluster node, which can be looked up with curl 'ip:port/_nodes/process?pretty'.

Lowering the number of replicas for an index

PUT /szv_ingress_*/settings
{
  "index": {
    "number_of_replicas": 1
  }
}

Click Follow to be the first to learn about Huawei Cloud's latest technologies~
