Kubernetes Autoscaling in All Scenarios (Part 3): An HPA Hands-On Guide

 

The previous article introduced and analyzed how HPA is implemented and how its design has evolved. This article explains how to use HPA in practice, along with some details you need to watch out for.

 

Hands-on with autoscaling/v1

The v1 template is probably the one you see most often, and it is also the simplest: the v1 version of HPA supports only a single metric, CPU. Traditionally, an autoscaler supports at least CPU and memory, so why does Kubernetes expose only CPU? In fact, the earliest HPA was planned to support both, but development and testing showed that memory is not a good signal for scaling decisions. Unlike CPU, many memory-heavy applications do not release memory quickly just because HPA launches new containers; their memory is handed to a language-level VM, which means reclamation is decided by the VM's garbage collector. Differences in GC timing could cause the HPA to oscillate at inopportune moments, so the v1 version supports only the CPU metric.

 

A standard v1 template looks roughly like this:

 

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

 

 

Here scaleTargetRef identifies the object being scaled; in this example it is an apps/v1 Deployment. targetCPUUtilizationPercentage: 50 means that a scale-out is triggered when overall CPU utilization exceeds 50%. Next, let's walk through a simple demo.
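The decision behind targetCPUUtilizationPercentage follows the HPA controller's well-known rule: desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), with a tolerance band (0.1 by default) to avoid flapping. A minimal sketch of that rule (the function name and default tolerance here are illustrative, not Kubernetes source code):

```python
import math

def desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct, tolerance=0.1):
    """Sketch of the HPA scaling rule: scale by the usage ratio,
    but make no change while the ratio stays inside the tolerance band."""
    ratio = current_cpu_pct / target_cpu_pct
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to target: no change
    return math.ceil(current_replicas * ratio)

# 2 replicas at 100% CPU against a 50% target -> scale out to 4
print(desired_replicas(2, 100, 50))
# 4 replicas at 52% CPU -> ratio 1.04, inside tolerance -> stay at 4
print(desired_replicas(4, 52, 50))
```

In the real controller the result is additionally clamped to the [minReplicas, maxReplicas] range from the HPA spec.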

 

  1. Log in to the Container Service console and create an application deployment from a template. The template content is as follows:

 

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: php-apache
  labels:
    app: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php-apache
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: registry.cn-hangzhou.aliyuncs.com/ringtail/hpa-example:v1.0
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "300Mi"
            cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    app: php-apache
spec:
  selector:
    app: php-apache
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80
  type: ClusterIP

 

 

  2. Deploy the HPA template

 

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

 

  3. Start the load test

 

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: load-generator
  labels:
    app: load-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: load-generator
  template:
    metadata:
      labels:
        app: load-generator
    spec:
      containers:
      - name: load-generator
        image: busybox
        command:
          - "sh"
          - "-c"
          - "while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done"

 

 

  4. Check the scale-out status


 

  5. Stop the load-test application


 

  6. Check the scale-in status


 

That completes an HPA based on autoscaling/v1. This version is currently the simplest to use: it works whether or not you have upgraded to Metrics Server.

 

Hands-on with autoscaling/v2beta1

As mentioned earlier, HPA also has two newer versions: autoscaling/v2beta1 and autoscaling/v2beta2. The difference between them is that autoscaling/v2beta1 supports Resource Metrics and Custom Metrics, while autoscaling/v2beta2 additionally supports External Metrics. External Metrics are not covered further in this article, because there are few mature implementations of them in the community; the most mature path today is Custom Metrics backed by Prometheus.

 

 

The figure above shows how HPA consumes the different metric types once Metrics Server is enabled. To use Custom Metrics, you also need to install and configure a corresponding Custom Metrics Adapter. The rest of this article walks through an example of scaling based on QPS.
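Each metric class is served to HPA through its own aggregated API group: metrics.k8s.io for Resource Metrics (Metrics Server), custom.metrics.k8s.io for Custom Metrics (an adapter), and external.metrics.k8s.io for External Metrics. A small sketch of the discovery paths the controller queries (the helper function is illustrative):

```python
API_GROUPS = {
    "resource": "metrics.k8s.io",         # served by Metrics Server
    "custom":   "custom.metrics.k8s.io",  # served by a Custom Metrics Adapter
    "external": "external.metrics.k8s.io",
}

def metric_api_path(kind, version="v1beta1"):
    """Build the aggregated-API root path for a given metric class."""
    return f"/apis/{API_GROUPS[kind]}/{version}"

print(metric_api_path("custom"))  # /apis/custom.metrics.k8s.io/v1beta1
```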

 

  1. Install Metrics Server and enable it in kube-controller-manager

 

At the time of writing, Alibaba Cloud Container Service Kubernetes clusters still use Heapster by default; Container Service plans to switch to Metrics Server in 1.12. One point deserves special mention: although the community has begun deprecating Heapster, many components still depend heavily on the Heapster API. Alibaba Cloud therefore made its Metrics Server fully Heapster-compatible, so developers can use the new Metrics Server features without worrying about breaking other components.

 

Before deploying the new Metrics Server, first back up some of Heapster's startup parameters, because they will be reused directly in the Metrics Server template. The two sinks are the ones to care about: keep the first sink if you need InfluxDB, and keep the second sink if you need the cloud-monitoring integration.

 

 

Copy these two parameters into the Metrics Server startup template (this example keeps both) and deploy it:

 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: admin
      containers:
      - name: metrics-server
        image: registry.cn-hangzhou.aliyuncs.com/ringtail/metrics-server:1.1
        imagePullPolicy: Always
        command:
        - /metrics-server
        - '--source=kubernetes:https://kubernetes.default'
        - '--sink=influxdb:http://monitoring-influxdb:8086'
        - '--sink=socket:tcp://monitor.csk.[region_id].aliyuncs.com:8093?clusterId=[cluster_id]&public=true'

 

 

Next, modify the Heapster Service so that its backend points to Metrics Server instead of Heapster.

 

 

If the node page in the console now shows the monitoring information on the right, Metrics Server is fully compatible with Heapster.

 

 

Now run kubectl get apiservice; if the registered v1beta1.metrics.k8s.io API is listed, registration succeeded.

 

 

Next, switch the metrics data source in kube-controller-manager. kube-controller-manager runs on each master as a Static Pod managed by the kubelet, so you only need to edit its manifest and the kubelet will apply the update automatically. On the host, the manifest lives at /etc/kubernetes/manifests/kube-controller-manager.yaml.

 

 

Set --horizontal-pod-autoscaler-use-rest-clients=true. One caveat: if you edit with vim, vim creates a swap file in the same directory that can interfere with the result, so the recommended approach is to move the file to another directory, edit it there, and then move it back. At this point Metrics Server is ready to serve the HPA; next comes the custom metrics part.
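The move-edit-move-back advice can also be scripted: copy the manifest out of the kubelet-watched directory, add the flag there, and then move the edited file back, so the kubelet never observes a half-written file or an editor swap file. A sketch under the assumption that the flag simply needs to be appended to the container's command list (the helper name and replacement heuristic are illustrative):

```python
import os
import shutil
import tempfile

FLAG = "--horizontal-pod-autoscaler-use-rest-clients=true"

def add_flag(manifest_path, flag=FLAG, scratch_dir=None):
    """Move the static-pod manifest out of its watched directory,
    edit it there, then move the edited copy back."""
    scratch_dir = scratch_dir or tempfile.mkdtemp()
    work_copy = os.path.join(scratch_dir, os.path.basename(manifest_path))
    shutil.copy(manifest_path, work_copy)  # work on a copy, not the original
    with open(work_copy) as f:
        text = f.read()
    if flag not in text:
        # assumption: the flag goes right after the command name in the YAML list
        text = text.replace("- kube-controller-manager",
                            "- kube-controller-manager\n    - " + flag, 1)
    with open(work_copy, "w") as f:
        f.write(text)
    shutil.move(work_copy, manifest_path)  # put the edited file back
    return text
```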

 

  2. Deploy the Custom Metrics Adapter

 

If Prometheus is not yet deployed in the cluster, first deploy it by following "Alibaba Cloud Container Kubernetes Monitoring (Part 7): Deploying a Prometheus Monitoring Solution". Then deploy the Custom Metrics Adapter:

 

kind: Namespace
apiVersion: v1
metadata:
  name: custom-metrics
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-metrics-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-resource-reader
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics-apiserver-resource-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: custom-metrics
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-getter
rules:
- apiGroups:
  - custom.metrics.k8s.io
  resources:
  - "*"
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-custom-metrics-getter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-getter
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-metrics-apiserver
  namespace: custom-metrics
  labels:
    app: custom-metrics-apiserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-metrics-apiserver
  template:
    metadata:
      labels:
        app: custom-metrics-apiserver
    spec:
      tolerations:
      - key: beta.kubernetes.io/arch
        value: arm
        effect: NoSchedule
      - key: beta.kubernetes.io/arch
        value: arm64
        effect: NoSchedule
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: custom-metrics-server
        image: luxas/k8s-prometheus-adapter:v0.2.0-beta.0
        args:
        - --prometheus-url=http://prometheus-k8s.monitoring.svc:9090
        - --metrics-relist-interval=30s
        - --rate-interval=60s
        - --v=10
        - --logtostderr=true
        ports:
        - containerPort: 443
        securityContext:
          runAsUser: 0
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: custom-metrics
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: custom-metrics-apiserver
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: api
    namespace: custom-metrics
  version: v1beta1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-server-resources
rules:
- apiGroups:
  - custom-metrics.metrics.k8s.io
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-controller-custom-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-server-resources
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system

 

 

  3. Deploy the sample application and HPA template

 

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sample-metrics-app
  name: sample-metrics-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-metrics-app
  template:
    metadata:
      labels:
        app: sample-metrics-app
    spec:
      tolerations:
      - key: beta.kubernetes.io/arch
        value: arm
        effect: NoSchedule
      - key: beta.kubernetes.io/arch
        value: arm64
        effect: NoSchedule
      - key: node.alpha.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 0
      - key: node.alpha.kubernetes.io/notReady
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 0
      containers:
      - image: luxas/autoscale-demo:v0.1.2
        name: sample-metrics-app
        ports:
        - name: web
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: sample-metrics-app
  labels:
    app: sample-metrics-app
spec:
  ports:
  - name: web
    port: 80
    targetPort: 8080
  selector:
    app: sample-metrics-app
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-metrics-app
  labels:
    service-monitor: sample-metrics-app
spec:
  selector:
    matchLabels:
      app: sample-metrics-app
  endpoints:
  - port: web
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: sample-metrics-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-metrics-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: sample-metrics-app
      metricName: http_requests
      targetValue: 100
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-metrics-app
  namespace: default
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /sample-app
        backend:
          serviceName: sample-metrics-app
          servicePort: 80

 

 

This sample application exposes a Prometheus endpoint whose data looks like the output below. The http_requests_total metric is the custom metric we will scale on.

 

[root@iZwz99zrzfnfq8wllk0dvcZ manifests]# curl 172.16.1.160:8080/metrics
# HELP http_requests_total The amount of requests served by the server in total
# TYPE http_requests_total counter
http_requests_total 3955684
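Note that http_requests_total is a monotonically increasing counter, while the HPA target above (http_requests: 100) is a rate. The adapter (deployed earlier with --rate-interval=60s) derives a per-second rate from successive counter samples, and custom metric values are reported in Kubernetes milli-units (1 unit = 1000m), which is why the HPA status shows values like 538133m. A hedged sketch of that conversion (the function names are illustrative, not the adapter's actual code):

```python
def counter_rate(prev_total, cur_total, interval_seconds):
    """Per-second rate derived from two samples of a monotonic counter."""
    return (cur_total - prev_total) / interval_seconds

def to_milli(value):
    """Format a metric value in Kubernetes milli-units (1 unit = 1000m)."""
    return f"{int(value * 1000)}m"

# two scrapes of http_requests_total taken 60s apart (sample numbers)
rate = counter_rate(3955684, 3987972, 60)
print(to_milli(rate))  # 538133m, i.e. ~538 requests per second
```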

 

 

  4. Deploy the load-test application

 

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: load-generator 
  labels:
    app: load-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: load-generator
  template:
    metadata:
      labels:
        app: load-generator
    spec:
      containers:
      - name: load-generator
        image: busybox 
        command:
          - "sh"
          - "-c"
          - "while true; do wget -q -O- http://sample-metrics-app.default.svc.cluster.local; done"

 

 

  5. Check the HPA status and scaling. After a few minutes, the Pods have scaled out successfully.

 

$ kubectl get hpa
NAME                     REFERENCE                       TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
php-apache               Deployment/php-apache           0%/50%        1         10        1          21d
sample-metrics-app-hpa   Deployment/sample-metrics-app   538133m/100   2         10        10         15h
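The TARGETS column 538133m/100 reads as roughly 538 requests per second against a target of 100. For an Object metric the controller applies the same usage-ratio idea and then clamps the result to the min/max bounds, which is why REPLICAS is pinned at the maximum of 10. A simplified sketch of that logic (not the controller's actual code):

```python
import math

def object_metric_replicas(ready_pods, metric_value, target_value,
                           min_replicas, max_replicas, tolerance=0.1):
    """Usage-ratio scaling for an Object metric, clamped to the HPA bounds."""
    ratio = metric_value / target_value
    if abs(ratio - 1.0) <= tolerance:
        desired = ready_pods            # within tolerance: no change
    else:
        desired = math.ceil(ready_pods * ratio)
    return max(min_replicas, min(max_replicas, desired))

# ~538 req/s against a target of 100 with 2 ready pods -> clamped to max 10
print(object_metric_replicas(2, 538.133, 100, 2, 10))
```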

 

 

Final thoughts

This article aimed to give you an intuitive feel for autoscaling/v1 and autoscaling/v2beta1 and the broad steps for using each. We will not dwell further on autoscaling/v1. Developers who want the Custom Metrics support in autoscaling/v2beta1 may find the overall workflow complex and hard to follow; the next article will explain the details of using Custom Metrics with autoscaling/v2beta1 to help you understand the underlying principles and design.
