CronHPA in Practice: Scheduled Elastic Scaling in Cloud Native

This is day 1 of my participation in the Writing Challenge; for details see: 更文挑戰

1 Background

For most internet companies today, application load shows a clear peak-and-trough pattern with some regularity. A food-ordering application, for example, sees obvious traffic peaks at noon, 6 p.m., and 9 p.m., when its load is highest. With long-term monitoring data and business user profiling, different types of business can predict their peak periods fairly accurately. The gap between peak and trough resource usage is typically 3-4x, and the peak may not ramp up gradually but instead arrive as an instant spike that saturates the load.


In cloud native, containers let us scale dynamically in a more standard, lightweight, and automated way. In Kubernetes, scaling a workload only takes a simple change to its replicas count, and elastic scaling only requires defining an HPA, which adjusts the replica count dynamically based on different metrics. But for spiky business peaks, the delay in starting backend Pods behind the HPA means it clearly cannot serve this scenario well. Yet manually scaling Pods out ahead of time is also unreasonable for such periodic workloads.
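For reference, a standard metric-driven HPA looks roughly like the manifest below. The name, target Deployment, and thresholds are illustrative, not taken from this article's example:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa                    # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment-basic
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # scale out when average CPU passes 70%
```

Because this HPA only reacts after the CPU metric crosses the threshold, it inherits the minute-level latency discussed below.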

2 Solution

2.1 Approach

A standard HPA scales on metric thresholds, most commonly CPU and memory, though custom metrics such as QPS or connection count can also be used. The problem is that resource-based scaling carries a certain latency, made up mainly of: collection latency (minutes) + evaluation latency (minutes) + scaling latency (minutes). For load curves with very sharp peaks like those described above, HPA's minute-level scaling latency may prevent the replica count from changing in time; the application's overall load then soars for a short period and response times slow down. For game workloads in particular, the jitter caused by excessive load gives players a very poor experience.

To address this scenario, Alibaba provides kube-cronhpa-controller, built specifically for workloads whose resource profile is periodic. Based on that periodic pattern, developers can define a time schedule to scale resources up in advance and reclaim them on schedule once the trough arrives. Combined with the node-scaling capability of cluster-autoscaler underneath, this also saves resource cost.
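As a sketch of the idea, a CronHPA for the lunch-peak ordering scenario might look like the manifest below. The field names follow the CronHorizontalPodAutoscaler spec shown later in this article; the resource name, target workload, times, and sizes are hypothetical:

```yaml
apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: CronHorizontalPodAutoscaler
metadata:
  name: lunch-peak-cronhpa          # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-app                 # hypothetical workload
  jobs:
    - name: scale-up-before-lunch
      schedule: "0 30 11 * * *"     # 11:30 every day, before the noon peak
      targetSize: 10
    - name: scale-down-after-lunch
      schedule: "0 0 14 * * *"      # 14:00 every day, after the peak
      targetSize: 3
```

The scale-up job fires before the peak arrives, so the Pods are already running when the spike hits, sidestepping the HPA's reaction latency.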

2.2 CronHPA

CronHPA uses go-cron underneath, so it supports richer scheduling rules, including a seconds field:

  Field name   | Mandatory? | Allowed values  | Allowed special characters
  ----------   | ---------- | --------------  | --------------------------
  Seconds      | Yes        | 0-59            | * / , -
  Minutes      | Yes        | 0-59            | * / , -
  Hours        | Yes        | 0-23            | * / , -
  Day of month | Yes        | 1-31            | * / , - ?
  Month        | Yes        | 1-12 or JAN-DEC | * / , -
  Day of week  | Yes        | 0-6 or SUN-SAT  | * / , - ?
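To make the 6-field format concrete, here is a minimal Python sketch (a hypothetical helper, not part of kubernetes-cronhpa-controller) that splits a go-cron style expression into named fields, so a schedule like `30 */1 * * * *` is easier to read:

```python
# Field order in a go-cron 6-field expression, per the table above.
FIELD_NAMES = ["seconds", "minutes", "hours", "day_of_month", "month", "day_of_week"]

def parse_cron_fields(expr: str) -> dict:
    """Split a 6-field go-cron expression into a name -> pattern mapping."""
    fields = expr.split()
    if len(fields) != len(FIELD_NAMES):
        raise ValueError(f"expected 6 fields, got {len(fields)}: {expr!r}")
    return dict(zip(FIELD_NAMES, fields))

# "30 */1 * * * *" fires at second 30 of every minute of every hour.
print(parse_cron_fields("30 */1 * * * *"))
```

Note the leading seconds field, which a standard 5-field crontab does not have; that is what allows the sub-minute test schedules used later in this article.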

kubernetes-cronhpa-controller is a Kubernetes cron horizontal pod autoscaler controller using a crontab-like scheme. You can use CronHorizontalPodAutoscaler with any kind of object defined in Kubernetes that supports the scale subresource (such as Deployment and StatefulSet).

3 Installation

3.1 Download the resources

git clone https://github.com/AliyunContainerService/kubernetes-cronhpa-controller.git

3.2 Install

  • install CRD
kubectl apply -f config/crds/autoscaling_v1beta1_cronhorizontalpodautoscaler.yaml
  • install RBAC settings
# create ClusterRole 
kubectl apply -f config/rbac/rbac_role.yaml

# create ClusterRolebinding and ServiceAccount 
kubectl apply -f config/rbac/rbac_role_binding.yaml

[root@master ~]# kubectl api-resources |grep cronhpa
cronhorizontalpodautoscalers      cronhpa      autoscaling.alibabacloud.com   true         CronHorizontalPodAutoscaler
  • deploy kubernetes-cronhpa-controller
kubectl apply -f config/deploy/deploy.yaml
  • verify installation
kubectl get deploy kubernetes-cronhpa-controller -n kube-system -o wide 

➜  kubernetes-cronhpa-controller git:(master) ✗ kubectl get deploy kubernetes-cronhpa-controller -n kube-system
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-cronhpa-controller   1         1         1            1           49s

4 Testing

Change into the examples folder.

  • deploy the sample workload and cronhpa
kubectl apply -f examples/deployment_cronhpa.yaml 
  • check the deploy replica count
kubectl get deploy nginx-deployment-basic 

➜  kubernetes-cronhpa-controller git:(master) ✗ kubectl get deploy nginx-deployment-basic
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment-basic   2         2         2            2           9s
  • check the cronhpa status
[root@master kubernetes-cronhpa-controller]# kubectl describe cronhpa cronhpa-sample 
Name:         cronhpa-sample
Namespace:    default
Labels:       controller-tools.k8s.io=1.0
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"autoscaling.alibabacloud.com/v1beta1","kind":"CronHorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"controll...
API Version:  autoscaling.alibabacloud.com/v1beta1
Kind:         CronHorizontalPodAutoscaler
Metadata:
  Creation Timestamp:  2020-07-31T07:27:12Z
  Generation:          5
  Resource Version:    67753389
  Self Link:           /apis/autoscaling.alibabacloud.com/v1beta1/namespaces/default/cronhorizontalpodautoscalers/cronhpa-sample
  UID:                 3be95264-9b08-4efc-b9f9-db1aca7a80d8
Spec:
  Exclude Dates:  <nil>
  Jobs:
    Name:         scale-down
    Run Once:     false
    Schedule:     30 */1 * * * *
    Target Size:  1
    Name:         scale-up
    Run Once:     false
    Schedule:     0 */1 * * * *
    Target Size:  3
  Scale Target Ref:
    API Version:  apps/v1beta2
    Kind:         Deployment
    Name:         nginx-deployment-basic
Status:
  Conditions:
    Job Id:           184b967e-a149-488d-b4ae-765f9a792f93
    Last Probe Time:  2020-07-31T07:28:31Z
    Message:          cron hpa job scale-down executed successfully. current replicas:3, desired replicas:1.
    Name:             scale-down
    Run Once:         false
    Schedule:         30 */1 * * * *
    State:            Succeed
    Target Size:      1
    Job Id:           870b014f-cfbc-4f60-a628-16f2995c288d
    Last Probe Time:  2020-07-31T07:28:01Z
    Message:          cron hpa job scale-up executed successfully. current replicas:1, desired replicas:3.
    Name:             scale-up
    Run Once:         false
    Schedule:         0 */1 * * * *
    State:            Succeed
    Target Size:      3
  Exclude Dates:      <nil>
  Scale Target Ref:
    API Version:  apps/v1beta2
    Kind:         Deployment
    Name:         nginx-deployment-basic
Events:
  Type    Reason   Age   From                            Message
  ----    ------   ----  ----                            -------
  Normal  Succeed  87s   cron-horizontal-pod-autoscaler  cron hpa job scale-down executed successfully. current replicas:2, desired replicas:1.
  Normal  Succeed  57s   cron-horizontal-pod-autoscaler  cron hpa job scale-up executed successfully. current replicas:1, desired replicas:3.
  Normal  Succeed  27s   cron-horizontal-pod-autoscaler  cron hpa job scale-down executed successfully. current replicas:3, desired replicas:1.

In this example, the schedule scales up to 3 Pods at second 0 of every minute and down to 1 Pod at second 30 of every minute. If everything runs correctly, we can see the replica count change twice within 30 seconds. Watching the Pods being added and removed shows how easily we can control the Pod count on a schedule, making the most efficient use of resources for periodic workloads.
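The expected replica count over any given minute can be sketched as a tiny Python function (a hypothetical illustration of the two jobs in the example, not controller code): scale-up `0 */1 * * * *` targets 3 replicas, scale-down `30 */1 * * * *` targets 1 replica.

```python
def desired_replicas(second: int) -> int:
    """Target replica count at `second` (0-59) within any minute.

    scale-up fires at second 0 (target 3); scale-down fires at second 30 (target 1).
    """
    if not 0 <= second <= 59:
        raise ValueError("second must be in 0-59")
    return 3 if second < 30 else 1

print(desired_replicas(10))  # first half of the minute: scaled up
print(desired_replicas(45))  # second half of the minute: scaled down
```

This matches the events shown in the describe output above, where the replica count alternates between 3 and 1 every 30 seconds.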

5 Reflections

For stateless applications, HPA or CronHPA can handle horizontal scaling pressure. For stateful applications, the Vertical Pod Autoscaler (VPA) can be used instead: based on the business's user profile, the capacity water line can be estimated precisely, meeting business demand automatically at minimal cost.
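As a rough sketch of that VPA option (the resource name and target StatefulSet are hypothetical; the VPA controller must be installed in the cluster):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: stateful-app-vpa        # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: stateful-app          # hypothetical workload
  updatePolicy:
    updateMode: "Auto"          # let VPA apply its CPU/memory recommendations
```

Unlike HPA and CronHPA, which change the replica count, VPA adjusts each Pod's resource requests, which suits stateful workloads that cannot simply be replicated.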
