The rise of container technology has made software delivery and operations more standardized, lightweight, and automated. As a result, adjusting a workload's capacity has become remarkably simple: in Kubernetes, it usually takes nothing more than changing the corresponding replicas count. With capacity adjustment this easy, let's take another look at application resource profiles. For most online internet applications, the distribution of load peaks and troughs follows a fairly regular pattern. The figure below shows the load curve of a typical web application: starting around 8 a.m., load climbs sharply; between 12 p.m. and 2 p.m. it falls back; from 2 p.m. to 6 p.m. a second peak arrives; and after 6 p.m. the load gradually drops to its lowest point.
The capacity gap between peak and trough is roughly 3-4x, and the low-load period lasts about 8 hours. With purely static capacity planning for application management and deployment, the resource waste ratio works out to 25% (calculation: 1 − (1×8 + 4×16)/(4×24) = 0.25). When the peak-to-trough gap grows to 10x, the waste ratio climbs to 30% (calculation: 1 − (1×8 + 10×16)/(10×24) = 0.30).
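To make the arithmetic explicit, the waste ratio under static provisioning can be written generally as follows (a simple two-level model; $C_{low}$ and $C_{peak}$ are trough and peak capacity, $t_{low}$ and $t_{peak}$ their daily durations in hours):

$$\mathrm{waste} \;=\; 1 - \frac{C_{low}\,t_{low} + C_{peak}\,t_{peak}}{C_{peak}\times 24}$$

Plugging in $C_{low}=1$, $C_{peak}=4$, $t_{low}=8$, $t_{peak}=16$ gives $1 - 72/96 = 0.25$; raising $C_{peak}$ to 10 gives $1 - 168/240 = 0.30$.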
Facing this much waste, can elasticity solve the problem? The standard HPA scales based on metric thresholds; the common metrics are CPU and memory, though custom metrics such as QPS or connection count can also drive scaling. The problem is that metric-based scaling carries a certain latency, composed mainly of: collection latency (minutes) + evaluation latency (minutes) + scaling latency (minutes). As the figure above shows, the load spikes can be very sharp, so HPA's minute-level scaling latency may mean the replica count cannot change in time; for a short window the application's overall load spikes and response times degrade. For game workloads in particular, the jitter caused by excessive load makes for a very poor player experience.
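For reference, a minimal threshold-based HPA looks like the following (a sketch using the autoscaling/v1 API; the resource name and the 70% CPU target are illustrative, and the Deployment name is reused from the demo later in this article):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa          # hypothetical name, for illustration only
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta2
    kind: Deployment
    name: nginx-deployment-basic
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out when average CPU exceeds 70%
```

Every step in this loop is reactive: metrics must be collected, evaluated against the threshold, and only then is the scale operation issued, which is exactly where the minute-level latency comes from.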
To address this scenario, Alibaba Cloud Container Service provides kubernetes-cronhpa-controller, built specifically for workloads whose resource profile is periodic. Based on the periodic pattern in the resource profile, developers can define a time schedule that scales capacity up ahead of the peak and reclaims it on schedule once the trough arrives. Combined with the node-scaling capability of cluster-autoscaler underneath, this translates directly into resource cost savings.
cronhpa is a controller built on a CRD. Using cronhpa is very simple, and its overall usage is kept as consistent with the HPA as possible. The code repository is at https://github.com/AliyunContainerService/kubernetes-cronhpa-controller. Installation takes a few steps. First, install the CRD:
```bash
kubectl apply -f config/crds/autoscaling_v1beta1_cronhorizontalpodautoscaler.yaml
```
Next, set up the RBAC permissions:

```bash
# create ClusterRole
kubectl apply -f config/rbac/rbac_role.yaml

# create ClusterRoleBinding and ServiceAccount
kubectl apply -f config/rbac/rbac_role_binding.yaml
```
Then deploy kubernetes-cronhpa-controller:

```bash
kubectl apply -f config/deploy/deploy.yaml
```
Finally, verify the installation status of kubernetes-cronhpa-controller:

```
kubectl get deploy kubernetes-cronhpa-controller -n kube-system -o wide

kubernetes-cronhpa-controller git:(master) kubectl get deploy kubernetes-cronhpa-controller -n kube-system
NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-cronhpa-controller   1         1         1            1           49s
```
With kubernetes-cronhpa-controller installed, we can verify its functionality with a simple demo. Before deploying it, let's first look at the definition of a standard cronhpa resource:
```yaml
apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: CronHorizontalPodAutoscaler
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: cronhpa-sample
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta2
    kind: Deployment
    name: nginx-deployment-basic
  jobs:
  - name: "scale-down"
    schedule: "30 */1 * * * *"
    targetSize: 1
  - name: "scale-up"
    schedule: "0 */1 * * * *"
    targetSize: 3
```
The scaleTargetRef field describes the object to be scaled, while jobs defines the scheduled scaling tasks in an extended crontab format. In this example, the workload scales up to 3 pods at second 0 of every minute and back down to 1 pod at second 30 of every minute, so if everything runs correctly we should see the replica count change twice within 30 seconds. The layout of the schedule expression is sketched below.
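Note that the schedule here has six fields rather than the five of a standard crontab; judging from the examples above, the extra leading field is seconds. A sketch of the assumed layout:

```
# Assumed field layout, inferred from the example schedules:
#  ┌───────── second (0-59)
#  │ ┌─────── minute (0-59)
#  │ │ ┌───── hour (0-23)
#  │ │ │ ┌─── day of month (1-31)
#  │ │ │ │ ┌─ month (1-12)
#  │ │ │ │ │ ┌ day of week (0-6)
#  0 */1 * * * *    # second 0 of every minute  -> scale up to 3
# 30 */1 * * * *    # second 30 of every minute -> scale down to 1
```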
Now deploy the demo:

```bash
kubectl apply -f examples/deployment_cronhpa.yaml
```
Check the replica count of the target deployment:

```
kubectl get deploy nginx-deployment-basic

kubernetes-cronhpa-controller git:(master) kubectl get deploy nginx-deployment-basic
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment-basic   2         2         2            2          9s
```
Then describe the cronhpa resource to inspect the state of its jobs:

```
kubectl describe cronhpa cronhpa-sample

Name:         cronhpa-sample
Namespace:    default
Labels:       controller-tools.k8s.io=1.0
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"autoscaling.alibabacloud.com/v1beta1","kind":"CronHorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"controll...
API Version:  autoscaling.alibabacloud.com/v1beta1
Kind:         CronHorizontalPodAutoscaler
Metadata:
  Creation Timestamp:  2019-04-14T10:42:38Z
  Generation:          1
  Resource Version:    4017247
  Self Link:           /apis/autoscaling.alibabacloud.com/v1beta1/namespaces/default/cronhorizontalpodautoscalers/cronhpa-sample
  UID:                 05e41c95-5ea2-11e9-8ce6-00163e12e274
Spec:
  Jobs:
    Name:         scale-down
    Schedule:     30 */1 * * * *
    Target Size:  1
    Name:         scale-up
    Schedule:     0 */1 * * * *
    Target Size:  3
  Scale Target Ref:
    API Version:  apps/v1beta2
    Kind:         Deployment
    Name:         nginx-deployment-basic
Status:
  Conditions:
    Job Id:           38e79271-9a42-4131-9acd-1f5bfab38802
    Last Probe Time:  2019-04-14T10:43:02Z
    Message:
    Name:             scale-down
    Schedule:         30 */1 * * * *
    State:            Submitted
    Job Id:           a7db95b6-396a-4753-91d5-23c2e73819ac
    Last Probe Time:  2019-04-14T10:43:02Z
    Message:
    Name:             scale-up
    Schedule:         0 */1 * * * *
    State:            Submitted
Events:               <none>
```
Wait for the next schedule to fire and describe it again; the scale-down job has now executed:

```
kubernetes-cronhpa-controller git:(master) kubectl describe cronhpa cronhpa-sample

Name:         cronhpa-sample
Namespace:    default
Labels:       controller-tools.k8s.io=1.0
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"autoscaling.alibabacloud.com/v1beta1","kind":"CronHorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"controll...
API Version:  autoscaling.alibabacloud.com/v1beta1
Kind:         CronHorizontalPodAutoscaler
Metadata:
  Creation Timestamp:  2019-04-15T06:41:44Z
  Generation:          1
  Resource Version:    15673230
  Self Link:           /apis/autoscaling.alibabacloud.com/v1beta1/namespaces/default/cronhorizontalpodautoscalers/cronhpa-sample
  UID:                 88ea51e0-5f49-11e9-bd0b-00163e30eb10
Spec:
  Jobs:
    Name:         scale-down
    Schedule:     30 */1 * * * *
    Target Size:  1
    Name:         scale-up
    Schedule:     0 */1 * * * *
    Target Size:  3
  Scale Target Ref:
    API Version:  apps/v1beta2
    Kind:         Deployment
    Name:         nginx-deployment-basic
Status:
  Conditions:
    Job Id:           84818af0-3293-43e8-8ba6-6fd3ad2c35a4
    Last Probe Time:  2019-04-15T06:42:30Z
    Message:          cron hpa job scale-down executed successfully
    Name:             scale-down
    Schedule:         30 */1 * * * *
    State:            Succeed
    Job Id:           f8579f11-b129-4e72-b35f-c0bdd32583b3
    Last Probe Time:  2019-04-15T06:42:20Z
    Message:
    Name:             scale-up
    Schedule:         0 */1 * * * *
    State:            Submitted
Events:
  Type    Reason   Age   From                            Message
  ----    ------   ----  ----                            -------
  Normal  Succeed  5s    cron-horizontal-pod-autoscaler  cron hpa job scale-down executed successfully
```
At this point the events show that the scheduled scaling of the workload has taken effect.
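If you want to watch the replica count flip back and forth in real time, one simple option (not part of the original walkthrough) is to watch the Deployment:

```bash
# -w streams updates as the scale-up/scale-down jobs fire every 30 seconds
kubectl get deploy nginx-deployment-basic -w
```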
kubernetes-cronhpa-controller is a good solution for elastically scaling workloads with periodic resource profiles, and combined with the underlying cluster-autoscaler it can save a significant amount of resource cost. kubernetes-cronhpa-controller is now officially open source; for more detailed usage and documentation, please see the docs in the code repository. Developers are welcome to submit issues and PRs.
Original article link
This article is original content from the Yunqi Community and may not be reproduced without permission.