Kubernetes schedules pods through the scheduler, and during scheduling the distribution of pods can become unbalanced for a number of reasons, for example:
A node fails
A new node is added to the cluster
Node resources are under-utilized
All of these can leave pods unevenly distributed across nodes: a node may end up overloaded, pods get OOM-killed, and the service becomes unavailable.
Of these, under-utilized node resources are the most common source of trouble. Unreasonable requests and limits, or pods with no requests/limits at all, will both lead to scheduling imbalance.
Before going further, we first need to install metrics; for the installation steps see: deploying metrics on k8s.
During scheduling, the scheduler goes through a filtering (predicates) phase and a scoring (priorities) phase to pick an available node and place the pod on it. So how is the node chosen in those two phases?
The most fundamental scheduling rule is whether a node still has allocatable resources, which we can check with kubectl describe node <node-name>. Let's analyze the cluster from the angle of this rule.
Check the current resource usage of each node:
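A quick way to look at this is the Allocated resources section of kubectl describe node. The node name and numbers below are only illustrative, not the output of this cluster:

# kubectl describe node k8s-node01 | grep -A 8 "Allocated resources"
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1500m (37%)   3 (75%)
  memory             2600Mi (33%)  5Gi (66%)

The Requests percentages are the numbers the scheduler actually works with.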
As we can see, this cluster has three worker nodes, but resources are distributed across them very unevenly. In fact, when scheduling, Kubernetes only accounts for the requests values; it does not care how high the limits are set. So as long as the sum of requests has not reached the node's allocatable capacity, pods will, in theory, keep being scheduled onto that node, and that is exactly when the imbalance appears. What can we do about it?
Set requests and limits for every pod. If resources allow, it is best to set requests equal to limits, which raises the pod's QoS class (see the sketch after this list).
Rebalance: intervene manually or run a scheduled job that rebalances the current pod distribution across multiple dimensions.
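As a minimal sketch of the first point (the pod name, image, and sizes are illustrative only), a container whose requests equal its limits gets the Guaranteed QoS class, and the requests are what the scheduler counts:

apiVersion: v1
kind: Pod
metadata:
  name: demo-api                # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.21           # illustrative image
    resources:
      requests:                 # the scheduler sums these values per node
        cpu: "500m"
        memory: "512Mi"
      limits:                   # equal to requests, so the QoS class is Guaranteed
        cpu: "500m"
        memory: "512Mi"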
Descheduler exists to make up for the shortcomings of Kubernetes' own one-shot scheduling. It runs as a scheduled task and, based on the strategies it implements, rebalances how pods are distributed across the cluster.
As of now, the strategies Descheduler has implemented and the features planned on its roadmap are as follows:
Implemented strategies
RemoveDuplicates: remove duplicate pods
LowNodeUtilization: under-utilized nodes
RemovePodsViolatingInterPodAntiAffinity: remove pods that violate inter-pod anti-affinity
RemovePodsViolatingNodeAffinity: remove pods that violate node affinity
Features planned on the roadmap
Strategy to consider taints and tolerations
Consideration of pod affinity
Strategy to consider pod life time
Strategy to consider number of pending pods
Integration with cluster autoscaler
Integration with metrics providers for obtaining real load metrics
Consideration of Kubernetes's scheduler's predicates
RemoveDuplicates: this strategy makes sure that at most one pod belonging to the same ReplicaSet (RS), ReplicationController (RC), Deployment, or Job is assigned to any one node. If there is more than one, the extra pods are evicted to other nodes so that pods are spread better across the cluster.
LowNodeUtilization: a. This strategy finds under-utilized nodes and, where possible, evicts pods elsewhere in the hope that they are recreated on those under-utilized nodes. b. Whether a node is under-utilized is decided by a configurable set of thresholds, expressed as percentages of CPU, memory, and pod count. A node is considered under-utilized only when all of the evaluated resources are below their thresholds. c. There is also a set of targetThresholds, used to decide which nodes have exceeded the threshold and should therefore have pods evicted from them. Any node whose usage lies between thresholds and targetThresholds is considered appropriately utilized, so no pods are evicted from it (nor onto it). d. A related parameter, numberOfNodes, activates the strategy only when at least that many nodes are under-utilized. A configuration sketch follows.
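For reference, a minimal sketch of how these knobs appear in the DeschedulerPolicy (it mirrors the ConfigMap shown later in this article; the percentage values are only examples):

apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:          # a node below all of these is considered under-utilized
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:    # a node above these is over-utilized and may have pods evicted
          "cpu": 50
          "memory": 50
          "pods": 50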
RemovePodsViolatingInterPodAntiAffinity: this strategy makes sure that pods violating inter-pod anti-affinity are evicted. For example, if podA is on a node, and podB and podC (running on the same node) have anti-affinity rules that forbid running on the same node as podA, then podA is evicted from the node so that podB and podC can run.
RemovePodsViolatingNodeAffinity: this strategy makes sure that pods violating node affinity are evicted. For example, if podA runs on nodeA and the node later no longer satisfies podA's node affinity requirement, and there is a nodeB that does satisfy it, then podA is evicted and ends up on nodeB.
When Descheduler decides to evict pods, it follows these rules:
Critical pods (with the annotation scheduler.alpha.kubernetes.io/critical-pod) are never evicted.
Pods (static or mirror pods, or standalone pods) not backed by an RC, RS, Deployment, or Job are never evicted, because these pods would not be recreated.
Pods associated with DaemonSets are never evicted.
Pods with local storage are never evicted.
BestEffort pods are evicted before Burstable and Guaranteed pods.
Descheduler runs as a Job inside a pod, because a Job has the advantage of being runnable repeatedly without human intervention. To avoid evicting itself, Descheduler runs as a critical pod, and for that reason it can only be created in the kube-system namespace. For an introduction to critical pods, see: Guaranteed Scheduling For Critical Add-On Pods
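As an illustrative fragment (not the project's actual manifest), marking a pod as critical looks roughly like this; older clusters relied on the annotation mentioned above, newer ones express the same intent through a priority class:

apiVersion: v1
kind: Pod
metadata:
  name: critical-demo                                # illustrative name
  namespace: kube-system                             # system-cluster-critical is reserved for kube-system by default
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""   # legacy critical-pod annotation
spec:
  priorityClassName: system-cluster-critical         # priority-class based equivalent
  containers:
  - name: app
    image: nginx:1.21                                # illustrative image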
To use Descheduler, we need to compile the tool and build a Docker image, then create a ClusterRole, ServiceAccount, ClusterRoleBinding, ConfigMap, and Job.
The YAML files can be downloaded from: https://github.com/kubernetes-sigs/descheduler
git clone https://github.com/kubernetes-sigs/descheduler.git
kubectl create -f kubernetes/rbac.yaml
kubectl create -f kubernetes/configmap.yaml
kubectl create -f kubernetes/job.yaml

kubectl create -f kubernetes/rbac.yaml
kubectl create -f kubernetes/configmap.yaml
kubectl create -f kubernetes/cronjob.yaml
These are two ways to start it: as a one-off Job or as a CronJob (scheduled task). Starting it as a CronJob is recommended.
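For orientation, a trimmed-down sketch of the CronJob variant (the schedule, image tag, and serviceAccountName here are assumptions; the authoritative version is kubernetes/cronjob.yaml in the repo):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: descheduler-cronjob
  namespace: kube-system
spec:
  schedule: "*/10 * * * *"                      # assumed: run every 10 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: descheduler-sa    # assumed service account created by rbac.yaml
          priorityClassName: system-cluster-critical   # run as a critical pod
          restartPolicy: Never
          containers:
          - name: descheduler
            image: k8s.gcr.io/descheduler/descheduler:v0.18.0   # assumed image tag
            command:
            - /bin/descheduler
            - --policy-config-file=/policy-dir/policy.yaml
            volumeMounts:
            - name: policy-volume
              mountPath: /policy-dir
          volumes:
          - name: policy-volume
            configMap:
              name: descheduler-policy-configmap   # the ConfigMap created above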
After it starts, we can verify that descheduler is up and running:
# kubectl get pod -n kube-system | grep descheduler
descheduler-job-6qtk2            1/1     Running   0          158m
Next, verify whether the pods are now distributed evenly.
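A quick way to count pods per node (a generic one-liner; the node names and counts will of course differ per cluster):

# kubectl get pods -A -o wide --no-headers | awk '{print $8}' | sort | uniq -c
     20 k8s-node02
     ...

The eighth column of kubectl get pods -A -o wide is the NODE column, so this simply tallies pods by the node they run on.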
We can see that node02 currently has 20 pods, still a few fewer than the other nodes. If we only want to rebalance by pod count, we can change the descheduler configuration as follows:
# cat kubernetes/configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: descheduler-policy-configmap
  namespace: kube-system
data:
  policy.yaml: |
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      "RemoveDuplicates":
        enabled: true
      "RemovePodsViolatingInterPodAntiAffinity":
        enabled: true
      "LowNodeUtilization":
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            thresholds:          # thresholds
              #"cpu" : 20        # comment out the cpu and memory entries
              #"memory": 20
              "pods": 24         # raise the pod threshold a bit
            targetThresholds:    # target thresholds
              #"cpu" : 50
              #"memory": 50
              "pods": 25
After making the change, just restart it:
kubectl delete -f kubernetes/configmap.yaml
kubectl apply -f kubernetes/configmap.yaml
kubectl delete -f kubernetes/cronjob.yaml
kubectl apply -f kubernetes/cronjob.yaml
Then take a look at Descheduler's logs:
# kubectl logs -n kube-system descheduler-job-9rc9h
I0729 08:48:45.361655 1 lownodeutilization.go:151] Node "k8s-node02" is under utilized with usage: api.ResourceThresholds{"cpu":44.375, "memory":24.682000160690105, "pods":22.727272727272727}
I0729 08:48:45.361772 1 lownodeutilization.go:154] Node "k8s-node03" is over utilized with usage: api.ResourceThresholds{"cpu":49.375, "memory":27.064916842870552, "pods":24.545454545454547}
I0729 08:48:45.361807 1 lownodeutilization.go:151] Node "k8s-master01" is under utilized with usage: api.ResourceThresholds{"cpu":50, "memory":3.6347778465158265, "pods":8.181818181818182}
I0729 08:48:45.361828 1 lownodeutilization.go:151] Node "k8s-master02" is under utilized with usage: api.ResourceThresholds{"cpu":40, "memory":0, "pods":5.454545454545454}
I0729 08:48:45.361863 1 lownodeutilization.go:151] Node "k8s-master03" is under utilized with usage: api.ResourceThresholds{"cpu":40, "memory":0, "pods":5.454545454545454}
I0729 08:48:45.361977 1 lownodeutilization.go:154] Node "k8s-node01" is over utilized with usage: api.ResourceThresholds{"cpu":46.875, "memory":32.25716687667426, "pods":27.272727272727273}
I0729 08:48:45.361994 1 lownodeutilization.go:66] Criteria for a node under utilization: CPU: 0, Mem: 0, Pods: 23
I0729 08:48:45.362016 1 lownodeutilization.go:73] Total number of underutilized nodes: 4
I0729 08:48:45.362025 1 lownodeutilization.go:90] Criteria for a node above target utilization: CPU: 0, Mem: 0, Pods: 23
I0729 08:48:45.362033 1 lownodeutilization.go:92] Total number of nodes above target utilization: 2
I0729 08:48:45.362051 1 lownodeutilization.go:202] Total capacity to be moved: CPU:0, Mem:0, Pods:55.2
I0729 08:48:45.362059 1 lownodeutilization.go:203] ********Number of pods evicted from each node:***********
I0729 08:48:45.362066 1 lownodeutilization.go:210] evicting pods from node "k8s-node01" with usage: api.ResourceThresholds{"cpu":46.875, "memory":32.25716687667426, "pods":27.272727272727273}
I0729 08:48:45.362236 1 lownodeutilization.go:213] allPods:30, nonRemovablePods:3, bestEffortPods:2, burstablePods:25, guaranteedPods:0
I0729 08:48:45.362246 1 lownodeutilization.go:217] All pods have priority associated with them. Evicting pods based on priority
I0729 08:48:45.381931 1 evictions.go:102] Evicted pod: "flink-taskmanager-7c7557d6bc-ntnp2" in namespace "default"
I0729 08:48:45.381967 1 lownodeutilization.go:270] Evicted pod: "flink-taskmanager-7c7557d6bc-ntnp2"
I0729 08:48:45.381980 1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":46.875, "memory":32.25716687667426, "pods":26.363636363636363}
I0729 08:48:45.382268 1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"flink-taskmanager-7c7557d6bc-ntnp2", UID:"6a5374de-a204-4d2c-a302-ff09c054a43b", APIVersion:"v1", ResourceVersion:"4945574", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.399567 1 evictions.go:102] Evicted pod: "flink-taskmanager-7c7557d6bc-t2htk" in namespace "default"
I0729 08:48:45.399613 1 lownodeutilization.go:270] Evicted pod: "flink-taskmanager-7c7557d6bc-t2htk"
I0729 08:48:45.399626 1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":46.875, "memory":32.25716687667426, "pods":25.454545454545453}
I0729 08:48:45.400503 1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"flink-taskmanager-7c7557d6bc-t2htk", UID:"bd255dbc-bb05-4258-ac0b-e5be3dc4efe8", APIVersion:"v1", ResourceVersion:"4705479", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.450568 1 evictions.go:102] Evicted pod: "oauth-center-tools-api-645d477bcf-hnb8g" in namespace "default"
I0729 08:48:45.450603 1 lownodeutilization.go:270] Evicted pod: "oauth-center-tools-api-645d477bcf-hnb8g"
I0729 08:48:45.450619 1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":45.625, "memory":31.4545002047819, "pods":24.545454545454543}
I0729 08:48:45.451240 1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"oauth-center-tools-api-645d477bcf-hnb8g", UID:"caba0aa8-76de-4e23-b163-c660df0ba54d", APIVersion:"v1", ResourceVersion:"3800151", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.477605 1 evictions.go:102] Evicted pod: "dazzle-core-api-5d4c899b84-xhlkl" in namespace "default"
I0729 08:48:45.477636 1 lownodeutilization.go:270] Evicted pod: "dazzle-core-api-5d4c899b84-xhlkl"
I0729 08:48:45.477649 1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":44.375, "memory":30.65183353288954, "pods":23.636363636363633}
I0729 08:48:45.477992 1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"dazzle-core-api-5d4c899b84-xhlkl", UID:"ce216892-6c50-4c31-b30a-cbe5c708285e", APIVersion:"v1", ResourceVersion:"3800074", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.523774 1 request.go:557] Throttling request took 141.499557ms, request: POST:https://10.96.0.1:443/api/v1/namespaces/default/events
I0729 08:48:45.569073 1 evictions.go:102] Evicted pod: "live-foreignapi-api-7bc679b789-z8jnr" in namespace "default"
I0729 08:48:45.569105 1 lownodeutilization.go:270] Evicted pod: "live-foreignapi-api-7bc679b789-z8jnr"
I0729 08:48:45.569119 1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":43.125, "memory":29.84916686099718, "pods":22.727272727272723}
I0729 08:48:45.569151 1 lownodeutilization.go:236] 6 pods evicted from node "k8s-node01" with usage map[cpu:43.125 memory:29.84916686099718 pods:22.727272727272723]
I0729 08:48:45.569172 1 lownodeutilization.go:210] evicting pods from node "k8s-node03" with usage: api.ResourceThresholds{"cpu":49.375, "memory":27.064916842870552, "pods":24.545454545454547}
I0729 08:48:45.569418 1 lownodeutilization.go:213] allPods:27, nonRemovablePods:2, bestEffortPods:0, burstablePods:25, guaranteedPods:0
I0729 08:48:45.569430 1 lownodeutilization.go:217] All pods have priority associated with them. Evicting pods based on priority
I0729 08:48:45.603962 1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"live-foreignapi-api-7bc679b789-z8jnr", UID:"37c698e3-b63e-4ef1-917b-ac6bc1be05e0", APIVersion:"v1", ResourceVersion:"3800113", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.639483 1 evictions.go:102] Evicted pod: "dazzle-contentlib-api-575f599994-khdn5" in namespace "default"
I0729 08:48:45.639512 1 lownodeutilization.go:270] Evicted pod: "dazzle-contentlib-api-575f599994-khdn5"
I0729 08:48:45.639525 1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":48.125, "memory":26.26225017097819, "pods":23.636363636363637}
I0729 08:48:45.645446 1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"dazzle-contentlib-api-575f599994-khdn5", UID:"068aa2ad-f160-4aaa-b25b-f0a9603f9011", APIVersion:"v1", ResourceVersion:"3674763", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler
I0729 08:48:45.780324 1 evictions.go:102] Evicted pod: "dazzle-datasync-task-577c46668-lltg4" in namespace "default"
I0729 08:48:45.780544 1 lownodeutilization.go:270] Evicted pod: "dazzle-datasync-task-577c46668-lltg4"
I0729 08:48:45.780565 1 lownodeutilization.go:283] updated node usage: api.ResourceThresholds{"cpu":46.875, "memory":25.45958349908583, "pods":22.727272727272727}
I0729 08:48:45.780600 1 lownodeutilization.go:236] 4 pods evicted from node "k8s-node03" with usage map[cpu:46.875 memory:25.45958349908583 pods:22.727272727272727]
I0729 08:48:45.780620 1 lownodeutilization.go:102] Total number of pods evicted: 11
From this log we can see Node "k8s-node01" is over utilized, followed by evicting pods from node "k8s-node01", which shows that Descheduler is indeed rescheduling pods. The final distribution is as follows:
This article is sourced from https://cloud.tencent.com/developer/article/1671811