K8S Study Notes - Pod Priority Scheduling

Original article link

"Priority" scheduling means deploying the more important Pods first. When resources are tight, it can even evict less important Pods to free up resources for the important ones.

Priority scheduling is enabled by declaring a PriorityClass and referencing its name via the priorityClassName field in the Pod spec.

So how do you declare that one Pod is "more important" than the others?

"Importance" can be defined along the following dimensions:

  1. Priority
  2. QoS (Quality of Service) class
  3. Other system-defined metrics
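The QoS class, for example, is derived from a Pod's resource requests and limits: requests equal to limits for every container yields Guaranteed, requests below limits yields Burstable, and neither yields BestEffort. A minimal sketch (the Pod name qos-demo is illustrative):

```yaml
# Illustrative Pod (name "qos-demo" is hypothetical): requests == limits
# for every container, so Kubernetes assigns the Guaranteed QoS class.
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "128Mi"
```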

Priority-based preemptive scheduling involves two behaviors: eviction and preemption. They occur in different scenarios but have the same effect.

Eviction is performed by the kubelet. When a node runs low on resources, the kubelet on that node weighs the priority, resource requests, and actual resource usage of the Pods on the node to decide which Pods should be evicted. Among candidate Pods of the same priority, the one whose actual usage exceeds its request by the largest ratio is evicted first.
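The thresholds that trigger kubelet eviction are configurable. A sketch of a KubeletConfiguration fragment, with illustrative threshold values:

```yaml
# Illustrative KubeletConfiguration fragment: when available memory or
# free node disk drops below these thresholds, the kubelet begins
# evicting Pods, ranking them by priority and by actual usage
# relative to their requests.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"
  nodefs.available: "10%"
```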

Preemption is performed by the Scheduler. When a new Pod cannot be scheduled because its resource requirements cannot be met, the Scheduler may choose to evict some lower-priority Pods to satisfy the new Pod's scheduling requirements.
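A priority class can also opt out of preemption via preemptionPolicy. A sketch (the name high-priority-nonpreempting is illustrative): Pods using this class are placed ahead of lower-priority Pods in the scheduling queue, but never evict already-running Pods.

```yaml
# Illustrative PriorityClass: high priority value, but preemptionPolicy
# "Never" means Pods of this class wait in the scheduling queue rather
# than preempting lower-priority Pods.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 100
preemptionPolicy: Never
globalDefault: false
description: "High priority without preemption"
```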

PriorityClass

type PriorityClass struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    // Priority value
    Value int32 `json:"value"`

    // globalDefault indicates whether this PriorityClass should be used
    // as the default priority for Pods that do not set one.
    // Only one PriorityClass may set this to true;
    // if multiple do, the one with the smaller value is used.
    GlobalDefault bool `json:"globalDefault,omitempty"`

    // Human-readable description recording useful information
    Description string `json:"description,omitempty"`

    // Whether Pods of this class preempt lower-priority Pods;
    // defaults to PreemptLowerPriority
    PreemptionPolicy *apiv1.PreemptionPolicy `json:"preemptionPolicy,omitempty"`
}

type PreemptionPolicy string

const (
    // Preempt other lower-priority Pods
    PreemptLowerPriority PreemptionPolicy = "PreemptLowerPriority"

    // Never preempt other lower-priority Pods
    PreemptNever PreemptionPolicy = "Never"
)

Configuration Example

  1. Create two PriorityClasses, low-priority and high-priority
  2. Create a Pod that uses low-priority and carries the label priority=low
  3. Create a Pod that uses high-priority and must not share a node with Pods labeled priority=low
  4. Check whether the high-priority Pod "squeezes out" the low-priority Pod

Step - 1 Create two PriorityClasses, low-priority and high-priority

Configuration file priority.yaml

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 10
globalDefault: false
description: "Low-priority Pods"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 100
globalDefault: false
description: "High-priority Pods"
$ kubectl create -f priority.yaml
priorityclass.scheduling.k8s.io/low-priority created
priorityclass.scheduling.k8s.io/high-priority created

Step - 2 Create a Pod that uses low-priority

Configuration file nginx-low.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-low
  labels:
    priority: low
spec:
  priorityClassName: low-priority
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    kubernetes.io/hostname: tx
$ kubectl create -f nginx-low.yaml
pod/nginx-low created

$ kubectl get pods nginx-low -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP          NODE   NOMINATED NODE   READINESS GATES
nginx-low   1/1     Running   0          21s   10.32.0.1   tx     <none>           <none>

Step - 3 Create a Pod that uses high-priority

Configuration file nginx-high.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-high
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: priority
            operator: In
            values:
            - low
        topologyKey: kubernetes.io/hostname
  priorityClassName: high-priority
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    kubernetes.io/hostname: tx

Create the Pod:

$ kubectl create -f nginx-high.yaml
pod/nginx-high created

Step - 4 Check whether the high-priority Pod "squeezes out" the low-priority Pod

$ kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP       NODE   NOMINATED NODE   READINESS GATES
nginx-high   1/1     Running   0          15s   <none>   tx     <none>           <none>

The low-priority Pod created earlier has disappeared!

The Pod's event list shows that the high-priority Pod initially could not be placed on the desired node tx because of its anti-affinity rule.

Meanwhile, the scheduler could not find any other node satisfying the low-priority Pod's constraints, so it could not move that Pod elsewhere.

As a result, the low-priority Pod nginx-low was evicted and removed.

$ kubectl describe pod nginx-high
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod anti-affinity rules, 3 node(s) didn't match node selector.
  Normal   Scheduled         <unknown>  default-scheduler  Successfully assigned default/nginx-high to tx
  Normal   Pulling           16s        kubelet, tx        Pulling image "nginx"
  Normal   Pulled            12s        kubelet, tx        Successfully pulled image "nginx"
  Normal   Created           12s        kubelet, tx        Created container nginx
  Normal   Started           11s        kubelet, tx        Started container nginx