k8s Basic Concepts: Configuring Scheduling Policies (Taints and Tolerations)

2018/4/12

Using Taints and Tolerations to make a node repel certain pods

  • A typical example showing the relationship between taints and tolerations
    • Cluster state before the test
    • Deploy app whoami-t1
    • Test how a taint works
    • Test result
    • Test using a toleration
    • Test result
    • How to remove a specific taint?
  • A closer look at the details of Taints and Tolerations
    • Concepts

A typical example showing the relationship between taints and tolerations

Cluster state before the test

When you deployed the cluster, you probably noticed that no workloads are ever scheduled onto the nodes acting as masters. Why is that?

[root@tvm-02 whoami]# kubectl get nodes
NAME     STATUS    ROLES     AGE       VERSION
tvm-01   Ready     master    8d        v1.9.0
tvm-02   Ready     master    8d        v1.9.0
tvm-03   Ready     master    8d        v1.9.0
tvm-04   Ready     <none>    8d        v1.9.0
[root@tvm-02 whoami]# kubectl describe nodes tvm-01 |grep -E '(Roles|Taints)'
Roles:              master
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@tvm-02 whoami]# kubectl describe nodes tvm-02 |grep -E '(Roles|Taints)'
Roles:              master
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@tvm-02 whoami]# kubectl describe nodes tvm-03 |grep -E '(Roles|Taints)'
Roles:              master
Taints:             node-role.kubernetes.io/master:NoSchedule

Deploy app whoami-t1

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: whoami-t1
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: opera443399/whoami:0.9
          ports:
            - containerPort: 80

After the deployment we can see that all pods were scheduled onto the worker node

[root@tvm-02 whoami]# kubectl apply -f app-t1.yaml
deployment "whoami-t1" created
[root@tvm-02 whoami]# kubectl get ds,deploy,svc,pods --all-namespaces -o wide -l app=whoami
NAMESPACE   NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES                   SELECTOR
default     deploy/whoami-t1   3         3         3            3           46s       whoami       opera443399/whoami:0.9   app=whoami

NAMESPACE   NAME                            READY     STATUS    RESTARTS   AGE       IP             NODE
default     po/whoami-t1-6cf9cd6bf4-62bhc   1/1       Running   0          46s       172.30.105.1   tvm-04
default     po/whoami-t1-6cf9cd6bf4-dss72   1/1       Running   0          46s       172.30.105.2   tvm-04
default     po/whoami-t1-6cf9cd6bf4-zvpsk   1/1       Running   0          46s       172.30.105.0   tvm-04

Test how a taint works

Add a taint to tvm-04 to change the scheduling policy

[root@tvm-02 whoami]# kubectl taint nodes tvm-04 node-role.kubernetes.io/master=:NoSchedule
node "tvm-04" tainted
# As expected:
[root@tvm-02 whoami]# kubectl describe nodes tvm-04 |grep -E '(Roles|Taints)'
Roles:              <none>
Taints:             node-role.kubernetes.io/master:NoSchedule

The taint command above means:
add a taint (think of it as a mark that repels pods) to node tvm-04, where the taint's
key is node-role.kubernetes.io/master
value is empty
taint effect is NoSchedule
This means that no pod will be scheduled onto this node unless its pod spec contains a matching toleration (i.e. the pod declares that it tolerates this taint).
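
For reference, the taint ends up stored under spec.taints on the Node object. A trimmed, illustrative sketch of what kubectl get node tvm-04 -o yaml might show after the command above (only the relevant fields):

```yaml
# Roughly how the taint appears on the Node object (trimmed; the empty value may be omitted)
apiVersion: v1
kind: Node
metadata:
  name: tvm-04
spec:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
```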

Test result

We can see that the previously deployed deploy/whoami-t1 was not evicted: a NoSchedule taint only affects scheduling decisions for new pods and does not evict pods that are already running.

[root@tvm-02 whoami]# kubectl get ds,deploy,svc,pods --all-namespaces -o wide -l app=whoami
NAMESPACE   NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES                   SELECTOR
default     deploy/whoami-t1   3         3         3            3           17m       whoami       opera443399/whoami:0.9   app=whoami

NAMESPACE   NAME                            READY     STATUS    RESTARTS   AGE       IP             NODE
default     po/whoami-t1-6cf9cd6bf4-62bhc   1/1       Running   0          17m       172.30.105.1   tvm-04
default     po/whoami-t1-6cf9cd6bf4-dss72   1/1       Running   0          17m       172.30.105.2   tvm-04
default     po/whoami-t1-6cf9cd6bf4-zvpsk   1/1       Running   0          17m       172.30.105.0   tvm-04

Next, let's try deploying another app, whoami-t2

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: whoami-t2
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: opera443399/whoami:0.9
          ports:
            - containerPort: 80

The output below shows that the policy has taken effect; the existing deployment is simply not affected by default (it is not forcibly evicted)

# Deploy the app
[root@tvm-02 whoami]# kubectl apply -f app-t2.yaml
deployment "whoami-t2" created
[root@tvm-02 whoami]# kubectl get ds,deploy,svc,pods --all-namespaces -o wide -l app=whoami
NAMESPACE   NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES                   SELECTOR
default     deploy/whoami-t1   3         3         3            3           20m       whoami       opera443399/whoami:0.9   app=whoami
default     deploy/whoami-t2   3         3         3            0           38s       whoami       opera443399/whoami:0.9   app=whoami

NAMESPACE   NAME                            READY     STATUS    RESTARTS   AGE       IP             NODE
default     po/whoami-t1-6cf9cd6bf4-62bhc   1/1       Running   0          20m       172.30.105.1   tvm-04
default     po/whoami-t1-6cf9cd6bf4-dss72   1/1       Running   0          20m       172.30.105.2   tvm-04
default     po/whoami-t1-6cf9cd6bf4-zvpsk   1/1       Running   0          20m       172.30.105.0   tvm-04
default     po/whoami-t2-6cf9cd6bf4-5f9wl   0/1       Pending   0          38s       <none>         <none>
default     po/whoami-t2-6cf9cd6bf4-8l59z   0/1       Pending   0          38s       <none>         <none>
default     po/whoami-t2-6cf9cd6bf4-lqpzp   0/1       Pending   0          38s       <none>         <none>

[root@tvm-02 whoami]# kubectl describe deploy/whoami-t2
Name:                   whoami-t2
(omitted)
Annotations:            deployment.kubernetes.io/revision=1
Replicas:               3 desired | 3 updated | 3 total | 0 available | 3 unavailable
(omitted)
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
(omitted)
[root@tvm-02 whoami]# kubectl describe po/whoami-t2-6cf9cd6bf4-5f9wl
Name:           whoami-t2-6cf9cd6bf4-5f9wl
(omitted)
Status:         Pending
IP:
Controlled By:  ReplicaSet/whoami-t2-6cf9cd6bf4
(omitted)
Conditions:
  Type           Status
  PodScheduled   False
(omitted)
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  27s (x14 over 3m)  default-scheduler  0/4 nodes are available: 4 PodToleratesNodeTaints.
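
The message "0/4 nodes are available: 4 PodToleratesNodeTaints." confirms that every node now carries a taint that whoami-t2 does not tolerate. A jsonpath one-liner I find handy for this kind of diagnosis (not from the original session) prints each node together with its taints:

```bash
# List every node followed by its taints as key=value:effect
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}={.value}:{.effect}{" "}{end}{"\n"}{end}'
```
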
Test using a toleration

Add a toleration to the config so that whoami-t2 can be scheduled onto the master nodes

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: whoami-t2
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: opera443399/whoami:0.9
          ports:
            - containerPort: 80
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
Test result

The output below shows that the pods which previously could not be scheduled are now placed successfully after the update

# Update the deployment
[root@tvm-02 whoami]# kubectl apply -f app-t2.yaml
deployment "whoami-t2" configured
# Check the status twice in quick succession
[root@tvm-02 whoami]# kubectl describe deploy/whoami-t2
Name:                   whoami-t2
(omitted)
Annotations:            deployment.kubernetes.io/revision=2
Replicas:               3 desired | 3 updated | 4 total | 2 available | 2 unavailable
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  whoami-t2-6cf9cd6bf4 (1/1 replicas created)
NewReplicaSet:   whoami-t2-647c9cb7c5 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  39m   deployment-controller  Scaled up replica set whoami-t2-6cf9cd6bf4 to 3
  Normal  ScalingReplicaSet  14s   deployment-controller  Scaled up replica set whoami-t2-647c9cb7c5 to 1
  Normal  ScalingReplicaSet  12s   deployment-controller  Scaled down replica set whoami-t2-6cf9cd6bf4 to 2
  Normal  ScalingReplicaSet  12s   deployment-controller  Scaled up replica set whoami-t2-647c9cb7c5 to 2
  Normal  ScalingReplicaSet  6s    deployment-controller  Scaled down replica set whoami-t2-6cf9cd6bf4 to 1
  Normal  ScalingReplicaSet  6s    deployment-controller  Scaled up replica set whoami-t2-647c9cb7c5 to 3
[root@tvm-02 whoami]# kubectl describe deploy/whoami-t2
Name:                   whoami-t2
(omitted)
Annotations:            deployment.kubernetes.io/revision=2
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
(omitted)
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   whoami-t2-647c9cb7c5 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  39m   deployment-controller  Scaled up replica set whoami-t2-6cf9cd6bf4 to 3
  Normal  ScalingReplicaSet  28s   deployment-controller  Scaled up replica set whoami-t2-647c9cb7c5 to 1
  Normal  ScalingReplicaSet  26s   deployment-controller  Scaled down replica set whoami-t2-6cf9cd6bf4 to 2
  Normal  ScalingReplicaSet  26s   deployment-controller  Scaled up replica set whoami-t2-647c9cb7c5 to 2
  Normal  ScalingReplicaSet  20s   deployment-controller  Scaled down replica set whoami-t2-6cf9cd6bf4 to 1
  Normal  ScalingReplicaSet  20s   deployment-controller  Scaled up replica set whoami-t2-647c9cb7c5 to 3
  Normal  ScalingReplicaSet  12s   deployment-controller  Scaled down replica set whoami-t2-6cf9cd6bf4 to 0

[root@tvm-02 whoami]# kubectl get ds,deploy,svc,pods --all-namespaces -o wide -l app=whoami
NAMESPACE   NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES                   SELECTOR
default     deploy/whoami-t1   3         3         3            3           1h        whoami       opera443399/whoami:0.9   app=whoami
default     deploy/whoami-t2   3         3         3            3           45m       whoami       opera443399/whoami:0.9   app=whoami

NAMESPACE   NAME                            READY     STATUS    RESTARTS   AGE       IP               NODE
default     po/whoami-t1-6cf9cd6bf4-62bhc   1/1       Running   0          1h        172.30.105.1     tvm-04
default     po/whoami-t1-6cf9cd6bf4-dss72   1/1       Running   0          1h        172.30.105.2     tvm-04
default     po/whoami-t1-6cf9cd6bf4-zvpsk   1/1       Running   0          1h        172.30.105.0     tvm-04
default     po/whoami-t2-647c9cb7c5-9b5b6   1/1       Running   0          6m        172.30.105.3     tvm-04
default     po/whoami-t2-647c9cb7c5-kmj6k   1/1       Running   0          6m        172.30.235.129   tvm-01
default     po/whoami-t2-647c9cb7c5-p5gwm   1/1       Running   0          5m        172.30.60.193    tvm-03

How to remove a specific taint?

[root@tvm-02 whoami]# kubectl taint nodes tvm-04 node-role.kubernetes.io/master:NoSchedule-
node "tvm-04" untainted
# As expected:
[root@tvm-02 whoami]# kubectl describe nodes tvm-04 |grep -E '(Roles|Taints)'
Roles:              <none>
Taints:             <none>
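
If you want to double-check that tvm-04 accepts new pods again, one possible follow-up (my own suggestion, not part of the original test) is to scale one of the deployments and see where the extra replicas land:

```bash
# Scale whoami-t1 from 3 to 5 replicas and check pod placement
kubectl scale deploy/whoami-t1 --replicas=5
kubectl get pods -o wide -l app=whoami
```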

A closer look at the details of Taints and Tolerations

Taints are the opposite of node affinity: they allow a node to repel a certain class of pods
Taints and tolerations work together to ensure that pods are not scheduled onto unsuitable nodes
A single node can carry multiple taints
Tolerations are applied to pods and allow (but do not require) them to be scheduled onto nodes with matching taints

Concepts

An example of adding a taint to a node:

kubectl taint nodes tvm-04 demo.test.com/app=whoami:NoSchedule

This configures a taint on node tvm-04, where:
key is demo.test.com/app
value is whoami
taint effect is NoSchedule

To remove the taint:

kubectl taint nodes tvm-04 demo.test.com/app:NoSchedule-
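
As far as I know, kubectl also accepts a couple of other removal forms (please verify on your own cluster):

```bash
# Remove by key=value:effect (must match the existing taint exactly)
kubectl taint nodes tvm-04 demo.test.com/app=whoami:NoSchedule-
# Remove every taint with this key, regardless of value or effect
kubectl taint nodes tvm-04 demo.test.com/app-
```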

Then define a toleration in the PodSpec so that the pod can be scheduled onto tvm-04; there are two ways to write it:

tolerations:
- key: "demo.test.com/app"
  operator: "Equal"
  value: "whoami"
  effect: "NoSchedule"

tolerations:
- key: "demo.test.com/app"
  operator: "Exists"
  effect: "NoSchedule"

For a taint and a toleration to match, their keys and effects must be identical, and additionally:

  • the operator is Exists (in which case no value needs to be specified), or
  • the operator is Equal and the values are also equal

Note 1: operator defaults to Equal when it is not specified
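
For example, the following toleration (an illustrative fragment reusing the demo.test.com/app taint from above) omits operator and therefore behaves exactly like the Equal form:

```yaml
tolerations:
- key: "demo.test.com/app"
  value: "whoami"          # operator omitted, so it defaults to Equal
  effect: "NoSchedule"
```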

Note 2: note the following two special cases when using Exists

  • When key is empty and operator is Exists, the toleration matches all keys, values and effects, i.e. it tolerates every taint:

    tolerations:
    - operator: "Exists"

  • When effect is empty, the toleration matches all effects associated with the key demo.test.com/app:

    tolerations:
    - key: "demo.test.com/app"
      operator: "Exists"

The examples above all use the NoSchedule effect; other effects can be used as well, for example:

  • PreferNoSchedule : a non-mandatory ("soft") version of NoSchedule; the scheduler tries to avoid placing pods that do not tolerate the taint onto the node, but does not guarantee it (a quick example follows this list)
  • NoExecute : explained later in this section
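
A minimal sketch of tainting with the soft effect, reusing the demo key from above (illustrative only, not part of the original test):

```bash
# Pods without a matching toleration are discouraged, but not strictly
# forbidden, from being scheduled onto tvm-04
kubectl taint nodes tvm-04 demo.test.com/app=whoami:PreferNoSchedule
```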

A node can carry multiple taints and a pod can carry multiple tolerations; Kubernetes processes taints and tolerations like a filter:

  • start with all the taints on the node
  • ignore those taints for which the pod has a matching toleration
  • the remaining, un-ignored taints then take effect on the pod

In particular:

  • if at least one un-ignored taint has effect NoSchedule, Kubernetes will not schedule the pod onto that node
  • otherwise, if at least one un-ignored taint has effect PreferNoSchedule, Kubernetes will try to avoid scheduling the pod onto that node
  • if at least one un-ignored taint has effect NoExecute, Kubernetes will immediately evict the pod from the node (if it is already running there) and will not schedule it onto the node (if it is not running there yet)

For example, given the following node taints and pod tolerations:

kubectl taint nodes tvm-04 key1=value1:NoSchedule
kubectl taint nodes tvm-04 key1=value1:NoExecute
kubectl taint nodes tvm-04 key2=value2:NoSchedule

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"

In this scenario,

  • the pod will not be scheduled onto the node, because the third taint (key2=value2:NoSchedule) is not tolerated
  • if the pod is already running on the node, it will not be evicted, because the only NoExecute taint is tolerated

Normally, a pod that does not tolerate a NoExecute taint is evicted immediately. By specifying the optional field tolerationSeconds, however, the eviction can be delayed for a period of time, for example:

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
  tolerationSeconds: 3600

In other words, the pod will be evicted after 3600 seconds. If the taint is removed before that time, however, the pod will not be evicted.
Note 3: regarding eviction, in my own experiments a pod is not evicted if there is nowhere else for it to be scheduled (personal observation, please verify for yourself).
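
If you want to reproduce the NoExecute behaviour yourself, a minimal experiment (my own sketch, not from the original article) could look like this; pods on tvm-04 that do not tolerate the taint should be evicted shortly after it is applied:

```bash
# Apply a NoExecute taint and watch pods being evicted from tvm-04
kubectl taint nodes tvm-04 demo.test.com/app=whoami:NoExecute
kubectl get pods -o wide -w

# Clean up afterwards
kubectl taint nodes tvm-04 demo.test.com/app:NoExecute-
```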

References

  1. Taints and Tolerations