Kubernetes Container Orchestration: Node Taints and Pod Tolerations

  In the previous post we looked at how kube-scheduler works on Kubernetes and how pod scheduling policies are defined; for a refresher, see: http://www.javashuo.com/article/p-rejclxsv-nz.html. Today let's talk about node taints and pod tolerations.

  What is a node taint?

  A node taint is somewhat similar to a node's labels or annotations: all of them are metadata describing the node. A taint is defined much like a label or annotation, as a key/value pair; unlike a label, however, a taint also carries an effect, which describes what the taint does on that node. Kubernetes defines three taint effects. The first, NoSchedule, means pods that do not tolerate the taint will not be scheduled onto the node. The second, PreferNoSchedule, means the scheduler will try to avoid placing intolerant pods on the node, but may still do so if necessary. The third, NoExecute, not only refuses to schedule intolerant pods onto the node but also evicts intolerant pods already running there, making it stricter than NoSchedule. In short, a taint is a node attribute that describes which pods the node repels.
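To make the key/value/effect structure concrete, here is a minimal sketch of where taints live in the Node API object (the node name and the test key/value are hypothetical, matching the examples later in this post):

```yaml
# Hypothetical Node fragment: taints are a list under spec.taints.
apiVersion: v1
kind: Node
metadata:
  name: node01.k8s.org
spec:
  taints:
  - key: test          # taint key
    value: test        # taint value (optional)
    effect: NoSchedule # one of NoSchedule, PreferNoSchedule, NoExecute
```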

  Pod tolerations for node taints

  As the name suggests, for a pod to run on a node that carries taints, the pod must tolerate those taints; this tolerance, defined on the pod, is called the pod's toleration. A toleration specifies how the pod matches node taints, and there are two matching modes: equality match and existence match. An equality match means the toleration must equal the taint in all three attributes — key, value, and effect; its operator is Equal. An existence match means the toleration only needs to match the taint's key and effect, with the value not considered; its operator is Exists.
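The two matching modes can be sketched as the following tolerations fragment (the test key/value is hypothetical, matching the examples later in this post):

```yaml
tolerations:
# Equality match: key, value, and effect must all equal the taint's.
- key: test
  operator: Equal
  value: test
  effect: NoSchedule
# Existence match: only key and effect are compared; value is ignored.
- key: test
  operator: Exists
  effect: NoSchedule
```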

  The relationship between node taints and pod tolerations

  Tip: as the figure above illustrates, only pods that can tolerate a node's taints can be scheduled onto that node; a pod that cannot tolerate them will never be scheduled there (unless the taint's effect is PreferNoSchedule).

  Managing node taints

  Syntax for adding a taint to a node:

Usage:
  kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]

  Tip: use the kubectl taint node command to add taints to a node; just specify the node name and the taint. Multiple taints can be given, separated by spaces.

  Example: add the taint test=test:NoSchedule to node01

[root@master01 ~]# kubectl taint node node01.k8s.org test=test:NoSchedule
node/node01.k8s.org tainted
[root@master01 ~]#

  Viewing a node's taints

[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taint
Taints:             test=test:NoSchedule
[root@master01 ~]# 

  Removing a taint

[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taint
Taints:             test=test:NoSchedule
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule-
node/node01.k8s.org untainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taint  
Taints:             <none>
[root@master01 ~]# 

  Tip: to remove a taint you can specify the taint's key together with its effect, or append "-" to just the key, which removes every taint with that key regardless of effect.
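The two removal forms can be sketched as follows (the node name and key are from the examples above):

```
# Remove only the taint with key "test" and effect NoSchedule:
kubectl taint node node01.k8s.org test:NoSchedule-

# Remove ALL taints whose key is "test", whatever their effect:
kubectl taint node node01.k8s.org test-
```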

  Defining pod tolerations

  Example: create a pod that tolerates the node-role.kubernetes.io/master:NoSchedule taint

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
[root@master01 ~]# 

  Tip: a pod's tolerations are defined with the tolerations field, a list of objects. In each entry, key names the taint key to match and must equal the node taint's key; operator describes how the toleration matches the taint and takes one of two values, Equal or Exists; effect names the taint effect, one of NoSchedule, PreferNoSchedule, or NoExecute, and must equal the taint's effect. The manifest above says the redis-demo pod can tolerate a node-role.kubernetes.io/master:NoSchedule taint on a node.

  Apply the manifest

[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          7s    10.244.4.35   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: the pod is running on node04. Note that a toleration only means the pod may run on nodes carrying that taint; it does not pin the pod to such nodes, and the pod can just as well run on untainted nodes.

  Verification: delete the pod, taint node01 through node04 with test:NoSchedule, apply the manifest again, and see whether the pod still runs.

[root@master01 ~]# kubectl delete -f pod-demo-taints.yaml
pod "redis-demo" deleted
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node02.k8s.org test:NoSchedule 
node/node02.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoSchedule 
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule 
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl describe node node02.k8s.org |grep Taints 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl describe node node03.k8s.org |grep Taints 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl describe node node04.k8s.org |grep Taints 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          18s   10.244.0.14   master01.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: the pod was scheduled onto the master node. The reason is that the pod tolerates the master node's taint but none of the taints on the other nodes, so the master is the only node it can run on.

  Delete the toleration from the pod definition, apply the manifest again, and see whether the pod still runs.

[root@master01 ~]# kubectl delete pod redis-demo 
pod "redis-demo" deleted
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo   0/1     Pending   0          6s    <none>   <none>   <none>           <none>
[root@master01 ~]# 

  Tip: the pod is stuck in Pending because it cannot tolerate any node's taint; every node repels it.

  Example: define a toleration using an equality match

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Equal
    value: test
    effect: NoSchedule

[root@master01 ~]# 

  Tip: an equality-match toleration must also specify the taint's value attribute.

  Delete the old pod and apply the manifest

[root@master01 ~]# kubectl delete pod redis-demo
pod "redis-demo" deleted
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo   0/1     Pending   0          4s    <none>   <none>   <none>           <none>
[root@master01 ~]# 

  Tip: after applying the manifest the pod sits in Pending, because no node satisfies the pod's toleration, so it cannot be scheduled anywhere.

  Verification: change node01's taint to test=test:NoSchedule

[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl taint node node01.k8s.org test=test:NoSchedule --overwrite 
node/node01.k8s.org modified
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints                 
Taints:             test=test:NoSchedule
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          4m46s   10.244.1.44   node01.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: once node01's taint is changed to test=test:NoSchedule, the pod gets scheduled onto node01.

  Verification: change node01's taint back to test:NoSchedule and see whether the pod is evicted.

[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule --overwrite     
node/node01.k8s.org modified
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints                 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          7m27s   10.244.1.44   node01.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: after the taint is changed back to test:NoSchedule the pod is not evicted. This shows that a NoSchedule taint only takes effect at scheduling time; it has no effect on pods that are already placed.

  Example: define a pod toleration for test:PreferNoSchedule

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo1
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: PreferNoSchedule

[root@master01 ~]# 

  Apply the manifest

[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo1 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo    1/1     Running   0          11m   10.244.1.44   node01.k8s.org   <none>           <none>
redis-demo1   0/1     Pending   0          6s    <none>        <none>           <none>           <none>
[root@master01 ~]# 

  Tip: the pod is in Pending. Every node still carries a test:NoSchedule taint, which this pod's test:PreferNoSchedule toleration does not cover, so the pod cannot be scheduled anywhere.

  Add the taint test:PreferNoSchedule to node02

[root@master01 ~]# kubectl describe node node02.k8s.org |grep Taints 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl taint node node02.k8s.org test:PreferNoSchedule 
node/node02.k8s.org tainted
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
                    test:PreferNoSchedule
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo    1/1     Running   0          18m     10.244.1.44   node01.k8s.org   <none>           <none>
redis-demo1   0/1     Pending   0          6m21s   <none>        <none>           <none>           <none>
[root@master01 ~]# 

  Tip: node02 now carries two taints, yet the pod still does not run, because node02 also has the test:NoSchedule taint, which this pod's toleration cannot cover.

  Verification: change the taints on node01, node03, and node04 to test:PreferNoSchedule, change the pod's toleration to test:NoSchedule, apply the manifest again, and watch how the pod is scheduled

[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule-     
node/node01.k8s.org untainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoSchedule- 
node/node03.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule- 
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node01.k8s.org test:PreferNoSchedule
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:PreferNoSchedule  
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:PreferNoSchedule 
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep -A 1 Taints 
Taints:             test:PreferNoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints 
Taints:             test:NoSchedule
                    test:PreferNoSchedule
[root@master01 ~]# kubectl describe node node03.k8s.org |grep -A 1 Taints 
Taints:             test:PreferNoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints 
Taints:             test:PreferNoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo    1/1     Running   0          31m   10.244.1.44   node01.k8s.org   <none>           <none>
redis-demo1   1/1     Running   0          19m   10.244.1.45   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo" deleted
pod "redis-demo1" deleted
[root@master01 ~]# cat pod-demo-taints.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo1
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule

[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo1 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo1   1/1     Running   0          5s    10.244.4.36   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: as the run above shows, once the test:NoSchedule taints were removed from node01, node03, and node04, the pending redis-demo1 pod was scheduled onto node01, whose taint was removed first. After changing the pod's toleration to test:NoSchedule and reapplying the manifest, the pod landed on node04. This shows that a toleration with effect NoSchedule is no obstacle on nodes tainted with PreferNoSchedule: since PreferNoSchedule is only a soft preference, it does not block scheduling.

  Example: define a pod toleration for test:NoExecute

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo2
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoExecute
[root@master01 ~]# 

  Apply the manifest

[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo1   1/1     Running   0          35m   10.244.4.36   node04.k8s.org   <none>           <none>
redis-demo2   1/1     Running   0          5s    10.244.4.38   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: the pod was scheduled onto node04, showing that a pod whose toleration has effect NoExecute can still run on a node tainted with PreferNoSchedule — again because PreferNoSchedule is only a soft preference.

  Verification: change every node's taint to test:NoSchedule, delete the old pods, apply the manifest again, and see whether the pod still runs.

[root@master01 ~]# kubectl taint node node01.k8s.org test-
node/node01.k8s.org untainted
[root@master01 ~]# kubectl taint node node02.k8s.org test- 
node/node02.k8s.org untainted
[root@master01 ~]# kubectl taint node node03.k8s.org test- 
node/node03.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test- 
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node02.k8s.org test:NoSchedule 
node/node02.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoSchedule 
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule 
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints 
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node03.k8s.org |grep -A 1 Taints 
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints 
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo1" deleted
pod "redis-demo2" deleted
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo2   0/1     Pending   0          6s    <none>   <none>   <none>           <none>
[root@master01 ~]# 

  Tip: the pod is in Pending, which shows that a toleration with effect NoExecute does not cover a taint with effect NoSchedule.

  Delete the pod, change every node's taint to test:NoExecute, change the pod's toleration to NoSchedule, then apply the manifest and watch how the pod is scheduled

[root@master01 ~]# kubectl delete pod --all
pod "redis-demo2" deleted
[root@master01 ~]# kubectl taint node node01.k8s.org test-               
node/node01.k8s.org untainted
[root@master01 ~]# kubectl taint node node02.k8s.org test- 
node/node02.k8s.org untainted
[root@master01 ~]# kubectl taint node node03.k8s.org test- 
node/node03.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test- 
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoExecute
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node02.k8s.org test:NoExecute 
node/node02.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoExecute 
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoExecute 
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep -A 1 Taints
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints 
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# kubectl describe node node03.k8s.org |grep -A 1 Taints 
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints 
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# cat pod-demo-taints.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo2
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo2   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@master01 ~]# 

  Tip: as the run shows, a toleration with effect NoSchedule likewise does not cover a taint with effect NoExecute.

  Delete the pod and change its toleration to test:NoExecute

[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE    IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo2   0/1     Pending   0          5m5s   <none>   <none>   <none>           <none>
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo2" deleted
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo2
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoExecute
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          6s    10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Change node04's taint to test:NoSchedule and see whether the pod keeps running.

[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          4m38s   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl taint node node04.k8s.org test-
node/node04.k8s.org untainted
[root@master01 ~]# kubectl get pods -o wide               
NAME          READY   STATUS    RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          8m2s   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl get pods -o wide                              
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          8m25s   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: changing the taint from NoExecute to NoSchedule does not evict the existing pod.

  Change the pod tolerations to test:NoSchedule and apply the manifest again

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo3
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo4
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo3 created
pod/redis-demo4 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          14m   10.244.4.43   node04.k8s.org   <none>           <none>
redis-demo3   1/1     Running   0          4s    10.244.4.45   node04.k8s.org   <none>           <none>
redis-demo4   1/1     Running   0          4s    10.244.4.46   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: both new pods were scheduled onto node04, because their test:NoSchedule toleration only covers node04's test:NoSchedule taint; the other nodes still carry taints these pods cannot tolerate.

  Change node04's taint to NoExecute and see whether the pods are evicted.

[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          17m     10.244.4.43   node04.k8s.org   <none>           <none>
redis-demo3   1/1     Running   0          2m32s   10.244.4.45   node04.k8s.org   <none>           <none>
redis-demo4   1/1     Running   0          2m32s   10.244.4.46   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl taint node node04.k8s.org test-
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoExecute
node/node04.k8s.org tainted
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS        RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running       0          18m     10.244.4.43   node04.k8s.org   <none>           <none>
redis-demo3   0/1     Terminating   0          3m43s   10.244.4.45   node04.k8s.org   <none>           <none>
redis-demo4   0/1     Terminating   0          3m43s   10.244.4.46   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          18m   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: after node04's taint is changed to test:NoExecute, the pods whose tolerations do not cover the NoExecute effect are evicted. This shows that a NoExecute taint evicts every pod on the node that cannot tolerate it.

  Create a Deployment whose pod template tolerates test:NoExecute with an eviction grace period (tolerationSeconds) of 10 seconds

[root@master01 ~]# cat deploy-demo-taint.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4-alpine
        ports:
        - name: redis
          containerPort: 6379
      tolerations:
      - key: test
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 10
   
[root@master01 ~]# 

  Tip: the tolerationSeconds field sets how long the pod may keep running on the node after a matching NoExecute taint appears before it is evicted; it may only be used in tolerations whose effect is NoExecute, not with the other effects.

  Apply the manifest

[root@master01 ~]# kubectl apply -f deploy-demo-taint.yaml
deployment.apps/deploy-demo created
[root@master01 ~]# kubectl get pods -o wide -w
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
deploy-demo-79b89f9847-9zk8j   1/1     Running   0          7s    10.244.2.71   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   1/1     Running   0          7s    10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-shscr   1/1     Running   0          7s    10.244.1.62   node01.k8s.org   <none>           <none>
redis-demo2                    1/1     Running   0          54m   10.244.4.43   node04.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   1/1     Terminating   0          10s   10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-shscr   1/1     Terminating   0          10s   10.244.1.62   node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-2x8w6   0/1     Pending       0          0s    <none>        <none>           <none>           <none>
deploy-demo-79b89f9847-2x8w6   0/1     Pending       0          0s    <none>        node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   0/1     Pending       0          0s    <none>        <none>           <none>           <none>
deploy-demo-79b89f9847-9zk8j   1/1     Terminating   0          10s   10.244.2.71   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-2x8w6   0/1     ContainerCreating   0          0s    <none>        node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   0/1     Pending             0          0s    <none>        node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   0/1     ContainerCreating   0          0s    <none>        node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-w8xjw   0/1     Pending             0          0s    <none>        <none>           <none>           <none>
deploy-demo-79b89f9847-w8xjw   0/1     Pending             0          0s    <none>        node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-w8xjw   0/1     ContainerCreating   0          0s    <none>        node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-shscr   1/1     Terminating         0          10s   10.244.1.62   node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   1/1     Terminating         0          10s   10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-9zk8j   1/1     Terminating         0          10s   10.244.2.71   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-shscr   0/1     Terminating         0          11s   10.244.1.62   node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-2x8w6   0/1     ContainerCreating   0          1s    <none>        node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   0/1     ContainerCreating   0          1s    <none>        node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-w8xjw   0/1     ContainerCreating   0          1s    <none>        node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   0/1     Terminating         0          11s   10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-2x8w6   1/1     Running             0          1s    10.244.3.62   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-9zk8j   0/1     Terminating         0          11s   10.244.2.71   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   1/1     Running             0          1s    10.244.2.72   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-w8xjw   1/1     Running             0          2s    10.244.1.63   node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   0/1     Terminating         0          15s   10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   0/1     Terminating         0          15s   10.244.3.61   node03.k8s.org   <none>           <none>
^C[root@master01 ~]# 

  Tip: each pod runs on its node for only about 10 seconds before being evicted; because this is a Deployment, the controller keeps recreating replacement pods after every eviction.

  Summary: a taint with effect NoSchedule only rejects new pods and never evicts pods already running; if a pod tolerates the taint it may be scheduled onto the node, and if not, it never will be. A taint with effect PreferNoSchedule also never evicts existing pods; the scheduler merely tries to avoid the node, and a pod that fits no other node may still end up running there. For effect NoExecute, a toleration without tolerationSeconds tolerates the taint indefinitely, while with tolerationSeconds set the pod is evicted once that grace period expires; and when a node's taint effect becomes NoExecute, the node immediately evicts every pod already running there that cannot tolerate it.
