1. Overview

1) The Kubernetes scheduler plays a pivotal "relay" role in the system: on the "upstream" side it receives each new Pod created via the Controller Manager and picks a Node for it to land on; on the "downstream" side, once placement is done, the kubelet service on the target Node takes over the remaining work.
2) The scheduler's job is to bind each pending Pod to a suitable Node in the cluster according to specific scheduling algorithms and policies, and to write the binding information into etcd. The whole scheduling process involves three objects: the list of Pods awaiting scheduling, the list of available Nodes, and the scheduling algorithms and policies.
3) The kubelet on the target node watches, via the API Server, for the Pod-binding events produced by the scheduler, then fetches the corresponding Pod manifest, pulls the image, and starts the container.
2. Scheduling Flow
1) Predicate (pre-selection): iterate over all candidate Nodes and filter out those that satisfy the pod's requirements.
2) Priority (optimization): for the candidates from step 1), compute a score for each node using the priority functions; the highest scorer wins.
3) Select: bind the pod to the winning node.
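Once Select writes the binding, the decision can be inspected on any pod. The commands below are a minimal check (the "Scheduled" event is emitted by the default scheduler; see the nodeSelector example in section 6 for a full scheduling run):

```
# After a pod is scheduled, its events record the scheduler's binding decision:
kubectl describe pod POD_NAME        # look for the "Scheduled" event from default-scheduler
kubectl get pod POD_NAME -o wide     # the NODE column shows the node the pod was bound to
```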
3. Predicate (Pre-selection) Policies. Every configured predicate must pass; a single failure vetoes the node. Commonly used predicates include:
1) CheckNodeCondition: checks whether the Node is in a healthy condition;
2) GeneralPred (a bundle of several predicates; a sketch of the fields they inspect follows this list):
HostName: checks whether the pod defines pod.spec.hostname and, if so, whether the node's hostname matches;
PodFitsHostPorts: checks whether the pod defines pod.spec.containers.ports.hostPort; if the port is already taken on the node, the node is rejected;
MatchNodeSelector: checks whether the pod defines pod.spec.nodeSelector and matches it against the node's labels;
PodFitsResources: checks whether the node can satisfy the pod's resource requests (compare with kubectl describe nodes NODE_NAME);
3) NoDiskConflict: checks whether the volumes the pod needs conflict with volumes already in use on the node; not enabled by default;
4) PodToleratesNodeTaints: checks whether the pod's spec.tolerations covers all of the node's taints; if so, the node passes pre-selection. Taints added afterwards do not evict the pod;
5) PodToleratesNodeNoExecuteTaints: checks whether the pod's tolerations cover the node's taints that carry the NoExecute effect; taints added afterwards will evict the pod;
6) CheckNodeLabelPresence: checks whether specified labels exist on the node;
7) CheckServiceAffinity: based on where other pods of the pod's Service already run, tries to place this pod on a node that already hosts similar pods; not enabled by default;
8) MaxEBSVolumeCount: for AWS environments; enabled by default;
9) MaxGCEPDVolumeCount: for Google Cloud environments; enabled by default;
10) MaxAzureDiskVolumeCount: for Microsoft Azure environments; enabled by default;
11) CheckVolumeBinding: checks whether bound and unbound PVCs can satisfy the pod's volume requirements;
12) NoVolumeZoneConflict: checks for zone conflicts between the pod's volumes and the node;
13) CheckNodeMemoryPressure: checks whether the node is under memory pressure;
14) CheckNodePIDPressure: checks whether the node is under PID pressure;
15) CheckNodeDiskPressure: checks whether the node is under disk I/O pressure;
16) MatchInterPodAffinity: must be enabled for inter-pod affinity to take effect.
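To make these checks concrete, here is a minimal Pod sketch (name and values are hypothetical) annotated with the predicate that inspects each field:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: predicate-demo          # hypothetical name
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - containerPort: 80
      hostPort: 8080            # evaluated by PodFitsHostPorts
    resources:
      requests:
        cpu: "500m"             # evaluated by PodFitsResources
        memory: "256Mi"
  nodeSelector:
    disktype: ssd               # evaluated by MatchNodeSelector
```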
4. Priority Policies. The scores from all matching priority functions are summed, and the node with the highest total is selected.
1) LeastRequested: prefers the nodes with the most free resources; score = (cpu((capacity - sum(requested)) * 10 / capacity) + memory((capacity - sum(requested)) * 10 / capacity)) / 2 (a worked example follows this list);
2) BalancedResourceAllocation: nodes whose CPU and memory utilization rates are closest to each other win. Used together with the function above: e.g. if the cpu and memory scores are both 2, they are close; this function exists to balance resource usage across nodes;
3) NodePreferAvoidPods: this function carries a very high priority; it makes pods tend to avoid the annotated node. It keys off the node annotation "scheduler.alpha.kubernetes.io/preferAvoidPods": a node without this annotation scores 10, with a weight of 10000;
4) TaintToleration: checks the pod's spec.tolerations list against the node's taints; the more entries that match, the lower the score;
5) SelectorSpreading: spreads pods out: it looks up the existing nodes running pods matched by the same service / replica set / stateful set selector as the current pod; nodes that already host such pods score lower;
6) InterPodAffinity: iterates over the pod's affinity terms and sums the weights of the terms the node can match; the larger the sum, the higher the score;
7) NodeAffinity: scores nodes by node affinity, matched via NodeSelector;
8) MostRequested: the counterpart of LeastRequested: it packs pods onto the most-used nodes first so as to keep other nodes free; not enabled by default;
9) NodeLabelPriority: looks only at the label itself: the node scores if the label exists and scores nothing if it does not; not enabled by default;
10) ImageLocality: scores by the total size of the images the pod needs that are already present on the node; not enabled by default.
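As a worked example of the LeastRequested formula above, assume a node with 4 CPU cores and 8Gi of memory on which already-scheduled pods request 2 cores and 2Gi (the numbers are hypothetical):

```
cpu score      = (4 - 2) * 10 / 4 = 5
memory score   = (8 - 2) * 10 / 8 = 7.5
LeastRequested = (5 + 7.5) / 2    = 6.25
```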
5. Advanced Scheduling Mechanisms. The following mechanisms influence node selection:
1) Node selectors
nodeName: pin the pod to one specific node by name (set the node name in pod.spec.nodeName);
nodeSelector: restrict the pod to a class of nodes by their labels (pod.spec.nodeSelector is matched against node labels);
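The examples in section 6 demonstrate only nodeSelector; a minimal nodeName sketch, assuming a node named docker78 as in the examples below, looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename-demo       # hypothetical name
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeName: docker78            # bypasses the scheduler; the kubelet on docker78 runs the pod directly
```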
2) Node affinity scheduling
pod.spec.affinity.nodeAffinity takes two forms: requiredDuringSchedulingIgnoredDuringExecution (hard affinity) and preferredDuringSchedulingIgnoredDuringExecution (soft affinity).
3) Pod affinity scheduling
If several pods should preferably run on the same node or the same rack, how does the scheduler know two pods share a location? It lets the scheduler place the first pod on an arbitrary node and then schedules the remaining pods onto that node; pod affinity is realized this way. Whether two places count as the "same location" is decided by the value of topologyKey, and because location can be judged along many dimensions the topologyKey value varies: topologyKey: kubernetes.io/hostname means location is judged by node hostname, while topologyKey: zone means it is judged by a custom node label zone, and so on. Like node affinity, pod affinity comes in hard (required) and soft (preferred) forms.
pod.spec.affinity.podAffinity likewise takes two forms: requiredDuringSchedulingIgnoredDuringExecution (hard) and preferredDuringSchedulingIgnoredDuringExecution (soft).
4) Pod anti-affinity scheduling
podAntiAffinity (similar in structure to nodeAffinity)
Policy | Match target | Supported operators | Topology-domain support | Purpose
---|---|---|---|---
nodeAffinity | node labels | In, NotIn, Exists, DoesNotExist, Gt, Lt | No | decides which nodes a Pod may be placed on
podAffinity | Pod labels | In, NotIn, Exists, DoesNotExist | Yes | decides which Pods a Pod may share a topology domain with
podAntiAffinity | Pod labels | In, NotIn, Exists, DoesNotExist | Yes | decides which Pods a Pod must not share a topology domain with
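Note from the table that Gt and Lt are supported only by nodeAffinity; they compare a node label's value as an integer. A hypothetical matchExpressions entry, assuming nodes carry a numeric cpu-cores label, would look like:

```yaml
matchExpressions:
- key: cpu-cores        # hypothetical numeric node label
  operator: Gt
  values: ["4"]         # matches nodes whose cpu-cores label value is greater than 4
```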
5) Taint scheduling (applied to nodes)
Taints are key:value attribute data defined on nodes (the node declares the taint).
Tolerations are key:value attribute data defined on pods (the pod declares which taints it tolerates).
A taint's effect attribute defines how strongly the node repels pods; its possible values are:
NoSchedule: affects scheduling only; existing Pods on the node are not affected
NoExecute: affects scheduling and existing Pods alike (pods that cannot tolerate the taint are evicted)
PreferNoSchedule: when a pod cannot tolerate the taint, prefer not to schedule it onto the node (a soft NoSchedule)
Syntax for adding a taint: kubectl taint node NODE_NAME key=value:NoSchedule
Syntax for removing a taint (the trailing "-" deletes the taint with that key): kubectl taint nodes slave2 name-
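To confirm what taints a node currently carries, inspect its description; a quick check against the node tainted in example 7 below:

```
# The Taints: field of the node description lists current taints:
kubectl describe node docker78 | grep -i taints
# Remove that taint again by key (trailing "-"):
kubectl taint node docker78 node-type-
```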
6. Examples
1) nodeSelector example:
```
[root@docker79 scheduler]# cat pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    inspiry.com/author: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    disktype: ssd
[root@docker79 scheduler]# kubectl apply -f pod-demo.yaml
pod/pod-demo created
[root@docker79 scheduler]# kubectl get nodes --show-labels
NAME       STATUS    ROLES     AGE       VERSION   LABELS
docker77   Ready     <none>    15d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=docker77
docker78   Ready     <none>    15d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=docker78
docker79   Ready     master    15d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=docker79,node-role.kubernetes.io/master=
[root@docker79 scheduler]# kubectl get pods -o wide
NAME       READY     STATUS    RESTARTS   AGE       IP           NODE       NOMINATED NODE
pod-demo   1/1       Running   0          1m        10.244.1.2   docker77   <none>
[root@docker79 scheduler]# kubectl delete -f pod-demo.yaml
pod "pod-demo" deleted
[root@docker79 scheduler]# vim pod-demo.yaml
[root@docker79 scheduler]# cat pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    inspiry.com/author: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    disktype: harddisk
[root@docker79 scheduler]# kubectl apply -f pod-demo.yaml
pod/pod-demo created
[root@docker79 scheduler]# kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
pod-demo   0/1       Pending   0          6s
[root@docker79 scheduler]#
```
Note: the pod stays in the Pending state. nodeSelector is a hard constraint, so as long as no node satisfies it the pod remains Pending and cannot reach Running.
```
[root@docker79 scheduler]# kubectl label nodes docker78 disktype=harddisk
node/docker78 labeled
```
Note: once docker78 is labeled, it satisfies the nodeSelector and the pod transitions to Running.
```
[root@docker79 scheduler]# kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
pod-demo   1/1       Running   0          1m
[root@docker79 scheduler]# kubectl delete -f pod-demo.yaml
pod "pod-demo" deleted
[root@docker79 scheduler]#
```
2) Node affinity, required example
```
[root@docker79 scheduler]# vim pod-nodeaffinity-demo.yaml
[root@docker79 scheduler]# cat pod-nodeaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    inspiry.com/author: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values: ["foo","bar"]
[root@docker79 scheduler]# kubectl apply -f pod-nodeaffinity-demo.yaml
pod/pod-node-affinity-demo created
[root@docker79 scheduler]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
pod-node-affinity-demo   0/1       Pending   0          8s
[root@docker79 scheduler]#
```
Note: requiredDuringSchedulingIgnoredDuringExecution is hard affinity; the pod may only run on a node that satisfies the condition, so with no matching node it stays Pending.
```
[root@docker79 scheduler]# kubectl delete -f pod-nodeaffinity-demo.yaml
pod "pod-node-affinity-demo" deleted
[root@docker79 scheduler]#
```
3) Node affinity, preferred example
```
[root@docker79 scheduler]# vim pod-nodeaffinity-demo-2.yaml
[root@docker79 scheduler]# cat pod-nodeaffinity-demo-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    inspiry.com/author: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone
            operator: In
            values: ["foo","bar"]
        weight: 60
[root@docker79 scheduler]# kubectl apply -f pod-nodeaffinity-demo-2.yaml
pod/pod-node-affinity-demo created
[root@docker79 scheduler]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
pod-node-affinity-demo   1/1       Running   0          5s
[root@docker79 scheduler]#
```
Note: preferredDuringSchedulingIgnoredDuringExecution is soft affinity; the scheduler tries to place the pod on a matching node, but when none matches the pod may still run on another node, so it reaches Running.
```
[root@docker79 scheduler]# kubectl delete -f pod-nodeaffinity-demo-2.yaml
pod "pod-node-affinity-demo" deleted
[root@docker79 scheduler]#
```
4) Pod affinity, required (location judged by hostname) example
```
[root@docker79 scheduler]# vim pod-required-affinity-demo.yaml
[root@docker79 scheduler]# cat pod-required-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
        labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
[root@docker79 scheduler]# kubectl apply -f pod-required-affinity-demo.yaml
pod/pod-first unchanged
pod/pod-second created
[root@docker79 scheduler]# kubectl get pods -o wide
NAME         READY     STATUS    RESTARTS   AGE       IP           NODE       NOMINATED NODE
pod-first    1/1       Running   0          4m        10.244.2.7   docker78   <none>
pod-second   1/1       Running   0          22s       10.244.2.8   docker78   <none>
[root@docker79 scheduler]# kubectl delete -f pod-required-affinity-demo.yaml
pod "pod-first" deleted
pod "pod-second" deleted
[root@docker79 scheduler]#
```
Note: because of the affinity rule the two pods run on the same node (the second pod is scheduled onto the node that runs pods labeled app=myapp; "same location" is judged by hostname).
5) Pod anti-affinity, required (location judged by hostname) example
```
[root@docker79 scheduler]# cp pod-required-affinity-demo.yaml pod-required-antiaffinity-demo.yaml
[root@docker79 scheduler]# vim pod-required-antiaffinity-demo.yaml
[root@docker79 scheduler]# cat pod-required-antiaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
        labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
[root@docker79 scheduler]# kubectl apply -f pod-required-antiaffinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@docker79 scheduler]# kubectl get pods -o wide
NAME         READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
pod-first    1/1       Running   0          9s        10.244.2.11   docker78   <none>
pod-second   1/1       Running   0          9s        10.244.1.3    docker77   <none>
[root@docker79 scheduler]#
```
Note: because of the anti-affinity rule, the two pods may not run on the same node.
```
[root@docker79 scheduler]# kubectl delete -f pod-required-antiaffinity-demo.yaml
pod "pod-first" deleted
pod "pod-second" deleted
[root@docker79 scheduler]#
```
6) Pod anti-affinity, required (location judged by a node label) example
```
[root@docker79 ~]# kubectl get nodes --show-labels
NAME       STATUS    ROLES     AGE       VERSION   LABELS
docker77   Ready     <none>    16d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=docker77
docker78   Ready     <none>    16d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=harddisk,kubernetes.io/hostname=docker78
docker79   Ready     master    16d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=docker79,node-role.kubernetes.io/master=docker79
[root@docker79 ~]# kubectl label nodes docker77 zone=foo
node/docker77 labeled
[root@docker79 ~]# kubectl label nodes docker78 zone=foo
node/docker78 labeled
[root@docker79 scheduler]# vim pod-required-antiaffinity-demo.yaml
[root@docker79 scheduler]# cat pod-required-antiaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: zone
        labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
[root@docker79 scheduler]# kubectl apply -f pod-required-antiaffinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@docker79 scheduler]# kubectl get pods
NAME         READY     STATUS    RESTARTS   AGE
pod-first    1/1       Running   0          8s
pod-second   0/1       Pending   0          8s
```
Note: docker77 and docker78 both carry the label zone=foo, and the anti-affinity rule uses topologyKey zone, so the two nodes count as one topology domain: one pod goes Running and the other stays Pending.
```
[root@docker79 scheduler]# kubectl delete -f pod-required-antiaffinity-demo.yaml
pod "pod-first" deleted
pod "pod-second" deleted
[root@docker79 scheduler]#
```
7) Taint scheduling: adding taints to a node
```
[root@docker79 scheduler]# kubectl taint node docker78 node-type=production:NoSchedule
node/docker78 tainted
[root@docker79 scheduler]# kubectl describe node docker78
```
Note: the commands above add a taint to docker78 with effect NoSchedule.
Define a Deployment, as shown below:
```
[root@docker79 scheduler]# cp ../deploy-demo.yaml ./
[root@docker79 scheduler]# vim deploy-demo.yaml
[root@docker79 scheduler]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@docker79 scheduler]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@docker79 scheduler]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP           NODE       NOMINATED NODE
myapp-deploy-69b47bc96d-2phfn   1/1       Running   0          8s        10.244.1.5   docker77   <none>
myapp-deploy-69b47bc96d-msfwq   1/1       Running   0          8s        10.244.1.4   docker77   <none>
[root@docker79 scheduler]#
```
Note: this is an ordinary Deployment. Its pods define no tolerations and so cannot tolerate any tainted node; as a result, both pods run on docker77.
Now add a taint to docker77 with effect NoExecute, as shown below:
```
[root@docker79 scheduler]# kubectl taint node docker77 node-type=dev:NoExecute
node/docker77 tainted
[root@docker79 scheduler]# kubectl get pods -o wide
NAME                            READY     STATUS        RESTARTS   AGE       IP           NODE       NOMINATED NODE
myapp-deploy-69b47bc96d-2phfn   0/1       Terminating   0          1m        10.244.1.5   docker77   <none>
myapp-deploy-69b47bc96d-6sv54   0/1       Pending       0          5s        <none>       <none>     <none>
myapp-deploy-69b47bc96d-z7qqh   0/1       Pending       0          5s        <none>       <none>     <none>
[root@docker79 scheduler]#
```
Note: once docker77 carries a taint with effect NoExecute, the existing pods are evicted; since the pods now match no node at all, they remain Pending.
8) Taint scheduling: configuring tolerations on a pod
```
[root@docker79 scheduler]# vim deploy-demo.yaml
[root@docker79 scheduler]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Equal"
        value: "production"
        effect: "NoSchedule"
[root@docker79 scheduler]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy configured
[root@docker79 scheduler]# kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
myapp-deploy-98fddd79f-grf77   1/1       Running   0          10s       10.244.2.16   docker78   <none>
myapp-deploy-98fddd79f-xgtj6   1/1       Running   0          12s       10.244.2.15   docker78   <none>
[root@docker79 scheduler]#
```
Note: the pods tolerate the taint node-type=production with effect NoSchedule, so only docker78 matches.
If only the "node-type" key is tolerated, without a value, as shown below:
```
[root@docker79 scheduler]# vim deploy-demo.yaml
[root@docker79 scheduler]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: "NoSchedule"
[root@docker79 scheduler]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy configured
[root@docker79 scheduler]# kubectl get pods -o wide
NAME                            READY     STATUS        RESTARTS   AGE       IP            NODE       NOMINATED NODE
myapp-deploy-7dd988dc9d-dg5zr   1/1       Running       0          3s        10.244.2.17   docker78   <none>
myapp-deploy-7dd988dc9d-pvw6r   1/1       Running       0          2s        10.244.2.18   docker78   <none>
myapp-deploy-98fddd79f-grf77    0/1       Terminating   0          4m        10.244.2.16   docker78   <none>
myapp-deploy-98fddd79f-xgtj6    1/1       Terminating   0          4m        10.244.2.15   docker78   <none>
[root@docker79 scheduler]#
```
Note: any node carrying a taint whose key is "node-type" and whose effect is NoSchedule matches, so only docker78 matches and docker77 does not. (The output above shows the old pods being deleted and new ones created.)
If the Deployment tolerates the "node-type" key with no value and an empty effect, as shown below:
```
[root@docker79 scheduler]# vim deploy-demo.yaml
[root@docker79 scheduler]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: ""
[root@docker79 scheduler]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy configured
[root@docker79 scheduler]# kubectl get pods -o wide
NAME                            READY     STATUS              RESTARTS   AGE       IP            NODE       NOMINATED NODE
myapp-deploy-7dd988dc9d-dg5zr   1/1       Running             0          2m        10.244.2.17   docker78   <none>
myapp-deploy-7dd988dc9d-pvw6r   0/1       Terminating         0          2m        10.244.2.18   docker78   <none>
myapp-deploy-f9f87c46d-9skm9    0/1       ContainerCreating   0          1s        <none>        docker77   <none>
myapp-deploy-f9f87c46d-qr4d9    1/1       Running             0          3s        10.244.2.19   docker78   <none>
[root@docker79 scheduler]#
```
Note: any node with a "node-type" taint matches, regardless of effect, so both docker77 and docker78 match.
If the effect in the example above is changed to NoExecute, as shown below:
```
[root@docker79 scheduler]# vim deploy-demo.yaml
[root@docker79 scheduler]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: "NoExecute"
[root@docker79 scheduler]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy configured
[root@docker79 scheduler]# kubectl get pods -o wide
NAME                            READY     STATUS        RESTARTS   AGE       IP            NODE       NOMINATED NODE
myapp-deploy-765984bf98-hwdkv   1/1       Running       0          4s        10.244.1.7    docker77   <none>
myapp-deploy-765984bf98-mfr4j   1/1       Running       0          2s        10.244.1.8    docker77   <none>
myapp-deploy-f9f87c46d-9skm9    0/1       Terminating   0          1m        10.244.1.6    docker77   <none>
myapp-deploy-f9f87c46d-qr4d9    0/1       Terminating   0          1m        10.244.2.19   docker78   <none>
[root@docker79 scheduler]#
```
Note: with the tolerated effect set to NoExecute, only docker77 (whose taint has effect NoExecute) matches.