Kubernetes (K8S): Node Affinity/Anti-Affinity and Pod Affinity/Anti-Affinity Explained with Examples
| Hostname | OS version | Spec | Internal IP | External IP (simulated) |
| --- | --- | --- | --- | --- |
| k8s-master | CentOS 7.7 | 2C/4G/20G | 172.16.1.110 | 10.0.0.110 |
| k8s-node01 | CentOS 7.7 | 2C/4G/20G | 172.16.1.111 | 10.0.0.111 |
| k8s-node02 | CentOS 7.7 | 2C/4G/20G | 172.16.1.112 | 10.0.0.112 |
nodeSelector provides a very simple way to constrain pods to nodes with particular labels. Affinity/anti-affinity greatly expands the kinds of constraints you can express. The key enhancements are:

1. The affinity/anti-affinity language is more expressive. Besides exact matches combined with a logical AND, it offers additional matching rules;

2. A rule can be marked as a preference rather than a hard requirement, so the pod is still scheduled even when the scheduler cannot satisfy it;

3. Constraints can be expressed against the labels of pods already running on a node (or in another topology domain) rather than against the node's own labels, which controls which pods may or may not be co-located.

Affinity comes in two flavors: node affinity/anti-affinity and pod affinity/anti-affinity. Pod affinity/anti-affinity constraints match pod labels rather than node labels.

What is a topology domain: multiple nodes that carry the same label [the label key and the value are both the same] belong to the same topology domain. ★★★★★
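A minimal sketch of how topology domains arise from node labels (the node names nodeA/nodeB/nodeC and the zone values are hypothetical, chosen only for illustration):

```bash
# nodeA and nodeB share zone=bj, so for the key "zone" they form one topology domain;
# nodeC carries zone=sh, so for the same key it sits in a different topology domain.
kubectl label nodes nodeA zone=bj
kubectl label nodes nodeB zone=bj
kubectl label nodes nodeC zone=sh
```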
There are currently two types of node affinity, called requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, which can be thought of as "hard" [the condition must be met] and "soft" [the condition is preferred] requirements.

The former means the node the Pod is scheduled onto must satisfy the rule; if no node does, the pod is not scheduled and stays Pending. The latter means nodes that satisfy the rule are preferred, and if none do, the pod is scheduled onto some other node.

The IgnoredDuringExecution part of the name means that, much like how nodeSelector works, if a node's labels change at runtime so that a pod's affinity rule is no longer satisfied, the pod keeps running on that node.

In the future there are plans to offer requiredDuringSchedulingRequiredDuringExecution, which is like requiredDuringSchedulingIgnoredDuringExecution except that if a node stops satisfying a pod's affinity rule while the pod is running, the pod is evicted from that node.
Node affinity supports the following operators: In, NotIn, Exists, DoesNotExist, Gt, Lt. NotIn and DoesNotExist can be used to achieve node anti-affinity behavior.
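As a hedged illustration of the anti-affinity behaviour via NotIn (this fragment is not one of the files used later in this post), a term like the following keeps pods off nodes labelled disk-type=sata:

```yaml
# Fragment of a pod spec: avoid nodes whose disk-type label is sata
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disk-type
          operator: NotIn
          values:
          - sata
```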
How the pieces combine:

1. If you specify both nodeSelector and nodeAffinity, both must be satisfied before the Pod can be scheduled onto a candidate node (see the sketch after this list).

2. If you write the nodeSelectorTerms key more than once under nodeAffinity, only the last occurrence takes effect [there should not be multiple such objects; with duplicates, only the last one wins].

3. If you specify multiple matchExpressions lists (i.e. multiple terms) under nodeSelectorTerms, the pod can be scheduled onto a node as long as any one of them is satisfied [for node hard affinity].

4. If a single matchExpressions contains multiple key entries, all of them must be satisfied before the pod can be scheduled onto a node [for hard affinity].

5. Within one key, the key is considered satisfied as long as any one of its values matches.

6. If a pod is already scheduled on a node, deleting or changing that node's labels does not remove the pod. In other words, affinity selection only takes effect at scheduling time.

7. The weight field in preferredDuringSchedulingIgnoredDuringExecution ranges from 1 to 100. For each node that meets all scheduling requirements (resource requests, RequiredDuringScheduling affinity expressions, and so on), the scheduler iterates over the elements of this field and adds the weight to a sum whenever the node matches the corresponding matchExpressions. That score is then combined with the scores from the node's other priority functions; the node with the highest total score is preferred.
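The sketch below illustrates rule 1. It is a hypothetical Pod (the name demo-pod and the choice of labels are just for illustration) that is only schedulable onto nodes that satisfy both the nodeSelector and the nodeAffinity term:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                 # hypothetical name, for illustration only
spec:
  # rule 1: nodeSelector AND nodeAffinity must both be satisfied
  nodeSelector:
    disk-type: ssd               # the node must carry disk-type=ssd
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cpu-num         # ... and must also carry a cpu-num label
            operator: Exists
  containers:
  - name: myapp-pod
    image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
```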
Label the nodes
```bash
### --overwrite overwrites an existing label
kubectl label nodes k8s-node01 disk-type=ssd --overwrite
kubectl label nodes k8s-node01 cpu-num=12

kubectl label nodes k8s-node02 disk-type=sata
kubectl label nodes k8s-node02 cpu-num=24
```
Check the labels on all nodes
```bash
[root@k8s-master ~]# kubectl get node -o wide --show-labels
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME   LABELS
k8s-master   Ready    master   43d   v1.17.4   172.16.1.110   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node01   Ready    <none>   43d   v1.17.4   172.16.1.111   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cpu-num=12,disk-type=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02   Ready    <none>   43d   v1.17.4   172.16.1.112   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cpu-num=24,disk-type=sata,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
```
As shown above, k8s-node01 was given the labels cpu-num=12 and disk-type=ssd, and k8s-node02 the labels cpu-num=24 and disk-type=sata.
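An optional quick check (not part of the original walkthrough) is to display just these two labels as extra columns:

```bash
# -L/--label-columns prints the given label keys as columns in the node listing
kubectl get nodes -L disk-type,cpu-num
```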
Node hard affinity: the condition must be satisfied for the pod to be scheduled; otherwise it is not scheduled.
The YAML file to run
```bash
[root@k8s-master nodeAffinity]# pwd
/root/k8s_practice/scheduler/nodeAffinity
[root@k8s-master nodeAffinity]# cat node_required_affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-affinity-deploy
  labels:
    app: nodeaffinity-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              # the node must carry the label disk-type=ssd or disk-type=sas
              - key: disk-type
                operator: In
                values:
                - ssd
                - sas
              # the node must carry a cpu-num label with a value greater than 6
              - key: cpu-num
                operator: Gt
                values:
                - "6"
```
Apply the YAML file and check the status
```bash
[root@k8s-master nodeAffinity]# kubectl apply -f node_required_affinity.yaml
deployment.apps/node-affinity-deploy created
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get deploy -o wide
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
node-affinity-deploy   5/5     5            5           6s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get rs -o wide
NAME                              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
node-affinity-deploy-5c88ffb8ff   5         5         5       11s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=5c88ffb8ff
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get pod -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
node-affinity-deploy-5c88ffb8ff-2mbfl   1/1     Running   0          15s   10.244.4.237   k8s-node01   <none>           <none>
node-affinity-deploy-5c88ffb8ff-9hjhk   1/1     Running   0          15s   10.244.4.235   k8s-node01   <none>           <none>
node-affinity-deploy-5c88ffb8ff-9rg75   1/1     Running   0          15s   10.244.4.239   k8s-node01   <none>           <none>
node-affinity-deploy-5c88ffb8ff-pqtfh   1/1     Running   0          15s   10.244.4.236   k8s-node01   <none>           <none>
node-affinity-deploy-5c88ffb8ff-zqpl8   1/1     Running   0          15s   10.244.4.238   k8s-node01   <none>           <none>
```
As can be seen above, given the labels applied earlier, it is easy to infer that these pods can only be scheduled onto k8s-node01.
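An optional cross-check (assuming the labels above) is a label-selector query; note that kubectl label selectors have no equivalent of the Gt operator, so only the disk-type half of the rule is checked here:

```bash
# only k8s-node01 should be listed, since it alone carries disk-type=ssd (or sas)
kubectl get nodes -l 'disk-type in (ssd, sas)'
```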
Even if we delete the existing ReplicaSet, the pods of the regenerated ReplicaSet are still scheduled onto k8s-node01, as shown below:
```bash
[root@k8s-master nodeAffinity]# kubectl delete rs node-affinity-deploy-5c88ffb8ff
replicaset.apps "node-affinity-deploy-5c88ffb8ff" deleted
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get rs -o wide
NAME                              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
node-affinity-deploy-5c88ffb8ff   5         5         2       4s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=5c88ffb8ff
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get pod -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
node-affinity-deploy-5c88ffb8ff-2v2tb   1/1     Running   0          11s   10.244.4.241   k8s-node01   <none>           <none>
node-affinity-deploy-5c88ffb8ff-gl4fm   1/1     Running   0          11s   10.244.4.240   k8s-node01   <none>           <none>
node-affinity-deploy-5c88ffb8ff-j26rg   1/1     Running   0          11s   10.244.4.244   k8s-node01   <none>           <none>
node-affinity-deploy-5c88ffb8ff-vhzmn   1/1     Running   0          11s   10.244.4.243   k8s-node01   <none>           <none>
node-affinity-deploy-5c88ffb8ff-xxj8m   1/1     Running   0          11s   10.244.4.242   k8s-node01   <none>           <none>
```
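Since the next example reuses the same Deployment name and its apply output again reports "created", the Deployment from this example is presumably removed first; a cleanup along these lines would do it (the same presumably applies between the later examples):

```bash
kubectl delete -f node_required_affinity.yaml
```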
Node soft affinity: nodes that satisfy the condition are preferred; if none do, the pod is still scheduled onto other nodes.
The YAML file to run
```bash
[root@k8s-master nodeAffinity]# pwd
/root/k8s_practice/scheduler/nodeAffinity
[root@k8s-master nodeAffinity]# cat node_preferred_affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-affinity-deploy
  labels:
    app: nodeaffinity-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              # prefer nodes with the label disk-type=ssd or disk-type=sas
              - key: disk-type
                operator: In
                values:
                - ssd
                - sas
          - weight: 50
            preference:
              matchExpressions:
              # prefer nodes with a cpu-num label whose value is greater than 16
              - key: cpu-num
                operator: Gt
                values:
                - "16"
```
Apply the YAML file and check the status
```bash
[root@k8s-master nodeAffinity]# kubectl apply -f node_preferred_affinity.yaml
deployment.apps/node-affinity-deploy created
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get deploy -o wide
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
node-affinity-deploy   5/5     5            5           9s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get rs -o wide
NAME                             DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
node-affinity-deploy-d5d9cbc8d   5         5         5       13s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=d5d9cbc8d
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get pod -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
node-affinity-deploy-d5d9cbc8d-bv86t   1/1     Running   0          18s   10.244.2.243   k8s-node02   <none>           <none>
node-affinity-deploy-d5d9cbc8d-dnbr8   1/1     Running   0          18s   10.244.2.244   k8s-node02   <none>           <none>
node-affinity-deploy-d5d9cbc8d-ldq82   1/1     Running   0          18s   10.244.2.246   k8s-node02   <none>           <none>
node-affinity-deploy-d5d9cbc8d-nt74q   1/1     Running   0          18s   10.244.4.2     k8s-node01   <none>           <none>
node-affinity-deploy-d5d9cbc8d-rt5nb   1/1     Running   0          18s   10.244.2.245   k8s-node02   <none>           <none>
```
As can be seen above, given the labels applied earlier, it is easy to infer that the pods are [preferentially] scheduled onto k8s-node02: k8s-node01 only matches the disk-type preference (weight 1), while k8s-node02 matches the cpu-num greater than 16 preference (weight 50), so k8s-node02 scores higher.
Using hard affinity and soft affinity together
The YAML file to run
```bash
[root@k8s-master nodeAffinity]# pwd
/root/k8s_practice/scheduler/nodeAffinity
[root@k8s-master nodeAffinity]# cat node_affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-affinity-deploy
  labels:
    app: nodeaffinity-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            # the node must carry a cpu-num label with a value greater than 10
            - matchExpressions:
              - key: cpu-num
                operator: Gt
                values:
                - "10"
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50
            preference:
              matchExpressions:
              # prefer nodes with the label disk-type=ssd or disk-type=sas
              - key: disk-type
                operator: In
                values:
                - ssd
                - sas
```
Apply the YAML file and check the status
```bash
[root@k8s-master nodeAffinity]# kubectl apply -f node_affinity.yaml
deployment.apps/node-affinity-deploy created
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get deploy -o wide
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
node-affinity-deploy   5/5     5            5           9s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get rs -o wide
NAME                             DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
node-affinity-deploy-f9cb9b99b   5         5         5       13s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=f9cb9b99b
[root@k8s-master nodeAffinity]#
[root@k8s-master nodeAffinity]# kubectl get pod -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
node-affinity-deploy-f9cb9b99b-8w2nc   1/1     Running   0          17s   10.244.4.10    k8s-node01   <none>           <none>
node-affinity-deploy-f9cb9b99b-csk2s   1/1     Running   0          17s   10.244.4.9     k8s-node01   <none>           <none>
node-affinity-deploy-f9cb9b99b-g42kq   1/1     Running   0          17s   10.244.4.8     k8s-node01   <none>           <none>
node-affinity-deploy-f9cb9b99b-m6xbv   1/1     Running   0          17s   10.244.4.7     k8s-node01   <none>           <none>
node-affinity-deploy-f9cb9b99b-mxbdp   1/1     Running   0          17s   10.244.2.253   k8s-node02   <none>           <none>
```
As can be seen above, given the labels applied earlier, it is easy to infer that both k8s-node01 and k8s-node02 satisfy the hard requirement (cpu-num greater than 10), but the pods are [preferentially] scheduled onto k8s-node01, since only it matches the disk-type=ssd/sas preference.
As with node affinity, pod affinity/anti-affinity currently comes in two types, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, representing "hard" and "soft" requirements respectively. For the hard requirement, a pod that cannot be satisfied stays Pending.

Pod affinity and anti-affinity decide which nodes are suitable for your pod based on the labels of pods already running on those nodes (rather than the nodes' own labels).

A rule takes the form: this pod should run in X if X is already running one or more pods that satisfy rule Y (or, in the anti-affinity case, should not run in X). The pods must be in the same namespace, otherwise affinity/anti-affinity has no effect. Conceptually, X is a topology domain. It is expressed with topologyKey, whose value is the key of a node label that the system uses to denote such a domain. There is also an implicit condition: nodes belong to the same topology domain only when both the label key and its value match; if only the key matches but the values differ, the nodes are in different topology domains. ★★★★★

In other words: pod affinity/anti-affinity scheduling is delimited by topology domains, not by individual nodes. ★★★★★
1. Inter-pod affinity/anti-affinity requires a significant amount of processing, which can noticeably slow down scheduling in large clusters. It is not recommended in clusters larger than several hundred nodes.

2. Pod anti-affinity requires nodes to be consistently labelled; in other words, every node in the cluster must carry an appropriate label matching topologyKey. If some or all nodes are missing the specified topologyKey label, unexpected behavior may result.
An example of requiredDuringSchedulingIgnoredDuringExecution affinity is "place the pods of service A and service B in the same zone [topology domain], because they communicate heavily"; an example of preferredDuringSchedulingIgnoredDuringExecution anti-affinity is "spread the pods of this service across zones [topology domains]" [a hard requirement would not make sense here, since you may well have more pods than zones].
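A hedged sketch of the second scenario (spreading a service's pods, here assumed to carry the label app=web-store, across topology domains identified by a hypothetical zone node label):

```yaml
# Fragment of a pod template: prefer not to share a "zone" domain with other app=web-store pods
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web-store
        topologyKey: zone        # hypothetical node label key marking the zone
```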
Pod affinity/anti-affinity supports the following operators: In, NotIn, Exists, DoesNotExist.
In principle, topologyKey can be any legal label key. However, for performance and security reasons there are some restrictions on topologyKey:

1. For pod affinity, topologyKey must not be empty in either requiredDuringSchedulingIgnoredDuringExecution or preferredDuringSchedulingIgnoredDuringExecution.

2. For pod anti-affinity, topologyKey likewise must not be empty in either requiredDuringSchedulingIgnoredDuringExecution or preferredDuringSchedulingIgnoredDuringExecution.

3. For requiredDuringSchedulingIgnoredDuringExecution pod anti-affinity, the admission controller LimitPodHardAntiAffinityTopology was introduced to restrict topologyKey to kubernetes.io/hostname. If you want it to be usable with custom topologies, you can modify the admission controller or simply disable it.

4. Apart from the cases above, topologyKey can be any legal label key.
Inter-pod affinity is specified through the podAffinity field under affinity in the PodSpec, and inter-pod anti-affinity through the podAntiAffinity field under affinity in the PodSpec.
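Structurally (a skeleton only, with the selector terms elided), the two fields sit side by side under affinity in the pod spec:

```yaml
spec:
  affinity:
    podAffinity:       # attract: co-locate with pods matching the listed terms
      requiredDuringSchedulingIgnoredDuringExecution: []
    podAntiAffinity:   # repel: avoid topology domains holding pods matching the listed terms
      preferredDuringSchedulingIgnoredDuringExecution: []
```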
If the matchExpressions associated with a pod affinity/anti-affinity requiredDuringSchedulingIgnoredDuringExecution term lists multiple keys, all of them must be satisfied before the pod can be scheduled into a topology domain [for pod hard affinity].
To better demonstrate pod affinity and anti-affinity, the following examples also bring the k8s-master node into play.
Label the nodes
```bash
# Remove the labels added earlier
kubectl label nodes k8s-node01 cpu-num-
kubectl label nodes k8s-node01 disk-type-
kubectl label nodes k8s-node02 cpu-num-
kubectl label nodes k8s-node02 disk-type-


### --overwrite overwrites an existing label
# k8s-master labels
kubectl label nodes k8s-master busi-use=www --overwrite
kubectl label nodes k8s-master disk-type=ssd --overwrite
kubectl label nodes k8s-master busi-db=redis

# k8s-node01 labels
kubectl label nodes k8s-node01 busi-use=www
kubectl label nodes k8s-node01 disk-type=sata
kubectl label nodes k8s-node01 busi-db=redis

# k8s-node02 labels
kubectl label nodes k8s-node02 busi-use=www
kubectl label nodes k8s-node02 disk-type=ssd
kubectl label nodes k8s-node02 busi-db=etcd
```
Check the labels on all nodes
```bash
[root@k8s-master ~]# kubectl get node -o wide --show-labels
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME   LABELS
k8s-master   Ready    master   28d   v1.17.4   172.16.1.110   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,busi-db=redis,busi-use=www,disk-type=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node01   Ready    <none>   28d   v1.17.4   172.16.1.111   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,busi-db=redis,busi-use=www,disk-type=sata,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02   Ready    <none>   28d   v1.17.4   172.16.1.112   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,busi-db=etcd,busi-use=www,disk-type=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
```
As shown above: k8s-master now carries the labels disk-type=ssd, busi-db=redis, busi-use=www;
k8s-node01 carries disk-type=sata, busi-db=redis, busi-use=www;
k8s-node02 carries disk-type=ssd, busi-db=etcd, busi-use=www.
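Since the examples below use disk-type (and, in the last one, busi-db) as the topologyKey, it can help to confirm which nodes share a domain (an optional check, not part of the original walkthrough):

```bash
# k8s-master and k8s-node02 share the disk-type=ssd topology domain
kubectl get nodes -l disk-type=ssd
# k8s-master and k8s-node01 share the busi-db=redis topology domain
kubectl get nodes -l busi-db=redis
```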
Run a pod via a Deployment (running a bare pod directly would also work). It serves as the basis for the pod affinity and anti-affinity tests that follow.
```bash
### YAML file
[root@k8s-master podAffinity]# pwd
/root/k8s_practice/scheduler/podAffinity
[root@k8s-master podAffinity]# cat web_deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
  labels:
    app: myweb-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
        version: v1
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@k8s-master podAffinity]#
### Apply the YAML file
[root@k8s-master podAffinity]# kubectl apply -f web_deploy.yaml
deployment.apps/web-deploy created
[root@k8s-master podAffinity]#
### Check the pod labels
[root@k8s-master podAffinity]# kubectl get pod -o wide --show-labels
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES   LABELS
web-deploy-5ccc9d7c55-kkwst   1/1     Running   0          15m   10.244.2.4   k8s-node02   <none>           <none>            app=myapp-web,pod-template-hash=5ccc9d7c55,version=v1
```
This pod currently sits on k8s-node02; its labels app=myapp-web and version=v1 are used in the pod affinity/anti-affinity examples below.
Pod hard affinity: the YAML file to run
```bash
[root@k8s-master podAffinity]# pwd
/root/k8s_practice/scheduler/podAffinity
[root@k8s-master podAffinity]# cat pod_required_affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-podaffinity-deploy
  labels:
    app: podaffinity-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      # allow scheduling on the master node
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              # because this is pod affinity/anti-affinity, the match rules here refer to pod labels
              matchExpressions:
              - key: app
                operator: In
                values:
                - myapp-web
            # topology domain: nodes with the same label (same key and value) are in the same topology domain
            # compare the scheduling results for the two different topology domains below
            #topologyKey: busi-use
            topologyKey: disk-type
```
Apply the YAML file and check the status
```bash
[root@k8s-master podAffinity]# kubectl apply -f pod_required_affinity.yaml
deployment.apps/pod-podaffinity-deploy created
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get deploy -o wide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podaffinity-deploy   6/6     6            6           48s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
web-deploy               1/1     1            1           22h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get rs -o wide
NAME                                DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podaffinity-deploy-848559bf5b   6         6         6       52s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=848559bf5b
web-deploy-5ccc9d7c55               1         1         1       22h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web,pod-template-hash=5ccc9d7c55
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
pod-podaffinity-deploy-848559bf5b-8kkwm   1/1     Running   0          54s   10.244.0.80    k8s-master   <none>           <none>
pod-podaffinity-deploy-848559bf5b-8s59f   1/1     Running   0          54s   10.244.2.252   k8s-node02   <none>           <none>
pod-podaffinity-deploy-848559bf5b-8z4dv   1/1     Running   0          54s   10.244.2.253   k8s-node02   <none>           <none>
pod-podaffinity-deploy-848559bf5b-gs7sb   1/1     Running   0          54s   10.244.0.79    k8s-master   <none>           <none>
pod-podaffinity-deploy-848559bf5b-sm6nz   1/1     Running   0          54s   10.244.0.78    k8s-master   <none>           <none>
pod-podaffinity-deploy-848559bf5b-zbr6v   1/1     Running   0          54s   10.244.2.251   k8s-node02   <none>           <none>
web-deploy-5ccc9d7c55-khhrr               1/1     Running   3          22h   10.244.2.245   k8s-node02   <none>           <none>
```
As can be seen above, the YAML uses topologyKey: disk-type. Although k8s-master, k8s-node01 and k8s-node02 all carry a disk-type label, its value on k8s-master and k8s-node02 is ssd while on k8s-node01 it is sata. k8s-master and k8s-node02 therefore form one topology domain, and since the target pod (app=myapp-web) runs on k8s-node02, the new pods are scheduled only onto those two nodes. (With the commented-out topologyKey: busi-use, all three nodes would share one busi-use=www domain and the pods could land on any of them.)
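Before running the next example, the Deployment from this one is presumably deleted (the next apply reports "created" for the same name); a cleanup such as the following would do it, and the same applies between the remaining examples:

```bash
kubectl delete -f pod_required_affinity.yaml
```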
Pod soft affinity: the YAML file to run
```bash
[root@k8s-master podAffinity]# pwd
/root/k8s_practice/scheduler/podAffinity
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# cat pod_preferred_affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-podaffinity-deploy
  labels:
    app: podaffinity-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      # allow scheduling on the master node
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                # because this is pod affinity/anti-affinity, the match rules here refer to pod labels
                matchExpressions:
                - key: version
                  operator: In
                  values:
                  - v1
                  - v2
              # topology domain: nodes with the same label (same key and value) are in the same topology domain
              topologyKey: disk-type
```
Apply the YAML file and check the status
```bash
[root@k8s-master podAffinity]# kubectl apply -f pod_preferred_affinity.yaml
deployment.apps/pod-podaffinity-deploy created
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get deploy -o wide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podaffinity-deploy   6/6     6            6           75s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
web-deploy               1/1     1            1           25h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get rs -o wide
NAME                                DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podaffinity-deploy-8474b4b586   6         6         6       79s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=8474b4b586
web-deploy-5ccc9d7c55               1         1         1       25h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web,pod-template-hash=5ccc9d7c55
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
pod-podaffinity-deploy-8474b4b586-57gxh   1/1     Running   0          83s   10.244.2.4     k8s-node02   <none>           <none>
pod-podaffinity-deploy-8474b4b586-kd5l4   1/1     Running   0          83s   10.244.2.3     k8s-node02   <none>           <none>
pod-podaffinity-deploy-8474b4b586-mlvv7   1/1     Running   0          83s   10.244.0.84    k8s-master   <none>           <none>
pod-podaffinity-deploy-8474b4b586-mtk6r   1/1     Running   0          83s   10.244.0.86    k8s-master   <none>           <none>
pod-podaffinity-deploy-8474b4b586-n5jpj   1/1     Running   0          83s   10.244.0.85    k8s-master   <none>           <none>
pod-podaffinity-deploy-8474b4b586-q2xdl   1/1     Running   0          83s   10.244.3.22    k8s-node01   <none>           <none>
web-deploy-5ccc9d7c55-khhrr               1/1     Running   3          25h   10.244.2.245   k8s-node02   <none>           <none>
```
As can be seen above, given the labels on k8s-master, k8s-node01 and k8s-node02, it is easy to infer that the pods are preferentially scheduled onto k8s-master and k8s-node02: the version=v1 pod runs on k8s-node02, and the disk-type=ssd topology domain containing it also includes k8s-master.
Pod hard anti-affinity: the YAML file to run
```bash
[root@k8s-master podAffinity]# pwd
/root/k8s_practice/scheduler/podAffinity
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# cat pod_required_AntiAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-podantiaffinity-deploy
  labels:
    app: podantiaffinity-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      # allow scheduling on the master node
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              # because this is pod affinity/anti-affinity, the match rules here refer to pod labels
              matchExpressions:
              - key: app
                operator: In
                values:
                - myapp-web
            # topology domain: nodes with the same label (same key and value) are in the same topology domain
            topologyKey: disk-type
```
Apply the YAML file and check the status
```bash
[root@k8s-master podAffinity]# kubectl apply -f pod_required_AntiAffinity.yaml
deployment.apps/pod-podantiaffinity-deploy created
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get deploy -o wide
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podantiaffinity-deploy   6/6     6            6           68s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
web-deploy                   1/1     1            1           25h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get rs -o wide
NAME                                    DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podantiaffinity-deploy-5fb4764b6b   6         6         6       72s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=5fb4764b6b
web-deploy-5ccc9d7c55                   1         1         1       25h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web,pod-template-hash=5ccc9d7c55
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get pod -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
pod-podantiaffinity-deploy-5fb4764b6b-b5bzd   1/1     Running   0          75s   10.244.3.28    k8s-node01   <none>           <none>
pod-podantiaffinity-deploy-5fb4764b6b-b6qjg   1/1     Running   0          75s   10.244.3.23    k8s-node01   <none>           <none>
pod-podantiaffinity-deploy-5fb4764b6b-h262g   1/1     Running   0          75s   10.244.3.27    k8s-node01   <none>           <none>
pod-podantiaffinity-deploy-5fb4764b6b-q98gt   1/1     Running   0          75s   10.244.3.24    k8s-node01   <none>           <none>
pod-podantiaffinity-deploy-5fb4764b6b-v6kpm   1/1     Running   0          75s   10.244.3.25    k8s-node01   <none>           <none>
pod-podantiaffinity-deploy-5fb4764b6b-wtmm6   1/1     Running   0          75s   10.244.3.26    k8s-node01   <none>           <none>
web-deploy-5ccc9d7c55-khhrr                   1/1     Running   3          25h   10.244.2.245   k8s-node02   <none>           <none>
```
As can be seen above, since this is a pod anti-affinity test, given the labels on k8s-master, k8s-node01 and k8s-node02 it is easy to infer that the pods can only be scheduled onto k8s-node01: the app=myapp-web pod runs on k8s-node02, so the whole disk-type=ssd topology domain (k8s-master and k8s-node02) is excluded.
Pod soft anti-affinity: the YAML file to run
```bash
[root@k8s-master podAffinity]# pwd
/root/k8s_practice/scheduler/podAffinity
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# cat pod_preferred_AntiAffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-podantiaffinity-deploy
  labels:
    app: podantiaffinity-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      # allow scheduling on the master node
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                # because this is pod affinity/anti-affinity, the match rules here refer to pod labels
                matchExpressions:
                - key: version
                  operator: In
                  values:
                  - v1
                  - v2
              # topology domain: nodes with the same label (same key and value) are in the same topology domain
              topologyKey: disk-type
```
Apply the YAML file and check the status
```bash
[root@k8s-master podAffinity]# kubectl apply -f pod_preferred_AntiAffinity.yaml
deployment.apps/pod-podantiaffinity-deploy created
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get deploy -o wide
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podantiaffinity-deploy   6/6     6            6           9s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
web-deploy                   1/1     1            1           26h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get rs -o wide
NAME                                    DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podantiaffinity-deploy-54d758ddb4   6         6         6       13s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=54d758ddb4
web-deploy-5ccc9d7c55                   1         1         1       26h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web,pod-template-hash=5ccc9d7c55
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get pod -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
pod-podantiaffinity-deploy-54d758ddb4-58t9p   1/1     Running   0          17s   10.244.3.31    k8s-node01   <none>           <none>
pod-podantiaffinity-deploy-54d758ddb4-9ntd7   1/1     Running   0          17s   10.244.3.32    k8s-node01   <none>           <none>
pod-podantiaffinity-deploy-54d758ddb4-9wr6p   1/1     Running   0          17s   10.244.2.5     k8s-node02   <none>           <none>
pod-podantiaffinity-deploy-54d758ddb4-gnls4   1/1     Running   0          17s   10.244.3.30    k8s-node01   <none>           <none>
pod-podantiaffinity-deploy-54d758ddb4-jlftn   1/1     Running   0          17s   10.244.3.29    k8s-node01   <none>           <none>
pod-podantiaffinity-deploy-54d758ddb4-mvplv   1/1     Running   0          17s   10.244.0.87    k8s-master   <none>           <none>
web-deploy-5ccc9d7c55-khhrr                   1/1     Running   3          26h   10.244.2.245   k8s-node02   <none>           <none>
```
As can be seen above, since this is a pod anti-affinity test, given the labels on k8s-master, k8s-node01 and k8s-node02 it is easy to infer that the pods are preferentially scheduled onto k8s-node01, which lies outside the disk-type=ssd topology domain holding the version=v1 pod; because the rule is only a preference, a few pods may still land on the other nodes.
Pod affinity and anti-affinity used together: the YAML file to run
```bash
[root@k8s-master podAffinity]# pwd
/root/k8s_practice/scheduler/podAffinity
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# cat pod_podAffinity_all.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-podaffinity-all-deploy
  labels:
    app: podaffinity-all-deploy
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      # allow scheduling on the master node
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              # because this is pod affinity/anti-affinity, the match rules here refer to pod labels
              matchExpressions:
              - key: app
                operator: In
                values:
                - myapp-web
            # topology domain: nodes with the same label (same key and value) are in the same topology domain
            topologyKey: disk-type
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: version
                  operator: In
                  values:
                  - v1
                  - v2
              topologyKey: busi-db
```
Apply the YAML file and check the status
```bash
[root@k8s-master podAffinity]# kubectl apply -f pod_podAffinity_all.yaml
deployment.apps/pod-podaffinity-all-deploy created
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get deploy -o wide
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podaffinity-all-deploy   6/6     6            1           5s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
web-deploy                   1/1     1            1           28h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get rs -o wide
NAME                                    DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
pod-podaffinity-all-deploy-5ddbf9cbf8   6         6         6       10s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=5ddbf9cbf8
web-deploy-5ccc9d7c55                   1         1         1       28h   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp-web,pod-template-hash=5ccc9d7c55
[root@k8s-master podAffinity]#
[root@k8s-master podAffinity]# kubectl get pod -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
pod-podaffinity-all-deploy-5ddbf9cbf8-5w5b7   1/1     Running   0          15s   10.244.0.91    k8s-master   <none>           <none>
pod-podaffinity-all-deploy-5ddbf9cbf8-j57g9   1/1     Running   0          15s   10.244.0.90    k8s-master   <none>           <none>
pod-podaffinity-all-deploy-5ddbf9cbf8-kwz6w   1/1     Running   0          15s   10.244.0.92    k8s-master   <none>           <none>
pod-podaffinity-all-deploy-5ddbf9cbf8-l8spj   1/1     Running   0          15s   10.244.2.6     k8s-node02   <none>           <none>
pod-podaffinity-all-deploy-5ddbf9cbf8-lf22c   1/1     Running   0          15s   10.244.0.89    k8s-master   <none>           <none>
pod-podaffinity-all-deploy-5ddbf9cbf8-r2fgl   1/1     Running   0          15s   10.244.0.88    k8s-master   <none>           <none>
web-deploy-5ccc9d7c55-khhrr                   1/1     Running   3          28h   10.244.2.245   k8s-node02   <none>           <none>
```
As can be seen above, given the labels on k8s-master, k8s-node01 and k8s-node02, it is easy to infer that the pods can only be scheduled onto k8s-master and k8s-node02 (the disk-type=ssd topology domain required by the pod affinity), and that k8s-master is preferred, since the anti-affinity over topologyKey busi-db steers pods away from the busi-db=etcd domain (k8s-node02 alone) where the version=v1 pod runs.
Done!
———END———