In the previous article we covered the DaemonSet, Job and CronJob controllers in Kubernetes; for a refresher, see http://www.javashuo.com/article/p-ysqmazms-nz.html. Today we will look at the Service resource in Kubernetes.
The Service resource mainly solves the problem of accessing Pods. As we know, Pods get rebuilt for all kinds of reasons, and a rebuilt Pod comes back with a new IP address and name, which makes accessing Pods directly inconvenient. To give Pod access a fixed entry point, Kubernetes uses the Service resource. Put simply, a Service object is a set of iptables or ipvs rules on every node that forwards traffic arriving at the Service's IP address to the IP addresses and ports referenced by the corresponding Endpoints object. The kube-proxy component running on each node continuously watches the API server for Services and their associated Pod objects, and reflects any creation or change in the corresponding iptables or ipvs rules on its node in real time. Strictly speaking, a Service is not associated with Pods or other resources directly; it relies on an intermediate object, the Endpoints resource, whose job is to reference backend Pods or other resources (services outside the Kubernetes cluster can also be referenced by an Endpoints object; a short sketch of that pattern appears after the next tip). An endpoint is simply an IP address plus a port.
Tip: kube-proxy watches the API server for changes to Service resources and promptly turns those changes into iptables or ipvs rules on its own node. When a client Pod accesses a server Pod through a Service, the packet is first matched by the local iptables or ipvs rules, and the rule logic then schedules the request to one of the backend Pods.
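As a side note on Endpoints referencing resources outside the cluster: a Service defined without a selector, paired with a manually created Endpoints object of the same name, is one way to expose an external backend inside the cluster. The fragment below is only a sketch; the name external-mysql and the address 192.168.0.200:3306 are made-up values for illustration, not part of this cluster.

apiVersion: v1
kind: Service
metadata:
  name: external-mysql        # hypothetical name
  namespace: default
spec:                          # no selector, so no Endpoints are managed automatically
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql        # must match the Service name
  namespace: default
subsets:
- addresses:
  - ip: 192.168.0.200          # hypothetical external server IP
  ports:
  - name: mysql
    port: 3306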
Service proxy modes
There are three Service proxy modes in Kubernetes. Early versions (up to and including 1.1) defaulted to the userspace mode, and from 1.2 onward the default is iptables. The ipvs mode became generally available in 1.11; when ipvs is selected but the required kernel modules are not loaded, kube-proxy automatically falls back to iptables.
The userspace proxy mode
Tip: "userspace" here refers to user space on the Linux operating system. In this proxy model iptables only forwards traffic and does no scheduling; the actual scheduling is done by kube-proxy itself. A client request therefore travels from kernel space to user space and back into kernel space, so the performance of this model is relatively poor.
The iptables proxy mode
Tip: In the iptables proxy mode, for each Service object kube-proxy creates iptables rules that directly capture traffic destined for the ClusterIP and port and redirect it to the Service's backend Pods; for each Endpoints object, iptables rules are created that tie the Service to the selected backend Pods. Compared with the userspace mode, requests no longer have to switch back and forth between user space and kernel space, so this mode is considerably more efficient. Here kube-proxy is only responsible for generating the iptables rules; the scheduling itself is done by those rules.
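To get a feel for what kube-proxy generates in this mode, you can inspect the NAT table on any node. The commands below are a minimal sketch: the chain names KUBE-SERVICES, KUBE-SVC-* and KUBE-SEP-* are what kube-proxy uses, but their exact contents vary by version, and the service name grepped for is the ngx-dep-svc created later in this article.

[root@master01 ~]# iptables -t nat -S KUBE-SERVICES | grep ngx-dep-svc
# each match carries a comment like "default/ngx-dep-svc:http cluster IP" and jumps to a KUBE-SVC-<hash> chain
[root@master01 ~]# iptables -t nat -S KUBE-SVC-<hash-from-previous-output>
# the per-service chain spreads traffic across one KUBE-SEP-<hash> chain per endpoint (per backend Pod)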
The ipvs proxy mode
Tip: In the ipvs proxy mode, kube-proxy tracks changes to Service and Endpoints objects on the API server and calls the netlink interface to create ipvs rules accordingly, keeping them in sync with the API server. The only difference from the iptables mode is that request scheduling is done by ipvs; the remaining functions, such as traffic capture and NAT, are still handled by iptables. Like the iptables mode, ipvs is built on netfilter hook functions, but it uses a hash table as its underlying data structure and runs in kernel space, so it forwards traffic quickly and synchronizes rules efficiently. In addition, ipvs supports many scheduling algorithms, such as rr, lc, sh, dh and so on.
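The scheduling algorithm is chosen through kube-proxy's configuration rather than per Service. The fragment below is a minimal sketch of the relevant part of a KubeProxyConfiguration; rr is assumed here (it is also the default), and the exact set of supported fields depends on your kube-proxy version.

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"          # select the ipvs proxy mode
ipvs:
  scheduler: "rr"     # ipvs scheduling algorithm, e.g. rr, lc, sh, dh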
Service types
There are four Service types in Kubernetes. The first is ClusterIP, which is what you get when no type is specified at creation time; the second is NodePort, the third is LoadBalancer, and the fourth is ExternalName. The different types serve different purposes.
ClusterIP
Tip: As shown above, a ClusterIP Service is simply a set of iptables or ipvs rules created on the Kubernetes nodes for the Service's IP address. The IP of such a Service always comes from the service network specified when the cluster was initialized (10.96.0.0/12 here), which also means this type of Service cannot be reached by clients outside the cluster; it is only accessible from within the cluster.
NodePort
Tip: A NodePort Service builds on ClusterIP and mainly solves access from clients outside the cluster. As shown in the figure above, when a NodePort Service is created, a DNAT rule is created on every node; an external client accessing the specified port on any cluster node is DNATed to the corresponding Service and thus reaches the Pods inside the cluster. Access from clients inside the cluster still goes through the ClusterIP. The only difference between a NodePort Service and a ClusterIP Service is that the NodePort Service can be reached by external clients, with a corresponding DNAT rule on every node of the cluster.
LoadBalancer
Tip: A LoadBalancer Service is an extension of NodePort. This type can only be provisioned automatically when the underlying Kubernetes cluster runs in a cloud environment; in a non-cloud environment it cannot be realized that way, and you have to build a reverse proxy in front of the NodePort Service by hand. It mainly solves the port mapping and load-balancing problem when a NodePort Service is accessed from outside the cluster.
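For completeness, a LoadBalancer manifest looks almost identical to a NodePort one; only the type differs, and on a supported cloud the external IP is filled in by the cloud controller. The sketch below reuses the app=ngx-dep-pod selector used later in this article but is otherwise illustrative; on the bare-metal cluster used here its EXTERNAL-IP would stay pending.

apiVersion: v1
kind: Service
metadata:
  name: ngx-dep-svc-lb        # hypothetical name
  namespace: default
spec:
  type: LoadBalancer          # the cloud provider allocates an external load balancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: ngx-dep-pod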
ExternalName
Tip: An ExternalName Service is used to reference a service outside the cluster. As noted above, what a Service actually fronts depends on which IP addresses and ports its Endpoints reference; if we need to consume an external service from inside the cluster, we can create an ExternalName Service that points at the external service's domain name. Its workflow is shown in the figure above: when a client inside the cluster accesses the Service, it first queries CoreDNS for the name, and then connects to the address returned by DNS. The prerequisite for this type of Service is that CoreDNS can reach the external network and resolve the domain name.
Creating Service resources
Example: creating a ClusterIP Service
[root@master01 ~]# cat ngx-dep-svc-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: ngx-dep-svc
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: ngx-dep-pod
[root@master01 ~]#
Tip: When creating a Service, the main things to specify in the spec field are port and targetPort; port is the Service's own port and targetPort is the port of the backend resource. You also need to define a label selector with the selector field, whose value is a map of key/value pairs. If type is not specified, ClusterIP is used; if clusterIP is not specified, one is allocated automatically.
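targetPort can also be the name of a container port instead of a number, which keeps the Service stable even if the container port number changes later. The fragment below is a sketch under that assumption; the port name http must match a ports.name entry in the Pod template, which the Deployment used in this article may or may not define.

spec:
  ports:
  - name: http
    port: 80
    targetPort: http    # resolved against the named containerPort in the Pod spec
  selector:
    app: ngx-dep-pod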
Apply the manifest
[root@master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   40h
[root@master01 ~]# kubectl apply -f ngx-dep-svc-demo.yaml
service/ngx-dep-svc created
[root@master01 ~]# kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP   40h
ngx-dep-svc   ClusterIP   10.107.159.92   <none>        80/TCP    5s
[root@master01 ~]#
Verify: can the backend resources be reached through the ClusterIP?
[root@master01 ~]# curl 10.107.159.92
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master01 ~]#
Tip: Accessing the Service IP address reaches the backend Pods.
View the Service details
[root@master01 ~]# kubectl describe svc ngx-dep-svc
Name:              ngx-dep-svc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=ngx-dep-pod
Type:              ClusterIP
IP Families:       <none>
IP:                10.107.159.92
IPs:               10.107.159.92
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.2.93:80,10.244.3.86:80,10.244.4.14:80
Session Affinity:  None
Events:            <none>
[root@master01 ~]#
Tip: The Service's type is ClusterIP, its port is 80, its targetPort is 80, and Endpoints lists three Pod IP addresses plus the targetPort. When a Service is created, the system also creates an Endpoints object with the same name as the Service by default.
View the Endpoints
[root@master01 ~]# kubectl get endpoints
NAME          ENDPOINTS                                      AGE
kubernetes    192.168.0.41:6443                              40h
ngx-dep-svc   10.244.2.93:80,10.244.3.86:80,10.244.4.14:80   4m53s
[root@master01 ~]# kubectl describe endpoints/ngx-dep-svc
Name:         ngx-dep-svc
Namespace:    default
Labels:       <none>
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2020-12-20T08:50:52Z
Subsets:
  Addresses:          10.244.2.93,10.244.3.86,10.244.4.14
  NotReadyAddresses:  <none>
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  80    TCP

Events:  <none>
[root@master01 ~]#
Tip: The ngx-dep-svc Endpoints object references three Pod IP addresses.
Example: creating a NodePort Service
[root@master01 ~]# cat ngx-dep-svc-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: ngx-dep-svc-nodeport
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    nodePort: 30080
    targetPort: 80
  selector:
    app: ngx-dep-pod
  type: NodePort
[root@master01 ~]#
Tip: To create a NodePort Service, set the type field in spec to NodePort; only then does the nodePort field under ports take effect. If nodePort is not specified, a port is allocated at random; specifying it pins the port to a fixed value. Explicitly setting nodePort is usually not recommended.
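By default the allocatable node port range is 30000-32767, controlled by the API server's --service-node-port-range flag. As a sketch, you can confirm the range in effect with something like the following; the static Pod name matches the control-plane node used later in this article, so adjust it to your own.

[root@master01 ~]# kubectl -n kube-system get pod kube-apiserver-master01.k8s.org -o yaml | grep service-node-port-range
# no output means the flag is unset and the default range 30000-32767 applies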
Apply the manifest
[root@master01 ~]# kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP   40h
ngx-dep-svc   ClusterIP   10.107.159.92   <none>        80/TCP    21m
[root@master01 ~]# kubectl apply -f ngx-dep-svc-demo.yaml
service/ngx-dep-svc-nodeport created
[root@master01 ~]# kubectl get svc
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes             ClusterIP   10.96.0.1       <none>        443/TCP        40h
ngx-dep-svc            ClusterIP   10.107.159.92   <none>        80/TCP         21m
ngx-dep-svc-nodeport   NodePort    10.97.166.233   <none>        80:30080/TCP   4s
[root@master01 ~]#
Tip: For the NodePort Service the PORT(S) column now shows two values; the second one, 30080, is the port external clients use to reach the resources inside the cluster.
Verify: open port 30080 on any Kubernetes node in a browser and check whether the backend Pods can be reached.
Tip: An external client can reach resources inside the cluster by accessing a single port on any cluster node.
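The same check can be done from the command line. The node IP below is the master node address seen elsewhere in this cluster (192.168.0.41); any node IP works, since the DNAT rule exists on every node.

[root@master01 ~]# curl -I http://192.168.0.41:30080
# an HTTP/1.1 200 OK response with "Server: nginx" indicates the NodePort reaches the backend Pods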
Example: creating an ExternalName Service
[root@master01 ~]# cat externalname-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: external-www-svc
spec:
  type: ExternalName
  externalName: www.qiuhom.com
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 0
  selector: {}
[root@master01 ~]#
Tip: The manifest above creates a Service named external-www-svc whose type is ExternalName; the external service it references is www.qiuhom.com.
Apply the manifest
[root@master01 ~]# kubectl apply -f externalname-svc.yaml
service/external-www-svc created
[root@master01 ~]# kubectl get svc
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
external-www-svc       ExternalName   <none>          www.qiuhom.com   80/TCP         6s
kubernetes             ClusterIP      10.96.0.1       <none>           443/TCP        43h
ngx-dep-svc            ClusterIP      10.107.159.92   <none>           80/TCP         3h27m
ngx-dep-svc-nodeport   NodePort       10.97.166.233   <none>           80:30080/TCP   3h2m
[root@master01 ~]# kubectl describe svc/external-www-svc
Name:              external-www-svc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ExternalName
IP Families:       <none>
IP:
IPs:               <none>
External Name:     www.qiuhom.com
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
[root@master01 ~]#
Tip: After applying the manifest, the Service details show no labels, no selector and no IP address; only the external name and the corresponding ports are set.
Test: point the host's DNS server at CoreDNS, then access the Service name and see whether the external service responds.
[root@master01 ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   12d
[root@master01 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search k8s.org
nameserver 10.96.0.10
[root@master01 ~]# curl external-www-svc.default.svc.cluster.local -I
HTTP/1.1 302 Found
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 20 Dec 2020 12:33:44 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Vary: Accept-Language, Cookie
Location: /accounts/login/?next=/
Content-Language: en

[root@master01 ~]#
Tip: Accessing the Service name returns a 302 response, which shows that the name was resolved to the external service and our request left the cluster.
Enter any Pod and use nslookup to check whether CoreDNS resolves the Service name.
[root@master01 ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-6pjgw   1/1     Running   0          3h30m
deploy-demo-6d795f958b-m4vfb   1/1     Running   0          3h30m
deploy-demo-6d795f958b-qcl6m   1/1     Running   0          3h30m
[root@master01 ~]# kubectl exec -it deploy-demo-6d795f958b-6pjgw -- /bin/sh
/ # nslookup external-www-svc
nslookup: can't resolve '(null)': Name does not resolve

Name:      external-www-svc
Address 1: 47.99.205.203
/ #
Tip: The Service name resolves correctly inside the Pod as well.
Example: creating a headless Service
[root@master01 ~]# cat handless-svc-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: handless-svc-demo
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: ngx-dep-pod
[root@master01 ~]#
Tip: A headless Service is a Service without a ClusterIP. We know the ClusterIP serves as the entry point for reaching the backend Pods, so how do we reach them without one? Without an IP address we can only use the name: for a headless Service, CoreDNS resolves the Service name directly to the IP addresses of the backend Pods it selects.
Apply the manifest
[root@master01 ~]# kubectl get svc
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
external-www-svc       ExternalName   <none>          www.baidu.com   80/TCP         10m
kubernetes             ClusterIP      10.96.0.1       <none>          443/TCP        43h
ngx-dep-svc            ClusterIP      10.107.159.92   <none>          80/TCP         3h34m
ngx-dep-svc-nodeport   NodePort       10.97.166.233   <none>          80:30080/TCP   3h12m
[root@master01 ~]# kubectl apply -f handless-svc-demo.yaml
service/handless-svc-demo created
[root@master01 ~]# kubectl get svc
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
external-www-svc       ExternalName   <none>          www.baidu.com   80/TCP         10m
handless-svc-demo      ClusterIP      None            <none>          80/TCP         5s
kubernetes             ClusterIP      10.96.0.1       <none>          443/TCP        43h
ngx-dep-svc            ClusterIP      10.107.159.92   <none>          80/TCP         3h34m
ngx-dep-svc-nodeport   NodePort       10.97.166.233   <none>          80:30080/TCP   3h12m
[root@master01 ~]#
Tip: The new Service has no ClusterIP.
Verify: enter any Pod and access the Service by name to see whether the name reaches the backend Pods.
[root@master01 ~]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-6pjgw   1/1     Running   0          3h53m
deploy-demo-6d795f958b-m4vfb   1/1     Running   0          3h53m
deploy-demo-6d795f958b-qcl6m   1/1     Running   0          3h53m
[root@master01 ~]# kubectl exec -it pod/deploy-demo-6d795f958b-m4vfb -- /bin/sh
/ # wget -O - -q handless-svc-demo
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ #
Tip: Inside a Pod, the Service name alone is enough to reach the backend Pods.
Verify: does CoreDNS resolve the Service name to the IPs of the Pods that carry the labels matched by the Service's selector?
[root@master01 ~]# kubectl get pod --show-labels -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES   LABELS
deploy-demo-6d795f958b-6pjgw   1/1     Running   0          3h54m   10.244.2.93   node02.k8s.org   <none>           <none>            app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-6d795f958b-m4vfb   1/1     Running   0          3h54m   10.244.4.14   node04.k8s.org   <none>           <none>            app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-6d795f958b-qcl6m   1/1     Running   0          3h54m   10.244.3.86   node03.k8s.org   <none>           <none>            app=ngx-dep-pod,pod-template-hash=6d795f958b
[root@master01 ~]# kubectl exec -it pod/deploy-demo-6d795f958b-m4vfb -- /bin/sh
/ # nslookup handless-svc-demo
nslookup: can't resolve '(null)': Name does not resolve

Name:      handless-svc-demo
Address 1: 10.244.4.14 deploy-demo-6d795f958b-m4vfb
Address 2: 10.244.3.86 10-244-3-86.ngx-dep-svc-nodeport.default.svc.cluster.local
Address 3: 10.244.2.93 10-244-2-93.ngx-dep-svc.default.svc.cluster.local
/ #
Tip: CoreDNS resolves the name to three addresses, and all three are the IPs of Pods that carry the labels matched by the Service's selector.
Configuring Kubernetes to use the ipvs proxy mode
Write a script that loads the ipvs-related kernel modules
[root@master01 ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
# directory holding the ipvs kernel modules for the running kernel
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls $ipvs_mods_dir | grep -o "^[^.]*"); do
    # only load modules that modinfo can locate
    /sbin/modinfo -F filename $mod &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe $mod
    fi
done
[root@master01 ~]#
Tip: The script does one thing: it loads every module found in the directory named by ipvs_mods_dir into the kernel.
Make the script executable
[root@master01 ~]# ll /etc/sysconfig/modules/ipvs.modules
-rw-r--r-- 1 root root 253 Dec 20 18:50 /etc/sysconfig/modules/ipvs.modules
[root@master01 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master01 ~]# ll /etc/sysconfig/modules/ipvs.modules
-rwxr-xr-x 1 root root 253 Dec 20 18:50 /etc/sysconfig/modules/ipvs.modules
[root@master01 ~]#
Copy the script to the other nodes
[root@master01 ~]# scp -p /etc/sysconfig/modules/ipvs.modules node01:/etc/sysconfig/modules/ipvs.modules
ipvs.modules                                  100%  253   132.8KB/s   00:00
[root@master01 ~]# scp -p /etc/sysconfig/modules/ipvs.modules node02:/etc/sysconfig/modules/ipvs.modules
ipvs.modules                                  100%  253   102.5KB/s   00:00
[root@master01 ~]# scp -p /etc/sysconfig/modules/ipvs.modules node03:/etc/sysconfig/modules/ipvs.modules
ipvs.modules                                  100%  253    80.8KB/s   00:00
[root@master01 ~]# scp -p /etc/sysconfig/modules/ipvs.modules node04:/etc/sysconfig/modules/ipvs.modules
ipvs.modules                                  100%  253   121.0KB/s   00:00
[root@master01 ~]#
Tip: Scripts placed under /etc/sysconfig/modules with a .modules suffix are executed automatically after the system reboots, so the modules are reloaded on every boot.
Run the script to load the modules
[root@master01 ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@master01 ~]# ssh node01 'bash /etc/sysconfig/modules/ipvs.modules'
[root@master01 ~]# ssh node02 'bash /etc/sysconfig/modules/ipvs.modules'
[root@master01 ~]# ssh node03 'bash /etc/sysconfig/modules/ipvs.modules'
[root@master01 ~]# ssh node04 'bash /etc/sysconfig/modules/ipvs.modules'
[root@master01 ~]#
Verify: are the modules loaded?
[root@master01 ~]# lsmod |grep ip_vs
ip_vs_wlc              12519  0
ip_vs_sed              12519  0
ip_vs_pe_sip           12697  0
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0
ip_vs_lc               12516  0
ip_vs_lblcr            12922  0
ip_vs_lblc             12819  0
ip_vs_ftp              13079  0
ip_vs_dh               12688  0
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 141092  26 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26787  4 ip_vs_ftp,nf_nat_ipv4,xt_nat,nf_nat_masquerade_ipv4
nf_conntrack          133387  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@master01 ~]#
Tip: The output above shows that the ipvs-related modules are loaded.
Edit the kube-proxy configuration
[root@master01 ~]# kubectl edit cm kube-proxy -n kube-system
Change the value of the mode field to "ipvs", then save and quit.
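For reference, on a kubeadm cluster the relevant part of the config.conf key in this ConfigMap looks roughly like the fragment below (other fields omitted); only the mode line needs to change.

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
ipvs:
  scheduler: ""       # empty means the default ipvs scheduler (rr)
mode: "ipvs"          # was "" (which means iptables); set it to "ipvs"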
Delete the existing kube-proxy Pods
[root@master01 ~]# kubectl get pod -n kube-system --show-labels
NAME                                       READY   STATUS    RESTARTS   AGE   LABELS
coredns-7f89b7bc75-k9gdt                   1/1     Running   10         12d   k8s-app=kube-dns,pod-template-hash=7f89b7bc75
coredns-7f89b7bc75-kp855                   1/1     Running   8          12d   k8s-app=kube-dns,pod-template-hash=7f89b7bc75
etcd-master01.k8s.org                      1/1     Running   11         12d   component=etcd,tier=control-plane
kube-apiserver-master01.k8s.org            1/1     Running   8          12d   component=kube-apiserver,tier=control-plane
kube-controller-manager-master01.k8s.org   1/1     Running   9          12d   component=kube-controller-manager,tier=control-plane
kube-flannel-ds-cx8d5                      1/1     Running   12         12d   app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-flannel-ds-jz6r4                      1/1     Running   3          46h   app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-flannel-ds-ndzl6                      1/1     Running   13         12d   app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-flannel-ds-rjtn9                      1/1     Running   14         12d   app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-flannel-ds-zgq92                      1/1     Running   12         12d   app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-proxy-8wfcx                           1/1     Running   3          46h   controller-revision-hash=c449f5b75,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-dl8jd                           1/1     Running   8          12d   controller-revision-hash=c449f5b75,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-lz4zc                           1/1     Running   9          12d   controller-revision-hash=c449f5b75,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-pjv9s                           1/1     Running   12         12d   controller-revision-hash=c449f5b75,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-pwl5v                           1/1     Running   8          12d   controller-revision-hash=c449f5b75,k8s-app=kube-proxy,pod-template-generation=1
kube-scheduler-master01.k8s.org            1/1     Running   9          12d   component=kube-scheduler,tier=control-plane
[root@master01 ~]# kubectl get pod -n kube-system -l k8s-app=kube-proxy
NAME               READY   STATUS    RESTARTS   AGE
kube-proxy-8wfcx   1/1     Running   3          46h
kube-proxy-dl8jd   1/1     Running   8          12d
kube-proxy-lz4zc   1/1     Running   9          12d
kube-proxy-pjv9s   1/1     Running   12         12d
kube-proxy-pwl5v   1/1     Running   8          12d
[root@master01 ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy
pod "kube-proxy-8wfcx" deleted
pod "kube-proxy-dl8jd" deleted
pod "kube-proxy-lz4zc" deleted
pod "kube-proxy-pjv9s" deleted
pod "kube-proxy-pwl5v" deleted
[root@master01 ~]#
Tip: kube-proxy runs as Pods managed by a DaemonSet controller; it tolerates the control-plane taints, so one Pod runs on every node of the cluster. When we delete these Pods by hand, the controller recreates the same number of Pods, which then pick up the new configuration. This is not recommended in production; the proxy mode should be decided before the cluster is initialized.
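If you are setting up a new cluster, the proxy mode can be set at initialization time by appending a KubeProxyConfiguration document to the kubeadm configuration file. A minimal sketch, assuming a file named kubeadm-config.yaml and otherwise default settings (the version number below is an assumption; use your own):

# kubeadm-config.yaml (fragment)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0          # assumed version
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"                        # kube-proxy starts in ipvs mode from day one

# then initialize the cluster with:
#   kubeadm init --config kubeadm-config.yaml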
Verify: install the ipvs client tool on any cluster node and check whether ipvs rules have been generated.
Install ipvsadm
[root@master01 ~]# yum install -y ipvsadm
Use ipvsadm to check whether ipvs rules have been generated.
[root@master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30080 rr
  -> 10.244.2.93:80               Masq    1      0          0
  -> 10.244.3.86:80               Masq    1      0          0
  -> 10.244.4.14:80               Masq    1      0          0
TCP  192.168.0.41:30080 rr
  -> 10.244.2.93:80               Masq    1      0          0
  -> 10.244.3.86:80               Masq    1      0          0
  -> 10.244.4.14:80               Masq    1      0          0
TCP  10.96.0.1:443 rr
  -> 192.168.0.41:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.20:53               Masq    1      0          0
  -> 10.244.0.21:53               Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.20:9153             Masq    1      0          0
  -> 10.244.0.21:9153             Masq    1      0          0
TCP  10.97.166.233:80 rr
  -> 10.244.2.93:80               Masq    1      0          0
  -> 10.244.3.86:80               Masq    1      0          0
  -> 10.244.4.14:80               Masq    1      0          0
TCP  10.107.159.92:80 rr
  -> 10.244.2.93:80               Masq    1      0          0
  -> 10.244.3.86:80               Masq    1      0          0
  -> 10.244.4.14:80               Masq    1      0          0
TCP  10.244.0.0:30080 rr
  -> 10.244.2.93:80               Masq    1      0          0
  -> 10.244.3.86:80               Masq    1      0          0
  -> 10.244.4.14:80               Masq    1      0          0
TCP  10.244.0.1:30080 rr
  -> 10.244.2.93:80               Masq    1      0          0
  -> 10.244.3.86:80               Masq    1      0          0
  -> 10.244.4.14:80               Masq    1      0          0
TCP  127.0.0.1:30080 rr
  -> 10.244.2.93:80               Masq    1      0          0
  -> 10.244.3.86:80               Masq    1      0          0
  -> 10.244.4.14:80               Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.20:53               Masq    1      0          0
  -> 10.244.0.21:53               Masq    1      0          0
[root@master01 ~]#
Tip: ipvs rules have been generated, which confirms that Kubernetes is now using the ipvs proxy mode.
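As a final sanity check, you can confirm that traffic still flows through the new rules; this is only a sketch, where the ClusterIP is the ngx-dep-svc address from earlier and the counters shown by ipvsadm should grow after the request.

[root@master01 ~]# curl -sI 10.107.159.92 | head -n1                # expect an HTTP/1.1 200 OK from the nginx Pods
[root@master01 ~]# ipvsadm -Ln --stats | grep -A3 10.107.159.92     # Conns/InPkts for the three backends reflect the request just made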