Container Orchestration with Kubernetes: the flannel Network Model

  In the previous post we covered installing the Kubernetes dashboard (web UI) and the related user authorization; for a refresher see: http://www.javashuo.com/article/p-foymjjlx-ny.html. Today let's talk about networking in Kubernetes.

  Before discussing Kubernetes networking, let's review how networking works in Docker. Docker containers come in four network types. The first is the closed container: it has only a lo interface inside and cannot communicate with external networks. The second is the bridged container, the default type: the container's virtual NIC is bridged (via a veth pair) onto the docker0 bridge on the host, which is what lets the container talk to the outside world. The third is the joined container, a shared-network container: when starting a container, you specify which existing container it should share a network namespace with. This model is essentially another form of bridging, but unlike the bridged container it lets several containers share one network namespace; for example, if container a needs to talk to container b, container a can simply join container b's network namespace so that both containers live in the same namespace. The fourth is the open container, which shares the host's network namespace directly. See the figure below.

  Note: as the descriptions above show, Docker's container network models are all different variations built on top of bridging.
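  For reference, here is a minimal sketch of how each of these four network types is started with the docker CLI (the container name "web" in the joined example is just an illustrative placeholder):

# closed container: only a loopback interface, no external connectivity
docker run --rm --network none busybox ip addr
# bridged container (the default): a veth pair attaches the container to the docker0 bridge
docker run --rm --network bridge busybox ip addr
# joined container: share the network namespace of an existing container named "web"
docker run --rm --network container:web busybox ip addr
# open container: share the host's network namespace directly
docker run --rm --network host busybox ip addr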

  How is cross-host container communication done in Docker?

  From the description above, for a Docker container to reach the outside world it first has to be bridged onto a bridge that can communicate externally. By default, Docker bridges the container's virtual NIC onto the docker0 bridge, a virtual bridge on the host. docker0 is a NAT bridge: it can talk to external networks because the host's iptables rules perform SNAT (source address translation) on the container's outbound traffic. Likewise, for a container attached to docker0 to be reachable from outside, the host's iptables must perform DNAT, so that external clients access an address on the host and the DNAT rule delivers the request into the container for a response. See the figure below.

  Note: containers on the same host can talk to each other directly across the docker0 bridge, but cross-host container communication has to go through the host's iptables rules, using SNAT to route a bridged container's requests out and DNAT to forward external requests into the target container for a response.
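  As a rough illustration (the commands are standard, but the exact rules on your host will differ), you can inspect the SNAT rule Docker installs for outbound container traffic and the DNAT rule it adds when a port is published; the nginx image and port 8080 below are only examples:

# the MASQUERADE (SNAT) rule for traffic leaving the docker0 subnet
iptables -t nat -S POSTROUTING | grep MASQUERADE
# publishing a port adds a DNAT rule in the DOCKER chain of the nat table
docker run -d --name web -p 8080:80 nginx
iptables -t nat -S DOCKER | grep 8080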

Networking in Kubernetes

  There are three networks in Kubernetes. The first is the node (host) network; it is not managed by the container orchestrator and is maintained by the cluster administrators themselves. The second is the service network, also called the cluster network; it does not actually exist on any network interface, but is realized through the iptables or ipvs rules generated by kube-proxy on every node, and it is mainly used to load-balance access to pods and for service-to-service access. The third is the pod network, used chiefly for pod-to-pod communication. See the figure below.
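  On a kubeadm-built cluster you can see where these networks are configured; a minimal sketch (the output depends on your cluster):

# pod network: each node gets a podCIDR carved out of the cluster-wide pod CIDR
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
# service (cluster) network and pod network as recorded in the kubeadm configuration
kubectl -n kube-system get cm kubeadm-config -o yaml | grep -E 'podSubnet|serviceSubnet'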

  Note: pod-to-pod communication in Kubernetes does not rely on SNAT or DNAT in iptables and does not go through the docker0 bridge; it is implemented by an external network plugin. Many pieces of software implement Kubernetes network plugins, the best known being flannel and calico. Both let pods communicate without depending on SNAT or DNAT in iptables; the difference is that flannel does not support network policies, while calico does.
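  To make the term "network policy" concrete, a minimal NetworkPolicy manifest looks roughly like the sketch below (names and labels are illustrative); calico will enforce it, while flannel on its own simply ignores it:

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend      # example name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp                 # only pods carrying this label are protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend         # only pods with this label may connect
EOF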

  How does flannel implement pod-to-pod communication?

  We know that pod IP addresses in Kubernetes depend on the network plugin in use. With the flannel plugin we specify the pod network as 10.244.0.0/16 when initializing the cluster; with the calico plugin we specify 192.168.0.0/16. As long as the pod network we specify matches the plugin we use, the cluster works, because flannel's default configuration uses 10.244.0.0/16 and calico's default uses 192.168.0.0/16; these defaults can of course be changed.

  Taking flannel as the example, how does it achieve direct pod-to-pod communication? In a plain Docker environment, containers on different nodes may end up with identical addresses, which is why cross-node container communication has to rely on SNAT or DNAT. For a Kubernetes network plugin to let pods talk to each other directly, it must first guarantee that pod IPs never collide. Flannel solves the address-collision problem with the VXLAN mechanism from network virtualization: the 10.244.0.0/16 network is split into multiple subnets, and the pods on each node draw their addresses from that node's subnet, so pods on different nodes can never be assigned the same IP. For example, splitting 10.244.0.0/16 into 256 subnets, pods on the first node use addresses from 10.244.0.0/24, pods on the second node use 10.244.1.0/24, and so on for the third node, the fourth node, and the rest.

  With address collisions out of the way, how do pods communicate directly? Option 1: following the Docker approach, we could bridge each container's virtual NIC straight onto the host's physical NIC, so every pod communicates through the host NIC. The drawback is that as the number of pods grows, the ARP broadcasts on the host network can multiply into ARP flooding, congesting the network until it becomes unusable. We would then need additional mechanisms, such as putting each node's subnet into its own VLAN and exchanging data between nodes over VLANs, which means managing VLANs by hand; clearly not what we want.

  Option 2: instead of bridging the container's virtual NIC onto the host NIC, bridge it onto a virtual bridge, then open a tunnel between the hosts through some mechanism and generate the corresponding routes. Pods on the same node then talk directly across the virtual bridge; to reach a pod on another node, the routes steer the traffic from the virtual bridge to the tunnel interface, where it is encapsulated in the tunnel protocol and sent out through the node's physical NIC. The receiving host decapsulates it layer by layer until the packet finally reaches the target pod, giving us direct pod-to-pod communication. The pods are completely unaware of all this, because the packet that ultimately arrives at a pod always carries pod IPs as both its source and destination.

  That raises the question: how does the tunnel interface know the IP address of the peer's virtual bridge, and which host a given packet should be sent to? That depends on the network plugin. With flannel, this network information is kept in a storage system, for example etcd. Once the flannel plugin is installed on Kubernetes, it runs a daemon on every host and creates a cni0 interface on each node (this is the virtual bridge described above) plus a flannel.1 interface, which is the tunnel interface. Flannel then uses VXLAN to subnet 10.244.0.0/16 and records a one-to-one mapping between each node's subnet and the IP and MAC address of that node's physical NIC; for example, node 1's subnet 10.244.0.0/24 maps to physical NIC IP 192.168.0.41 and MAC address xxx, node 2's subnet maps to its node IP and MAC in the same way, and these mappings are saved in etcd. When a pod on node 1 needs to talk to a pod on node 2, the VXLAN logic looks the information up in etcd and encapsulates the packet accordingly.

  In short: on Kubernetes the flannel plugin uses VXLAN to carve the pod network into per-node subnets, solving the pod-IP collision problem, and uses the same VXLAN mechanism to tunnel between nodes, so pods can communicate with each other directly. See the figure below.

  Diagram: packet flow for cross-node pod communication in Kubernetes with the flannel network plugin

 

  Note: put simply, VXLAN builds a layer-2 tunnel on top of the physical NIC to carry the pod network, so pods can communicate directly over the tunnel with no NAT in between. VXLAN is widely used elsewhere too, for example in OpenStack self-service networks and for container-to-container communication in Docker Swarm.
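  If you want to see this bookkeeping on a node, the commands below (a sketch; the file and interface names are flannel's defaults) show the subnet flannel assigned to the node and the VXLAN forwarding entries that map remote pod subnets to the other nodes:

# the subnet handed to this node out of 10.244.0.0/16
cat /run/flannel/subnet.env
# VXLAN details of flannel.1: by default VNI 1, UDP port 8472, bound to the node's physical NIC
ip -d link show flannel.1
# forwarding database and neighbor entries: which node IP/MAC backs each remote flannel.1
bridge fdb show dev flannel.1
ip neigh show dev flannel.1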

  Test: run pods on Kubernetes, then ping a pod on another node from inside one pod and observe how they actually communicate.

[root@master01 ~]# kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
myapp-dep-5bc4d8cc74-2kjf5   1/1     Running   0          20h   10.244.2.9   node02.k8s.org   <none>           <none>
myapp-dep-5bc4d8cc74-5k8rc   1/1     Running   0          20h   10.244.1.3   node01.k8s.org   <none>           <none>
myapp-dep-5bc4d8cc74-w6gdz   1/1     Running   0          20h   10.244.3.9   node03.k8s.org   <none>           <none>
[root@master01 ~]# 

  Note: the cluster above is running three pods in the default namespace, scheduled onto three different nodes, one pod per node.

  Exec into one of the pods and ping another pod from inside it

[root@master01 ~]# kubectl get pods -o wide                                
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
myapp-dep-5bc4d8cc74-2kjf5   1/1     Running   0          20h   10.244.2.9   node02.k8s.org   <none>           <none>
myapp-dep-5bc4d8cc74-5k8rc   1/1     Running   0          20h   10.244.1.3   node01.k8s.org   <none>           <none>
myapp-dep-5bc4d8cc74-w6gdz   1/1     Running   0          20h   10.244.3.9   node03.k8s.org   <none>           <none>
[root@master01 ~]# kubectl exec -it myapp-dep-5bc4d8cc74-2kjf5 -- /bin/sh
/ # ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP 
    link/ether 5a:2a:ca:ec:83:65 brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.9/24 brd 10.244.2.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 10.244.1.3
PING 10.244.1.3 (10.244.1.3): 56 data bytes
64 bytes from 10.244.1.3: seq=0 ttl=62 time=9.944 ms
64 bytes from 10.244.1.3: seq=1 ttl=62 time=1.974 ms
64 bytes from 10.244.1.3: seq=2 ttl=62 time=2.115 ms

  Log in to node01 and check its network interfaces

[root@node01 ~]# ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:0c:29:01:21:41 brd ff:ff:ff:ff:ff:ff
    inet 172.16.11.4/24 brd 172.16.11.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe01:2141/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:e1:a6:d7:1a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 76:12:1a:11:62:86 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 brd 10.244.1.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::7412:1aff:fe11:6286/64 scope link 
       valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000
    link/ether 52:6f:30:31:77:86 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 brd 10.244.1.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::506f:30ff:fe31:7786/64 scope link 
       valid_lft forever preferred_lft forever
7: vethce8e4bf2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP 
    link/ether 9a:22:8e:d7:78:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::9822:8eff:fed7:7833/64 scope link 
       valid_lft forever preferred_lft forever
[root@node01 ~]# 

  Note: you can see the node has both a cni0 interface and a flannel.1 interface.
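  A quick way to confirm that the pod's veth peer (vethce8e4bf2 above) really hangs off the cni0 bridge:

# list bridge ports; the veth interfaces enslaved to cni0 belong to the local pods
bridge link show | grep cni0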

  Capture packets on cni0 on node01

[root@node01 ~]# tcpdump -i cni0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
18:55:18.469861 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 225, length 64
18:55:18.470073 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 225, length 64
18:55:19.471439 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 226, length 64
18:55:19.471575 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 226, length 64
18:55:20.472470 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 227, length 64
18:55:20.472608 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 227, length 64
18:55:21.473084 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 228, length 64
18:55:21.473223 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 228, length 64
18:55:22.474856 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 229, length 64
18:55:22.474922 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 229, length 64
18:55:23.475499 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 230, length 64
18:55:23.475685 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 230, length 64
18:55:24.476694 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 231, length 64
18:55:24.476854 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 231, length 64
^C
14 packets captured
14 packets received by filter
0 packets dropped by kernel
[root@node01 ~]# 

  Note: on cni0 we can see 10.244.2.9 pinging 10.244.1.3, which shows that pod-to-pod traffic first passes through the cni0 interface.

  Capture ICMP packets on the flannel.1 interface on node01

[root@node01 ~]# tcpdump -i flannel.1 -nn icmp     
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
18:57:03.607093 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 330, length 64
18:57:03.607273 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 330, length 64
18:57:04.607604 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 331, length 64
18:57:04.607819 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 331, length 64
18:57:05.608172 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 332, length 64
18:57:05.608369 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 332, length 64
18:57:06.609825 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 333, length 64
18:57:06.610106 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 333, length 64
18:57:07.610310 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 334, length 64
18:57:07.612417 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 334, length 64
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel
[root@node01 ~]# 

  Note: capturing ICMP on node01's flannel.1 interface, we can again see 10.244.2.9 pinging 10.244.1.3, which shows the packets have reached the flannel.1 interface.

  Capture ICMP packets on node01's physical interface and see whether any show up

[root@node01 ~]# tcpdump -i eth0 -nn icmp         
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes

  Note: capturing ICMP on node01's physical interface catches nothing at all, because after encapsulation on the tunnel interface the packets are no longer ICMP packets on the physical interface.
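  If you filter on flannel's default VXLAN UDP port instead, the encapsulated traffic does show up on the physical interface (a sketch; 8472 is flannel's default VXLAN port):

# the ICMP packets are wrapped in VXLAN, i.e. UDP datagrams on port 8472
tcpdump -i eth0 -nn udp port 8472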

  Capture packets to and from node02's IP address on node01's physical interface and see what we get

[root@node01 ~]# tcpdump -i eth0 -nn host 172.16.11.5
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
19:02:36.139552 IP 172.16.11.5.46521 > 172.16.11.4.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 662, length 64
19:02:36.139935 IP 172.16.11.4.57232 > 172.16.11.5.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 662, length 64
19:02:37.143339 IP 172.16.11.5.46521 > 172.16.11.4.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 663, length 64
19:02:37.143587 IP 172.16.11.4.57232 > 172.16.11.5.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 663, length 64
19:02:38.144569 IP 172.16.11.5.46521 > 172.16.11.4.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 664, length 64
19:02:38.145276 IP 172.16.11.4.57232 > 172.16.11.5.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 664, length 64
19:02:39.144889 IP 172.16.11.5.46521 > 172.16.11.4.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 665, length 64
19:02:39.145126 IP 172.16.11.4.57232 > 172.16.11.5.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 665, length 64
19:02:40.145727 IP 172.16.11.5.46521 > 172.16.11.4.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 13568, seq 666, length 64
19:02:40.145976 IP 172.16.11.4.57232 > 172.16.11.5.8472: OTV, flags [I] (0x08), overlay 0, instance 1
IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 13568, seq 666, length 64
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel
[root@node01 ~]# 

  Note: the capture above shows node01's physical interface receiving packets from node02's physical interface; the outer header carries the two node IPs, and the inner payload carries the pod IPs. This confirms that pod-to-pod communication in Kubernetes really involves no NAT at all; the pods talk directly over the VXLAN tunnel.

  Switch flannel to direct-routing mode, so that pod-to-pod traffic no longer passes through the flannel.1 interface but is sent straight out of the physical interface

  Note: in the Backend section of flannel's configuration, add "DirectRouting": true. Be careful that once this entry is added, the Type line above it must be followed by a comma. Save and exit when done.
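  Assuming flannel was deployed with the stock kube-flannel.yml manifest, this configuration lives in the kube-flannel-cfg ConfigMap; after the edit its net-conf.json would look roughly like this:

kubectl -n kube-system edit cm kube-flannel-cfg
# net-conf.json: |
#   {
#     "Network": "10.244.0.0/16",
#     "Backend": {
#       "Type": "vxlan",
#       "DirectRouting": true
#     }
#   }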

  Delete the existing flannel pods so that new ones are created automatically and pick up the new configuration

  Check the node's routing table before deleting

[root@master01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.11.2     0.0.0.0         UG    0      0        0 eth0
0.0.0.0         172.16.11.2     0.0.0.0         UG    100    0        0 eth0
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.244.1.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.3.0      10.244.3.0      255.255.255.0   UG    0      0        0 flannel.1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.16.11.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
[root@master01 ~]# 

  Note: before the old flannel pods are deleted, the node's routes for the 10.244.1.0/24, 10.244.2.0/24 and 10.244.3.0/24 networks all point at the flannel.1 interface.

  Delete the existing flannel pods

[root@master01 ~]# kubectl get pods -n kube-system --show-labels
NAME                                       READY   STATUS    RESTARTS   AGE   LABELS
coredns-7f89b7bc75-9s8wr                   1/1     Running   0          11d   k8s-app=kube-dns,pod-template-hash=7f89b7bc75
coredns-7f89b7bc75-ck8gl                   1/1     Running   0          11d   k8s-app=kube-dns,pod-template-hash=7f89b7bc75
etcd-master01.k8s.org                      1/1     Running   1          11d   component=etcd,tier=control-plane
kube-apiserver-master01.k8s.org            1/1     Running   1          11d   component=kube-apiserver,tier=control-plane
kube-controller-manager-master01.k8s.org   1/1     Running   3          11d   component=kube-controller-manager,tier=control-plane
kube-flannel-ds-2z7sk                      1/1     Running   2          11d   app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-flannel-ds-57fng                      1/1     Running   0          11d   app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-flannel-ds-vt2jv                      1/1     Running   0          11d   app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-flannel-ds-wk52c                      1/1     Running   2          11d   app=flannel,controller-revision-hash=64465d999,pod-template-generation=1,tier=node
kube-proxy-2hcd9                           1/1     Running   0          11d   controller-revision-hash=c449f5b75,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-m9s45                           1/1     Running   0          11d   controller-revision-hash=c449f5b75,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-mh9nx                           1/1     Running   0          11d   controller-revision-hash=c449f5b75,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-t57x8                           1/1     Running   0          11d   controller-revision-hash=c449f5b75,k8s-app=kube-proxy,pod-template-generation=1
kube-scheduler-master01.k8s.org            1/1     Running   3          11d   component=kube-scheduler,tier=control-plane
[root@master01 ~]# kubectl delete pod -l app=flannel -n kube-system
pod "kube-flannel-ds-2z7sk" deleted
pod "kube-flannel-ds-57fng" deleted
pod "kube-flannel-ds-vt2jv" deleted
pod "kube-flannel-ds-wk52c" deleted
[root@master01 ~]# kubectl get pods -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-9s8wr                   1/1     Running   0          11d
coredns-7f89b7bc75-ck8gl                   1/1     Running   0          11d
etcd-master01.k8s.org                      1/1     Running   1          11d
kube-apiserver-master01.k8s.org            1/1     Running   1          11d
kube-controller-manager-master01.k8s.org   1/1     Running   3          11d
kube-flannel-ds-9ww8d                      1/1     Running   0          39s
kube-flannel-ds-gd45l                      1/1     Running   0          79s
kube-flannel-ds-ps6c5                      1/1     Running   0          27s
kube-flannel-ds-x642z                      1/1     Running   0          70s
kube-proxy-2hcd9                           1/1     Running   0          11d
kube-proxy-m9s45                           1/1     Running   0          11d
kube-proxy-mh9nx                           1/1     Running   0          11d
kube-proxy-t57x8                           1/1     Running   0          11d
kube-scheduler-master01.k8s.org            1/1     Running   3          11d
[root@master01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.11.2     0.0.0.0         UG    0      0        0 eth0
0.0.0.0         172.16.11.2     0.0.0.0         UG    100    0        0 eth0
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      172.16.11.4     255.255.255.0   UG    0      0        0 eth0
10.244.2.0      172.16.11.5     255.255.255.0   UG    0      0        0 eth0
10.244.3.0      172.16.11.6     255.255.255.0   UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.16.11.0     0.0.0.0         255.255.255.0   U     100    0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
[root@master01 ~]# 

  Note: after the new flannel pods come up, the routing table has changed; no route goes through the flannel.1 interface any more, and the pod subnets are now routed via eth0 directly to the other nodes' IP addresses.

  Verify: exec into one pod, ping another pod's IP, and watch where the packets go

  Capture packets on node01

[root@node01 ~]# tcpdump -i cni0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
19:45:55.693118 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 18944, seq 32, length 64
19:45:55.693285 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 18944, seq 32, length 64
19:45:56.693771 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 18944, seq 33, length 64
19:45:56.693941 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 18944, seq 33, length 64
19:45:57.695549 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 18944, seq 34, length 64
19:45:57.695905 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 18944, seq 34, length 64
19:45:58.696517 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 18944, seq 35, length 64
19:45:58.697035 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 18944, seq 35, length 64
^C
8 packets captured
8 packets received by filter
0 packets dropped by kernel
[root@node01 ~]# tcpdump -i flannel.1 -nn icmp        
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
[root@node01 ~]# tcpdump -i eth0 -nn icmp         
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
19:46:24.737002 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 18944, seq 61, length 64
19:46:24.737350 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 18944, seq 61, length 64
19:46:25.737664 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 18944, seq 62, length 64
19:46:25.737987 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 18944, seq 62, length 64
19:46:26.739459 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 18944, seq 63, length 64
19:46:26.739705 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 18944, seq 63, length 64
19:46:27.739800 IP 10.244.2.9 > 10.244.1.3: ICMP echo request, id 18944, seq 64, length 64
19:46:27.740026 IP 10.244.1.3 > 10.244.2.9: ICMP echo reply, id 18944, seq 64, length 64
^C
8 packets captured
8 packets received by filter
0 packets dropped by kernel
[root@node01 ~]# 

  Note: now no ICMP packets can be captured on node01's flannel.1 interface, while the physical interface does capture them, and the capture still shows 10.244.2.9 pinging 10.244.1.3 with plain pod IPs and no encapsulation. Direct routing works here because all the nodes sit on the same layer-2 subnet (172.16.11.0/24); with DirectRouting enabled, flannel only falls back to VXLAN encapsulation for traffic between nodes that are not on the same subnet.
