Kubernetes Learning Notes 19: Network Policies Based on canal

1. Overview

  1. As we have said before, Kubernetes has many network plugins to choose from. Besides flannel, another popular one is calico. Because the name calico is used by many projects, its authors call it Project Calico, but in the Kubernetes context it is usually referred to simply as calico. calico natively builds the pod network with BGP: through BGP route learning, each node generates routing-table entries for reaching the pods on every other node, and these entries are adjusted automatically whenever something changes. calico also supports IP-IP, i.e., carrying an IP packet inside another IP packet, unlike VXLAN, which carries one Ethernet frame inside another Ethernet frame. In that sense IP-IP is a layer-3 tunnel, which means that if you want tunneling in a calico network to build more powerful control logic, calico supports that too, just as IP-IP tunnels, quite similar to the IP-IP mode in LVS. Configuring calico well, however, depends on understanding BGP and related protocols. Here we will not cover how calico works as a network plugin providing pod networking; instead we concentrate on how calico provides network policy, because flannel by itself cannot. The two have in fact been combined into a single deployment: canal. So even if, before ever hearing of canal, you followed the hints of kubectl and of many tutorials and deployed flannel as your network plugin, and you now want network policy on top of it without replacing flannel, there is a solution: keep flannel providing the network and additionally deploy calico purely to provide network policy.

  2. Before deploying, know that calico's default network is not 10.244.0.0. If calico is used as the network plugin, it works on the 192.168.0.0 network with a 16-bit mask, and each node is allocated a per-node subnet: 192.168.0.0/24, 192.168.1.0/24, and so on. Here, however, we use calico only as the network-policy provider, not as the network plugin, so we stay on the 10.244.0.0 network.
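  For orientation: in the canal deployment it is flannel, not calico, that owns pod address allocation, and the pod network is set in the canal-config ConfigMap. Below is a sketch of the relevant fragment, assuming the layout of the v3.1 canal manifest; verify it against the file you actually apply:

# Sketch of the flannel network config carried inside the canal-config
# ConfigMap (field names follow the v3.1 canal manifest; assumption, not
# copied from this cluster).
kind: ConfigMap
apiVersion: v1
metadata:
  name: canal-config
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }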

2. Installing calico

  1. The official documentation lists several ways to install calico (as of this writing, calico does not yet support the ipvs proxy mode).

  2. Installation guide: https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/flannel

  3. Deploying recent calico versions is fairly involved. calico does not handle cluster-wide address assignment entirely by itself; it relies on etcd. That is awkward: as you know, Kubernetes has its own etcd database, and calico would need an etcd database too, i.e., two separate etcd clusters, each managed on its own, which is no fun for a Kubernetes engineer. What can we do? Later calico releases can also store their data not in a dedicated etcd cluster but via the apiserver: every setting is sent to the apiserver, which then persists it in Kubernetes' etcd. (No node or component in the cluster may write to Kubernetes' etcd directly; everything must go through the apiserver, mainly to guarantee data consistency.) This means there are two deployment modes for calico: first, with an etcd separate from Kubernetes'; second, using Kubernetes' etcd directly, by way of the apiserver. We use the second mode here. The official site describes the deployment as follows:

Installing with the Kubernetes API datastore (recommended)

Ensure that the Kubernetes controller manager has the following flags set:
--cluster-cidr=10.244.0.0/16 and --allocate-node-cidrs=true.

Tip: If you're using kubeadm, you can pass --pod-network-cidr=10.244.0.0/16
to kubeadm to set the Kubernetes controller flags.

If your cluster has RBAC enabled, issue the following command to configure
the roles and bindings that Calico requires.

kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml

Note: You can also view the manifest in your browser.

Issue the following command to install Calico.

kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

Note: You can also view the manifest in your browser.

  4. Now let's deploy it.

    a. First, apply the rbac.yaml manifest:

[root@k8smaster flannel]# kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
clusterrole.rbac.authorization.k8s.io/calico created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created

    b. Second, apply canal.yaml:

[root@k8smaster flannel]# kubectl apply -f \
> https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml
configmap/canal-config created
daemonset.extensions/canal created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

    c. Next we check whether the corresponding components have come up (see section 3 below).

3. Using canal

  1. With canal deployed, how do we control communication between our pods? We will create two namespaces on the cluster, dev and prod, then create a pod in each, and check whether pods can communicate directly across namespaces; if they can, we will see how to forbid it. We can also design it like this: give a namespace a default policy under which no traffic entering the namespace is allowed at all. Broadly, that is how our network control policies are defined. A network control policy (NetworkPolicy) uses Egress and Ingress rules to control the different traffic directions. What does that mean? Egress means outbound and Ingress means inbound (this Ingress has nothing to do with the Kubernetes Ingress resource). Egress describes our pod acting as a client accessing someone else: our pod is the source address and the remote side is the destination. Ingress means our pod is the destination and the remote side is the source. We may want to control both directions, and when controlling Egress remember that a client's port is random while a server's port is fixed: for outbound requests the remote side is the server, so its port and its address are both predictable, whereas our own address is predictable but our own port is not. By the same logic, when someone accesses us, our address and our port are both predictable, but the remote port is not, because the remote side is the client.

  2. Therefore, when defining an Egress (outbound) rule we can constrain the destination address and destination port, and when defining an Ingress (inbound) rule we can constrain the remote (source) address and our own local port. Which pods does such a rule apply to; whose traffic does the policy govern? We select them with a podSelector: the rules take effect on whichever pods it matches. We often control a single pod, but we can just as well control a group, so a podSelector effectively says that this whole group of pods is governed by these Egress and Ingress rules. More importantly, a policy can be made direction-specific: control inbound while allowing all outbound, or control outbound while allowing all inbound. Definitions are therefore quite flexible, and you will find the model not very different from iptables. Which approach is safest? Certainly: deny everything, then allow only what is known. And if your namespaces host different projects, possibly even different customers' projects, you can give each namespace a default policy: all pods within the namespace communicate without obstruction, while cross-namespace traffic is denied. That is what we would call a namespace default policy.
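  To make the two directions concrete, here is a minimal sketch (the app: demo label, the CIDR, and the ports are illustrative assumptions, not taken from this deployment): the egress rule constrains the remote destination and its port, while the ingress rule constrains the remote source and our own port.

# Illustrative skeleton only: egress constrains where the selected pods may
# connect TO (destination address and the remote, predictable server port);
# ingress constrains who may connect FROM (source address) and on which of
# OUR local ports.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: direction-demo
spec:
  podSelector:
    matchLabels:
      app: demo            # hypothetical label: the pods this policy governs
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:                  # remote source addresses we accept
    - ipBlock:
        cidr: 10.244.0.0/16
    ports:                 # our own port, the predictable server side
    - protocol: TCP
      port: 80
  egress:
  - to:                    # remote destinations we may reach
    - ipBlock:
        cidr: 10.244.0.0/16
    ports:                 # the remote server port, also predictable
    - protocol: TCP
      port: 80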

  3. Next, let's see these policies in action.

    a. We can see that the canal pods we loaded are up and running:

[root@k8smaster ~]# kubectl get pods -n kube-system |grep canal
canal-hmc47                            3/3       Running   0          45m
canal-sw5q6                            3/3       Running   0          45m
canal-xxvzk                            3/3       Running   0          45m

    b. Let's look at how a network policy is defined:

[root@k8smaster ~]# kubectl explain networkpolicy
KIND:     NetworkPolicy
VERSION:  extensions/v1beta1

DESCRIPTION:
     DEPRECATED 1.9 - This group version of NetworkPolicy is deprecated by
     networking/v1/NetworkPolicy. NetworkPolicy describes what network traffic
     is allowed for a set of Pods

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec <Object>
     Specification of the desired behavior for this NetworkPolicy.

[root@k8smaster ~]# kubectl explain networkpolicy.spec
KIND:     NetworkPolicy
VERSION:  extensions/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior for this NetworkPolicy.
     DEPRECATED 1.9 - This group version of NetworkPolicySpec is deprecated by
     networking/v1/NetworkPolicySpec.

FIELDS:
   egress       <[]Object>    # egress (outbound) rules
     List of egress rules to be applied to the selected pods. Outgoing traffic
     is allowed if there are no NetworkPolicies selecting the pod (and cluster
     policy otherwise allows the traffic), OR if the traffic matches at least
     one egress rule across all of the NetworkPolicy objects whose podSelector
     matches the pod. If this field is empty then this NetworkPolicy limits
     all outgoing traffic (and serves solely to ensure that the pods it
     selects are isolated by default). This field is beta-level in 1.8

   ingress      <[]Object>    # ingress (inbound) rules
     List of ingress rules to be applied to the selected pods. Traffic is
     allowed to a pod if there are no NetworkPolicies selecting the pod OR if
     the traffic source is the pod's local node, OR if the traffic matches at
     least one ingress rule across all of the NetworkPolicy objects whose
     podSelector matches the pod. If this field is empty then this
     NetworkPolicy does not allow any traffic (and serves solely to ensure
     that the pods it selects are isolated by default).

   podSelector  <Object> -required-    # which pods the rules apply to
     Selects the pods to which this NetworkPolicy object applies. The array of
     ingress rules is applied to any pods selected by this field. Multiple
     network policies can select the same set of pods. In this case, the
     ingress rules for each are combined additively. This field is NOT
     optional and follows standard label selector semantics. An empty
     podSelector matches all pods in this namespace.

   policyTypes  <[]string>    # policy type: if a policy defines both Egress and Ingress, which takes effect? They do not conflict, but you can choose to make only one direction effective at a time.
     List of rule types that the NetworkPolicy relates to. Valid options are
     Ingress, Egress, or Ingress,Egress. If this field is not specified, it
     will default based on the existence of Ingress or Egress rules; policies
     that contain an Egress section are assumed to affect Egress, and all
     policies (whether or not they contain an Ingress section) are assumed to
     affect Ingress. If you want to write an egress-only policy, you must
     explicitly specify policyTypes [ "Egress" ]. Likewise, if you want to
     write a policy that specifies that no egress is allowed, you must specify
     a policyTypes value that include "Egress" (since such a policy would not
     include an Egress section and would otherwise default to just
     [ "Ingress" ]). This field is beta-level in 1.8

      Let's look at the egress definition:

[root@k8smaster ~]# kubectl explain networkpolicy.spec.egress
KIND:     NetworkPolicy
VERSION:  extensions/v1beta1

RESOURCE: egress <[]Object>

DESCRIPTION:
     List of egress rules to be applied to the selected pods. Outgoing traffic
     is allowed if there are no NetworkPolicies selecting the pod (and cluster
     policy otherwise allows the traffic), OR if the traffic matches at least
     one egress rule across all of the NetworkPolicy objects whose podSelector
     matches the pod. If this field is empty then this NetworkPolicy limits
     all outgoing traffic (and serves solely to ensure that the pods it
     selects are isolated by default). This field is beta-level in 1.8

     DEPRECATED 1.9 - This group version of NetworkPolicyEgressRule is
     deprecated by networking/v1/NetworkPolicyEgressRule.
     NetworkPolicyEgressRule describes a particular set of traffic that is
     allowed out of pods matched by a NetworkPolicySpec's podSelector. The
     traffic must match both ports and to. This type is beta-level in 1.8

FIELDS:
   ports        <[]Object>    # destination ports; may be a port name or number plus the protocol
     List of destination ports for outgoing traffic. Each item in this list is
     combined using a logical OR. If this field is empty or missing, this rule
     matches all ports (traffic not restricted by port). If this field is
     present and contains at least one item, then this rule allows traffic
     only if the traffic matches at least one port in the list.

   to           <[]Object>
     List of destinations for outgoing traffic of pods selected for this rule.
     Items in this list are combined using a logical OR operation. If this
     field is empty or missing, this rule matches all destinations (traffic
     not restricted by destination). If this field is present and contains at
     least one item, this rule allows traffic only if the traffic matches at
     least one item in the to list.

      Now look at to. to is the destination; it can be any one of three kinds of selectors, and they can also be combined, in which case the result is their intersection:

[root@k8smaster ~]# kubectl explain networkpolicy.spec.egress.to
KIND:     NetworkPolicy
VERSION:  extensions/v1beta1

RESOURCE: to <[]Object>

DESCRIPTION:
     List of destinations for outgoing traffic of pods selected for this rule.
     Items in this list are combined using a logical OR operation. If this
     field is empty or missing, this rule matches all destinations (traffic
     not restricted by destination). If this field is present and contains at
     least one item, this rule allows traffic only if the traffic matches at
     least one item in the to list.

     DEPRECATED 1.9 - This group version of NetworkPolicyPeer is deprecated by
     networking/v1/NetworkPolicyPeer.

FIELDS:
   ipBlock      <Object>    # the destination can also be an IP block: every endpoint in an address range, whether pod or host
     IPBlock defines policy on a particular IPBlock. If this field is set then
     neither of the other fields can be.

   namespaceSelector    <Object>    # namespace selector: the controlled pods can reach other namespaces, and every pod in the selected namespaces is in scope; selecting a set of namespaces here governs how the source pods may access all pods, or one particular pod, inside them
     Selects Namespaces using cluster-scoped labels. This field follows
     standard label selector semantics; if present but empty, it selects all
     namespaces. If PodSelector is also set, then the NetworkPolicyPeer as a
     whole selects the Pods matching PodSelector in the Namespaces selected by
     NamespaceSelector. Otherwise it selects all Pods in the Namespaces
     selected by NamespaceSelector.

   podSelector  <Object>    # the destination can also be another group of pods, controlling group-to-group traffic: one group of pods as the source, another group as the destination
     This is a label selector which selects Pods. This field follows standard
     label selector semantics; if present but empty, it selects all pods. If
     NamespaceSelector is also set, then the NetworkPolicyPeer as a whole
     selects the Pods matching PodSelector in the Namespaces selected by
     NamespaceSelector. Otherwise it selects the Pods matching PodSelector in
     the policy's own Namespace.

      Ingress has exactly the same structure, except that to becomes from.
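      To illustrate the intersection semantics, here is a hypothetical egress rule (the project: myproject and role: db labels and port 5432 are invented for the example): because namespaceSelector and podSelector appear in the same to item, the destination is "pods labeled role=db inside namespaces labeled project=myproject".

# Hypothetical example: namespaceSelector and podSelector inside ONE `to`
# item intersect (AND). The selected pods may reach only role=db pods that
# live in project=myproject namespaces, on TCP 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-db
spec:
  podSelector:
    matchLabels:
      app: myapp           # the pods this policy governs (illustrative)
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          project: myproject
      podSelector:
        matchLabels:
          role: db
    ports:
    - protocol: TCP
      port: 5432

      Had namespaceSelector and podSelector been written as two separate items in the to list, the semantics would be OR instead: all pods in the selected namespaces, plus all role=db pods in the policy's own namespace.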

    c. A few rules about policyTypes deserve attention. If the field is not specified, whichever Egress and Ingress rule sections exist take effect: anything that appears is applied. But if we define only Ingress rules while listing both Egress and Ingress under policyTypes, then Egress falls back to its default behavior for the selected pods: were the default allow, everything would be allowed, and since the default for a listed type with no rules is deny, everything is denied. Our default is deny, so Egress here would deny all outbound traffic.
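    A sketch of that pitfall, assuming a namespace where we only meant to filter inbound traffic: because Egress is listed in policyTypes without any egress section, the policy also denies all outbound traffic.

# Sketch: ingress rules are defined, but since Egress is ALSO listed in
# policyTypes with no egress section, all outbound traffic from the selected
# pods is denied (an empty rule set for a listed type means deny).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-only-but-egress-denied
spec:
  podSelector: {}          # every pod in the namespace this is applied to
  ingress:
  - from:
    - podSelector: {}      # allow inbound from pods in the same namespace
  policyTypes:
  - Ingress
  - Egress                 # listed with no egress rules -> egress denied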

  4. Let's set a default Ingress policy: shut off all inbound traffic for a namespace, deny everyone, and then open only specific traffic; that is safer. For instance, by default every pod in the dev namespace can be accessed by others; not allowing that access ought to be the default rule. So how do we define such a rule?

    a. First we create the namespaces dev and prod:

[root@k8smaster networkpolicy]# kubectl create namespace dev
namespace/dev created
[root@k8smaster networkpolicy]# kubectl create namespace prod
namespace/prod created

    b. Next we create the default Ingress rule and apply it to the dev namespace:

[root@k8smaster networkpolicy]# cat ingress-def.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}    # empty selector: every pod in the target namespace, effectively governing the whole namespace
  policyTypes:
  - Ingress          # only Ingress takes effect; since no ingress rules are defined, nothing is allowed, i.e. all inbound traffic is denied by default. Egress is not listed in policyTypes, so outbound traffic stays allowed.
[root@k8smaster networkpolicy]# kubectl apply -f ingress-def.yml -n dev
networkpolicy.networking.k8s.io/deny-all-ingress created

      Check the NetworkPolicy we defined:

[root@k8smaster networkpolicy]# kubectl get netpol -n dev
NAME               POD-SELECTOR   AGE
deny-all-ingress   <none>         1m

    c. Now we create a pod in the dev namespace and see whether it can be reached:

[root@k8smaster networkpolicy]# cat pod-a.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8smaster networkpolicy]# kubectl apply -f pod-a.yaml -n dev
pod/pod1 created
[root@k8smaster networkpolicy]# kubectl get pods -n dev
NAME      READY     STATUS    RESTARTS   AGE
pod1      1/1       Running   0          7s
[root@k8smaster networkpolicy]# kubectl get pods -n dev -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP           NODE
pod1      1/1       Running   0          1m        10.244.2.2   k8snode2
[root@k8smaster networkpolicy]# curl 10.244.2.2    # hangs: the pod cannot be reached
^C

    d. We create a pod in the prod namespace and see whether it can be reached:

[root@k8smaster networkpolicy]# kubectl apply -f pod-a.yaml -n prod
pod/pod1 created
[root@k8smaster networkpolicy]# kubectl get pods -n prod -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP           NODE
pod1      1/1       Running   0          12s       10.244.1.2   k8snode1
[root@k8smaster networkpolicy]# curl 10.244.1.2    # reachable: no policy is defined in prod
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

  5. Now let's open up 10.244.2.2. The dev namespace denies every inbound request by default, and we want to let others reach our single pod pod1 in dev. As a first step we allow all inbound again (in step 6 we will narrow this down to a specific rule):

[root@k8smaster networkpolicy]# cat ingress-def.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}    # empty selector: every pod in the target namespace, effectively governing the whole namespace
  ingress:
  - {}               # an empty rule means everything is allowed
  policyTypes:
  - Ingress          # only Ingress takes effect; ingress is now allowed, and egress is allowed as well
[root@k8smaster networkpolicy]# kubectl get pods -n dev -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP           NODE
pod1      1/1       Running   0          15m       10.244.2.2   k8snode2
[root@k8smaster networkpolicy]# curl 10.244.2.2    # reachable now
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

  6. Next we add a rule set saying that we want to allow others to access this 10.244.2.2 pod, or a whole group of pods. To define such a group we can, for example, label those pods app=myapp.

    a. First we label pod1 in dev with app=myapp; before that, we set the namespace back to deny-all:

[root@k8smaster networkpolicy]# cat ingress-def.yml.bak
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}    # empty selector: every pod in the target namespace, effectively governing the whole namespace
  policyTypes:
  - Ingress          # only Ingress takes effect; with no ingress rules defined, all inbound is denied by default, while egress stays allowed because it is not listed in policyTypes
[root@k8smaster networkpolicy]# kubectl apply -f ingress-def.yml.bak -n dev
networkpolicy.networking.k8s.io/deny-all-ingress unchanged
[root@k8smaster networkpolicy]# kubectl label pods pod1 app=myapp -n dev
pod/pod1 labeled
[root@k8smaster networkpolicy]# kubectl get pods -n dev -o wide --show-labels
NAME      READY     STATUS    RESTARTS   AGE       IP           NODE       LABELS
pod1      1/1       Running   0          22m       10.244.2.2   k8snode2   app=myapp

    b. We define the following Ingress rule to open specific inbound traffic: clients from the 10.244.0.0/16 network may access the local group of pods labeled app=myapp, but only on port 80; other ports are not mentioned and therefore stay denied by default.

[root@k8smaster networkpolicy]# cat allow-netpol-demo.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp             # match the pods labeled app=myapp
  ingress:
  - from:
    - ipBlock:               # an IP block
        cidr: 10.244.0.0/16  # allow this network
        except:              # except this address
        - 10.244.1.2/32      # i.e. exclude 10.244.1.2
    ports:                   # which local ports may be accessed; myapp only serves port 80, so we open 80
    - protocol: TCP
      port: 80
[root@k8smaster networkpolicy]# kubectl apply -f allow-netpol-demo.yaml -n dev
networkpolicy.networking.k8s.io/allow-myapp-ingress unchanged
[root@k8smaster networkpolicy]# kubectl get netpol -n dev
NAME                  POD-SELECTOR   AGE
allow-myapp-ingress   app=myapp      34s
deny-all-ingress      <none>         49m
[root@k8smaster networkpolicy]# curl 10.244.2.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

    Accessing port 443 is denied at first; once we open it in the policy, we instead get an immediate connection-refused, i.e. the traffic now reaches the pod, which simply has nothing listening on 443:

[root@k8smaster networkpolicy]# curl 10.244.2.2:443    # denied: the request hangs
^C
[root@k8smaster networkpolicy]# cat allow-netpol-demo.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp             # match the pods labeled app=myapp
  ingress:
  - from:
    - ipBlock:               # an IP block
        cidr: 10.244.0.0/16  # allow this network
        except:              # except this address
        - 10.244.1.2/32      # i.e. exclude 10.244.1.2
    ports:                   # allowed local ports
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
[root@k8smaster networkpolicy]# kubectl apply -f allow-netpol-demo.yaml -n dev
networkpolicy.networking.k8s.io/allow-myapp-ingress configured
[root@k8smaster networkpolicy]# curl 10.244.2.2:443    # let through by the policy; the refusal comes from the pod itself
curl: (7) Failed connect to 10.244.2.2:443; Connection refused

    c. Controlling outbound traffic works exactly the same way as controlling inbound traffic; just replace Ingress with Egress (and from with to).
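    For example, a hypothetical outbound counterpart to allow-myapp-ingress (the name and the allowed destination are illustrative) could look like the sketch below; the selected pods may then only initiate connections to 10.244.0.0/16 on TCP port 80.

# Hypothetical mirror of allow-myapp-ingress for the outbound direction:
# the app=myapp pods may connect out only to 10.244.0.0/16 on TCP 80.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-egress
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.244.0.0/16
    ports:
    - protocol: TCP
      port: 80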

  7. Take the prod namespace as an example: we first set its egress rules to deny, then gradually open things up so traffic can get out; inbound we leave fully allowed.

    a. First we define the NetworkPolicy:

[root@k8smaster networkpolicy]# cat egress-def.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}    # empty selector: every pod in the target namespace, effectively governing the whole namespace
  policyTypes:
  - Egress           # only Egress takes effect; with no egress rules defined, all outbound is denied by default, while ingress stays allowed
[root@k8smaster networkpolicy]# kubectl apply -f egress-def.yml -n prod
networkpolicy.networking.k8s.io/deny-all-egress created
[root@k8smaster networkpolicy]# kubectl get netpol -n prod
NAME              POD-SELECTOR   AGE
deny-all-egress   <none>         16s

    b. At this point pod1 in prod cannot ping out, because egress denies all traffic by default. Allowing all egress again is just as simple, defined the same way as for ingress:

[root@k8smaster networkpolicy]# cat egress-def.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}    # empty selector: every pod in the target namespace, effectively governing the whole namespace
  egress:
  - {}               # an empty rule allows all outbound traffic
  policyTypes:
  - Egress           # only Egress takes effect; with the empty rule above, outbound is allowed again, and ingress stays allowed
[root@k8smaster networkpolicy]# kubectl apply -f egress-def.yml -n prod
networkpolicy.networking.k8s.io/deny-all-egress unchanged
[root@k8smaster networkpolicy]# kubectl exec -it pod1 -n prod /bin/sh
/ # ping 10.244.0.26
PING 10.244.0.26 (10.244.0.26): 56 data bytes
64 bytes from 10.244.0.26: seq=0 ttl=62 time=3.317 ms
64 bytes from 10.244.0.26: seq=1 ttl=62 time=0.630 ms

    Now we change egress back to the deny state and find the ping no longer gets through:

[root@k8smaster networkpolicy]# kubectl exec -it pod1 -n prod /bin/sh
/ # ping 10.244.0.26
PING 10.244.0.26 (10.244.0.26): 56 data bytes
^C

    When defining egress rules that open specific outbound traffic, we can only specify the remote address and remote port; acting as a client, a pod effectively needs to be able to reach arbitrary remote addresses anyway. In practice you would usually allow all egress: outbound traffic is generally unproblematic, and the real control point is inbound, deciding exactly who may get in, so we will not belabor egress further. If you want to be strict about it, you can deny all inbound and all outbound for every namespace and open things individually. But deny-all-in plus deny-all-out has a catch: written with a podSelector, it also cuts off pod-to-pod traffic inside the same namespace, because podSelector operates at the pod level, not the namespace level, so the selected pods can no longer talk to anyone, namespace neighbors included. Where necessary, then, after denying all outbound and all inbound you should add two more rules: traffic from pods of this namespace out to pods of this same namespace is allowed, and likewise back in. That restores communication between pods of the same namespace.

  八、通常來講網絡策略咱們能夠這樣幹

    對於名稱空間來講,咱們先拒絕全部出棧和入棧,而後再放行全部出棧目標爲本地全部名稱空間內的全部pod;這樣至少內部通訊沒問題了,剩下的跨名稱空間再單獨定義就行。而放行全部出棧爲本名稱空間,入棧也是本名稱空間,所以此時Ingress和Egress都須要定義。可是咱們使用namespaceselector來選擇哪一個名稱空間,或者入棧和出棧的時候都寫成podselector,寫Ingress和Egress時都寫成{},即本地全部的就好了。
