Giving Kubernetes the Wings of Cilium with CNI Chaining

(figure: cilium.png)

Before we get to CNI Chaining, a quick introduction to Cilium. Arguably the hottest container networking project right now is Cilium: a high-performance container networking solution based on eBPF and XDP, open sourced at https://github.com/cilium/cilium. Its main features include:

  • Security: supports L3/L4/L7 security policies, which by the way they are used can be divided into (see the policy sketch after this list):

    • identity-based policies (security identity)
    • CIDR-based policies
    • label-based policies
  • Networking: supports flat layer 3 networks, such as:

    • overlay networks, including VXLAN and Geneve
    • Linux routed networks, including native Linux routing and cloud providers' advanced network routing
  • BPF-based load balancing
  • convenient monitoring and troubleshooting capabilities
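
As a concrete illustration of a label-based policy combined with an L4 restriction, a minimal CiliumNetworkPolicy might look like the sketch below; the app labels and policy name are made up for the example:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: backend                  # endpoints this policy applies to
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend               # label/identity-based source selection
    toPorts:
    - ports:
      - port: "80"                  # L4: only TCP/80 is allowed
        protocol: TCP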

In addition, the latest versions of Cilium already include the functionality of kube-proxy.

CNI Chaining

Now consider a scenario: your cluster runs on a public cloud, and the entire Kubernetes network model already uses the provider's ENI (elastic network interface) networking, such as aws-cni on AWS or Terway on Alibaba Cloud. ENIs bring many benefits: high performance and a flat Pod network.

But we still want the high-performance load balancing and the observability that Cilium brings.

Enter today's protagonist: CNI Chaining.

CNI Chaining allows Cilium to be used in combination with other CNI plugins.

With Cilium CNI chaining, basic network connectivity and IP address management are handled by the non-Cilium CNI plugin, while Cilium attaches BPF programs to the network devices created by that plugin to provide L3/L4/L7 network visibility, policy enforcement, and other advanced features such as transparent encryption.
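
Concretely, the result is a chained CNI configuration in /etc/cni/net.d on each node, roughly like the following sketch (modeled on the aws-cni chaining example in the Cilium docs; exact fields vary by version). The plugins run in order: aws-cni creates the device and assigns the IP, then cilium-cni is invoked on the result:

{
  "cniVersion": "0.3.1",
  "name": "aws-cni",
  "plugins": [
    {
      "name": "aws-cni",
      "type": "aws-cni",
      "vethPrefix": "eni"
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    },
    {
      "name": "cilium",
      "type": "cilium-cni"
    }
  ]
}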

Cilium currently supports chaining with several network plugins; the 1.7 documentation covers, among others, aws-cni, Calico, Weave Net, and a generic veth mode.

In this article we will mainly test AWS-CNI.

Cilium with AWS ENI

The following walks through setting up Cilium chained with aws-cni. In this hybrid mode, the aws-cni plugin is responsible for setting up virtual network devices through ENIs as well as for address allocation (IPAM). During setup, the Cilium CNI plugin is invoked to attach BPF programs to the network devices configured by aws-cni in order to enforce network policy, perform load balancing, and provide encryption.
(figure: aws-cni-architecture.png)

Deploying the EKS cluster itself is out of scope for this article; refer to the relevant documentation.
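
For reference, a minimal two-node cluster matching the setup below can be created with eksctl (the cluster name is a placeholder; pick your own):

eksctl create cluster --name cilium-demo --region ap-southeast-1 --nodes 2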

Once the cluster is up, running kubectl get nodes should produce output similar to the following:

NAME                                               STATUS   ROLES    AGE   VERSION
ip-172-xx-56-151.ap-southeast-1.compute.internal   Ready    <none>   10m   v1.15.11-eks-af3caf
ip-172-xx-94-192.ap-southeast-1.compute.internal   Ready    <none>   10m   v1.15.11-eks-af3caf

Deploying Helm 3

Run:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3

chmod 700 get_helm.sh

./get_helm.sh

Output like the following indicates that the installation succeeded:

Helm v3.2.0 is available. Changing from version .
Downloading https://get.helm.sh/helm-v3.2.0-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
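
You can double-check the installed client with the following (the version printed may differ):

helm version --short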

Installing Cilium

Add the Cilium Helm repo:

helm repo add cilium https://helm.cilium.io/

Deploy Cilium with Helm:

helm install cilium cilium/cilium --version 1.7.3 \
  --namespace kube-system \
  --set global.cni.chainingMode=aws-cni \
  --set global.masquerade=false \
  --set global.tunnel=disabled \
  --set global.nodeinit.enabled=true

This enables chaining with the aws-cni plugin and also disables tunneling. Because ENI IP addresses are directly routable inside your VPC, no tunnel is needed, and masquerading can be disabled for the same reason.

Output similar to the following indicates the installation succeeded:

NAME: cilium
LAST DEPLOYED: Thu Apr 30 17:56:11 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium.

Your release version is 1.7.3.

For any further help, visit https://docs.cilium.io/en/v1.7/gettinghelp

Restarting Already-Running Pods

The new CNI chaining configuration will not apply to Pods that are already running in the cluster. Existing Pods remain reachable and Cilium will load-balance traffic to them, but policy enforcement will not apply to them and their outgoing traffic will not be load balanced. You must restart these Pods for the chaining configuration to take effect on them.
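
One convenient way to restart a Deployment's Pods (kubectl 1.15 and newer) is kubectl rollout restart; for coredns, for example:

kubectl -n kube-system rollout restart deployment/coredns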

If you are unsure whether a given Pod is managed by Cilium, run kubectl get cep in the corresponding namespace and check whether the Pod is listed.

For example:

kubectl get cep -n kube-system
NAME                       ENDPOINT ID   IDENTITY ID   INGRESS ENFORCEMENT   EGRESS ENFORCEMENT   VISIBILITY POLICY   ENDPOINT STATE   IPV4            IPV6
coredns-5d76c48b7c-q2z5b   1297          43915                                                                        ready            172.26.92.175
coredns-5d76c48b7c-ths7q   863           43915                                                                        ready            172.26.55.46

coredns has been restarted, and the chaining configuration has taken effect on it.

Verifying the Installation

Next, let's check which components have been deployed:

kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
aws-node-5lgwp                    1/1     Running   0          18m
aws-node-cpj9g                    1/1     Running   0          18m
cilium-7ql6n                      1/1     Running   0          94s
cilium-node-init-kxh2t            1/1     Running   0          94s
cilium-node-init-zzlrd            1/1     Running   0          94s
cilium-operator-6f9f88d64-lrt7f   1/1     Running   0          94s
cilium-zdtxq                      1/1     Running   0          94s
coredns-5d76c48b7c-q2z5b          1/1     Running   0          55s
coredns-5d76c48b7c-ths7q          1/1     Running   0          40s
kube-proxy-27j82                  1/1     Running   0          18m
kube-proxy-qktk8                  1/1     Running   0          18m

Deploying the Connectivity Check

You can deploy the connectivity check to test connectivity between Pods.

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.7.3/examples/kubernetes/connectivity-check/connectivity-check.yaml

You should see output like the following:

service/echo-a created
deployment.apps/echo-a created
service/echo-b created
service/echo-b-headless created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
service/echo-b-host-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
deployment.apps/pod-to-a-l3-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-l3-denied-cnp created
deployment.apps/pod-to-a created
deployment.apps/pod-to-b-intra-node-hostport created
deployment.apps/pod-to-b-intra-node created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-hostport created
deployment.apps/pod-to-a-external-1111 created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created

This deploys a series of Deployments that connect to each other over various connectivity paths, with and without service load balancing and with various network policy combinations. The Pod names indicate the connectivity variant, and their readiness and liveness gates indicate the success or failure of each test:

kubectl get pods
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-558b9b6dc4-hjpqx                                  1/1     Running   0          72s
echo-b-59d5ff8b98-gxrb8                                  1/1     Running   0          72s
echo-b-host-f4bd98474-5bpfz                              1/1     Running   0          72s
host-to-b-multi-node-clusterip-7bb8b4f964-4zslk          1/1     Running   0          72s
host-to-b-multi-node-headless-5c5676647b-7dflx           1/1     Running   0          72s
pod-to-a-646cccc5df-ssg8l                                1/1     Running   0          71s
pod-to-a-allowed-cnp-56f4cfd999-2vln8                    1/1     Running   0          72s
pod-to-a-external-1111-7c5c99c6d9-mbglt                  1/1     Running   0          70s
pod-to-a-l3-denied-cnp-556fb69b9f-v9b74                  1/1     Running   0          72s
pod-to-b-intra-node-b9454c7c6-k9s4s                      1/1     Running   0          71s
pod-to-b-intra-node-hostport-665b46c945-x7g8s            1/1     Running   0          71s
pod-to-b-multi-node-clusterip-754d5ff9d-rsqgz            1/1     Running   0          71s
pod-to-b-multi-node-headless-7876749b84-c9fr5            1/1     Running   0          71s
pod-to-b-multi-node-hostport-77fcd6f59f-m7w8s            1/1     Running   0          70s
pod-to-external-fqdn-allow-google-cnp-6478db9cd9-4cc78   1/1     Running   0          70s
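
The connectivity check also installs CiliumNetworkPolicy objects (pod-to-a-allowed-cnp and pod-to-a-l3-denied-cnp above); you can list them using the cnp short name:

kubectl get cnp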

Installing Hubble

A big reason to use Cilium in the first place is traffic observability, so let's deploy Hubble.

Hubble is a fully distributed networking and security observability platform for cloud-native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure, in a completely transparent manner.

Generate the deployment manifest:

git clone https://github.com/cilium/hubble.git
cd hubble/install/kubernetes

helm template hubble \
    --namespace kube-system \
    --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
    --set ui.enabled=true \
> hubble.yaml

Review the generated hubble.yaml file:

---
# Source: hubble/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hubble
  namespace: kube-system
---
# Source: hubble/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: hubble-ui
---
# Source: hubble/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hubble
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
---
# Source: hubble/templates/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hubble-ui
rules:
  - apiGroups:
      - networking.k8s.io
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - componentstatuses
      - endpoints
      - namespaces
      - nodes
      - pods
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - cilium.io
    resources:
      - "*"
    verbs:
      - get
      - list
      - watch
---
# Source: hubble/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hubble
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hubble
subjects:
- kind: ServiceAccount
  name: hubble
  namespace: kube-system
---
# Source: hubble/templates/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hubble-ui
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hubble-ui
subjects:
  - kind: ServiceAccount
    namespace: kube-system
    name: hubble-ui
---
# Source: hubble/templates/svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: hubble-grpc
  namespace: kube-system
  labels:
    k8s-app: hubble
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    k8s-app: hubble
  ports:
  - targetPort: 50051
    protocol: TCP
    port: 50051
---
# Source: hubble/templates/svc.yaml
kind: Service
apiVersion: v1
metadata:
  namespace: kube-system
  name: hubble-ui
spec:
  selector:
    k8s-app: hubble-ui
  ports:
    - name: http
      port: 12000
      targetPort: 12000
  type: ClusterIP
---
# Source: hubble/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hubble
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: hubble
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      annotations:
        prometheus.io/port: "6943"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: hubble
        kubernetes.io/cluster-service: "true"
    spec:
      priorityClassName: system-node-critical
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "k8s-app"
                operator: In
                values:
                - cilium
            topologyKey: "kubernetes.io/hostname"
            namespaces:
            - cilium
            - kube-system
      containers:
      - name: hubble
        image: "quay.io/cilium/hubble:v0.5.0"
        imagePullPolicy: Always
        command:
        - hubble
        args:
        - serve
        - --listen-client-urls=0.0.0.0:50051
        - --listen-client-urls=unix:///var/run/hubble.sock
        - --metrics-server
        - ":6943"
        - --metric=dns
        - --metric=drop
        - --metric=tcp
        - --metric=flow
        - --metric=port-distribution
        - --metric=icmp
        - --metric=http
        env:
          - name: HUBBLE_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: HUBBLE_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        ports:
        - containerPort: 6943
          protocol: TCP
          name: metrics
        readinessProbe:
          exec:
            command:
            - hubble
            - status
          failureThreshold: 3
          initialDelaySeconds: 5
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          {}
        volumeMounts:
        - mountPath: /var/run/cilium
          name: cilium-run
      restartPolicy: Always
      serviceAccount: hubble
      serviceAccountName: hubble
      terminationGracePeriodSeconds: 1
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          # We need to access Cilium's monitor socket
          path: /var/run/cilium
          type: Directory
        name: cilium-run
---
# Source: hubble/templates/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: kube-system
  name: hubble-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: hubble-ui
  template:
    metadata:
      labels:
        k8s-app: hubble-ui
    spec:
      priorityClassName:
      serviceAccountName: hubble-ui
      containers:
        - name: hubble-ui
          image: "quay.io/cilium/hubble-ui:latest"
          imagePullPolicy: Always
          env:
            - name: NODE_ENV
              value: "production"
            - name: LOG_LEVEL
              value: "info"
            - name: HUBBLE
              value: "true"
            - name: HUBBLE_SERVICE
              value: "hubble-grpc.kube-system.svc.cluster.local"
            - name: HUBBLE_PORT
              value: "50051"
          ports:
            - containerPort: 12000
              name: http
          resources:
            {}

Deploy Hubble:

kubectl apply -f hubble.yaml

You should see the following objects created:

serviceaccount/hubble created
serviceaccount/hubble-ui created
clusterrole.rbac.authorization.k8s.io/hubble created
clusterrole.rbac.authorization.k8s.io/hubble-ui created
clusterrolebinding.rbac.authorization.k8s.io/hubble created
clusterrolebinding.rbac.authorization.k8s.io/hubble-ui created
service/hubble-grpc created
service/hubble-ui created
daemonset.apps/hubble created
deployment.apps/hubble-ui created
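
You can verify that the Hubble Pods came up using the k8s-app=hubble label from the manifest:

kubectl -n kube-system get pods -l k8s-app=hubble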

At this point we still need a load balancer for the Hubble UI so that we can access it from outside the cluster.

To do that, change the type of the hubble-ui Service to LoadBalancer (adding the NLB annotation for AWS), as follows:

kind: Service
apiVersion: v1
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  namespace: kube-system
  name: hubble-ui
spec:
  selector:
    k8s-app: hubble-ui
  ports:
    - name: http
      port: 12000
      targetPort: 12000
  type: LoadBalancer
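
After applying the modified Service (the file name below is just an example), the address of the NLB appears in the EXTERNAL-IP column:

kubectl apply -f hubble-ui-svc.yaml
kubectl -n kube-system get svc hubble-ui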

Opening the UI, we see something like this:

(figure: hubble.jpg)

Zooming in, we can clearly see the network topology of the connectivity-check deployments:
(figure: topo.jpg)
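
If you prefer the command line to the UI, the same flow data can also be queried with the hubble CLI inside one of the hubble Pods (a rough sketch; available flags vary across hubble versions):

HUBBLE_POD=$(kubectl -n kube-system get pod -l k8s-app=hubble -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$HUBBLE_POD" -- hubble observe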

Summary

This article showed how CNI Chaining lets Cilium augment other network models with its own features.

Note, however, that eBPF has fairly demanding kernel requirements; 3.x kernels are not supported (the Cilium documentation requires Linux 4.9.17 or newer).

The latest Cilium implements load balancing with eBPF and can dispense with kube-proxy entirely.
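
As a sketch based on the 1.7 Helm options (verify against the docs for your version, and note this is a separate setup from the aws-cni chaining shown above), kube-proxy replacement is enabled roughly like this; the API server placeholders must point at your control plane, since kube-proxy is no longer there to route to it:

helm install cilium cilium/cilium --version 1.7.3 \
  --namespace kube-system \
  --set global.kubeProxyReplacement=strict \
  --set global.k8sServiceHost=<API_SERVER_IP> \
  --set global.k8sServicePort=<API_SERVER_PORT>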

In upcoming posts, we will dig into how Cilium works and other related topics.
