Istio Traffic Management (Hands-on, Part 1) (Istio Series 3)

This walkthrough uses the official Bookinfo application and covers the Traffic Management chapter of the official documentation: request routing, fault injection, traffic shifting, TCP traffic shifting, request timeouts, circuit breaking, and traffic mirroring. Ingress and Egress are not covered here and will be added later.

Deploying the Bookinfo Application

About the Bookinfo Application

The official test application consists of the following four components:

  • productpage: the productpage service calls the details and reviews services to populate the web page.
  • details: the details service contains book information.
  • reviews: the reviews service contains book reviews and calls the ratings service.
  • ratings: the ratings service contains the book-ranking information that accompanies the reviews.

reviews has three versions:

  • v1 does not call the ratings service.
  • v2 calls the ratings service and displays each rating as 1 to 5 black stars.
  • v3 calls the ratings service and displays each rating as 1 to 5 red stars.

Deployment

Deploy the Bookinfo application in the default namespace, using automatic sidecar injection:

  • Enable automatic sidecar injection in the default namespace with the following commands (the application can also be deployed in any other namespace, since the Bookinfo manifests do not specify one):

    $ cat <<EOF | oc -n <target-namespace> create -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    $ kubectl label namespace default istio-injection=enabled
  • Switch to the default namespace and deploy the Bookinfo application:

    $ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

    After a short wait, all Bookinfo pods start successfully. Check the pods and services:

    $ oc get pod
    NAME                              READY   STATUS    RESTARTS   AGE
    details-v1-78d78fbddf-5mfv9       2/2     Running   0          2m27s
    productpage-v1-85b9bf9cd7-mfn47   2/2     Running   0          2m27s
    ratings-v1-6c9dbf6b45-nm6cs       2/2     Running   0          2m27s
    reviews-v1-564b97f875-ns9vz       2/2     Running   0          2m27s
    reviews-v2-568c7c9d8f-6r6rq       2/2     Running   0          2m27s
    reviews-v3-67b4988599-ddknm       2/2     Running   0          2m27s
    $ oc get svc                                              
    NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    details       ClusterIP      10.84.97.183   <none>        9080/TCP   3m33s
    kubernetes    ClusterIP      10.84.0.1      <none>        443/TCP    14d
    productpage   ClusterIP      10.84.98.111   <none>        9080/TCP   3m33s
    ratings       ClusterIP      10.84.237.68   <none>        9080/TCP   3m33s
    reviews       ClusterIP      10.84.39.249   <none>        9080/TCP   3m33s

    Use the following command to verify that Bookinfo is installed correctly:

    $ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
    
    <title>Simple Bookstore App</title> # expected output

    The service can also be accessed directly through its endpoint:

    $ oc describe svc productpage|grep Endpoint
    Endpoints:         10.83.1.85:9080
    $ curl -s 10.83.1.85:9080/productpage | grep -o "<title>.*</title>"

    In OpenShift, a Route (acting as the Kubernetes ingress gateway) can be created for access (replace ${HOST_NAME} with the actual hostname):

    kind: Route
    apiVersion: route.openshift.io/v1
    metadata:
      name: productpage
      namespace: default
      labels:
        app: productpage
        service: productpage
      annotations:
        openshift.io/host.generated: 'true'
    spec:
      host: ${HOST_NAME}
      to:
        kind: Service
        name: productpage
        weight: 100
      port:
        targetPort: http
      wildcardPolicy: None

    The ingress configuration from the official documentation is skipped here and will be covered later.

  • Configure the default destination rules:

    Without mutual TLS:

    $ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml

    With mutual TLS enabled (not recommended when first learning Istio):

    $ kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml

    List the configured destination rules:

    $ kubectl get destinationrules -o yaml

    The destination rules are shown below. Note that in the default installation, the services other than reviews only have a v1 deployment:

    - apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        annotations:
          ...
        name: details
        namespace: default
      spec:
        host: details    # corresponds to the Kubernetes service "details"
        subsets:
        - labels:        # the actual details deployment carries only the label "version: v1"
            version: v1
          name: v1
        - labels:
            version: v2
          name: v2
    	  
    - apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        annotations:
          ...
        name: productpage
        namespace: default
      spec:
        host: productpage
        subsets:
        - labels:
            version: v1
          name: v1
    	  
    - apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        annotations:
          ...
        name: ratings
        namespace: default
      spec:
        host: ratings
        subsets:
        - labels:
            version: v1
          name: v1
        - labels:
            version: v2
          name: v2
        - labels:
            version: v2-mysql
          name: v2-mysql
        - labels:
            version: v2-mysql-vm
          name: v2-mysql-vm
    	  
    - apiVersion: networking.istio.io/v1beta1
      kind: DestinationRule
      metadata:
        annotations:
          ...
        name: reviews     # the Kubernetes service "reviews" actually has 3 versions
        namespace: default
      spec:
        host: reviews
        subsets:
        - labels:
            version: v1
          name: v1
        - labels:
            version: v2
          name: v2
        - labels:
            version: v3
          name: v3
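The subsets above are essentially named label selectors over the service's pods; subsets whose labels match no running pod receive no traffic. A minimal sketch of that relationship (the data and helper name are illustrative, assuming the default install where details runs only v1):

```python
# A sketch of what a DestinationRule subset is: a named label selector over
# the service's pods. The data below is illustrative, assuming the default
# install where the details service runs only v1.
SUBSETS = {"v1": {"version": "v1"}, "v2": {"version": "v2"}}
PODS = [{"app": "details", "version": "v1"}]

def pods_in_subset(name: str):
    labels = SUBSETS[name]
    return [p for p in PODS if all(p.get(k) == v for k, v in labels.items())]

assert len(pods_in_subset("v1")) == 1  # the v1 subset matches the only pod
assert pods_in_subset("v2") == []      # v2 is defined but matches nothing
```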

Uninstalling

Uninstall Bookinfo with the following command:

$ samples/bookinfo/platform/kube/cleanup.sh

Traffic Management

Request Routing

This section shows how to dynamically route requests to the multiple versions of the Bookinfo microservices. After the BookInfo deployment above, three versions of the reviews service are running, displaying no ratings, black-star ratings, and red-star ratings respectively. Since Istio by default distributes requests across the three reviews services in round-robin order, refreshing the /productpage page cycles through the following:

  • v1:

  • v2:

  • v3:

The following shows how to route all requests to only one of the reviews services.

First create the following virtual services:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Check the routing configuration:

$ kubectl get virtualservices -o yaml
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: details
    namespace: default
  spec:
    hosts:
    - details
    http:
    - route:
      - destination:
          host: details
          subset: v1
		  
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: productpage
    namespace: default
  spec:
    hosts:
    - productpage
    http:
    - route:
      - destination:
          host: productpage
          subset: v1
		  
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: ratings
    namespace: default
  spec:
    hosts:
    - ratings
    http:
    - route:
      - destination:
          host: ratings
          subset: v1
		  
- apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    annotations:
      ...
    name: reviews
    namespace: default
  spec:
    hosts:
    - reviews
    http:
    - route:
      - destination: # all traffic is routed to the v1 subset of the `reviews` service
          host: reviews # the Kubernetes service, resolving to reviews.default.svc.cluster.local
          subset: v1 # changing v1 to v2 here would send all requests to v2 instead

Refreshing the /productpage page now only ever shows the no-ratings version.
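Conceptually, these virtual services pin each host to a single subset. A minimal sketch of that selection logic (ROUTES and pick_subset are illustrative names, not Istio APIs):

```python
# A sketch of the route selection the virtual services above express: every
# host is pinned to its v1 subset. ROUTES and pick_subset are illustrative,
# not Istio APIs.
ROUTES = {
    "details": "v1",
    "productpage": "v1",
    "ratings": "v1",
    "reviews": "v1",  # change to "v2" to send all reviews traffic to v2 instead
}

def pick_subset(host: str) -> str:
    """Return the subset that all traffic for `host` is routed to."""
    return ROUTES[host]

assert pick_subset("reviews") == "v1"  # every request lands on reviews v1
```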

Clean up:

$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Routing Based on User Identity

This section demonstrates routing based on an HTTP header field. First, log in on the /productpage page as a user named jason (any password will do).

Deploy the user-based routing rule:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

The created VirtualService is:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: reviews
  namespace: default
spec:
  hosts:
  - reviews
  http:
  - match: # requests whose HTTP headers contain end-user: jason are routed to v2
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route: # requests without the end-user: jason header are routed to v1
    - destination:
        host: reviews
        subset: v1

Refresh the /productpage page: only the v2 version (black-star ratings) is shown. After logging out as jason, only the v1 version (no ratings) is shown.
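The header match above reduces to an exact-match rule with a default fallback. A minimal sketch (the function name is illustrative):

```python
# A sketch of the header-based routing above: requests carrying the header
# end-user: jason go to subset v2; everything else falls through to v1.
# route_reviews is an illustrative name, not an Istio API.
def route_reviews(headers: dict) -> str:
    if headers.get("end-user") == "jason":  # exact match, as in the VirtualService
        return "v2"
    return "v1"  # default route

assert route_reviews({"end-user": "jason"}) == "v2"
assert route_reviews({"end-user": "alice"}) == "v1"
assert route_reviews({}) == "v1"
```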

Clean up:

$ kubectl delete -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

Fault Injection

This section uses fault injection to test the application's resilience.

First pin the request paths with the following configuration:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

After applying these, the request paths become:

  • productpage → reviews:v2 → ratings (for user jason only)
  • productpage → reviews:v1 (for all other users)

Injecting an HTTP Delay Fault

To test Bookinfo's resilience, inject a 7s delay between the reviews:v2 and ratings microservices for user jason, simulating an internal Bookinfo bug.

Note that reviews:v2 has a hard-coded 10s timeout when calling the ratings service, so even with the 7s delay injected, no error is expected at that level of the end-to-end flow.

Inject the fault to delay traffic from the test user jason:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml

Check the deployed virtual service:

$ kubectl get virtualservice ratings -o yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: ratings
  namespace: default
spec:
  hosts:
  - ratings
  http:
  - fault: # inject a 7s delay into 100% of the traffic from jason, destined for the v1 ratings service
      delay:
        fixedDelay: 7s
        percentage:
          value: 100
    match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: ratings
        subset: v1
  - route: # traffic not from jason is unaffected
    - destination:
        host: ratings
        subset: v1

Open the /productpage page, log in as jason, and refresh the browser: the page takes several seconds to load, and the following error message appears on it:

Virtual service configuration for the same service is overwritten by later applies, so there is no need to clean up here.
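The delay fault above can be sketched as follows (maybe_delay and its parameters are illustrative, not Istio APIs):

```python
import random
import time

# A sketch of the delay fault: for requests carrying end-user: jason, a
# fixed 7s delay is injected into `percentage.value` percent of them.
# maybe_delay and its arguments are illustrative, not Istio APIs.
def maybe_delay(headers, fixed_delay=7.0, percentage=100.0, sleep=time.sleep):
    if headers.get("end-user") == "jason" and random.uniform(0, 100) <= percentage:
        sleep(fixed_delay)  # the injected fault delays the call to ratings
        return fixed_delay
    return 0.0  # all other traffic is unaffected
```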

Injecting an HTTP Abort Fault

Simulate an HTTP abort fault on the ratings microservice for the test user jason. In this scenario, the page loads but shows the error message Ratings service is currently unavailable.

Inject the HTTP abort for user jason:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml

The deployed ratings virtual service is:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: ratings
  namespace: default
spec:
  hosts:
  - ratings
  http:
  - fault: # respond to requests from user jason directly with a 500 error code
      abort:
        httpStatus: 500
        percentage:
          value: 100
    match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1

Open the /productpage page and log in as jason: the following error appears. It disappears after logging out as jason.
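The abort fault reduces to an immediate status response for matching requests, without the real service being called. A minimal sketch (names are illustrative):

```python
# A sketch of the abort fault above: requests from jason receive an HTTP 500
# immediately, without reaching the real ratings service; everyone else gets
# a normal response. ratings_status is an illustrative name.
def ratings_status(headers: dict, abort_status: int = 500) -> int:
    if headers.get("end-user") == "jason":
        return abort_status  # injected abort
    return 200  # normal path

assert ratings_status({"end-user": "jason"}) == 500
assert ratings_status({"end-user": "alice"}) == 200
```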

Remove the injected abort fault:

$ kubectl delete -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml

Cleanup

Clean up the environment:

$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Traffic Shifting

This chapter shows how to migrate traffic from one version of a microservice to another, for example from an old version to a new one. Traffic is usually shifted gradually, and Istio supports percentage-based shifting. Note that the weights of all versions must sum to 100; otherwise the error total destination weight ${weight-total} != 100 is reported, where ${weight-total} is the sum of the currently configured weights.

Weight-Based Routing

  • First route all traffic to the v1 version of each microservice. Opening the /productpage page shows no rating information:

    $ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
  • Shift 50% of the traffic from reviews:v1 to reviews:v3 with the following command:

    $ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
  • Check the virtual service:

    $ kubectl get virtualservice reviews -o yaml
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      annotations:
        ...
      name: reviews
      namespace: default
    spec:
      hosts:
      - reviews
      http:
      - route: # 50% of the traffic goes to v1 and 50% to v3
        - destination:
            host: reviews
            subset: v1
          weight: 50
        - destination:
            host: reviews
            subset: v3
          weight: 50
  • Log in and refresh /productpage: there is a 50% chance of seeing the v1 page and a 50% chance of seeing the v3 page.
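The 50/50 split above can be simulated to see the weight semantics (an illustrative sketch, not Istio code):

```python
import random
from collections import Counter

# A sketch of the 50/50 weighted route above. random.choices applies the
# same relative-weight semantics; Istio additionally requires the weights
# of one route to sum to 100.
WEIGHTS = {"v1": 50, "v3": 50}

def pick_version(rng=random):
    subsets = list(WEIGHTS)
    return rng.choices(subsets, weights=[WEIGHTS[s] for s in subsets], k=1)[0]

counts = Counter(pick_version() for _ in range(10_000))
# Each subset receives roughly half of the requests.
```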

Cleanup

$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

TCP Traffic Shifting

This section shows how to shift TCP traffic from one version of a service to another, for example from an old version to a new one.

Weight-Based TCP Routing

Create a separate namespace and deploy the tcp-echo application there:

$ kubectl create namespace istio-io-tcp-traffic-shifting

On OpenShift, the service accounts must be granted permission so the sidecar (which uses UID 1337) can be injected:

$ oc adm policy add-scc-to-group privileged system:serviceaccounts:istio-io-tcp-traffic-shifting
$ oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-io-tcp-traffic-shifting

Create a NetworkAttachmentDefinition for istio-cni:

$ cat <<EOF | oc -n istio-io-tcp-traffic-shifting create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF

Enable automatic sidecar injection for the istio-io-tcp-traffic-shifting namespace:

$ kubectl label namespace istio-io-tcp-traffic-shifting istio-injection=enabled

Deploy the tcp-echo application:

$ kubectl apply -f samples/tcp-echo/tcp-echo-services.yaml -n istio-io-tcp-traffic-shifting

Route all tcp-echo traffic to the v1 version:

$ kubectl apply -f samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting

The tcp-echo pods are listed below; there are two versions, v1 and v2:

$ oc get pod
NAME                           READY   STATUS    RESTARTS   AGE
tcp-echo-v1-5cb688897c-hk277   2/2     Running   0          16m
tcp-echo-v2-64b7c58f68-hk9sr   2/2     Running   0          16m

The default gateway is shown below. It uses the ingress gateway installed with Istio and is accessed through port 31400:

$ oc get gateways.networking.istio.io tcp-echo-gateway -oyaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  annotations:
    ...
  name: tcp-echo-gateway
  namespace: istio-io-tcp-traffic-shifting
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: tcp
      number: 31400
      protocol: TCP

The bound virtual service is tcp-echo. Its host is "*", meaning all traffic reaching port 31400 of the tcp-echo-gateway gateway is handled by this virtual service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination: # the backend service to forward to
        host: tcp-echo
        port:
          number: 9000
        subset: v1

Since no separate ingress has been configured (it is not in effect), the ingress gateway installed with Istio can be used to simulate ingress-style access, following the same gateway mechanism. The default ingress gateway pod has port 31400 open:

$ oc exec -it  istio-ingressgateway-64f6f9d5c6-qrnw2 /bin/sh -n istio-system
$ ss -ntl                                                          
State          Recv-Q          Send-Q      Local Address:Port       Peer Address:Port     
LISTEN         0               0                 0.0.0.0:15090           0.0.0.0:*       
LISTEN         0               0               127.0.0.1:15000           0.0.0.0:*       
LISTEN         0               0                 0.0.0.0:31400           0.0.0.0:*       
LISTEN         0               0                 0.0.0.0:80              0.0.0.0:*       
LISTEN         0               0                       *:15020                 *:*

Access it through the ingress gateway pod's Kubernetes service:

$ oc get svc |grep ingress
istio-ingressgateway   LoadBalancer   10.84.93.45  ...
$ for i in {1..10}; do (date; sleep 1) | nc 10.84.93.45 31400; done
one Wed May 13 11:17:44 UTC 2020
one Wed May 13 11:17:45 UTC 2020
one Wed May 13 11:17:46 UTC 2020
one Wed May 13 11:17:47 UTC 2020

All of the traffic is routed to the v1 tcp-echo service (which prints "one").

Accessing tcp-echo directly through its Kubernetes service bypasses Istio's control; the traffic must go through the virtual service to be managed.

Now shift 20% of the traffic from tcp-echo:v1 to tcp-echo:v2:

$ kubectl apply -f samples/tcp-echo/tcp-echo-20-v2.yaml -n istio-io-tcp-traffic-shifting

Check the deployed routing rules:

$ kubectl get virtualservice tcp-echo -o yaml -n istio-io-tcp-traffic-shifting
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    ...
  name: tcp-echo
  namespace: istio-io-tcp-traffic-shifting
spec:
  gateways:
  - tcp-echo-gateway
  hosts:
  - '*'
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20

Test again; the results are as follows:

$ for i in {1..10}; do (date; sleep 1) | nc 10.84.93.45 31400; done
one Wed May 13 13:17:44 UTC 2020
two Wed May 13 13:17:45 UTC 2020
one Wed May 13 13:17:46 UTC 2020
one Wed May 13 13:17:47 UTC 2020
one Wed May 13 13:17:48 UTC 2020
one Wed May 13 13:17:49 UTC 2020
one Wed May 13 13:17:50 UTC 2020
one Wed May 13 13:17:51 UTC 2020
one Wed May 13 13:17:52 UTC 2020
two Wed May 13 13:17:53 UTC 2020
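The run above shows 2 of 10 probes answered by v2 ("two"), which is exactly what an 80/20 split makes most likely; a quick binomial check:

```python
from math import comb

# With an 80/20 split, the number of probes answered by v2 ("two") out of
# 10 follows a binomial distribution; seeing exactly 2 of 10, as in the
# run above, is the single most likely outcome.
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_two = binom_pmf(2, 10, 0.2)
print(round(p_two, 3))  # ≈ 0.302
```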

Cleanup

Remove the tcp-echo application with the following commands:

$ kubectl delete -f samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting
$ kubectl delete -f samples/tcp-echo/tcp-echo-services.yaml -n istio-io-tcp-traffic-shifting
$ kubectl delete namespace istio-io-tcp-traffic-shifting

Request Timeouts

This section shows how to use Istio to configure request timeouts in Envoy, using the official Bookinfo example.

Deploy the routes:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

The timeout for HTTP requests is specified in the timeout field of the routing rule. HTTP timeouts are disabled by default. Below, the timeout for the reviews service is set to half a second; to observe the effect, a 2s delay is also injected into the ratings service.

  • Route requests to the v2 version of the reviews service, i.e. the version that calls ratings; no timeout is set on reviews yet:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
        - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v2
    EOF
  • Inject a 2s delay into the ratings service:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: ratings
    spec:
      hosts:
      - ratings
      http:
      - fault:
          delay:
            percent: 100
            fixedDelay: 2s
        route:
        - destination:
            host: ratings
            subset: v1
    EOF
  • Open the /productpage page: the Bookinfo application works normally, but every refresh now has a 2s delay.

  • Set a 0.5s request timeout for the reviews service:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v2
        timeout: 0.5s
    EOF
  • Refresh the page: the response now comes back in about 1 second, and reviews is unavailable.

    The response takes 1s instead of 0.5s because the productpage service has a hard-coded retry, so the reviews call times out twice before the error is returned. Bookinfo also has internal timeouts of its own; see fault-injection for details.
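The arithmetic behind the ~1s failure can be sketched as follows (constants are taken from the configuration above; the names are illustrative):

```python
# A back-of-the-envelope sketch of the ~1s failure above: productpage makes
# one hard-coded retry, and each attempt hits the 0.5s reviews timeout
# before the injected 2s ratings delay can complete. Names are illustrative.
REVIEWS_TIMEOUT = 0.5    # timeout set in the reviews VirtualService
PRODUCTPAGE_RETRIES = 1  # hard-coded retry in productpage
RATINGS_DELAY = 2.0      # injected fault

def observed_latency():
    per_attempt = min(REVIEWS_TIMEOUT, RATINGS_DELAY)  # each attempt times out first
    return (1 + PRODUCTPAGE_RETRIES) * per_attempt

print(observed_latency())  # 1.0
```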

Cleanup

$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Circuit Breaking

This section shows how to configure circuit breaking for connections, requests, and outlier detection. Circuit breaking is an important pattern for building resilient microservice applications: it lets you write applications that limit the impact of failures, latency spikes, and other undesirable network effects.

Deploy httpbin in the default namespace (which already has automatic sidecar injection enabled):

$ kubectl apply -f samples/httpbin/httpbin.yaml

Configuring the Circuit Breaker

  • Create a destination rule that applies circuit-breaking settings when calling the httpbin service:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: httpbin
    spec:
      host: httpbin
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 1 # maximum number of HTTP1/TCP connections to a destination host
          http:
            http1MaxPendingRequests: 1 # maximum number of pending HTTP requests to a destination
            maxRequestsPerConnection: 1 # maximum number of requests per connection to a backend
        outlierDetection: # settings that control eviction of unhealthy hosts from the load-balancing pool
          consecutiveErrors: 1
          interval: 1s
          baseEjectionTime: 3m
          maxEjectionPercent: 100
    EOF
  • Verify the destination rule:

    $ kubectl get destinationrule httpbin -o yaml
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      annotations:
        ...
      name: httpbin
      namespace: default
    spec:
      host: httpbin
      trafficPolicy:
        connectionPool:
          http:
            http1MaxPendingRequests: 1
            maxRequestsPerConnection: 1
          tcp:
            maxConnections: 1
        outlierDetection:
          baseEjectionTime: 3m
          consecutiveErrors: 1
          interval: 1s
          maxEjectionPercent: 100

Adding a Client

Create a client that sends requests to the httpbin service. The client is a simple load-testing tool named fortio, which can control the number of connections, the concurrency, and the delay of outgoing HTTP calls. Below it is used to trip the circuit-breaker policies set in the DestinationRule.

  • Deploy the fortio service:

    $ kubectl apply -f samples/httpbin/sample-client/fortio-deploy.yaml
  • Log in to the client pod and call httpbin with the fortio tool; -curl indicates a single call:

    $ FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
    $ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -curl http://httpbin:8000/get

    The call succeeds, as shown below:

    $ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -curl http://httpbin:8000/get
    HTTP/1.1 200 OK
    server: envoy
    date: Thu, 14 May 2020 01:21:47 GMT
    content-type: application/json
    content-length: 586
    access-control-allow-origin: *
    access-control-allow-credentials: true
    x-envoy-upstream-service-time: 11
    
    {
      "args": {},
      "headers": {
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "fortio.org/fortio-1.3.1",
        "X-B3-Parentspanid": "b5cd907bcfb5158f",
        "X-B3-Sampled": "0",
        "X-B3-Spanid": "407597df02737b32",
        "X-B3-Traceid": "45f3690565e5ca9bb5cd907bcfb5158f",
        "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=dac158cf40c0f28f3322e6219c45d546ef8cc3b7df9d993ace84ab6e44aab708;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
      },
      "origin": "127.0.0.1",
      "url": "http://httpbin:8000/get"
    }

Tripping the Circuit Breaker

The DestinationRule above sets maxConnections: 1 and http1MaxPendingRequests: 1, meaning that if the number of concurrent connections and requests exceeds 1, subsequent requests and connections fail and the circuit breaker trips.
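The pending-request limit can be modeled with a toy counter (an illustrative sketch, not Envoy's implementation):

```python
# A toy model of the pending-request limit above: with
# http1MaxPendingRequests=1, a second in-flight request overflows and is
# answered 503 without reaching the backend. Not Envoy's implementation.
class PendingLimit:
    def __init__(self, max_pending=1):
        self.max_pending = max_pending
        self.in_flight = 0

    def admit(self):
        if self.in_flight >= self.max_pending:
            return 503  # overflow: the circuit breaker trips
        self.in_flight += 1
        return 200

    def done(self):
        self.in_flight -= 1

pool = PendingLimit(max_pending=1)
codes = [pool.admit(), pool.admit(), pool.admit()]
print(codes)  # [200, 503, 503] while the first request is still in flight
```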

  1. Use two concurrent connections (-c 2) and send 20 requests (-n 20):

    $ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
    05:50:30 I logger.go:97> Log level is now 3 Warning (was 2 Info)
    Fortio 1.3.1 running at 0 queries per second, 16->16 procs, for 20 calls: http://httpbin:8000/get
    Starting at max qps with 2 thread(s) [gomax 16] for exactly 20 calls (10 per thread + 0)
    05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    05:50:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    Ended after 51.51929ms : 20 calls. qps=388.2
    Aggregated Function Time : count 20 avg 0.0041658472 +/- 0.003982 min 0.000313105 max 0.017104987 sum 0.083316943
    # range, mid point, percentile, count
    >= 0.000313105 <= 0.001 , 0.000656552 , 15.00, 3
    > 0.002 <= 0.003 , 0.0025 , 70.00, 11
    > 0.003 <= 0.004 , 0.0035 , 80.00, 2
    > 0.005 <= 0.006 , 0.0055 , 85.00, 1
    > 0.008 <= 0.009 , 0.0085 , 90.00, 1
    > 0.012 <= 0.014 , 0.013 , 95.00, 1
    > 0.016 <= 0.017105 , 0.0165525 , 100.00, 1
    # target 50% 0.00263636
    # target 75% 0.0035
    # target 90% 0.009
    # target 99% 0.016884
    # target 99.9% 0.0170829
    Sockets used: 6 (for perfect keepalive, would be 2)
    Code 200 : 16 (80.0 %)
    Code 503 : 4 (20.0 %)
    Response Header Sizes : count 20 avg 184.05 +/- 92.03 min 0 max 231 sum 3681
    Response Body/Total Sizes : count 20 avg 701.05 +/- 230 min 241 max 817 sum 14021
    All done 20 calls (plus 0 warmup) 4.166 ms avg, 388.2 qps

    The key part of the output is below; most requests succeeded, but a small portion failed:

    Sockets used: 6 (for perfect keepalive, would be 2)
    Code 200 : 16 (80.0 %)
    Code 503 : 4 (20.0 %)
  2. Raise the number of concurrent connections to 3:

    $ kubectl exec -it $FORTIO_POD  -c fortio /usr/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
    06:00:30 I logger.go:97> Log level is now 3 Warning (was 2 Info)
    Fortio 1.3.1 running at 0 queries per second, 16->16 procs, for 30 calls: http://httpbin:8000/get
    Starting at max qps with 3 thread(s) [gomax 16] for exactly 30 calls (10 per thread + 0)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    06:00:30 W http_client.go:679> Parsed non ok code 503 (HTTP/1.1 503)
    Ended after 18.885972ms : 30 calls. qps=1588.5
    Aggregated Function Time : count 30 avg 0.0015352119 +/- 0.002045 min 0.000165718 max 0.006403746 sum 0.046056356
    # range, mid point, percentile, count
    >= 0.000165718 <= 0.001 , 0.000582859 , 70.00, 21
    > 0.002 <= 0.003 , 0.0025 , 73.33, 1
    > 0.003 <= 0.004 , 0.0035 , 83.33, 3
    > 0.004 <= 0.005 , 0.0045 , 90.00, 2
    > 0.005 <= 0.006 , 0.0055 , 93.33, 1
    > 0.006 <= 0.00640375 , 0.00620187 , 100.00, 2
    # target 50% 0.000749715
    # target 75% 0.00316667
    # target 90% 0.005
    # target 99% 0.00634318
    # target 99.9% 0.00639769
    Sockets used: 23 (for perfect keepalive, would be 3)
    Code 200 : 9 (30.0 %)
    Code 503 : 21 (70.0 %)
    Response Header Sizes : count 30 avg 69 +/- 105.4 min 0 max 230 sum 2070
    Response Body/Total Sizes : count 30 avg 413.5 +/- 263.5 min 241 max 816 sum 12405
    All done 30 calls (plus 0 warmup) 1.535 ms avg, 1588.5 qps

    The circuit now breaks, and only 30% of the requests succeed:

    Sockets used: 23 (for perfect keepalive, would be 3)
    Code 200 : 9 (30.0 %)
    Code 503 : 21 (70.0 %)
  3. Query istio-proxy for more information:

    $ kubectl exec $FORTIO_POD -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending
    cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.default.rq_pending_open: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.circuit_breakers.high.rq_pending_open: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_active: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_overflow: 93
    cluster.outbound|8000||httpbin.default.svc.cluster.local.upstream_rq_pending_total: 139

Cleanup

$ kubectl delete destinationrule httpbin
$ kubectl delete deploy httpbin fortio-deploy
$ kubectl delete svc httpbin fortio

Mirroring

This section demonstrates Istio's traffic-mirroring capability. Mirroring sends a copy of live traffic to a mirrored service.

In this task, all traffic is first routed to v1 of a test service; then a portion of the traffic is mirrored to v2.

  • First deploy two versions of the httpbin service:

    httpbin-v1:

    $ cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpbin-v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v1 # v1 version label
      template:
        metadata:
          labels:
            app: httpbin
            version: v1
        spec:
          containers:
          - image: docker.io/kennethreitz/httpbin
            imagePullPolicy: IfNotPresent
            name: httpbin
            command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
            ports:
            - containerPort: 80
    EOF

    httpbin-v2:

    $ cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpbin-v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v2  # v2 version label
      template:
        metadata:
          labels:
            app: httpbin
            version: v2
        spec:
          containers:
          - image: docker.io/kennethreitz/httpbin
            imagePullPolicy: IfNotPresent
            name: httpbin
            command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
            ports:
            - containerPort: 80
    EOF

    httpbin Kubernetes service:

    $ kubectl create -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: httpbin
      labels:
        app: httpbin
    spec:
      ports:
      - name: http
        port: 8000
        targetPort: 80
      selector:
        app: httpbin
    EOF
  • Start a sleep service that provides curl capability:

    cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sleep
      template:
        metadata:
          labels:
            app: sleep
        spec:
          containers:
          - name: sleep
            image: tutum/curl
            command: ["/bin/sleep","infinity"]
            imagePullPolicy: IfNotPresent
    EOF

Creating a Default Routing Policy

By default, Kubernetes load-balances across all versions of the httpbin service. In this step, all traffic is sent to v1:

  • Create a default route that sends all traffic to the v1 version of the service:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: httpbin
    spec:
      hosts:
        - httpbin
      http:
      - route:
        - destination:
            host: httpbin
            subset: v1 # 100% of the traffic goes to v1
          weight: 100
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: httpbin
    spec:
      host: httpbin
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2
    EOF
  • Send some traffic to the service:

    $ export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
    $ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl  http://httpbin:8000/headers' | python -m json.tool
    {
        "headers": {
            "Accept": "*/*",
            "Content-Length": "0",
            "Host": "httpbin:8000",
            "User-Agent": "curl/7.35.0",
            "X-B3-Parentspanid": "a35a08a1875f5d18",
            "X-B3-Sampled": "0",
            "X-B3-Spanid": "7d1e0a1db0db5634",
            "X-B3-Traceid": "3b5e9010f4a50351a35a08a1875f5d18",
            "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/default;Hash=6dd991f0846ac27dc7fb878ebe8f7b6a8ebd571bdea9efa81d711484505036d7;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
        }
    }
  • Check the logs of the v1 and v2 httpbin pods: v1 has access-log entries while v2 has none:

    $ export V1_POD=$(kubectl get pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
    $ kubectl logs -f $V1_POD -c httpbin
    ...
    127.0.0.1 - - [14/May/2020:06:17:57 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    127.0.0.1 - - [14/May/2020:06:18:16 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    $ export V2_POD=$(kubectl get pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
    $ kubectl logs -f $V2_POD -c httpbin
    <none>

Mirroring Traffic to v2

  • Change the routing rule to mirror traffic to v2:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: httpbin
    spec:
      hosts:
        - httpbin
      http:
      - route:
        - destination:
            host: httpbin
            subset: v1 # 100% of the traffic is routed to v1
          weight: 100
        mirror:
          host: httpbin
          subset: v2  # 100% of the traffic is mirrored to v2
        mirror_percent: 100
    EOF

    When mirroring is configured, requests sent to the mirrored service have -shadow appended to their Host/Authority header, e.g. cluster-1 becomes cluster-1-shadow. Note that mirrored requests are "fire and forget": responses to the mirrored requests are discarded.

    The mirror_percent field can be used to mirror only a fraction of the traffic instead of all of it. If the field is absent, all traffic is mirrored for compatibility with older versions.

  • Send traffic:

    $ kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl  http://httpbin:8000/headers' | python -m json.tool

    Check the v1 and v2 logs: requests to v1 are now mirrored to v2:

    $ kubectl logs -f $V1_POD -c httpbin
    ...
    127.0.0.1 - - [14/May/2020:06:17:57 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    127.0.0.1 - - [14/May/2020:06:18:16 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    127.0.0.1 - - [14/May/2020:06:32:09 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    127.0.0.1 - - [14/May/2020:06:32:37 +0000] "GET /headers HTTP/1.1" 200 518 "-" "curl/7.35.0"
    
    $ kubectl logs -f $V2_POD -c httpbin
    ...
    127.0.0.1 - - [14/May/2020:06:32:37 +0000] "GET /headers HTTP/1.1" 200 558 "-" "curl/7.35.0"
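The fire-and-forget mirroring shown above can be sketched as follows (handle and its arguments are illustrative, not Istio APIs):

```python
import random

# A sketch of fire-and-forget mirroring: the primary response is returned
# to the caller, while a copy of the request goes to the mirror with a
# -shadow Host suffix and its response is discarded. Names are illustrative.
def handle(request, primary, mirror, mirror_percent=100):
    if random.uniform(0, 100) <= mirror_percent:
        shadow = dict(request)
        shadow["Host"] = request["Host"] + "-shadow"  # e.g. cluster-1 -> cluster-1-shadow
        mirror(shadow)  # response intentionally ignored
    return primary(request)

seen = []
resp = handle({"Host": "httpbin"}, primary=lambda r: 200, mirror=seen.append)
print(resp, seen)  # 200 [{'Host': 'httpbin-shadow'}]
```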

Cleanup

$ kubectl delete virtualservice httpbin
$ kubectl delete destinationrule httpbin
$ kubectl delete deploy httpbin-v1 httpbin-v2 sleep
$ kubectl delete svc httpbin