curl -LO https://kubectl.oss-cn-hangzhou.aliyuncs.com/macos/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl --help
curl -LO https://kubectl.oss-cn-hangzhou.aliyuncs.com/linux/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl --help
On Windows, place https://kubectl.oss-cn-hangzhou.aliyuncs.com/windows/kubectl.exe in a directory on the system PATH, then run:
kubectl --help
To configure kubectl to connect to your Kubernetes cluster, refer to the document "Connecting to a Kubernetes cluster via kubectl".
kubectl port-forward -n istio-system "$(kubectl get -n istio-system pod --selector=app=kiali -o jsonpath='{.items..metadata.name}')" 20001
Open http://localhost:20001 in a local browser and log in with the default account admin/admin.
Create a namespace named demo and add the label istio-injection: enabled to it. Then create an application named istio-app, specify the application version as v1, select the newly created namespace demo, and use the following image in the container configuration:
registry.cn-beijing.aliyuncs.com/test-node/node-server:v1
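If you are not using the console, the namespace step above can be sketched as a manifest instead. This is a minimal sketch: only the namespace name (demo) and the label (istio-injection: enabled) come from the text; the rest is standard Kubernetes boilerplate.

```yaml
# Namespace with automatic Istio sidecar injection enabled,
# equivalent to the console steps described above.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled
```

Apply it with `kubectl apply -f namespace.yaml`; the istio-injection label is what tells Istio to inject the Envoy sidecar into Pods created in this namespace.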
Create a service named istio-app-svc with type virtual cluster IP (ClusterIP); both the service port and the container port are 8080. Next, deploy a sleep application to act as a test client:

kubectl apply -n demo -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: pstauffer/curl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
EOF
kubectl exec -it -n demo "$(kubectl get -n demo pod --selector=app=sleep -o jsonpath='{.items..metadata.name}')" -- sh
Run the following command to call the Istio application deployed earlier:
for i in $(seq 1000); do curl http://istio-app-svc.demo:8080; echo ''; sleep 1; done
You will see the following responses:
Hello from v1
Hello from v1
Hello from v1
Find the istio-app-svc service, click Manage, and under version management click Add Gray Release Version, specifying v2 as the new version.
Use the following image in the container configuration:
registry.cn-beijing.aliyuncs.com/test-node/node-server:v2
In the gray release policy, choose release by traffic ratio, with a traffic ratio of 50%.
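Under the hood, a traffic-ratio release like this corresponds to an Istio VirtualService with weighted routes. The sketch below is an assumption of what the console generates: only the 50/50 split and the v1/v2 subset names come from the text; the resource name and route structure are illustrative.

```yaml
# Hypothetical VirtualService splitting traffic 50/50
# between the v1 and v2 subsets of istio-app-svc.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-app-svc
spec:
  hosts:
  - istio-app-svc
  http:
  - route:
    - destination:
        host: istio-app-svc
        subset: v1
      weight: 50
    - destination:
        host: istio-app-svc
        subset: v2
      weight: 50
```

The subsets referenced here must be defined in a matching DestinationRule, like the one applied later in the circuit-breaking section.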
You can now see responses from both the v1 and v2 versions:

Hello from v1
Hello from v2
Hello from v1
Hello from v1
Hello from v2
Hello from v1
Roughly half of the requests now fail with the message fault filter abort:

Hello from v1
Hello from v1
Hello from v1
fault filter abort
fault filter abort
fault filter abort
Hello from v1
Hello from v2
fault filter abort
Hello from v2
Hello from v2
At the same time, the Kiali dashboard shows that about 50% of the calls from the sleep service to the istio-app-svc service are failing.
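A fault-injection rule that produces this behavior can be sketched as a VirtualService with an HTTP abort fault. This is an assumption of what the console configures: only the 50% failure ratio and the service/subset names come from the text; the status code and resource name are illustrative.

```yaml
# Hypothetical fault-injection rule: abort ~50% of requests
# to istio-app-svc with an HTTP 503 before they reach the Pod.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-app-svc
spec:
  hosts:
  - istio-app-svc
  http:
  - fault:
      abort:
        percent: 50
        httpStatus: 503
    route:
    - destination:
        host: istio-app-svc
        subset: v1
```

The aborted requests never reach the application container, which is why they show up as failures in Kiali and as the fault filter abort message at the client.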
Delete the existing DestinationRule and apply a new one for istio-app-svc that limits the connection pool, keeping the v1 and v2 subsets:

kubectl delete destinationrule -n demo istio-app-svc
kubectl apply -n demo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istio-app-svc
spec:
  host: istio-app-svc
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
  subsets:
  - labels:
      version: v1
    name: v1
  - labels:
      version: v2
    name: v2
EOF
Deploy the fortio load-testing client:

kubectl apply -n demo -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/sample-client/fortio-deploy.yaml
In the DestinationRule we set maxConnections: 1 and http1MaxPendingRequests: 1. This means that if more than one connection issues requests concurrently, Istio trips the circuit breaker and blocks subsequent requests and connections. So we send 100 requests with a concurrency of 3:

FORTIO_POD=$(kubectl -n demo get pod | grep fortio | awk '{ print $1 }')
kubectl -n demo exec -it $FORTIO_POD -c fortio -- /usr/bin/fortio load -c 3 -qps 0 -n 100 -loglevel Warning http://istio-app-svc:8080
From the results, you can see that more than 40% of the requests were blocked by Istio.
Code 200 : 57 (57.0 %)
Code 503 : 43 (43.0 %)
Response Header Sizes : count 100 avg 130.53 +/- 113.4 min 0 max 229 sum 13053
Response Body/Total Sizes : count 100 avg 242.14 +/- 0.9902 min 241 max 243 sum 24214
All done 100 calls (plus 0 warmup) 0.860 ms avg, 2757.6 qps
Observing in Kiali confirms that these requests never actually reached the istio-app-svc Pod.
You can also query the Envoy sidecar's statistics; the value of upstream_rq_pending_overflow is the number of requests blocked by the circuit-breaking policy:

kubectl -n demo exec -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep istio-app-svc | grep pending
cluster.outbound|8080|v1|istio-app-svc.demo-ab.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|8080|v1|istio-app-svc.demo-ab.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|8080|v1|istio-app-svc.demo-ab.svc.cluster.local.upstream_rq_pending_overflow: 99
cluster.outbound|8080|v1|istio-app-svc.demo-ab.svc.cluster.local.upstream_rq_pending_total: 199
To clean up the environment, run the following command:
kubectl delete ns demo
Original link. This article is original content from the Yunqi Community and may not be reproduced without permission.