With the basic Istio environment in place, it's time to look at the mechanisms Istio provides as a service mesh, namely the ones in this article's title: timeout control, circuit breaking, traffic mirroring, and rate limiting. The official sample project is very handy and spares us writing our own demo, so let's just run it.
Links:
喵了個咪's blog: w-blog.cn
Istio official site: https://preliminary.istio.io/zh
Istio docs (Chinese): https://preliminary.istio.io/zh/docs/
PS: everything below is built and demonstrated on the latest Istio release at the time of writing, 1.0.3.
In real request flows we usually give each downstream service a timeout to guarantee an acceptable user experience. Hard-coding the timeout is clearly not ideal, and Istio offers a corresponding way to control it:
> kubectl apply -n istio-test -f istio-1.0.3/samples/bookinfo/networking/virtual-service-all-v1.yaml
A request timeout for HTTP requests is set in the timeout field of the route rule. By default the timeout is 15 seconds; in this task we set the timeout of the reviews service to one second. To observe the effect, we also inject a two-second delay into calls to the ratings service.
First, route all traffic for the reviews service to v2 (the version that calls ratings):

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF
Then inject a two-second delay into calls to the ratings service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF
Finally, add a half-second request timeout for calls to reviews:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF
The page now comes back after about one second. Even though the timeout is configured as half a second, the response takes one second because of a hard-coded retry in the productpage service: it calls the timed-out reviews service twice before returning.
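To see this from the client side, you can time a request to the product page through the ingress gateway. This assumes $GATEWAY_URL has been exported as in the Bookinfo setup guide; with the timeout and retry in place, the total time should land around one second rather than two:

> time curl -o /dev/null -s -w "total: %{time_total}s\n" http://$GATEWAY_URL/productpage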
A microservice system has both critical and non-critical services. Kubernetes can limit CPU consumption, but that kind of control cannot cap the number of concurrent requests, for example limiting service A to 100 concurrent requests and service B to 10. In such cases we can limit concurrency directly instead of going through CPU, which is an imprecise proxy.
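As a sketch of that idea (the service names a-service and b-service are made up for illustration), one DestinationRule per service caps its concurrency independently:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: a-service
spec:
  host: a-service   # hypothetical service, allowed 100 concurrent connections/requests
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: b-service
spec:
  host: b-service   # hypothetical service, allowed only 10
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 10
      http:
        http1MaxPendingRequests: 10
EOF

The httpbin demo below uses exactly this mechanism, just with deliberately tiny limits so the circuit breaker is easy to trip. First deploy the httpbin sample and apply a DestinationRule: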
> kubectl apply -n istio-test -f istio-1.0.3/samples/httpbin/httpbin.yaml
> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
EOF

This rule allows at most one TCP connection, one pending HTTP/1.1 request, and one request per connection to httpbin; a host that errors even once is ejected from the pool for three minutes.
Next, deploy the fortio client used to generate load, and make a single warm-up call:

> kubectl apply -n istio-test -f istio-1.0.3/samples/httpbin/sample-client/fortio-deploy.yaml
> FORTIO_POD=$(kubectl get -n istio-test pod | grep fortio | awk '{ print $1 }')
> kubectl exec -n istio-test -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -curl http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Wed, 07 Nov 2018 06:52:32 GMT
content-type: application/json
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 365
x-envoy-upstream-service-time: 113

{
  "args": {},
  "headers": {
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "istio/fortio-1.0.1",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "a708e175c6a077d1",
    "X-B3-Traceid": "a708e175c6a077d1",
    "X-Request-Id": "62d09db5-550a-9b81-80d9-6d8f60956386"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}
Now call the service with two concurrent connections (-c 2) and send 20 requests:

> kubectl exec -n istio-test -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
06:54:16 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0)
Ended after 96.058168ms : 20 calls. qps=208.21
Aggregated Function Time : count 20 avg 0.0084172288 +/- 0.004876 min 0.000583248 max 0.016515793 sum 0.168344576
# range, mid point, percentile, count
>= 0.000583248 <= 0.001 , 0.000791624 , 5.00, 1
> 0.001 <= 0.002 , 0.0015 , 25.00, 4
> 0.006 <= 0.007 , 0.0065 , 30.00, 1
> 0.007 <= 0.008 , 0.0075 , 35.00, 1
> 0.008 <= 0.009 , 0.0085 , 55.00, 4
> 0.009 <= 0.01 , 0.0095 , 65.00, 2
> 0.01 <= 0.011 , 0.0105 , 75.00, 2
> 0.011 <= 0.012 , 0.0115 , 80.00, 1
> 0.012 <= 0.014 , 0.013 , 85.00, 1
> 0.014 <= 0.016 , 0.015 , 95.00, 2
> 0.016 <= 0.0165158 , 0.0162579 , 100.00, 1
# target 50% 0.00875
# target 75% 0.011
# target 90% 0.015
# target 99% 0.0164126
# target 99.9% 0.0165055
Sockets used: 7 (for perfect keepalive, would be 2)
Code 200 : 15 (75.0 %)
Code 503 : 5 (25.0 %)
Response Header Sizes : count 20 avg 172.7 +/- 99.71 min 0 max 231 sum 3454
Response Body/Total Sizes : count 20 avg 500.7 +/- 163.8 min 217 max 596 sum 10014
All done 20 calls (plus 0 warmup) 8.417 ms avg, 208.2 qps
Here you can see that most of the requests still made it through; the istio-proxy does allow some leeway:

Code 200 : 15 (75.0 %)
Code 503 : 5 (25.0 %)
Bring the number of concurrent connections up to 3:

> kubectl exec -n istio-test -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
06:55:28 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 4] for exactly 30 calls (10 per thread + 0)
Ended after 59.921126ms : 30 calls. qps=500.66
Aggregated Function Time : count 30 avg 0.0052897259 +/- 0.006496 min 0.000633091 max 0.024999538 sum 0.158691777
# range, mid point, percentile, count
>= 0.000633091 <= 0.001 , 0.000816546 , 16.67, 5
> 0.001 <= 0.002 , 0.0015 , 63.33, 14
> 0.002 <= 0.003 , 0.0025 , 66.67, 1
> 0.008 <= 0.009 , 0.0085 , 73.33, 2
> 0.009 <= 0.01 , 0.0095 , 80.00, 2
> 0.01 <= 0.011 , 0.0105 , 83.33, 1
> 0.011 <= 0.012 , 0.0115 , 86.67, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.014 <= 0.016 , 0.015 , 93.33, 1
> 0.02 <= 0.0249995 , 0.0224998 , 100.00, 2
# target 50% 0.00171429
# target 75% 0.00925
# target 90% 0.014
# target 99% 0.0242496
# target 99.9% 0.0249245
Sockets used: 22 (for perfect keepalive, would be 3)
Code 200 : 10 (33.3 %)
Code 503 : 20 (66.7 %)
Response Header Sizes : count 30 avg 76.833333 +/- 108.7 min 0 max 231 sum 2305
Response Body/Total Sizes : count 30 avg 343.16667 +/- 178.4 min 217 max 596 sum 10295
All done 30 calls (plus 0 warmup) 5.290 ms avg, 500.7 qps
This time the circuit breaking kicks in exactly as designed: only 33.3% of the requests succeed, and the rest are trapped by the circuit breaker.
We can query the istio-proxy status for more information:
> kubectl exec -n istio-test -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending
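In that output, the counter to watch is upstream_rq_pending_overflow, which counts the calls flagged for circuit breaking. You can grep for it directly (same stats endpoint, narrower filter):

> kubectl exec -n istio-test -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep upstream_rq_pending_overflow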
Finally, clean up the rules and services:
> kubectl delete -n istio-test destinationrule httpbin
> kubectl delete -n istio-test deploy httpbin fortio-deploy
> kubectl delete -n istio-test svc httpbin
In the earlier traffic-control article we covered splitting traffic so that v1 and v2 each take 50%. There is another scenario, though, where Istio's traffic mirroring feature comes in: it is a powerful way to bring change to production with as little risk as possible.
When we need to release a program we are not fully confident in, we want it to run for a while so we can gauge its stability, without letting users actually reach the unstable service. This is where traffic mirroring helps: it can send 100% of requests to v1 while copying a share of them, say 10%, to v2 as well, without caring about v2's responses.
httpbin-v1:
> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF
httpbin-v2:
> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF
httpbin Kubernetes service:
> kubectl apply -n istio-test -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
EOF
Start the sleep service, so that we can use curl to issue requests:
> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
EOF
By default, Kubernetes load-balances across the two versions of the httpbin service. In this step we change that behavior so that all traffic goes to v1.
Create a default route rule that routes all traffic to v1 of the service:
> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
Send some traffic to the service:
> export SLEEP_POD=$(kubectl get -n istio-test pod -l app=sleep -o jsonpath={.items..metadata.name})
> kubectl exec -n istio-test -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8000/headers' | python -m json.tool
{
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "curl/7.35.0",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "8e32159d042d8a75",
    "X-B3-Traceid": "8e32159d042d8a75"
  }
}
Check the logs of the v1 and v2 httpbin pods. You should see access log entries for v1, and none (<none>) for v2:
> export V1_POD=$(kubectl get -n istio-test pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
> kubectl logs -n istio-test -f $V1_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:22:50 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"
> export V2_POD=$(kubectl get -n istio-test pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
> kubectl logs -n istio-test -f $V2_POD -c httpbin
<none>
Now change the route rule to mirror traffic to v2:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
EOF
This route rule sends 100% of the traffic to v1, while the final stanza specifies mirroring to the httpbin:v2 service. When traffic gets mirrored, the requests are sent to the mirrored service with -shadow appended to their Host/Authority header; for example, cluster-1 becomes cluster-1-shadow.
Also note that these mirrored requests are "fire and forget": the responses they produce are discarded.
Send the traffic again:

> kubectl exec -n istio-test -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8000/headers' | python -m json.tool
This time the request shows up in the logs of both v1 and v2:

> kubectl logs -n istio-test -f $V1_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:22:50 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"
127.0.0.1 - - [07/Nov/2018:07:26:58 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"
> kubectl logs -n istio-test -f $V2_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:28:37 +0000] "GET /headers HTTP/1.1" 200 281 "-" "curl/7.35.0"
Finally, clean up the rules and services:

> istioctl delete -n istio-test virtualservice httpbin
> istioctl delete -n istio-test destinationrule httpbin
> kubectl delete -n istio-test deploy httpbin-v1 httpbin-v2 sleep
> kubectl delete -n istio-test svc httpbin