When using Kubernetes as the platform for business applications, we sometimes want blue-green deployments to iterate application versions. Istio is too heavyweight for this, and it positions itself as a traffic-control and service-mesh solution anyway. Ingress-Nginx introduced the Canary feature in version 0.21: it lets a single gateway entry point front multiple versions of an application, using annotations to control how traffic is distributed across the backend services.
To enable the Canary feature, first set

nginx.ingress.kubernetes.io/canary: "true"

and then use the following annotations to configure the canary backend:

nginx.ingress.kubernetes.io/canary-weight
The percentage of requests routed to the service specified in the canary Ingress, an integer from 0 to 100. Roughly that share of the traffic is sent to the backend service named in the canary Ingress.

nginx.ingress.kubernetes.io/canary-by-header
Header-based traffic splitting, suitable for gray releases and A/B testing. When the configured header has the value always, all requests are routed to the canary entry point; when it has the value never, no requests are. Any other header value is ignored, and the request falls through to the remaining rules in priority order.

nginx.ingress.kubernetes.io/canary-by-header-value
Used together with nginx.ingress.kubernetes.io/canary-by-header. When a request's header key and value match the values set in canary-by-header and canary-by-header-value, the request is routed to the canary Ingress; any other header value is ignored, and the request falls through to the remaining rules in priority order.

nginx.ingress.kubernetes.io/canary-by-cookie
Cookie-based traffic splitting, likewise suitable for gray releases and A/B testing. When the cookie's value is always, requests are routed to the canary entry point; when it is never, they are not. Any other value is ignored, and the request falls through to the remaining rules in priority order.

The canary rules are evaluated in the following order of precedence: canary-by-header -> canary-by-cookie -> canary-weight
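The precedence order can be sketched as a small decision function. This is an illustrative model only, not the controller's actual code; the name pick_backend and its parameters are made up for the sketch:

```python
import random

def pick_backend(headers, cookies, header_key="v2",
                 cookie_key="user_from_shanghai", weight=50):
    """Model of ingress-nginx canary precedence:
    canary-by-header -> canary-by-cookie -> canary-weight."""
    h = headers.get(header_key)
    if h == "always":   # header forces the canary backend
        return "canary"
    if h == "never":    # header forces the stable backend
        return "stable"
    c = cookies.get(cookie_key)
    if c == "always":   # cookie checked only if the header decided nothing
        return "canary"
    if c == "never":
        return "stable"
    # neither header nor cookie matched: fall back to the weight split
    return "canary" if random.random() * 100 < weight else "stable"

print(pick_backend({"v2": "always"}, {}))                               # canary
print(pick_backend({"v2": "never"}, {"user_from_shanghai": "always"}))  # stable
```

Note in the second call that the header's never wins over the cookie's always, reflecting the header's higher precedence.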
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: echoserverv1
  name: echoserverv1
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv1
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv1
  namespace: echoserver
spec:
  selector:
    name: echoserverv1
  type: ClusterIP
  ports:
  - name: echoserverv1
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverv1
  namespace: echoserver
  labels:
    name: echoserverv1
spec:
  template:
    metadata:
      labels:
        name: echoserverv1
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv1
        ports:
        - containerPort: 8080
          name: echoserverv1
$ [K8sSj] kubectl get pod,service,ingress -n echoserver
NAME                                READY   STATUS    RESTARTS   AGE
pod/echoserverv1-657b966cb5-7grqs   1/1     Running   0          24h

NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/echoserverv1   ClusterIP   10.99.68.72   <none>        8080/TCP   24h

NAME                              HOSTS              ADDRESS   PORTS   AGE
ingress.extensions/echoserverv1   echo.chulinx.com             80      24h
$ [K8sSj] for i in `seq 10`;do curl -s echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Now enable the canary feature and set the weight of the v2 version to 50%. This percentage does not split requests exactly evenly between the two versions; the observed ratio fluctuates around 50%.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv2
  namespace: echoserver
spec:
  selector:
    name: echoserverv2
  type: ClusterIP
  ports:
  - name: echoserverv2
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverv2
  namespace: echoserver
  labels:
    name: echoserverv2
spec:
  template:
    metadata:
      labels:
        name: echoserverv2
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv2
        ports:
        - containerPort: 8080
          name: echoserverv2
$ [K8sSj] kubectl get pod,service,ingress -n echoserver
NAME                                READY   STATUS    RESTARTS   AGE
pod/echoserverv1-657b966cb5-7grqs   1/1     Running   0          24h
pod/echoserverv2-856bb5758-f9tqn    1/1     Running   0          4s

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/echoserverv1   ClusterIP   10.99.68.72      <none>        8080/TCP   24h
service/echoserverv2   ClusterIP   10.111.103.170   <none>        8080/TCP   4s

NAME                              HOSTS              ADDRESS   PORTS   AGE
ingress.extensions/echoserverv1   echo.chulinx.com             80      24h
ingress.extensions/echoserverv2   echo.chulinx.com             80      4s
You can see that 4 of the requests landed on v2 and 6 on v1. In theory, the more requests you send, the closer the share landing on v2 gets to the configured 50% weight.
$ [K8sSj] for i in `seq 10`;do curl -s echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
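The weight annotation gives a probabilistic split, not an exact one, which is why 10 requests landed 6/4 rather than 5/5. A quick simulation (illustrative only, outside the cluster) shows how the observed ratio converges toward the configured 50% as the request count grows:

```python
import random

def route(weight_percent):
    """Return 'canary' with probability weight_percent/100, else 'stable'."""
    return "canary" if random.random() * 100 < weight_percent else "stable"

random.seed(1)
for n in (10, 100, 10000):
    hits = sum(route(50) == "canary" for _ in range(n))
    # small samples swing widely; large ones hug the configured weight
    print(f"{n} requests -> {hits / n:.0%} to canary")
```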
Next, add the header annotation nginx.ingress.kubernetes.io/canary-by-header: "v2" to the canary Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-by-header: "v2"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv2
  namespace: echoserver
spec:
  selector:
    name: echoserverv2
  type: ClusterIP
  ports:
  - name: echoserverv2
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverv2
  namespace: echoserver
  labels:
    name: echoserverv2
spec:
  template:
    metadata:
      labels:
        name: echoserverv2
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv2
        ports:
        - containerPort: 8080
          name: echoserverv2
Testing with the three header values v2:always, v2:never, and v2:true shows that when the header is v2:always, all traffic flows to v2; when it is v2:never, all traffic flows to v1; and when it is v2:true, i.e. anything other than always/never, traffic is split between the versions according to the configured weight.
$ [K8sSj] kubectl apply -f appv2.yml
ingress.extensions/echoserverv2 configured
service/echoserverv2 unchanged
deployment.extensions/echoserverv2 unchanged
$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:never" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:true" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-by-header: "v2"
    nginx.ingress.kubernetes.io/canary-by-header-value: "true"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv2
  namespace: echoserver
spec:
  selector:
    name: echoserverv2
  type: ClusterIP
  ports:
  - name: echoserverv2
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverv2
  namespace: echoserver
  labels:
    name: echoserverv2
spec:
  template:
    metadata:
      labels:
        name: echoserverv2
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv2
        ports:
        - containerPort: 8080
          name: echoserverv2
As the output below shows, only when the header is v2:true does traffic flow to the v2 version; with any other value, including always and never, traffic is split across the versions according to the weight setting.
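Once canary-by-header-value is configured, the special values always/never no longer apply to that header; only an exact value match selects the canary. A minimal sketch of this matching rule (simplified model, not the controller's code):

```python
def header_value_match(header_val, expected="true"):
    """When canary-by-header-value is set, only an exact match routes to
    the canary; any other value (even "always") falls through to the
    next rule in the precedence chain."""
    return header_val == expected

print(header_value_match("true"))    # True
print(header_value_match("always"))  # False
```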
$ [K8sSj] kubectl apply -f appv2.yml
ingress.extensions/echoserverv2 configured
service/echoserverv2 unchanged
deployment.extensions/echoserverv2 unchanged
$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:true" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
$ [K8sSj] for i in `seq 10`;do curl -s -H "v2:never" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Cookies work much the same way as headers: the Ingress matches on the cookie's value, and if a client's cookie matches, its traffic flows to the corresponding canary backend.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
    nginx.ingress.kubernetes.io/canary-by-header: "v2"
    nginx.ingress.kubernetes.io/canary-by-header-value: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "user_from_shanghai"
  labels:
    app: echoserverv2
  name: echoserverv2
  namespace: echoserver
spec:
  rules:
  - host: echo.chulinx.com
    http:
      paths:
      - backend:
          serviceName: echoserverv2
          servicePort: 8080
        path: /
---
kind: Service
apiVersion: v1
metadata:
  name: echoserverv2
  namespace: echoserver
spec:
  selector:
    name: echoserverv2
  type: ClusterIP
  ports:
  - name: echoserverv2
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserverv2
  namespace: echoserver
  labels:
    name: echoserverv2
spec:
  template:
    metadata:
      labels:
        name: echoserverv2
    spec:
      containers:
      - image: mirrorgooglecontainers/echoserver:1.10
        name: echoserverv2
        ports:
        - containerPort: 8080
          name: echoserverv2
As the output below shows, the effect is the same as with the header, except that with a cookie you cannot customize the value: only always and never are honored, so user_from_shanghai=always routes all traffic to v2, while any other cookie falls back to the weight split.
$ [K8sSj] kubectl apply -f appv2.yml
ingress.extensions/echoserverv2 configured
service/echoserverv2 unchanged
deployment.extensions/echoserverv2 unchanged
$ [K8sSj] for i in `seq 10`;do curl -s --cookie "user_from_shanghai" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
# zlx @ zlxdeMacBook-Pro in ~/Desktop/unicom/k8syml/nginx-ingress-canary-deployment [16:01:52]
$ [K8sSj] for i in `seq 10`;do curl -s --cookie "user_from_shanghai:always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv1-657b966cb5-7grqs
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
# zlx @ zlxdeMacBook-Pro in ~/Desktop/unicom/k8syml/nginx-ingress-canary-deployment [16:02:25]
$ [K8sSj] for i in `seq 10`;do curl -s --cookie "user_from_shanghai=always" echo.chulinx.com|grep Hostname;done
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
Hostname: echoserverv2-856bb5758-f9tqn
A gray release keeps the overall system stable: during the initial gray phase you can test the new version, find and fix problems, and limit their impact. The examples above walked through Ingress-Nginx's canary annotations in practice; with them you can implement blue-green and canary releases with little effort.
In a blue-green deployment there are two complete systems: one currently serving traffic, marked "green", and one being prepared for release, marked "blue". Both are fully functional, running systems; they differ only in version and in whether they serve external traffic.

At the start there is no system at all, and no notion of blue or green. Then the first system is built and goes straight to production; there is still only one system. Later, a new version is developed to replace the old one, and a brand-new system running the new code is stood up alongside the live one. At that point two systems are running: the old one serving users is the green system, and the newly deployed one is the blue system.

The blue system does not serve external traffic. What is it for? Pre-release testing. Any problem found during testing can be fixed directly on the blue system without disturbing the system users are on. (Note: zero interference is guaranteed only if the two systems share no coupling.)

After repeated testing, fixes, and verification, once the blue system meets the bar for release, users are switched over to it. For a while after the switch both systems still exist, but users are now on the blue one. During this window you observe the blue (new) system; if anything goes wrong, you switch straight back to green. Once you are confident the serving blue system is healthy and the idle green system is no longer needed, blue officially becomes the serving system, the new green. The old green system can be torn down, freeing its resources for the next blue system.

Blue-green deployment is just one release strategy, not a cure-all. Its quick, simple execution rests on the assumption that the target system is highly cohesive; if the system is complex, how to switch over, and whether and how to synchronize data between the two systems, all need careful thought.
A canary release (Canary) is another release strategy, in the same family as what is commonly called a gray release in China. Blue-green deployment prepares two systems and switches between them; the canary strategy has only one system and replaces it gradually.

Suppose the target system is a very large fleet of stateless web servers, say ten thousand of them. Blue-green deployment is out of the question: you cannot requisition ten thousand servers just to host the blue system (in the blue-green model, the blue system must be able to absorb all traffic). One workable approach: provision just a few servers, deploy and verify the new version on them. Even after tests pass, you may not dare update every server at once, fearing surprises. So first update 10 of the 10,000 production servers, then observe and verify; once nothing looks wrong, update all the rest. That method is a canary release.

In practice you can exert finer control: give the first 10 updated servers a low weight and limit the number of requests sent to them, then gradually raise the weight and request count. This control is called traffic splitting, and it is used both for canary releases and for the A/B testing discussed later.

Blue-green deployment and canary release are two release strategies; neither is universal. Sometimes either will do; sometimes only one fits.
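The batched rollout described above, update a small batch, verify, then widen, can be sketched as a simple plan generator (a hypothetical helper, not tied to any particular tool):

```python
def rollout_batches(total, first_batch=10, factor=10):
    """Yield the cumulative number of updated servers after each batch,
    growing the batch size between verification steps."""
    updated, batch = 0, first_batch
    while updated < total:
        batch = min(batch, total - updated)
        updated += batch
        yield updated  # pause here to verify health before continuing
        batch *= factor

print(list(rollout_batches(10000)))  # [10, 110, 1110, 10000]
```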
First, be clear that A/B testing is an entirely different thing from blue-green deployment and canary releases. Blue-green and canary are release strategies whose goal is to ensure a newly shipped system is stable; they focus on the new system's bugs and hidden risks. A/B testing is effect testing: multiple versions of a service run side by side at the same time, all fully tested and up to production standard, different from one another but with no new/old distinction (they may well have been released via blue-green deployment). A/B testing cares about each version's real-world results, such as conversion rate or order volume.

During an A/B test, several versions run in production simultaneously, usually with some difference in user experience: page styling, colors, or workflows. The team analyzes each version's actual performance and picks the best one. A/B testing requires control over traffic allocation, for example assigning 10% of traffic to version A, 10% to version B, and 80% to version C.
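The 10/10/80 split described above amounts to weighted random assignment of each request to a variant. A minimal sketch (illustrative only; the function name and weights are assumptions):

```python
import random

def assign_variant(weights=None):
    """Pick a variant name with probability proportional to its weight."""
    weights = weights or {"A": 10, "B": 10, "C": 80}
    r = random.uniform(0, sum(weights.values()))
    upto = 0
    for name, w in weights.items():
        upto += w
        if r < upto:
            return name
    return name  # guard: r may land exactly on the upper bound

random.seed(7)
counts = {"A": 0, "B": 0, "C": 0}
for _ in range(1000):
    counts[assign_variant()] += 1
print(counts)  # roughly 100 / 100 / 800
```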