cp deployment-user-v1.yaml deployment-user-v2.yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
+ name: user-v2 # resource name
spec:
  selector:
    matchLabels:
+     app: user-v2 # tells the Deployment which Pods to control and manage; matchLabels matches the Pods' label values
  replicas: 3 # number of Pod replicas
  template:
    metadata:
      labels:
+       app: user-v2 # label of the Pod
    spec: # spec of the Pods created in this group
      containers:
      - name: nginx # container name
+       image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v2
        ports:
        - containerPort: 80 # port exposed inside the container
service-user-v2.yaml
apiVersion: v1
kind: Service
metadata:
+ name: service-user-v2
spec:
selector:
+ app: user-v2
ports:
- protocol: TCP
port: 80
targetPort: 80
type: NodePort
kubectl apply -f deployment-user-v2.yaml -f service-user-v2.yaml
Splitting traffic by cookie. This works by checking whether the request carries a canary-marker cookie: if it does, the user is treated as a canary user and is served the canary version of the service.
nginx.ingress.kubernetes.io/canary
: accepts true / false and indicates whether the canary feature is enabled.
nginx.ingress.kubernetes.io/canary-by-cookie
: the cookie key used for the canary release. When that cookie's value is always, the request is routed to the canary; any other value keeps the request out of the canary environment.
ingress-gray.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: user-canary
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-cookie: "vip_user"
spec:
rules:
- http:
paths:
- backend:
serviceName: service-user-v2
servicePort: 80
backend:
serviceName: service-user-v2
servicePort: 80
Apply the configuration file:
kubectl apply -f ./ingress-gray.yaml
Use the command below to get the external port of the Ingress.
-n: specifies the namespace of the resources to query
kubectl -n ingress-nginx get svc
curl http://172.31.178.169:31234/user
curl http://118.190.156.138:31234/user
curl --cookie "vip_user=always" http://172.31.178.169:31234/user
Splitting traffic by header. This works by checking whether the request carries a canary-marker header: if it does, the user is treated as a canary user and is served the canary version of the service. Here nginx.ingress.kubernetes.io/canary-by-header names the header to inspect, and nginx.ingress.kubernetes.io/canary-by-header-value is the exact value that routes a request to the canary.
vi ingress-gray.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: user-canary
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/canary: "true"
+ nginx.ingress.kubernetes.io/canary-by-header: "name"
+ nginx.ingress.kubernetes.io/canary-by-header-value: "vip"
spec:
rules:
- http:
paths:
- backend:
serviceName: service-user-v2
servicePort: 80
backend:
serviceName: service-user-v2
servicePort: 80
kubectl apply -f ingress-gray.yaml
curl --header "name:vip" http://172.31.178.169:31234/user
nginx.ingress.kubernetes.io/canary-weight
: the value is a string holding a number from 0 to 100 that sets the probability of a request hitting the canary environment. 0 means traffic never goes to the canary; the larger the value, the higher the probability; 100 sends all traffic to the canary.
vi ingress-gray.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: user-canary
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/canary: "true"
+ nginx.ingress.kubernetes.io/canary-weight: "50"
spec:
rules:
- http:
paths:
- backend:
serviceName: service-user-v2
servicePort: 80
backend:
serviceName: service-user-v2
servicePort: 80
kubectl apply -f ingress-gray.yaml
for ((i=1; i<=10; i++)); do curl http://172.31.178.169:31234/user; done
First scale the deployment up to 10 replicas.
kubectl get deploy
kubectl scale deployment user-v1 --replicas=10
deployment-user-v1.yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
  name: user-v1 # resource name
spec:
  minReadySeconds: 1
+ strategy:
+   type: RollingUpdate
+   rollingUpdate:
+     maxSurge: 1
+     maxUnavailable: 0
+ selector:
+   matchLabels:
+     app: user-v1 # tells the Deployment which Pods to control and manage; matchLabels matches the Pods' label values
  replicas: 10 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # label of the Pod
    spec: # spec of the Pods created in this group
      containers:
      - name: nginx # container name
+       image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
        ports:
        - containerPort: 80 # port exposed inside the container
| Parameter | Meaning |
|---|---|
| minReadySeconds | Delay, in seconds, before a new container is considered ready to receive traffic; the default is 0, meaning Kubernetes treats a container as usable as soon as it starts. Setting it postpones the traffic switchover. |
| strategy.type = RollingUpdate | The rollout type for the ReplicaSet; declares a rolling update, which is also the default. |
| strategy.rollingUpdate.maxSurge | Maximum number of extra Pods, as a number or percentage. With maxSurge: 1 and replicas: 10, at most 10 + 1 Pods exist during the rollout, because a new Pod is created before an old one is removed. It must not be 0 when maxUnavailable is 0. |
| strategy.rollingUpdate.maxUnavailable | Maximum number of Pods that may be unavailable during the upgrade, as a number or percentage. It must not be 0 when maxSurge is 0. |
kubectl apply -f ./deployment-user-v1.yaml
deployment.apps/user-v1 configured
kubectl rollout status deployment/user-v1
Waiting for deployment "user-v1" rollout to finish: 3 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 3 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
deployment "user-v1" successfully rolled out
The first kind is the liveness probe. A liveness probe checks a container that is already running: use it to detect whether your service has crashed, exited midway, or stopped responding.
If the probe detects a failure, Kubernetes kills and restarts the container; otherwise it does nothing. If no liveness probe is configured, the container is never killed for this reason.

| Probe | When it runs | Purpose | Reaction to a failed check |
|---|---|---|---|
| Startup probe | While the container is starting | Checks whether the service has started successfully | The container is killed and restarted |
| Liveness probe | While the Pod is running | Checks whether the service has crashed and needs a restart | The container is killed and restarted |
| Readiness probe | While the Pod is running | Checks whether the service is allowed to receive traffic | Traffic to the Pod is stopped; the Pod is not killed or restarted |
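Only liveness and readiness probes are exercised below, so for comparison here is a minimal startup-probe sketch; the Pod name, image, and thresholds are illustrative assumptions rather than part of the original setup:
apiVersion: v1
kind: Pod
metadata:
  name: startup-probe # hypothetical name
spec:
  containers:
  - name: startup-probe
    image: nginx
    ports:
    - containerPort: 80
    startupProbe:
      httpGet: # probe an HTTP endpoint inside the container
        path: /
        port: 80
      failureThreshold: 30 # allow up to 30 * 10s = 300s for the app to start
      periodSeconds: 10 # liveness/readiness checks are held back until this succeeds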
vi shell-probe.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
test: shell-probe
name: shell-probe
spec:
containers:
- name: shell-probe
image: registry.aliyuncs.com/google_containers/busybox
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
kubectl apply -f liveness.yaml
kubectl get pods | grep liveness-exec
kubectl describe pods liveness-exec
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m44s default-scheduler Successfully assigned default/liveness-exec to node1
Normal Pulled 2m41s kubelet Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 1.669600584s
Normal Pulled 86s kubelet Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 605.008964ms
Warning Unhealthy 41s (x6 over 2m6s) kubelet Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 41s (x2 over 116s) kubelet Container liveness failed liveness probe, will be restarted
Normal Created 11s (x3 over 2m41s) kubelet Created container liveness
Normal Started 11s (x3 over 2m41s) kubelet Started container liveness
Normal Pulling 11s (x3 over 2m43s) kubelet Pulling image "registry.aliyuncs.com/google_containers/busybox"
Normal Pulled 11s kubelet Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 521.70892ms
tcp-probe.yaml
apiVersion: v1
kind: Pod
metadata:
name: tcp-probe
labels:
app: tcp-probe
spec:
containers:
- name: tcp-probe
image: nginx
ports:
- containerPort: 80
readinessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 5
periodSeconds: 10
kubectl apply -f tcp-probe.yaml
kubectl get pods | grep tcp-probe
kubectl describe pods tcp-probe
kubectl exec -it tcp-probe -- /bin/sh
apt-get update
apt-get install vim -y
vi /etc/nginx/conf.d/default.conf
Change listen 80 to listen 8080 in the server block, so nginx stops listening on the port the readiness probe checks.
nginx -s reload
kubectl describe pod tcp-probe
Warning Unhealthy 6s kubelet Readiness probe failed: dial tcp 10.244.1.47:80: connect: connection
vi http-probe.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
test: http-probe
name: http-probe
spec:
containers:
- name: http-probe
image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/http-probe:1.0.0
livenessProbe:
httpGet:
path: /liveness
port: 80
httpHeaders:
- name: source
value: probe
initialDelaySeconds: 3
periodSeconds: 3
vim ./http-probe.yaml
kubectl apply -f ./http-probe.yaml
kubectl describe pods http-probe
Normal Killing 5s kubelet Container http-probe failed liveness probe, will be restarted
docker pull registry.cn-beijing.aliyuncs.com/zhangyaohuang/http-probe:1.0.0
kubectl replace --force -f http-probe.yaml
Dockerfile
FROM node
COPY ./app /app
WORKDIR /app
EXPOSE 3000
CMD node index.js
let http = require('http');
let start = Date.now();

http.createServer(function (req, res) {
  if (req.url === '/liveness') {
    // the probe sends a "source: probe" header (see httpHeaders in the manifest)
    let value = req.headers['source'];
    if (value === 'probe') {
      // report healthy for the first 10 seconds, then start returning 500
      let duration = Date.now() - start;
      if (duration > 10 * 1000) {
        res.statusCode = 500;
        res.end('error');
      } else {
        res.statusCode = 200;
        res.end('success');
      }
    } else {
      res.statusCode = 200;
      res.end('liveness');
    }
  } else {
    res.statusCode = 200;
    res.end('liveness');
  }
}).listen(3000, function () {
  console.log('http server started on 3000');
});
kubectl create secret generic mysql-account --from-literal=username=james --from-literal=password=123456
kubectl get secret
| Field | Meaning |
|---|---|
| NAME | Name of the Secret |
| TYPE | Type of the Secret |
| DATA | Number of data items stored |
| AGE | Time since the Secret was created |
# edit the values
kubectl edit secret mysql-account
# output as YAML
kubectl get secret mysql-account -o yaml
# output as JSON
kubectl get secret mysql-account -o json
# decode the Base64-encoded value
echo MTIzNDU2 | base64 -d
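For reference, the Secret created above with kubectl create secret generic comes back from -o yaml in roughly the shape sketched below (trimmed; Kubernetes also adds metadata such as creationTimestamp and resourceVersion):
apiVersion: v1
kind: Secret
metadata:
  name: mysql-account
type: Opaque # default type for a generic Secret
data: # values are Base64-encoded, not encrypted
  username: amFtZXM= # base64 of "james"
  password: MTIzNDU2 # base64 of "123456"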
mysql-account.yaml
apiVersion: v1
kind: Secret
metadata:
name: mysql-account
stringData:
username: root
password: root
type: Opaque
kubectl apply -f mysql-account.yaml
secret/mysql-account created
kubectl get secret mysql-account -o yaml
kubectl create secret docker-registry private-registry \
--docker-username=[username] \
--docker-password=[password] \
--docker-email=[email] \
--docker-server=[private registry address]
# inspect the private-registry secret
kubectl get secret private-registry -o yaml
echo [value] | base64 -d
vi private-registry-file.yaml
apiVersion: v1
kind: Secret
metadata:
name: private-registry-file
data:
.dockerconfigjson: eyJhdXRocyI6eyJodHRwczo
type: kubernetes.io/dockerconfigjson
kubectl apply -f ./private-registry-file.yaml
kubectl get secret private-registry-file -o yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
  name: user-v1 # resource name
spec:
  minReadySeconds: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: user-v1 # tells the Deployment which Pods to control and manage; matchLabels matches the Pods' label values
+ replicas: 1 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # label of the Pod
    spec: # spec of the Pods created in this group
+     volumes:
+     - name: mysql-account
+       secret:
+         secretName: mysql-account
      containers:
      - name: nginx # container name
        image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
+       volumeMounts:
+       - name: mysql-account
+         mountPath: /mysql-account
+         readOnly: true
        ports:
        - containerPort: 80 # port exposed inside the container
kubectl describe pods user-v1-b88799944-tjgrs
kubectl exec -it user-v1-b88799944-tjgrs -- ls /mysql-account
deployment-user-v1.yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
  name: user-v1 # resource name
spec:
  minReadySeconds: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: user-v1 # tells the Deployment which Pods to control and manage; matchLabels matches the Pods' label values
  replicas: 1 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # label of the Pod
    spec: # spec of the Pods created in this group
      volumes:
      - name: mysql-account
        secret:
          secretName: mysql-account
      containers:
      - name: nginx # container name
+       env:
+       - name: USERNAME
+         valueFrom:
+           secretKeyRef:
+             name: mysql-account
+             key: username
+       - name: PASSWORD
+         valueFrom:
+           secretKeyRef:
+             name: mysql-account
+             key: password
        image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
        volumeMounts:
        - name: mysql-account
          mountPath: /mysql-account
          readOnly: true
        ports:
        - containerPort: 80 # port exposed inside the container
kubectl apply -f deployment-user-v1.yaml
kubectl get pods
kubectl describe pod user-v1-5f48f78d86-hjkcl
kubectl exec -it user-v1-688486759f-9snpx -- env | grep USERNAME
vi v4.yaml
image: [private registry address]/[image name]:[image tag]
kubectl apply -f v4.yaml
kubectl get pods
kubectl describe pods [POD_NAME]
vi v4.yaml
+imagePullSecrets:
+ - name: private-registry-file
containers:
- name: nginx
kubectl apply -f v4.yaml
Service discovery
kubectl -n kube-system get all -l k8s-app=kube-dns -o wide
kubectl exec -it [PodName] -- [Command]
kubectl get pods
kubectl get svc
kubectl exec -it user-v1-688486759f-9snpx -- /bin/sh
curl http://service-user-v2
[ServiceName].[NameSpace].svc.cluster.local
curl http://service-user-v2.default.svc.cluster.local
kubectl create configmap [config_name] --from-literal=[key]=[value]
kubectl create configmap mysql-config --from-literal=MYSQL_HOST=192.168.1.172 --from-literal=MYSQL_PORT=3306
ConfigMap names must be valid DNS-1123 subdomains; the validation regex is [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*
kubectl get cm
kubectl describe cm mysql-config
mysql-config-file.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-config-file
data:
MYSQL_HOST: "192.168.1.172"
MYSQL_PORT: "3306"
kubectl apply -f ./mysql-config-file.yaml
kubectl describe cm mysql-config-file
--from-file points at a file to load.
key is the key the file is stored under inside the ConfigMap.
file_path is the path of the file.
kubectl create configmap [configname] --from-file=[key]=[file_path]
env.config
HOST: 192.168.0.1
PORT: 8080
kubectl create configmap env-from-file --from-file=env=./env.config
configmap/env-from-file created
kubectl get cm env-from-file -o yaml
kubectl create configmap [configname] --from-file=[dir_path]
mkdir env && cd ./env
echo 'local' > env.local
echo 'test' > env.test
echo 'prod' > env.prod
kubectl create configmap env-from-dir --from-file=./
kubectl get cm env-from-dir -o yaml
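Each file in the directory becomes one key of the ConfigMap, so the output of the command above should look roughly like this trimmed sketch:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-from-dir
data: # one key per file, with the file content as the value
  env.local: |
    local
  env.prod: |
    prod
  env.test: |
    test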
containers:
- name: nginx # container name
+ env:
+ - name: MYSQL_HOST
+   valueFrom:
+     configMapKeyRef:
+       name: mysql-config
+       key: MYSQL_HOST
kubectl apply -f ./v1.yaml
# kubectl exec -it [POD_NAME] -- env | grep MYSQL_HOST
kubectl exec -it user-v1-744f48d6bd-9klqr -- env | grep MYSQL_HOST
kubectl exec -it user-v1-744f48d6bd-9klqr -- env | grep MYSQL_PORT
containers:
- name: nginx # container name
  env:
+ envFrom:
+ - configMapRef:
+     name: mysql-config
+     optional: true
  image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
  volumeMounts:
  - name: mysql-account
    mountPath: /mysql-account
    readOnly: true
  ports:
  - containerPort: 80 # port exposed inside the container
template:
  metadata:
    labels:
      app: user-v1 # label of the Pod
  spec: # spec of the Pods created in this group
    volumes:
    - name: mysql-account
      secret:
        secretName: mysql-account
+   - name: envfiles
+     configMap:
+       name: env-from-dir
    containers:
    - name: nginx # container name
      env:
      - name: USERNAME
        valueFrom:
          secretKeyRef:
            name: mysql-account
            key: username
      - name: PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysql-account
            key: password
      envFrom:
      - configMapRef:
          name: mysql-config
          optional: true
      image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
      volumeMounts:
      - name: mysql-account
        mountPath: /mysql-account
        readOnly: true
+     - name: envfiles
+       mountPath: /envfiles
+       readOnly: true
      ports:
      - containerPort: 80 # port exposed inside the container
kubectl apply -f deployment-user-v1.yaml
kubectl get pods
kubectl describe pod user-v1-79b8768f54-r56kd
kubectl exec -it user-v1-744f48d6bd-9klqr -- ls /envfiles
spec: # spec of the Pods created in this group
  volumes:
  - name: mysql-account
    secret:
      secretName: mysql-account
  - name: envfiles
    configMap:
      name: env-from-dir
+     items:
+     - key: env.local
+       path: env.local
In Kubernetes, the rules for placing Pods onto Nodes are decided by the scheduler, which automatically picks a Node based on its remaining resources, its role, and other policies.
But server resources for frontend and backend services are often allocated unevenly, and some services can only run on particular servers.
In those cases automatic scheduling gives an unbalanced placement, and you need to step in and influence the matching rules yourself.
That is done by adding a so-called taint to a Node, which keeps Pods from being scheduled onto it.
Once a Node carries a taint, a Pod cannot be scheduled onto it unless the Pod declares a matching toleration. This is where taints and tolerations get their names.
A taint has the form key=value; the content is up to you, much like a tag.
Node_Name is the name of the Node to taint.
key and value form a key/value pair that acts as a label.
NoSchedule means Pods will not be scheduled onto the Node; the other possible effects in that position are PreferNoSchedule and NoExecute.
kubectl taint nodes [Node_Name] [key]=[value]:NoSchedule
# add a taint
kubectl taint nodes node1 user-v4=true:NoSchedule
# inspect the taints
kubectl describe node node1
kubectl describe node master
Taints: node-role.kubernetes.io/master:NoSchedule
vi deployment-user-v4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-v4
spec:
minReadySeconds: 1
selector:
matchLabels:
app: user-v4
replicas: 1
template:
metadata:
labels:
app: user-v4
spec:
containers:
- name: nginx
image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3
ports:
- containerPort: 80
kubectl apply -f deployment-user-v4.yaml
vi deployment-user-v4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-v4
spec:
minReadySeconds: 1
selector:
matchLabels:
app: user-v4
replicas: 1
template:
metadata:
labels:
app: user-v4
spec:
+ tolerations:
+ - key: "user-v4"
+ operator: "Equal"
+ value: "true"
+ effect: "NoSchedule"
containers:
- name: nginx
image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3
ports:
- containerPort: 80
Modify a Node's taint
kubectl taint nodes node1 user-v4=1:NoSchedule --overwrite
Remove a Node's taint
kubectl taint nodes node1 user-v4-
Deploy a Pod on the master node
kubectl taint nodes node1 user-v4=true:NoSchedule
kubectl describe node node1
kubectl describe node master
vi deployment-user-v4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-v4
spec:
minReadySeconds: 1
selector:
matchLabels:
app: user-v4
replicas: 1
template:
metadata:
labels:
app: user-v4
spec:
+ tolerations:
+ - key: "node-role.kubernetes.io/master"
+ operator: "Exists"
+ effect: "NoSchedule"
containers:
- name: nginx
image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3
ports:
- containerPort: 80
kubectl apply -f deployment-user-v4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image:
  imagePullSecrets:
  - name: har