This article is part of the container technology learning series (see the series table of contents).
A pod controller is an intermediate layer for managing pods: it keeps pod resources in their desired state. When a pod fails, the controller tries to restart it; if restarting according to the restart policy does not help, it recreates the pod.
This article mainly covers the ReplicaSet, Deployment, and DaemonSet pod controllers, plus StatefulSet at the end.
(1) What is a ReplicaSet?
ReplicaSet is the next-generation replica controller, an upgraded version of the Replication Controller (RC). The only difference between a ReplicaSet and a Replication Controller is selector support: ReplicaSet supports the set-based selector requirements described in the labels user guide, while Replication Controller supports only equality-based selector requirements.
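The difference can be illustrated with a selector fragment (a sketch; the field names follow the apps/v1 API, and the label keys/values are made up for illustration). Equality-based selection uses matchLabels only, while set-based selection adds matchExpressions with operators such as In, NotIn, and Exists:

```yaml
# Equality-based: pods must carry exactly these label values
selector:
  matchLabels:
    app: myapp
    release: canary
---
# Set-based: ReplicaSet (but not Replication Controller) also accepts expressions
selector:
  matchExpressions:
  - {key: app, operator: In, values: [myapp]}            # app must be one of the listed values
  - {key: environment, operator: NotIn, values: [prod]}  # environment must not be prod
  - {key: release, operator: Exists}                     # the release key just has to exist
```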
(2) How to use a ReplicaSet
Most kubectl commands that support Replication Controllers also support ReplicaSets, with the exception of rolling-update. If you need rolling updates, use a Deployment instead.
Although ReplicaSets can be used on their own, they mainly serve Deployments as the mechanism for creating, deleting, and updating pods. When you use a Deployment you do not have to worry about the ReplicaSets it creates, because the Deployment manages them for you.
(3) When to use a ReplicaSet?
A ReplicaSet ensures that a specified number of pods is running at any time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods, among other features. We therefore recommend using Deployments to manage ReplicaSets, unless you need custom update orchestration.
This means you may never need to manipulate ReplicaSet objects directly; use a Deployment instead. Deployments are covered in detail later in this article.
(1) Write a yaml file and create the ReplicaSet
Create a simple ReplicaSet that starts 2 pods:
[root@master manifests]# vim rs-damo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master manifests]# kubectl create -f rs-damo.yaml
replicaset.apps/myapp created
(2) Verify
--- query replicaset (rs) information
[root@master manifests]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   2         2         2       23s
--- query pod information
[root@master manifests]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-r4ss4   1/1     Running   0          25s
myapp-zjc5l   1/1     Running   0          26s
--- query detailed pod information; all labels from the template took effect
[root@master manifests]# kubectl describe pod myapp-r4ss4
Name:               myapp-r4ss4
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node2/192.168.130.105
Start Time:         Thu, 06 Sep 2018 14:57:23 +0800
Labels:             app=myapp
                    environment=qa
                    release=canary
... ...
--- test the service
[root@master manifests]# curl 10.244.2.13
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
(3) Pod reconciliation principle: surplus pods are removed, missing pods are recreated
① If you delete a pod, a replacement pod is created immediately
[root@master manifests]# kubectl delete pods myapp-zjc5l
pod "myapp-zjc5l" deleted
[root@master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-r4ss4   1/1     Running   0          33s
myapp-mdjvh   1/1     Running   0          10s
② If another pod happens to match the ReplicaSet's label selector, the controller terminates one pod carrying that label at random to restore the desired count
--- start an arbitrary extra pod
[root@master manifests]# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
myapp-hxgbh   1/1     Running   0          7m    app=myapp,environment=qa,release=canary
myapp-mdjvh   1/1     Running   0          6m    app=myapp,environment=qa,release=canary
pod-test      1/1     Running   0          13s   app=myapp,tier=frontend
--- add the release=canary label to pod-test
[root@master manifests]# kubectl label pods pod-test release=canary
pod/pod-test labeled
--- one pod carrying the label is terminated at random
[root@master manifests]# kubectl get pods --show-labels
NAME          READY   STATUS        RESTARTS   AGE   LABELS
myapp-hxgbh   1/1     Running       0          8m    app=myapp,environment=qa,release=canary
myapp-mdjvh   1/1     Running       0          7m    app=myapp,environment=qa,release=canary
pod-test      0/1     Terminating   0          1m    app=myapp,release=canary,tier=frontend
(1)使用edit 修改rs 配置,將副本數改成5;便可實現動態擴容
[root@master manifests]# kubectl edit rs myapp
... ...
spec:
  replicas: 5
... ...
replicaset.extensions/myapp edited
(2) Verify
[root@master manifests]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
client        0/1     Error     0          1d
myapp-bck7l   1/1     Running   0          16s
myapp-h8cqr   1/1     Running   0          16s
myapp-hfb72   1/1     Running   0          6m
myapp-r4ss4   1/1     Running   0          9m
myapp-vvpgf   1/1     Running   0          16s
(1) Use kubectl edit to modify the rs configuration, changing the container image to v2; this prepares an in-place version upgrade
[root@master manifests]# kubectl edit rs myapp
... ...
    spec:
      containers:
      - image: ikubernetes/myapp:v2
... ...
replicaset.extensions/myapp edited
(2) Query the rs; the change is in place
[root@master manifests]# kubectl get rs -o wide
NAME    DESIRED   CURRENT   READY   AGE   CONTAINERS        IMAGES                 SELECTOR
myapp   5         5         5       11m   myapp-container   ikubernetes/myapp:v2   app=myapp,release=canary
(3)可是,修改完並無升級
需刪除pod,再自動生成新的pod時,就會升級成功;
便可以實現灰度發佈:刪除一個,會自動啓動一個版本升級成功的pod
--- access a pod that has not been deleted; it still serves v1
[root@master manifests]# curl 10.244.2.15
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
--- delete one pod, then access the newly created pod; it serves v2
[root@master manifests]# kubectl delete pod myapp-bck7l
pod "myapp-bck7l" deleted
[root@master ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-hxgbh   1/1     Running   0          20m   10.244.1.17   node1
[root@master manifests]# curl 10.244.1.17
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
(1) Introduction
A Deployment provides a declarative way to define Pods and ReplicaSets, replacing the older ReplicationController for convenient application management.
You only describe the desired state in a Deployment, and the Deployment controller changes the actual state of the Pods and ReplicaSets to match it. You can define a brand-new Deployment to create a ReplicaSet, or delete an existing Deployment and create a new one to replace it.
Note: you should not manually manage the ReplicaSets created by a Deployment; doing so usurps the responsibilities of the Deployment controller.
(2) Typical use cases include rolling out a new version of an application, rolling back to an earlier revision, scaling to handle more load, and pausing/resuming a rollout.
(1) Create a simple Deployment that starts 2 pods
[root@master manifests]# vim deploy-damo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master manifests]# kubectl apply -f deploy-damo.yaml
deployment.apps/myapp-deploy configured
Note: apply creates resources declaratively, much like create, but apply can be run repeatedly against the same file to apply further changes; create cannot.
(2) Verify
--- query deployment information
[root@master manifests]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp-deploy   2         2         2            2           14s
--- query replicaset information; the deployment creates a replicaset first
[root@master manifests]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
myapp-deploy-69b47bc96d   2         2         2       28s
--- query pod information; the replicaset then creates the pods
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-bm8zc   1/1     Running   0          18s
myapp-deploy-69b47bc96d-pjr5v   1/1     Running   0          18s
There are 2 ways to do this.
(1) Method 1: edit the yaml file directly, setting the replica count to 3
[root@master manifests]# vim deploy-damo.yaml
... ...
spec:
  replicas: 3
... ...
[root@master manifests]# kubectl apply -f deploy-damo.yaml
deployment.apps/myapp-deploy configured
Verify: there are now 3 pods
[root@master manifests]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-bcdnq   1/1     Running   0          25s
myapp-deploy-69b47bc96d-bm8zc   1/1     Running   0          2m
myapp-deploy-69b47bc96d-pjr5v   1/1     Running   0          2m
(2) Method 2: scale out with the kubectl patch command
Unlike method 1, this does not modify the yaml file, which is convenient for everyday testing.
However, the inline JSON format is fiddly and easy to get wrong.
[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
deployment.extensions/myapp-deploy patched
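Since the inline JSON is where most mistakes happen, one habit that helps (a sketch, not part of the original walkthrough) is keeping the patch in a shell variable and validating it locally before handing it to kubectl:

```shell
# Keep the patch in a variable so the exact same string is validated and applied
patch='{"spec":{"replicas":5}}'

# python3 -m json.tool exits non-zero on malformed JSON, so the kubectl call
# is only worth attempting if this check succeeds
echo "$patch" | python3 -m json.tool > /dev/null && echo "patch JSON OK"

# kubectl patch deployment myapp-deploy -p "$patch"
```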
Verify: there are now 5 pods
[root@master ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-67f6f6b4dc-2756p   1/1     Running   0          26s
myapp-deploy-67f6f6b4dc-2lkwr   1/1     Running   0          26s
myapp-deploy-67f6f6b4dc-knttd   1/1     Running   0          21m
myapp-deploy-67f6f6b4dc-ms7t2   1/1     Running   0          21m
myapp-deploy-67f6f6b4dc-vl2th   1/1     Running   0          21m
(1) Edit deploy-damo.yaml directly, changing the image to v2, then re-apply it with kubectl apply
[root@master manifests]# vim deploy-damo.yaml
... ...
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
... ...
(2) You can watch the version upgrade in real time
[root@master ~]# kubectl get pods -w
This is a rolling upgrade: one pod is stopped and a new (upgraded) one is started, then the next is stopped, and so on.
(3) Verify: access the service; the version upgrade succeeded
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-deploy-67f6f6b4dc-6lv66   1/1     Running   0          2m    10.244.1.75   node1
[root@master ~]# curl 10.244.1.75
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
(1) Method 1: edit the yaml file
[root@master manifests]# vim deploy-damo.yaml
... ...
  strategy:
    rollingUpdate:
      maxSurge: 1        # create at most 1 extra pod during the update
      maxUnavailable: 0  # allow at most 0 unavailable pods
... ...
(2) Method 2: patch the update strategy
[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/myapp-deploy patched
(3) Verify with describe
[root@master manifests]# kubectl describe deployment myapp-deploy
... ...
RollingUpdateStrategy:  0 max unavailable, 1 max surge
... ...
(4) Upgrade to v3
① Canary release: update one pod, pause immediately, and continue the rollout only after the new version proves healthy
[root@master manifests]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
deployment.extensions/myapp-deploy image updated   # one pod is updated
deployment.extensions/myapp-deploy paused          # then the rollout is paused
② Once the new version runs without problems, resume the paused rollout
[root@master manifests]# kubectl rollout resume deployment myapp-deploy
deployment.extensions/myapp-deploy resumed
③ You can monitor the whole process as it runs
[root@master ~]# kubectl rollout status deployment myapp-deploy   # prints rollout progress
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "myapp-deploy" rollout to finish: 2 out of 5 new replicas have been updated...
--- pod-level progress can also be watched with get
[root@master ~]# kubectl get pods -w
④ Verify: access any pod's service; the upgrade to v3 succeeded
[root@master ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-deploy-6bdcd6755d-2bnsl   1/1     Running   0          1m    10.244.1.77   node1
[root@master ~]# curl 10.244.1.77
Hello MyApp | Version: v3 | <a href="hostname.html">Pod Name</a>
(1) Commands
Query the revision history:
$ kubectl rollout history deployment deployment_name
Roll back with undo; --to-revision=N rolls back to revision N:
$ kubectl rollout undo deployment deployment_name --to-revision=N
(2) Demonstration
--- query the revision history
[root@master manifests]# kubectl rollout history deployment myapp-deploy
deployments "myapp-deploy"
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
--- roll back to revision 1
[root@master manifests]# kubectl rollout undo deployment myapp-deploy --to-revision=1
deployment.extensions/myapp-deploy
[root@master manifests]# kubectl rollout history deployment myapp-deploy
deployments "myapp-deploy"
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         <none>
(3) Verify: the Deployment is back on v1
[root@master manifests]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                 SELECTOR
myapp-deploy-67f6f6b4dc   0         0         0       18h   myapp        ikubernetes/myapp:v2   app=myapp,pod-template-hash=2392926087,release=canary
myapp-deploy-69b47bc96d   5         5         5       18h   myapp        ikubernetes/myapp:v1   app=myapp,pod-template-hash=2560367528,release=canary
myapp-deploy-6bdcd6755d   0         0         0       10m   myapp        ikubernetes/myapp:v3   app=myapp,pod-template-hash=2687823118,release=canary
(1) Introduction
A DaemonSet ensures that every Node runs one copy of a given pod. It is commonly used to deploy cluster-wide log collection, monitoring, or other system-management agents.
(2) Typical applications include cluster storage daemons, per-node log collectors (such as filebeat or fluentd), and per-node monitoring agents.
(1) Create a simple DaemonSet that runs a filebeat log-collection agent in the background on every node
[root@master manifests]# vim ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@master manifests]# kubectl apply -f ds-demo.yaml
daemonset.apps/filebeat-ds created
(2) Verify
[root@master ~]# kubectl get ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
filebeat-ds   2         2         2       2            2           <none>          6m
[root@master ~]# kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
filebeat-ds-r25hh   1/1     Running   0          4m
filebeat-ds-vvntb   1/1     Running   0          4m
[root@master ~]# kubectl exec -it filebeat-ds-r25hh -- /bin/sh
/ # ps aux
PID   USER     TIME   COMMAND
    1 root     0:00   /usr/local/bin/filebeat -e -c /etc/filebeat/filebeat.yml
(1) Use kubectl set image to update the pod image and perform a version upgrade
[root@master ~]# kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
daemonset.extensions/filebeat-ds image updated
(2) Verify: the upgrade succeeded
[root@master ~]# kubectl get ds -o wide
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES                              SELECTOR
filebeat-ds   2         2         2       2            2           <none>          7m    filebeat     ikubernetes/filebeat:5.6.6-alpine   app=filebeat,release=stable
(1) StatefulSet introduction
StatefulSet is designed for stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include stable, unique network identifiers, stable persistent storage, and ordered, graceful deployment, scaling, and rolling updates.
(2) Three required components
From the use cases above, a StatefulSet consists of the following parts: a headless Service (for stable network identity), the StatefulSet object itself, and volumeClaimTemplates (for stable per-pod storage).
See the PV and PVC guide for details. Create 5 PVs; an nfs server is required.
[root@master volume]# vim pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: nfs
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: nfs
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 15Gi
[root@master volume]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Available                                   3s
pv002   5Gi        RWO            Retain           Available                                   3s
pv003   5Gi        RWO,RWX        Retain           Available                                   3s
pv004   10Gi       RWO,RWX        Retain           Available                                   3s
pv005   15Gi       RWO,RWX        Retain           Available                                   3s
[root@master pod_controller]# vim statefulset-demo.yaml
# Headless Service
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
# StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  # volumeClaimTemplates
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
[root@master pod_controller]# kubectl apply -f statefulset-demo.yaml
service/myapp created
statefulset.apps/myapp created
--- the headless service was created
[root@master pod_controller]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   173d
myapp        ClusterIP   None         <none>        80/TCP    3s
--- the statefulset was created
[root@master pod_controller]# kubectl get sts
NAME    DESIRED   CURRENT   AGE
myapp   3         3         6s
--- check the pvc; each has bound to a matching pv
[root@master pod_controller]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           9s
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       8s
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       6s
--- check the pv; 3 of them are now bound
[root@master pod_controller]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                           21s
pv002   5Gi        RWO            Retain           Bound       default/myappdata-myapp-0                           21s
pv003   5Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-2                           21s
pv004   10Gi       RWO,RWX        Retain           Available                                                       21s
pv005   15Gi       RWO,RWX        Retain           Available                                                       21s
--- 3 pods were started
[root@master pod_controller]# kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE
myapp-0   1/1     Running   0          16s   10.244.1.127   node1
myapp-1   1/1     Running   0          15s   10.244.2.124   node2
myapp-2   1/1     Running   0          13s   10.244.1.128   node1
You can scale with the scale command or by patching.
Scale out from 3 pods to 5:
--- ① using the scale command
[root@master ~]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled
--- ② or by patching
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"replicas":5}}'
statefulset.apps/myapp patched
[root@master pod_controller]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          11m
myapp-1   1/1     Running   0          11m
myapp-2   1/1     Running   0          11m
myapp-3   1/1     Running   0          9s
myapp-4   1/1     Running   0          7s
[root@master pod_controller]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           11m
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       11m
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       11m
myappdata-myapp-3   Bound    pv004    10Gi       RWO,RWX                       13s
myappdata-myapp-4   Bound    pv005    15Gi       RWO,RWX                       11s
[root@master pod_controller]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Bound    default/myappdata-myapp-1                           17m
pv002   5Gi        RWO            Retain           Bound    default/myappdata-myapp-0                           17m
pv003   5Gi        RWO,RWX        Retain           Bound    default/myappdata-myapp-2                           17m
pv004   10Gi       RWO,RWX        Retain           Bound    default/myappdata-myapp-3                           17m
pv005   15Gi       RWO,RWX        Retain           Bound    default/myappdata-myapp-4                           17m
Scale in from 5 pods to 2:
--- ① using the scale command
[root@master ~]# kubectl scale sts myapp --replicas=2
statefulset.apps/myapp scaled
--- ② or by patching
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'
statefulset.apps/myapp patched
[root@master pod_controller]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          15m
myapp-1   1/1     Running   0          15m
--- the pv and pvc are not deleted, which is how the storage stays persistent
[root@master pod_controller]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           15m
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       15m
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       15m
myappdata-myapp-3   Bound    pv004    10Gi       RWO,RWX                       4m
myappdata-myapp-4   Bound    pv005    15Gi       RWO,RWX                       4m
[root@master ~]# kubectl explain sts.spec.updateStrategy.rollingUpdate.partition
KIND:     StatefulSet
VERSION:  apps/v1

FIELD:    partition <integer>

DESCRIPTION:
     Partition indicates the ordinal at which the StatefulSet should be
     partitioned. Default value is 0.
Explanation: setting partition to n means only pods with an ordinal greater than or equal to n are upgraded; n refers to the pod's ordinal index, and the default is 0 (upgrade all pods).
You can change this either by modifying the yaml manifest or by patching.
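For the declarative route, the same setting lives under spec.updateStrategy in the StatefulSet manifest. A sketch of just the relevant fragment (apply it with kubectl apply as before):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 4   # only pods with ordinal >= 4 are rolled to the new template
```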
(1) Upgrade one pod first
First scale back up to 5 pods.
① Patch partition to 4, so that only pods with ordinal >= 4 are upgraded, i.e. only the fifth pod (myapp-4). If the new version has problems, roll back immediately; if not, proceed with the full upgrade.
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
statefulset.apps/myapp patched
--- verify
[root@master ~]# kubectl describe sts myapp
Name:               myapp
Namespace:          default
... ...
Replicas:           5 desired | 5 total
Update Strategy:    RollingUpdate
  Partition:        4
... ...
② Upgrade
[root@master ~]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
--- the pod template image is now v2
[root@master ~]# kubectl get sts -o wide
NAME    DESIRED   CURRENT   AGE   CONTAINERS   IMAGES
myapp   5         5         21h   myapp        ikubernetes/myapp:v2
③ Verify
--- the fifth pod (myapp-4) has been upgraded
[root@master ~]# kubectl get pods myapp-4 -o yaml |grep image
  - image: ikubernetes/myapp:v2
--- the first four pods are still on v1
[root@master ~]# kubectl get pods myapp-3 -o yaml |grep image
  - image: ikubernetes/myapp:v1
[root@master ~]# kubectl get pods myapp-0 -o yaml |grep image
  - image: ikubernetes/myapp:v1
(2) Upgrade the remaining pods
--- just set partition back to 0
[root@master ~]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
statefulset.apps/myapp patched
--- verify: all pods have been upgraded
[root@master ~]# kubectl get pods myapp-0 -o yaml |grep image
  - image: ikubernetes/myapp:v2