In the previous post we covered the lifecycle of Pod resources on k8s, liveness and readiness probes, and resource limits; see http://www.javashuo.com/article/p-dtndyizk-ny.html for a refresher. Today let's look at Pod controllers.
On k8s, controllers are the cluster's "brain". As we said in the opening post on k8s, controllers are responsible for creating and managing resources: whenever a resource's actual state does not match the state the user declared, the controller tries, by restarting or rebuilding it, to bring the two back into agreement. There are many controller types on k8s, such as Pod controllers, the Service controller, the Endpoint controller, and so on; different types have different functions and roles. A Pod controller, for example, is a controller that manages Pod resources, and Pod controllers themselves come in several kinds. Classified by the application running in the Pod's containers, there are controllers for stateful and for stateless applications; classified by whether the application runs as a daemon, there are daemon and non-daemon controllers. The most commonly used stateless-application controllers are ReplicaSet and Deployment; the common stateful-application controller is StatefulSet; the common daemon controller is DaemonSet; for non-daemon workloads there is the Job controller, and for Job-type workloads that must run periodically there is the CronJob controller.
1. ReplicaSet controller
The main job of a ReplicaSet controller is to ensure that the number of Pod replicas exactly matches the user's desired count at all times. Once started, it first looks up the Pods in the cluster that match its label selector; whenever the number of active Pods differs from the desired number, it deletes the surplus or creates the shortfall. New Pods are created from the Pod template we define in the manifest.
Example: defining a ReplicaSet controller
```
[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]#
```
Tip: when defining a ReplicaSet, the apiVersion field is apps/v1 and kind is ReplicaSet; both values are fixed. The metadata section mainly defines the name and namespace. The spec mainly defines replicas, selector, and template. replicas is an integer giving the desired number of Pod replicas. selector defines the label selector; its value is an object whose matchLabels field expresses exact label matching, as a map in which each entry is a key/value pair. Besides exact matching there is matchExpressions, which matches by expression; its value is a list of objects, each defining a key field (the label key to test), an operator, and values. key and operator are strings; operator can be In, NotIn, Exists, or DoesNotExist; values is a list of strings. Finally there is the Pod template, defined with the template field. Its value is an object whose metadata field defines the template's metadata, which must include labels, normally the same labels used in the selector; its spec field defines the Pod template's desired state, most importantly the names, images, and so on of the Pod's containers.
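As a sketch of the expression-based form, the selector above could also be written with matchExpressions. This variant is illustrative only and is not part of the demo manifest:

```yaml
# Hypothetical selector fragment: match Pods whose "app" label
# is either nginx-pod or ngx (instead of exact matchLabels).
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - nginx-pod
    - ngx
```

With Exists or DoesNotExist the values list is omitted, since only the presence of the key is tested.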
Apply the resource manifest
```
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo created
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   3         3         3       9s
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   3         3         3       17s   nginx        nginx:1.14-alpine   app=nginx-pod
[root@master01 ~]#
```
Tip: rs is the short name for ReplicaSet. The output shows that the controller has been created; the current Pod count is 3, the desired count is 3, and 3 are ready.
Check the Pods
```
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          2m57s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          2m57s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          2m57s   nginx-pod
[root@master01 ~]#
```
Tip: you can see that 3 Pods were created in the default namespace, each carrying the label app=nginx-pod.
Test: change one Pod's app label to ngx and see whether the controller creates a new Pod labeled app=nginx-pod.
```
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          5m48s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          5m48s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          5m48s   nginx-pod
[root@master01 ~]# kubectl label pod/replicaset-demo-vzdbb app=ngx --overwrite
pod/replicaset-demo-vzdbb labeled
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE    APP
replicaset-demo-qv8tp   1/1     Running   0          4s     nginx-pod
replicaset-demo-rsl7q   1/1     Running   0          6m2s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          6m2s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          6m2s   ngx
[root@master01 ~]#
```
Tip: as soon as we changed one Pod's label to app=ngx, the controller created a new Pod from the Pod template.
Test: change that Pod's label back to app=nginx-pod and see whether the controller deletes a Pod.
```
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-qv8tp   1/1     Running   0          2m35s   nginx-pod
replicaset-demo-rsl7q   1/1     Running   0          8m33s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          8m33s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          8m33s   ngx
[root@master01 ~]# kubectl label pod/replicaset-demo-vzdbb app=nginx-pod --overwrite
pod/replicaset-demo-vzdbb labeled
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS        RESTARTS   AGE     APP
replicaset-demo-qv8tp   0/1     Terminating   0          2m50s   nginx-pod
replicaset-demo-rsl7q   1/1     Running       0          8m48s   nginx-pod
replicaset-demo-twknl   1/1     Running       0          8m48s   nginx-pod
replicaset-demo-vzdbb   1/1     Running       0          8m48s   nginx-pod
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          8m57s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          8m57s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          8m57s   nginx-pod
[root@master01 ~]#
```
Tip: when the cluster has more Pods carrying the selected label than the user desires, the controller deletes the surplus Pods with that label. These tests show that a ReplicaSet relies on its label selector to judge whether the number of Pods in the cluster matches the user-defined count; if not, it deletes or creates Pods until the count exactly satisfies the desired number.
View the rs controller's details
```
[root@master01 ~]# kubectl describe rs replicaset-demo
Name:         replicaset-demo
Namespace:    default
Selector:     app=nginx-pod
Labels:       <none>
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx-pod
  Containers:
   nginx:
    Image:        nginx:1.14-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  20m   replicaset-controller  Created pod: replicaset-demo-twknl
  Normal  SuccessfulCreate  20m   replicaset-controller  Created pod: replicaset-demo-vzdbb
  Normal  SuccessfulCreate  20m   replicaset-controller  Created pod: replicaset-demo-rsl7q
  Normal  SuccessfulCreate  15m   replicaset-controller  Created pod: replicaset-demo-qv8tp
  Normal  SuccessfulDelete  12m   replicaset-controller  Deleted pod: replicaset-demo-qv8tp
[root@master01 ~]#
```
Scale the rs controller's Pod replica count up or down
```
[root@master01 ~]# kubectl scale rs replicaset-demo --replicas=6
replicaset.apps/replicaset-demo scaled
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   6         6         6       32m
[root@master01 ~]# kubectl scale rs replicaset-demo --replicas=4
replicaset.apps/replicaset-demo scaled
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   4         4         4       32m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS        RESTARTS   AGE
replicaset-demo-5t9tt   0/1     Terminating   0          33s
replicaset-demo-j75hk   1/1     Running       0          33s
replicaset-demo-rsl7q   1/1     Running       0          33m
replicaset-demo-twknl   1/1     Running       0          33m
replicaset-demo-vvqfw   0/1     Terminating   0          33s
replicaset-demo-vzdbb   1/1     Running       0          33m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          41s
replicaset-demo-rsl7q   1/1     Running   0          33m
replicaset-demo-twknl   1/1     Running   0          33m
replicaset-demo-vzdbb   1/1     Running   0          33m
[root@master01 ~]#
```
Tip: kubectl scale can grow or shrink a controller's Pod replica count. Besides this command-line approach, you can also edit the replicas field in the manifest and re-apply it with kubectl apply.
Scale the Pod replica count by editing the replicas field in the manifest
```
[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo configured
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   7         7         7       35m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          3m33s
replicaset-demo-k2n9g   1/1     Running   0          9s
replicaset-demo-n7fmk   1/1     Running   0          9s
replicaset-demo-q4dc6   1/1     Running   0          9s
replicaset-demo-rsl7q   1/1     Running   0          36m
replicaset-demo-twknl   1/1     Running   0          36m
replicaset-demo-vzdbb   1/1     Running   0          36m
[root@master01 ~]#
```
Updating the Pod version
Method 1: change the image version in the Pod template of the manifest, then re-apply it with kubectl apply
```
[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo configured
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   7         7         7       55m   nginx        nginx:1.16-alpine   app=nginx-pod
[root@master01 ~]#
```
Tip: the command output above shows the image version as 1.16.
Verify: inspect the Pods and see whether their container image has actually become 1.16.
```
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          25m
replicaset-demo-k2n9g   1/1     Running   0          21m
replicaset-demo-n7fmk   1/1     Running   0          21m
replicaset-demo-q4dc6   1/1     Running   0          21m
replicaset-demo-rsl7q   1/1     Running   0          57m
replicaset-demo-twknl   1/1     Running   0          57m
replicaset-demo-vzdbb   1/1     Running   0          57m
[root@master01 ~]#
```
Tip: judging by the Pods' creation times, no Pod was updated.
Test: delete one Pod and see whether the replacement Pod's container image is updated.
```
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          25m
replicaset-demo-k2n9g   1/1     Running   0          21m
replicaset-demo-n7fmk   1/1     Running   0          21m
replicaset-demo-q4dc6   1/1     Running   0          21m
replicaset-demo-rsl7q   1/1     Running   0          57m
replicaset-demo-twknl   1/1     Running   0          57m
replicaset-demo-vzdbb   1/1     Running   0          57m
[root@master01 ~]# kubectl delete pod/replicaset-demo-vzdbb
pod "replicaset-demo-vzdbb" deleted
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS              RESTARTS   AGE
replicaset-demo-9wqj9   0/1     ContainerCreating   0          10s
replicaset-demo-j75hk   1/1     Running             0          26m
replicaset-demo-k2n9g   1/1     Running             0          23m
replicaset-demo-n7fmk   1/1     Running             0          23m
replicaset-demo-q4dc6   1/1     Running             0          23m
replicaset-demo-rsl7q   1/1     Running             0          58m
replicaset-demo-twknl   1/1     Running             0          58m
[root@master01 ~]# kubectl describe pod/replicaset-demo-9wqj9 |grep Image
    Image:          nginx:1.16-alpine
    Image ID:       docker-pullable://nginx@sha256:5057451e461dda671da5e951019ddbff9d96a751fc7d548053523ca1f848c1ad
[root@master01 ~]#
```
Tip: after we deleted a Pod, the controller created a new one, and the new Pod runs the new image version. This test shows that for an rs controller, changing the image version in the Pod template does not update existing Pods as long as the actual Pod count already matches the user-defined count; only newly created Pods get the new version. In other words, to have an rs roll out a new Pod version, you must first delete the old Pods yourself.
Method 2: update the Pod version with a command
```
[root@master01 ~]# kubectl set image rs replicaset-demo nginx=nginx:1.18-alpine
replicaset.apps/replicaset-demo image updated
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   7         7         7       72m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-9wqj9   1/1     Running   0          13m
replicaset-demo-j75hk   1/1     Running   0          40m
replicaset-demo-k2n9g   1/1     Running   0          36m
replicaset-demo-n7fmk   1/1     Running   0          36m
replicaset-demo-q4dc6   1/1     Running   0          36m
replicaset-demo-rsl7q   1/1     Running   0          72m
replicaset-demo-twknl   1/1     Running   0          72m
[root@master01 ~]#
```
Tip: for an rs controller, whether you change the image version in the Pod template via command or via the manifest, Pods are not updated automatically while the desired number of Pods already exists; a new-version Pod is only created after you manually delete an old one.
2. Deployment controller
對於deployment控制來講,它的定義方式和rs控制都差很少,但deploy控制器的功能要比rs強大,它能夠實現滾動更新,用戶手動定義更新策略;其實deploy控制器是在rs控制器的基礎上來管理pod;也就說咱們在建立deploy控制器時,它自動會建立一個rs控制器;其中使用deployment控制器建立的pod名稱是由deploy控制器名稱加上「-」pod模板hash名稱加上「-」隨機字符串;而對應rs控制器的名稱剛好就是deploy控制器名稱加「-」pod模板hash;即pod名稱就爲rs控制器名稱加「-」隨機字符串;
Example: creating a Deployment
```
[root@master01 ~]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]#
```
Apply the manifest
```
[root@master01 ~]# kubectl apply -f deploy-demo.yaml
deployment.apps/deploy-demo created
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           10s   nginx        nginx:1.14-alpine   app=ngx-dep-pod
[root@master01 ~]#
```
Verify: was an rs controller created?
```
[root@master01 ~]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-6d795f958b   3         3         3       57s
replicaset-demo          7         7         7       84m
[root@master01 ~]#
```
Tip: an rs named deploy-demo-6d795f958b has been created.
Verify: check the Pods and see whether their names are the rs name plus "-" plus a random string.
```
[root@master01 ~]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-bppjr   1/1     Running   0          2m16s
deploy-demo-6d795f958b-mxwkn   1/1     Running   0          2m16s
deploy-demo-6d795f958b-sh76g   1/1     Running   0          2m16s
replicaset-demo-9wqj9          1/1     Running   0          26m
replicaset-demo-j75hk          1/1     Running   0          52m
replicaset-demo-k2n9g          1/1     Running   0          49m
replicaset-demo-n7fmk          1/1     Running   0          49m
replicaset-demo-q4dc6          1/1     Running   0          49m
replicaset-demo-rsl7q          1/1     Running   0          85m
replicaset-demo-twknl          1/1     Running   0          85m
[root@master01 ~]#
```
Tip: there are 3 Pods named deploy-demo-6d795f958b- plus a random string.
Update the Pod version
```
[root@master01 ~]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f deploy-demo.yaml
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           5m45s   nginx        nginx:1.16-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
deploy-demo-95cc58f4d-45l5c   1/1     Running   0          43s
deploy-demo-95cc58f4d-6bmb6   1/1     Running   0          45s
deploy-demo-95cc58f4d-7d5r5   1/1     Running   0          29s
replicaset-demo-9wqj9         1/1     Running   0          30m
replicaset-demo-j75hk         1/1     Running   0          56m
replicaset-demo-k2n9g         1/1     Running   0          53m
replicaset-demo-n7fmk         1/1     Running   0          53m
replicaset-demo-q4dc6         1/1     Running   0          53m
replicaset-demo-rsl7q         1/1     Running   0          89m
replicaset-demo-twknl         1/1     Running   0          89m
[root@master01 ~]#
```
Tip: with a Deployment, merely changing the image version in the Pod template makes the Pods update automatically.
Update the Pod version with a command
```
[root@master01 ~]# kubectl set image deploy deploy-demo nginx=nginx:1.18-alpine
deployment.apps/deploy-demo image updated
[root@master01 ~]# kubectl get deploy
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   3/3     1            3           9m5s
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     1            3           9m11s   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           9m38s   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
deploy-demo-567b54cd6-6h97c   1/1     Running   0          28s
deploy-demo-567b54cd6-j74t4   1/1     Running   0          27s
deploy-demo-567b54cd6-wcccx   1/1     Running   0          49s
replicaset-demo-9wqj9         1/1     Running   0          34m
replicaset-demo-j75hk         1/1     Running   0          60m
replicaset-demo-k2n9g         1/1     Running   0          56m
replicaset-demo-n7fmk         1/1     Running   0          56m
replicaset-demo-q4dc6         1/1     Running   0          56m
replicaset-demo-rsl7q         1/1     Running   0          92m
replicaset-demo-twknl         1/1     Running   0          92m
[root@master01 ~]#
```
Tip: with a Deployment, as soon as the image version in the Pod template changes, the Pods are rolled to the version we specified.
View the historical rs revisions
```
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    3         3         3       3m50s   nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       12m     nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       7m27s   nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       95m     nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#
```
Tip: a Deployment's update operations keep the historical rs revisions, because whenever the Pod-template hash changes the corresponding rs is created anew. Unlike a standalone rs, the historical rs revisions run no Pods; only the current revision's rs does.
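The number of old ReplicaSets kept around for rollback can be capped with the Deployment's spec.revisionHistoryLimit field (it defaults to 10, so "all" history is really "up to the limit"). A minimal sketch based on the demo manifest, not part of the original session:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  revisionHistoryLimit: 3   # keep at most 3 old ReplicaSets for rollback
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
```

Setting the limit to 0 would discard history entirely and make rollback impossible.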
View the rollout history
```
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
[root@master01 ~]#
```
Tip: there are 3 revisions here, with no change cause recorded; that is because we recorded nothing while updating the Pod version. To record the cause of an update, append the --record option to the corresponding command.
Example: recording the update command in the rollout history
```
[root@master01 ~]# kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record
deployment.apps/deploy-demo image updated
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl apply -f deploy-demo-nginx-1.16.yaml --record
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
[root@master01 ~]#
```
Tip: once --record is added to the update command, the rollout history shows the command that produced each revision.
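Note that --record has been deprecated in newer kubectl releases; the same CHANGE-CAUSE column can be populated by setting the kubernetes.io/change-cause annotation on the Deployment yourself. A sketch of the relevant metadata fragment (the cause text here is made up for illustration):

```yaml
# Fragment: annotate the Deployment so `kubectl rollout history`
# shows a change cause for the current revision.
metadata:
  name: deploy-demo
  annotations:
    kubernetes.io/change-cause: "upgrade nginx to 1.16-alpine"
```

The annotation can also be applied imperatively with kubectl annotate after each update.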
Roll back to the previous revision
```
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           33m   nginx        nginx:1.16-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    0         0         0       24m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       33m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    3         3         3       28m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       116m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
[root@master01 ~]# kubectl rollout undo deploy/deploy-demo
deployment.apps/deploy-demo rolled back
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           34m   nginx        nginx:1.14-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    0         0         0       26m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   3         3         3       35m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       29m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       118m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#
```
Tip: after running kubectl rollout undo deploy/deploy-demo, the image rolled back from 1.16 to 1.14, and the rollout history now lists the 1.14 revision as the latest entry.
Roll back to a specific revision
```
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl rollout undo deploy/deploy-demo --to-revision=3
deployment.apps/deploy-demo rolled back
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
7         <none>
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           42m   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    3         3         3       33m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       42m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       36m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       125m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#
```
Tip: to roll back to a particular revision, specify its number with the --to-revision option.
View the Deployment's details
```
[root@master01 ~]# kubectl describe deploy deploy-demo
Name:                   deploy-demo
Namespace:              default
CreationTimestamp:      Thu, 17 Dec 2020 23:40:11 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 7
Selector:               app=ngx-dep-pod
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=ngx-dep-pod
  Containers:
   nginx:
    Image:        nginx:1.18-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   deploy-demo-567b54cd6 (3/3 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  58m                 deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 1
  Normal  ScalingReplicaSet  58m                 deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 3
  Normal  ScalingReplicaSet  58m                 deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 0
  Normal  ScalingReplicaSet  55m                 deployment-controller  Scaled up replica set deploy-demo-567b54cd6 to 1
  Normal  ScalingReplicaSet  54m                 deployment-controller  Scaled down replica set deploy-demo-95cc58f4d to 2
  Normal  ScalingReplicaSet  38m                 deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 1
  Normal  ScalingReplicaSet  38m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 2
  Normal  ScalingReplicaSet  38m                 deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  37m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 1
  Normal  ScalingReplicaSet  37m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 0
  Normal  ScalingReplicaSet  33m (x2 over 58m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 1
  Normal  ScalingReplicaSet  33m (x2 over 58m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 2
  Normal  ScalingReplicaSet  33m (x2 over 58m)   deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  29m (x3 over 64m)   deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 3
  Normal  ScalingReplicaSet  22m (x14 over 54m)  deployment-controller  (combined from similar events): Scaled down replica set deploy-demo-6d795f958b to 2
[root@master01 ~]#
```
Tip: the Deployment's details show the Pod template, the rollback process, the default update strategy, and more.
Customizing the rolling-update strategy
```
[root@master01 ~]# cat deploy-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
  minReadySeconds: 5
[root@master01 ~]#
```
Tip: the update strategy is defined with the strategy field, whose value is an object. type selects the strategy, of which there are two: Recreate, which terminates all existing Pods first and only then creates Pods of the new version; and RollingUpdate, the strategy whose parameters we can tune by hand. Under rollingUpdate, maxSurge is the maximum number of Pods allowed above the user's desired count (i.e., how many extra new Pods may be created during an update), and maxUnavailable is the maximum number allowed below the desired count (i.e., how many old Pods may be deleted at once). The last field, minReadySeconds, is not part of the update strategy; it is a spec-level field that sets the minimum time a Pod must be ready. The strategy above therefore says: use the RollingUpdate type, allow at most 2 new Pods above the desired count and at most 1 Pod below it, and require a minimum Pod readiness time of 5 seconds.
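For comparison, a Recreate strategy takes no tuning parameters, since it simply tears everything down before bringing the new version up. A minimal sketch (illustrative only; this demo uses RollingUpdate):

```yaml
# Fragment: delete all old Pods first, then create new-version Pods.
# Causes downtime, but guarantees old and new versions never overlap.
spec:
  strategy:
    type: Recreate
```

Recreate is appropriate when two versions of the application must never run side by side, for example when they share an incompatible data format.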
Apply the manifest
```
[root@master01 ~]# kubectl apply -f deploy-demo-nginx-1.14.yaml
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl describe deploy/deploy-demo
Name:                   deploy-demo
Namespace:              default
CreationTimestamp:      Thu, 17 Dec 2020 23:40:11 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 8
Selector:               app=ngx-dep-pod
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        5
RollingUpdateStrategy:  1 max unavailable, 2 max surge
Pod Template:
  Labels:  app=ngx-dep-pod
  Containers:
   nginx:
    Image:        nginx:1.14-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   deploy-demo-6d795f958b (3/3 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  47m                 deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 1
  Normal  ScalingReplicaSet  47m                 deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 1
  Normal  ScalingReplicaSet  42m (x2 over 68m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 1
  Normal  ScalingReplicaSet  42m (x2 over 68m)   deployment-controller  Scaled up replica set deploy-demo-95cc58f4d to 2
  Normal  ScalingReplicaSet  42m (x2 over 68m)   deployment-controller  Scaled down replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  31m (x14 over 64m)  deployment-controller  (combined from similar events): Scaled down replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  41s (x4 over 73m)   deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 3
  Normal  ScalingReplicaSet  41s (x2 over 47m)   deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 2
  Normal  ScalingReplicaSet  41s (x2 over 47m)   deployment-controller  Scaled up replica set deploy-demo-6d795f958b to 2
  Normal  ScalingReplicaSet  34s (x2 over 47m)   deployment-controller  Scaled down replica set deploy-demo-567b54cd6 to 0
[root@master01 ~]#
```
Tip: the Deployment's update strategy has been changed to the one we defined. To make the update effect easier to observe, we first scale the Pod count up to 10.
Scale up the Pod replica count
```
[root@master01 ~]# kubectl scale deploy/deploy-demo --replicas=10
deployment.apps/deploy-demo scaled
[root@master01 ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-5bdfw   1/1     Running   0          3m33s
deploy-demo-6d795f958b-5zr7r   1/1     Running   0          8s
deploy-demo-6d795f958b-9mc7k   1/1     Running   0          8s
deploy-demo-6d795f958b-czwdp   1/1     Running   0          3m33s
deploy-demo-6d795f958b-jfrnc   1/1     Running   0          8s
deploy-demo-6d795f958b-jw9n8   1/1     Running   0          3m33s
deploy-demo-6d795f958b-mbrlw   1/1     Running   0          8s
deploy-demo-6d795f958b-ph99t   1/1     Running   0          8s
deploy-demo-6d795f958b-wzscg   1/1     Running   0          8s
deploy-demo-6d795f958b-z5mnf   1/1     Running   0          8s
replicaset-demo-9wqj9          1/1     Running   0          100m
replicaset-demo-j75hk          1/1     Running   0          126m
replicaset-demo-k2n9g          1/1     Running   0          123m
replicaset-demo-n7fmk          1/1     Running   0          123m
replicaset-demo-q4dc6          1/1     Running   0          123m
replicaset-demo-rsl7q          1/1     Running   0          159m
replicaset-demo-twknl          1/1     Running   0          159m
[root@master01 ~]#
```
Watch the update process
```
[root@master01 ~]# kubectl get pod -w
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-5bdfw   1/1     Running   0          5m18s
deploy-demo-6d795f958b-5zr7r   1/1     Running   0          113s
deploy-demo-6d795f958b-9mc7k   1/1     Running   0          113s
deploy-demo-6d795f958b-czwdp   1/1     Running   0          5m18s
deploy-demo-6d795f958b-jfrnc   1/1     Running   0          113s
deploy-demo-6d795f958b-jw9n8   1/1     Running   0          5m18s
deploy-demo-6d795f958b-mbrlw   1/1     Running   0          113s
deploy-demo-6d795f958b-ph99t   1/1     Running   0          113s
deploy-demo-6d795f958b-wzscg   1/1     Running   0          113s
deploy-demo-6d795f958b-z5mnf   1/1     Running   0          113s
replicaset-demo-9wqj9          1/1     Running   0          102m
replicaset-demo-j75hk          1/1     Running   0          128m
replicaset-demo-k2n9g          1/1     Running   0          125m
replicaset-demo-n7fmk          1/1     Running   0          125m
replicaset-demo-q4dc6          1/1     Running   0          125m
replicaset-demo-rsl7q          1/1     Running   0          161m
replicaset-demo-twknl          1/1     Running   0          161m
deploy-demo-578d6b6f94-qhc9j   0/1     Pending             0          0s
deploy-demo-578d6b6f94-qhc9j   0/1     Pending             0          0s
deploy-demo-578d6b6f94-95srs   0/1     Pending             0          0s
deploy-demo-6d795f958b-mbrlw   1/1     Terminating         0          4m16s
deploy-demo-578d6b6f94-95srs   0/1     Pending             0          0s
deploy-demo-578d6b6f94-qhc9j   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-95srs   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-bht84   0/1     Pending             0          0s
deploy-demo-578d6b6f94-bht84   0/1     Pending             0          0s
deploy-demo-578d6b6f94-bht84   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0          4m17s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0          4m24s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0          4m24s
deploy-demo-578d6b6f94-qhc9j   1/1     Running             0          15s
deploy-demo-578d6b6f94-95srs   1/1     Running             0          16s
deploy-demo-578d6b6f94-bht84   1/1     Running             0          18s
deploy-demo-6d795f958b-ph99t   1/1     Terminating         0          4m38s
deploy-demo-6d795f958b-jfrnc   1/1     Terminating         0          4m38s
deploy-demo-578d6b6f94-lg6vk   0/1     Pending             0          0s
deploy-demo-578d6b6f94-g9c8x   0/1     Pending             0          0s
deploy-demo-578d6b6f94-lg6vk   0/1     Pending             0          0s
deploy-demo-578d6b6f94-g9c8x   0/1     Pending             0          0s
deploy-demo-578d6b6f94-lg6vk   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-g9c8x   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0          4m38s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0          4m38s
deploy-demo-6d795f958b-5zr7r   1/1     Terminating         0          4m43s
deploy-demo-578d6b6f94-4rpx9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4rpx9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4rpx9   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0          4m43s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0          4m44s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0          4m44s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0          4m44s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0          4m44s
deploy-demo-578d6b6f94-g9c8x   1/1     Running             0          12s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0          4m51s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0          4m51s
deploy-demo-578d6b6f94-lg6vk   1/1     Running             0          15s
deploy-demo-6d795f958b-9mc7k   1/1     Terminating         0          4m56s
deploy-demo-578d6b6f94-4lbwg   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4lbwg   0/1     Pending             0          0s
deploy-demo-578d6b6f94-4lbwg   0/1     ContainerCreating   0          0s
deploy-demo-578d6b6f94-4rpx9   1/1     Running             0          13s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0          4m57s
deploy-demo-578d6b6f94-4lbwg   1/1     Running             0          2s
deploy-demo-6d795f958b-wzscg   1/1     Terminating         0          4m58s
deploy-demo-578d6b6f94-fhkk9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-fhkk9   0/1     Pending             0          0s
deploy-demo-578d6b6f94-fhkk9   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0          4m59s
deploy-demo-578d6b6f94-fhkk9   1/1     Running             0          2s
deploy-demo-6d795f958b-z5mnf   1/1     Terminating         0          5m2s
deploy-demo-578d6b6f94-sfpz4   0/1     Pending             0          1s
deploy-demo-578d6b6f94-sfpz4   0/1     Pending             0          1s
deploy-demo-6d795f958b-czwdp   1/1     Terminating         0          8m28s
deploy-demo-578d6b6f94-sfpz4   0/1     ContainerCreating   0          1s
deploy-demo-578d6b6f94-5bs6z   0/1     Pending             0          0s
deploy-demo-578d6b6f94-5bs6z   0/1     Pending             0          0s
deploy-demo-578d6b6f94-5bs6z   0/1     ContainerCreating   0          0s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0          8m28s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0          5m4s
deploy-demo-578d6b6f94-sfpz4   1/1     Running             0          2s
deploy-demo-6d795f958b-5bdfw   1/1     Terminating         0          8m29s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0          5m4s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0          5m4s
deploy-demo-578d6b6f94-5bs6z   1/1     Running             0          1s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0          8m30s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0          8m36s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0          5m11s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0          5m11s
deploy-demo-6d795f958b-jw9n8   1/1     Terminating         0          8m38s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0          8m38s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0          5m14s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0          5m14s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0          8m46s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0          8m46s
```
Tip: the -w option keeps watching Pod changes as they happen. From the watch output you can see that during the update the controller first marks three new Pods Pending, deletes one old Pod, then creates two Pods; then it creates one more, deletes three more, and so on in turn. However the deletes and creates interleave, the combined number of old and new Pods never drops below 9 or exceeds 12.
Using a paused rollout for a canary release
```
[root@master01 ~]# kubectl set image deploy/deploy-demo nginx=nginx:1.14-alpine && kubectl rollout pause deploy/deploy-demo
deployment.apps/deploy-demo image updated
deployment.apps/deploy-demo paused
[root@master01 ~]#
```
Tip: following our update strategy, the command above deletes one old Pod and creates three new-version Pods, then the rollout pauses. At this point only 1 Pod has been replaced, with 2 extra new Pods created on top, for a total of 12 Pods.
Check the Pods
```
[root@master01 ~]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-df77k   1/1     Running   0          87s
deploy-demo-6d795f958b-tll8b   1/1     Running   0          87s
deploy-demo-6d795f958b-zbhwp   1/1     Running   0          87s
deploy-demo-fb957b9b-44l6g     1/1     Running   0          3m21s
deploy-demo-fb957b9b-7q6wh     1/1     Running   0          3m38s
deploy-demo-fb957b9b-d45rg     1/1     Running   0          3m27s
deploy-demo-fb957b9b-j7p2j     1/1     Running   0          3m38s
deploy-demo-fb957b9b-mkpz6     1/1     Running   0          3m38s
deploy-demo-fb957b9b-qctnv     1/1     Running   0          3m21s
deploy-demo-fb957b9b-rvrtf     1/1     Running   0          3m27s
deploy-demo-fb957b9b-wf254     1/1     Running   0          3m12s
deploy-demo-fb957b9b-xclhz     1/1     Running   0          3m22s
replicaset-demo-9wqj9          1/1     Running   0          135m
replicaset-demo-j75hk          1/1     Running   0          161m
replicaset-demo-k2n9g          1/1     Running   0          158m
replicaset-demo-n7fmk          1/1     Running   0          158m
replicaset-demo-q4dc6          1/1     Running   0          158m
replicaset-demo-rsl7q          1/1     Running   0          3h14m
replicaset-demo-twknl          1/1     Running   0          3h14m
[root@master01 ~]# kubectl get pod|grep "^deploy.*" |wc -l
12
[root@master01 ~]#
```
Tip: the two extra Pods exist because our update strategy allows at most 2 Pods above the user's desired count.
Resume the update
```
[root@master01 ~]# kubectl rollout resume deploy/deploy-demo && kubectl rollout status deploy/deploy-demo
deployment.apps/deploy-demo resumed
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 of 10 updated replicas are available...
Waiting for deployment "deploy-demo" rollout to finish: 9 of 10 updated replicas are available...
deployment "deploy-demo" successfully rolled out
[root@master01 ~]#
```
Tip: resume continues the rollout we paused earlier; status is used to watch the rollout's progress.