After an autonomous (unmanaged) Pod object is scheduled to a target worker node, the kubelet on that node monitors the liveness of its containers; if a container's main process crashes, the kubelet can automatically restart the container. However, the kubelet cannot detect container errors that do not crash the main process, which is why liveness probes defined on the Pod resource are needed so that the kubelet can discover such failures. Furthermore, if the Pod is deleted, or the worker node itself fails (each worker node runs a kubelet; once that kubelet is unavailable, the Pod's health can no longer be guaranteed), a controller is required to handle the corresponding container restarts and reconfiguration.
Pod controllers are provided by the kube-controller-manager component on the master. Common controllers of this kind include (see the query example after this list):
ReplicationController / ReplicaSet: creates the specified number of Pod replicas on behalf of the user, ensures the actual replica count matches the desired state, and supports automatic scale-up and scale-down.
Deployment: works on top of ReplicaSet and is used to manage stateless applications; currently the best controller of its kind. It supports rolling updates and rollbacks, and provides declarative configuration.
DaemonSet: ensures that every node in the cluster runs exactly one copy of a specific Pod; commonly used for system-level background tasks, such as the agents of an ELK stack.
StatefulSet: manages stateful applications.
Job: exits as soon as its task completes; no restart or re-creation is needed.
CronJob: controls periodic tasks that do not need to run continuously in the background.
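To confirm which API groups these controller kinds are served from in a running cluster, a minimal illustration (output omitted; the api-resources subcommand assumes kubectl 1.11 or later):
[root@k8s-master ~]# kubectl api-resources --api-group=apps    # Deployment, ReplicaSet, DaemonSet, StatefulSet
[root@k8s-master ~]# kubectl api-resources --api-group=batch   # Job, CronJob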
One of the core functions of Kubernetes is to ensure that the current state (status) of each resource object matches the state desired by the user (spec), continuously "reconciling" the current state toward the desired state to accomplish container application management. This is the job of kube-controller-manager. Once created as concrete controller objects, each controller continuously monitors the current state of its related resource objects through the interfaces provided by the API Server, and whenever the system state changes due to failures, updates, or other causes, it tries to drive the resource's current state toward the desired state.
List-Watch is one of the core mechanisms implemented by Kubernetes. When the state of a resource object changes, the API Server writes the change to etcd and proactively notifies the relevant client programs through a level-triggered mechanism, ensuring that they never miss an event. A controller monitors changes to its target resource objects in real time through the API Server's watch interface and performs reconciliation accordingly, but it never interacts with other controllers.
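The same watch interface can be exercised directly from kubectl, which is a convenient way to observe these events; a minimal illustration:
[root@k8s-master ~]# kubectl get replicasets --watch   # stream change events for ReplicaSet objects
[root@k8s-master ~]# kubectl get pods --watch          # stream change events for Pod objects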
A Pod controller resource continuously monitors the Pod resource objects running in the cluster to ensure that the resources under its control strictly match the state desired by the user, for example that the replica count exactly matches the expectation. Typically, a Pod controller resource should contain at least three basic components (a minimal skeleton is sketched below):
Label selector: matches and associates Pod resource objects, and counts the Pods under the controller's management accordingly.
Desired replica count: the exact number of Pod resource objects expected to be running in the cluster.
Pod template: the template resource used to create new Pod resource objects.
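A minimal sketch of how these three parts map onto a controller manifest (the names here are placeholders; a complete, runnable ReplicaSet example follows later in this section):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-skeleton            # placeholder name
spec:
  replicas: 2                  # desired replica count
  selector:                    # label selector
    matchLabels:
      app: myapp
  template:                    # Pod template used to create new Pods
    metadata:
      labels:
        app: myapp             # must match the selector above
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1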
ReplicaSet replaces the ReplicationController controller found in earlier versions; its functionality is essentially the same as that of ReplicationController.
Ensuring that the number of Pod resource objects exactly reflects the desired value: a ReplicaSet must ensure that the number of Pod replicas it controls exactly matches the desired value defined in its configuration; otherwise it automatically creates the missing replicas or terminates the surplus ones.
Ensuring that Pods run healthily: when the ReplicaSet detects that a Pod object under its control has become unavailable because its worker node has failed, it automatically requests that the scheduler create the missing Pod replicas on other worker nodes.
Elastic scaling: the number of Pod resource objects can be dynamically scaled up or down through the ReplicaSet controller. When necessary, automatic scaling of the Pod fleet can also be achieved with the HPA controller (see the illustration after this list).
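A sketch of the HPA case, assuming the ReplicaSet named myapp created later in this section and a working metrics pipeline; an autoscaler can be attached from the command line:
[root@k8s-master ~]# kubectl autoscale replicaset myapp --min=2 --max=5 --cpu-percent=80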
The spec field usually nests the following attribute fields:
replicas <integer>: the desired number of Pod object replicas
selector <Object>: the label selector the controller uses to match Pod replicas; both the matchLabels and matchExpressions mechanisms are supported
template <Object>: the Pod resource definition used when creating Pods
minReadySeconds <integer>: how long after startup a Pod must be ready before it is considered available; defaults to 0 seconds
#(1) View the ReplicaSet manifest definition rules from the command line
[root@k8s-master ~]# kubectl explain rs
[root@k8s-master ~]# kubectl explain rs.spec
[root@k8s-master ~]# kubectl explain rs.spec.template

#(2) ReplicaSet creation example
[root@k8s-master ~]# vim manfests/rs-demo.yaml
apiVersion: apps/v1                    # API version
kind: ReplicaSet                       # resource type: ReplicaSet
metadata:                              # metadata
  name: myapp
  namespace: default
spec:                                  # ReplicaSet spec
  replicas: 2                          # desired replica count: 2
  selector:                            # label selector matching the Pods
    matchLabels:
      app: myapp
      release: canary
  template:                            # Pod template
    metadata:                          # Pod metadata
      name: myapp-pod                  # custom Pod name
      labels:                          # Pod labels; must include the labels defined in the selector above, extra labels are allowed
        app: myapp
        release: canary
    spec:                              # Pod spec
      containers:                      # container definitions
      - name: myapp-containers         # container name
        image: ikubernetes/myapp:v1    # container image
        imagePullPolicy: IfNotPresent  # image pull policy
        ports:                         # exposed ports
        - name: http                   # port name
          containerPort: 80

#(3) Create the Pods defined by the ReplicaSet
[root@k8s-master ~]# kubectl apply -f manfests/rs-demo.yaml
replicaset.apps/myapp created
[root@k8s-master ~]# kubectl get rs      # view the created ReplicaSet controller
NAME    DESIRED   CURRENT   READY   AGE
myapp   4         4         4       3m23s
[root@k8s-master ~]# kubectl get pods    # the Pod names follow the pattern: ReplicaSet controller name plus a randomly generated string
NAME          READY   STATUS    RESTARTS   AGE
myapp-bln4v   1/1     Running   0          6s
myapp-bxpzt   1/1     Running   0          6s

#(4) Modify the Pod replica count
[root@k8s-master ~]# kubectl edit rs myapp
replicas: 4
[root@k8s-master ~]# kubectl get rs -o wide
NAME    DESIRED   CURRENT   READY   AGE     CONTAINERS         IMAGES                 SELECTOR
myapp   4         4         4       2m50s   myapp-containers   ikubernetes/myapp:v2   app=myapp,release=canary
[root@k8s-master ~]# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE     LABELS
myapp-8hkcr   1/1     Running   0          2m2s    app=myapp,release=canary
myapp-bln4v   1/1     Running   0          3m40s   app=myapp,release=canary
myapp-bxpzt   1/1     Running   0          3m40s   app=myapp,release=canary
myapp-ql2wk   1/1     Running   0          2m2s    app=myapp,release=canary
[root@k8s-master ~]# vim manfests/rs-demo.yaml
    spec:                              # Pod spec
      containers:                      # container definitions
      - name: myapp-containers         # container name
        image: ikubernetes/myapp:v2    # container image
        imagePullPolicy: IfNotPresent  # image pull policy
        ports:                         # exposed ports
        - name: http                   # port name
          containerPort: 80
[root@k8s-master ~]# kubectl apply -f manfests/rs-demo.yaml    # apply the change to reload the definition
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
Name          Image
myapp-bln4v   ikubernetes/myapp:v1
myapp-bxpzt   ikubernetes/myapp:v1
# Note: although the definition has been reloaded, the existing Pods still use the v1 image; only newly created Pods will use v2. For this test, manually delete the existing Pods.
[root@k8s-master ~]# kubectl delete pods -l app=myapp    # delete the Pods labeled app=myapp
pod "myapp-bln4v" deleted
pod "myapp-bxpzt" deleted
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image    # check the Pods newly created by the ReplicaSet; they now use the v2 image
Name          Image
myapp-mdn8j   ikubernetes/myapp:v2
myapp-v5bgr   ikubernetes/myapp:v2
Scaling up and down
[root@k8s-master ~]# kubectl get rs      # view the ReplicaSet
NAME    DESIRED   CURRENT   READY   AGE
myapp   2         2         2       154m
[root@k8s-master ~]# kubectl get pods    # view the Pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          5m26s
myapp-v5bgr   1/1     Running   0          5m26s

# Scale up
[root@k8s-master ~]# kubectl scale replicasets myapp --replicas=5    # raise the Pod replica count of the ReplicaSet myapp above to 5
replicaset.extensions/myapp scaled
[root@k8s-master ~]# kubectl get rs      # view the ReplicaSet
NAME    DESIRED   CURRENT   READY   AGE
myapp   5         5         5       156m
[root@k8s-master ~]# kubectl get pods    # view the Pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-lrrp8   1/1     Running   0          8s
myapp-mbqf8   1/1     Running   0          8s
myapp-mdn8j   1/1     Running   0          6m48s
myapp-ttmf5   1/1     Running   0          8s
myapp-v5bgr   1/1     Running   0          6m48s

# Scale down
[root@k8s-master ~]# kubectl scale replicasets myapp --replicas=3
replicaset.extensions/myapp scaled
[root@k8s-master ~]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   3         3         3       159m
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          10m
myapp-ttmf5   1/1     Running   0          3m48s
myapp-v5bgr   1/1     Running   0          10m
[root@k8s-master ~]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   3         3         3       162m
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          12m
myapp-ttmf5   1/1     Running   0          6m18s
myapp-v5bgr   1/1     Running   0          12m
[root@k8s-master ~]# kubectl delete replicasets myapp --cascade=false
replicaset.extensions "myapp" deleted
[root@k8s-master ~]# kubectl get rs
No resources found.
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          13m
myapp-ttmf5   1/1     Running   0          7m
myapp-v5bgr   1/1     Running   0          13m
# As the example shows, when the --cascade=false option is added, deleting the ReplicaSet resource object does not delete the Pods it manages along with it.
Deployment (abbreviated deploy) is another controller implementation in Kubernetes. It builds on top of the ReplicaSet controller and provides declarative updates for Pod and ReplicaSet resources.
The main responsibility of the Deployment controller resource is to keep Pod resources running healthily. Most of its functionality is implemented by delegating to ReplicaSet, with several features added on top:
Event and status inspection: the detailed progress and status of a Deployment upgrade can be viewed when needed.
Rollback: if problems are discovered after an upgrade completes, a rollback mechanism can return the application to the previous version or to a user-specified version from the revision history.
Revision history: every operation on a Deployment object is recorded so that it can be used by later rollback operations.
Pause and resume: every upgrade can be paused and resumed at any time.
Multiple automated update strategies: one is Recreate, which stops and deletes all old Pods before replacing them with the new version; the other is RollingUpdate, which gradually replaces old Pods with the new version.
The core fields of a Deployment are similar to those of a ReplicaSet.
#(1) View the Deployment manifest definition rules from the command line
[root@k8s-master ~]# kubectl explain deployment
[root@k8s-master ~]# kubectl explain deployment.spec
[root@k8s-master ~]# kubectl explain deployment.spec.template

#(2) Deployment creation example
[root@k8s-master ~]# vim manfests/deploy-demo.yaml
apiVersion: apps/v1                # API version
kind: Deployment                   # resource type: Deployment
metadata:                          # metadata
  name: deploy-demo                # Deployment controller name
  namespace: default               # namespace
spec:                              # Deployment controller spec
  replicas: 2                      # desired replica count: 2
  selector:                        # label selector matching the Pods
    matchLabels:
      app: deploy-app
      release: canary
  template:                        # Pod template
    metadata:                      # Pod metadata
      labels:                      # Pod labels; must include the labels defined in the selector above, extra labels are allowed
        app: deploy-app
        release: canary
    spec:                          # Pod spec
      containers:                  # container definitions
      - name: myapp                # container name
        image: ikubernetes/myapp:v1    # container image
        ports:                     # exposed ports
        - name: http               # port name
          containerPort: 80

#(3) Create the Deployment object
[root@k8s-master ~]# kubectl apply -f manfests/deploy-demo.yaml
deployment.apps/deploy-demo created

#(4) View the resource objects
[root@k8s-master ~]# kubectl get deployment    # view the Deployment resource object
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   2/2     2            2           10s
[root@k8s-master ~]# kubectl get replicaset    # view the ReplicaSet resource object
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-78c84d4449   2         2         2       20s
[root@k8s-master ~]# kubectl get pods          # view the Pod resource objects
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-22btc   1/1     Running   0          23s
deploy-demo-78c84d4449-5fn2k   1/1     Running   0          23s
---
Note: as the output shows, the Deployment automatically creates the associated ReplicaSet controller resource and names it in the format "[DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE]", where the hash value is generated automatically by the Deployment. Pod names use the ReplicaSet controller's name as a prefix, followed by 5 random characters.
Updating an application managed directly by a ReplicaSet controller requires multiple manual steps performed in a specific order; the process is tedious and error-prone. With a Deployment, the user only needs to specify what should change in the Pod template (such as the image version), and the remaining steps are completed automatically. The same applies to changing the Pod replica count.
The Deployment controller supports two update strategies: rolling update (RollingUpdate) and re-creation (Recreate); rolling update is the default.
Rolling update (RollingUpdate): deletes part of the old-version Pod resources while creating part of the new-version Pod objects to upgrade the application. Its advantage is that the service provided by the application is not interrupted during the upgrade, though different clients may receive responses from different versions of the application while the update is in progress.
Re-creation (Recreate): first deletes the existing Pod objects, after which the controller creates new-version resource objects from the new template.
The Deployment controller's rolling update does not delete and create Pod resources within the same ReplicaSet controller object; instead, the number of Pod objects owned by the new controller keeps growing until the old controller no longer owns any Pod objects and the new controller's replica count fully matches the desired value.
maxUnavailable: the maximum number by which the count of normally available Pod replicas (old and new versions combined) may fall below the desired value during an upgrade; its value can be 0, a positive integer, or a percentage of the desired value. The default is 1, which means that if the desired value is 3, at least two Pod objects must remain in a state where they can serve traffic during the upgrade.
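These knobs live under the Deployment's spec.strategy field. A minimal sketch of the relevant fragment of deploy-demo.yaml (the maxSurge value here is an illustrative assumption; later in this section it is set with kubectl patch instead):
spec:
  strategy:
    type: RollingUpdate        # default strategy; the alternative is Recreate
    rollingUpdate:
      maxUnavailable: 0        # never let the available replicas drop below the desired count
      maxSurge: 1              # allow at most one extra Pod above the desired count during the update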
Note: to preserve the history of version upgrades, the "--record" option must be used on the command line when creating the Deployment object.
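For example (a minimal illustration using the manifest created earlier):
[root@k8s-master ~]# kubectl apply -f manfests/deploy-demo.yaml --record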
# In terminal 1, perform the upgrade
[root@k8s-master ~]# kubectl set image deployment/deploy-demo myapp=ikubernetes/myapp:v2
deployment.extensions/deploy-demo image updated

# At the same time, in terminal 2, watch the Pod resource objects during the upgrade
[root@k8s-master ~]# kubectl get pods -l app=deploy-app -w
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-2rvxr   1/1     Running   0          33s
deploy-demo-78c84d4449-nd7rr   1/1     Running   0          33s
deploy-demo-7c66dbf45b-7k4xz   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-7k4xz   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-7k4xz   0/1     ContainerCreating   0   0s
deploy-demo-7c66dbf45b-7k4xz   1/1     Running   0          2s
deploy-demo-78c84d4449-2rvxr   1/1     Terminating   0     49s
deploy-demo-7c66dbf45b-r88qr   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-r88qr   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-r88qr   0/1     ContainerCreating   0   0s
deploy-demo-7c66dbf45b-r88qr   1/1     Running   0          1s
deploy-demo-78c84d4449-2rvxr   0/1     Terminating   0     50s
deploy-demo-78c84d4449-nd7rr   1/1     Terminating   0     51s
deploy-demo-78c84d4449-nd7rr   0/1     Terminating   0     51s
deploy-demo-78c84d4449-nd7rr   0/1     Terminating   0     57s
deploy-demo-78c84d4449-nd7rr   0/1     Terminating   0     57s
deploy-demo-78c84d4449-2rvxr   0/1     Terminating   0     60s
deploy-demo-78c84d4449-2rvxr   0/1     Terminating   0     60s

# Meanwhile, in terminal 3, watch the Deployment resource object as it changes
[root@k8s-master ~]# kubectl get deployment deploy-demo -w
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   2/2     2            2           37s
deploy-demo   2/2     2            2           47s
deploy-demo   2/2     2            2           47s
deploy-demo   2/2     0            2           47s
deploy-demo   2/2     1            2           47s
deploy-demo   3/2     1            3           49s
deploy-demo   2/2     1            2           49s
deploy-demo   2/2     2            2           49s
deploy-demo   3/2     2            3           50s
deploy-demo   2/2     2            2           51s

# After the upgrade completes, check the ReplicaSets again: the old ReplicaSet is kept as a backup, and the new ReplicaSet is the one currently running
[root@k8s-master ~]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-78c84d4449   0         0         0       4m41s
deploy-demo-7c66dbf45b   2         2         2       3m54s
# 1. Scale up with the kubectl scale command
[root@k8s-master ~]# kubectl scale deployment deploy-demo --replicas=3
deployment.extensions/deploy-demo scaled
[root@k8s-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-7c66dbf45b-7k4xz   1/1     Running   0          10m
deploy-demo-7c66dbf45b-gq2tw   1/1     Running   0          3s
deploy-demo-7c66dbf45b-r88qr   1/1     Running   0          10m

# 2. Scale up by editing the manifest directly
[root@k8s-master ~]# vim manfests/deploy-demo.yaml
spec:              # Deployment controller spec
  replicas: 4      # set the replica count to 4
[root@k8s-master ~]# kubectl apply -f manfests/deploy-demo.yaml
deployment.apps/deploy-demo configured
[root@k8s-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-6rmnm   1/1     Running   0          61s
deploy-demo-78c84d4449-9xfp9   1/1     Running   0          58s
deploy-demo-78c84d4449-c2m6h   1/1     Running   0          61s
deploy-demo-78c84d4449-sfxps   1/1     Running   0          57s

# 3. Scale up with kubectl patch
[root@k8s-master ~]# kubectl patch deployment deploy-demo -p '{"spec":{"replicas":5}}'
deployment.extensions/deploy-demo patched
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-6rmnm   1/1     Running   0          3m44s
deploy-demo-78c84d4449-9xfp9   1/1     Running   0          3m41s
deploy-demo-78c84d4449-c2m6h   1/1     Running   0          3m44s
deploy-demo-78c84d4449-sfxps   1/1     Running   0          3m40s
deploy-demo-78c84d4449-t7jxb   1/1     Running   0          3s
1) Allow the total Pod count to exceed the desired value by one
[root@k8s-master ~]# kubectl patch deployment deploy-demo -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/deploy-demo patched
2) Start the update, then pause the rollout immediately after changing the container's image version.
[root@k8s-master ~]# kubectl set image deployment/deploy-demo myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment deploy-demo
deployment.extensions/deploy-demo image updated
deployment.extensions/deploy-demo paused

# Check the result
[root@k8s-master ~]# kubectl get deployment    # view the Deployment resource object
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   6/5     1            6           37m
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image    # view the Pod names and images
Name                           Image
deploy-demo-6bf8dbdc9f-fjnzn   ikubernetes/myapp:v3
deploy-demo-78c84d4449-6rmnm   ikubernetes/myapp:v1
deploy-demo-78c84d4449-9xfp9   ikubernetes/myapp:v1
deploy-demo-78c84d4449-c2m6h   ikubernetes/myapp:v1
deploy-demo-78c84d4449-sfxps   ikubernetes/myapp:v1
deploy-demo-78c84d4449-t7jxb   ikubernetes/myapp:v1
[root@k8s-master ~]# kubectl rollout status deployment/deploy-demo    # check rollout progress
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
---
# The output above shows that there are currently 6 Pods: the desired value defined earlier was 5, so one extra Pod has been added, and it runs the v3 image.

# Resume and complete the update
[root@k8s-master ~]# kubectl rollout resume deployment deploy-demo
deployment.extensions/deploy-demo resumed

# Check again
[root@k8s-master ~]# kubectl get deployment    # view the Deployment resource object
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   5/5     5            5           43m
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image    # view the Pod names and images
Name                           Image
deploy-demo-6bf8dbdc9f-2z6gt   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-f79q2   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-fjnzn   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-pjf4z   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-x7fnk   ikubernetes/myapp:v3
[root@k8s-master ~]# kubectl rollout status deployment/deploy-demo    # check rollout progress
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "deploy-demo" rollout to finish: 1 old replicas are pending termination...
deployment "deploy-demo" successfully rolled out
1) Roll back to the previous version
[root@k8s-master ~]# kubectl rollout undo deployment/deploy-demo
deployment.extensions/deploy-demo rolled back
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
Name                           Image
deploy-demo-78c84d4449-2xspz   ikubernetes/myapp:v1
deploy-demo-78c84d4449-f8p46   ikubernetes/myapp:v1
deploy-demo-78c84d4449-mnmvc   ikubernetes/myapp:v1
deploy-demo-78c84d4449-tsl7r   ikubernetes/myapp:v1
deploy-demo-78c84d4449-xdt8j   ikubernetes/myapp:v1
2) Roll back to a specific revision
# View the revision history with this command
[root@k8s-master ~]# kubectl rollout history deployment/deploy-demo
deployment.extensions/deploy-demo
REVISION  CHANGE-CAUSE
2         <none>
4         <none>
5         <none>

# Roll back to revision 2
[root@k8s-master ~]# kubectl rollout undo deployment/deploy-demo --to-revision=2
deployment.extensions/deploy-demo rolled back
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
Name                           Image
deploy-demo-7c66dbf45b-42nj4   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-8zhf5   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-bxw7x   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-gmq8x   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-mrfdb   ikubernetes/myapp:v2
DaemonSet is used to run one copy of a specified Pod resource on every node in the cluster. Worker nodes that join the cluster later automatically get a corresponding Pod object created on them, and when a node is removed from the cluster, such Pod objects are automatically reclaimed without needing to be rebuilt. Administrators can also use node selectors and node labels to run the specified Pod objects only on nodes with particular characteristics.
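A sketch of such a restriction, added to the Pod template of the DaemonSet manifest shown below (disktype: ssd is a hypothetical node label used only for illustration):
  template:
    spec:
      nodeSelector:
        disktype: ssd          # run only on nodes carrying this label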
Typical use cases
Running a log-collection daemon on every node, such as fluentd or logstash.
Running a monitoring agent daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
#(1) Define the manifest file
[root@k8s-master ~]# vim manfests/daemonset-demo.yaml
apiVersion: apps/v1              # API version
kind: DaemonSet                  # resource type: DaemonSet
metadata:                        # metadata
  name: daemset-nginx            # DaemonSet controller name
  namespace: default             # namespace
  labels:                        # labels of the DaemonSet itself
    app: daem-nginx
spec:                            # DaemonSet controller spec
  selector:                      # label selector matching the Pods
    matchLabels:
      app: daem-nginx            # note: must match the labels defined in the template below
  template:                      # Pod template
    metadata:                    # Pod metadata
      name: nginx
      labels:                    # Pod labels; must include the labels defined in the selector above, extra labels are allowed
        app: daem-nginx
    spec:                        # Pod spec
      containers:                # container definitions
      - name: nginx-pod          # container name
        image: nginx:1.12        # container image
        ports:                   # exposed ports
        - name: http             # port name
          containerPort: 80      # exposed port

#(2) Create the DaemonSet controller defined above
[root@k8s-master ~]# kubectl apply -f manfests/daemonset-demo.yaml
daemonset.apps/daemset-nginx created

#(3) Verify
[root@k8s-master ~]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
daemset-nginx-7s474   1/1     Running   0          80s   10.244.1.61   k8s-node1   <none>           <none>
daemset-nginx-kxpl2   1/1     Running   0          94s   10.244.2.58   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl describe daemonset/daemset-nginx
......
Name:           daemset-nginx
Selector:       app=daem-nginx
Node-Selector:  <none>
......
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Scheduled with Up-to-date Pods: 2
Number of Nodes Scheduled with Available Pods: 2
Number of Nodes Misscheduled: 0
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
......
Note
Starting with Kubernetes 1.6, DaemonSet also supports an update mechanism; the related configuration is nested under the field shown by kubectl explain daemonset.spec.updateStrategy. Two strategies are supported, RollingUpdate and OnDelete (update on deletion), with rolling update as the default.
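A minimal sketch of the corresponding fragment of daemonset-demo.yaml (the maxUnavailable value is an illustrative assumption):
spec:
  updateStrategy:
    type: RollingUpdate        # default; the alternative is OnDelete
    rollingUpdate:
      maxUnavailable: 1        # at most one node's Pod may be unavailable at a time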
#(1) Check the image version
[root@k8s-master ~]# kubectl get pods -l app=daem-nginx -o custom-columns=NAME:metadata.name,NODE:spec.nodeName,Image:spec.containers[0].image
NAME                  NODE        Image
daemset-nginx-7s474   k8s-node1   nginx:1.12
daemset-nginx-kxpl2   k8s-node2   nginx:1.12

#(2) Update
[root@k8s-master ~]# kubectl set image daemonset/daemset-nginx nginx-pod=nginx:1.14
[root@k8s-master ~]# kubectl get pods -l app=daem-nginx -o custom-columns=NAME:metadata.name,NODE:spec.nodeName,Image:spec.containers[0].image    # check again
NAME                  NODE        Image
daemset-nginx-74c95   k8s-node2   nginx:1.14
daemset-nginx-nz6n9   k8s-node1   nginx:1.14

#(3) View the details
[root@k8s-master ~]# kubectl describe daemonset daemset-nginx
......
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  49m   daemonset-controller  Created pod: daemset-nginx-6kzg6
  Normal  SuccessfulCreate  49m   daemonset-controller  Created pod: daemset-nginx-jjnc2
  Normal  SuccessfulDelete  40m   daemonset-controller  Deleted pod: daemset-nginx-jjnc2
  Normal  SuccessfulCreate  40m   daemonset-controller  Created pod: daemset-nginx-kxpl2
  Normal  SuccessfulDelete  40m   daemonset-controller  Deleted pod: daemset-nginx-6kzg6
  Normal  SuccessfulCreate  40m   daemonset-controller  Created pod: daemset-nginx-7s474
  Normal  SuccessfulDelete  15s   daemonset-controller  Deleted pod: daemset-nginx-7s474
  Normal  SuccessfulCreate  8s    daemonset-controller  Created pod: daemset-nginx-nz6n9
  Normal  SuccessfulDelete  5s    daemonset-controller  Deleted pod: daemset-nginx-kxpl2
The DaemonSet controller's rolling update can also use the minReadySeconds field to control the pace of the rollout; when necessary, the rollout can be paused and resumed, and it can also be rolled back.
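A minimal sketch of inspecting and rolling back the daemset-nginx DaemonSet created above:
[root@k8s-master ~]# kubectl rollout status daemonset/daemset-nginx    # watch rollout progress
[root@k8s-master ~]# kubectl rollout history daemonset/daemset-nginx   # list recorded revisions
[root@k8s-master ~]# kubectl rollout undo daemonset/daemset-nginx      # roll back to the previous revision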