In the previous post on Kubernetes Pod operations, we covered Kubernetes' core component, the Pod. This post continues with Kubernetes' replication mechanisms, which keep your deployments running automatically and keep them healthy without any manual intervention.
Kubernetes can check whether a container is still running through a liveness probe. A liveness probe can be specified individually for each container in a pod; Kubernetes executes the probe periodically and restarts the container if the probe fails.
Kubernetes has three mechanisms for probing a container: an HTTP GET probe, which performs an HTTP GET request on the container's IP address, port, and path; a TCP socket probe, which tries to open a TCP connection to the specified port; and an exec probe, which executes an arbitrary command inside the container and checks its exit status code.
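The HTTP GET variant is the one used in the example that follows; as a quick sketch, the other two mechanisms look like this (the pod name, image, command, and port below are assumptions for illustration, not part of this post's example):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "touch /tmp/healthy && sleep 3600"]
    livenessProbe:
      exec:                              # exec probe: healthy while the command exits with status 0
        command: ["cat", "/tmp/healthy"]
      # A TCP socket probe would be declared instead as:
      #   tcpSocket:
      #     port: 8080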
To see the probe in action, we need a new image. Starting from the service used earlier, make a small change: after the fifth request, return HTTP status code 500 (Internal Server Error) for every request. Modify app.js as follows:
const http = require('http');
const os = require('os');

console.log("kubia server is starting...");

var requestCount = 0;

var handler = function(request, response) {
  console.log("Received request from " + request.connection.remoteAddress);
  requestCount++;
  if (requestCount > 5) {
    response.writeHead(500);
    response.end("I'm not well. Please restart me!");
    return;
  }
  response.writeHead(200);
  response.end("You've hit " + os.hostname() + "\n");
};

var www = http.createServer(handler);
www.listen(8080);
requestCount tracks the number of requests; once it exceeds 5, the handler returns a 500 status code directly, so the probe can catch the status code and trigger a restart of the server.
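The image is built with the usual three-line Dockerfile (reconstructed here from the build output below):

FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]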
[root@localhost unhealthy]# docker build -t kubia-unhealthy .
Sending build context to Docker daemon  3.584kB
Step 1/3 : FROM node:7
 ---> d9aed20b68a4
Step 2/3 : ADD app.js /app.js
 ---> e9e1b44f8f54
Step 3/3 : ENTRYPOINT ["node","app.js"]
 ---> Running in f58d6ff6bea3
Removing intermediate container f58d6ff6bea3
 ---> d36c6390ec66
Successfully built d36c6390ec66
Successfully tagged kubia-unhealthy:latest
The kubia-unhealthy image is built with docker build.
[root@localhost unhealthy]# docker tag kubia-unhealthy ksfzhaohui/kubia-unhealthy
[root@localhost unhealthy]# docker login
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@localhost unhealthy]# docker push ksfzhaohui/kubia-unhealthy
The push refers to repository [docker.io/ksfzhaohui/kubia-unhealthy]
40d9e222a827: Pushed
......
latest: digest: sha256:5fb3ebeda7f98818bc07b2b1e3245d6a21014a41153108c4dcf52f2947a4dfd4 size: 2213
First the image is tagged, then we log in to Docker Hub, and finally the image is pushed to Docker Hub.
Create a YAML descriptor that specifies an HTTP GET liveness probe, telling Kubernetes to periodically perform an HTTP GET request on the given port and path to determine whether the container is healthy:
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: ksfzhaohui/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
[d:\k8s]$ kubectl create -f kubia-liveness-probe.yaml
pod/kubia-liveness created

[d:\k8s]$ kubectl get pods
NAME             READY   STATUS              RESTARTS   AGE
kubia-liveness   0/1     ContainerCreating   0          3s
The pod named kubia-liveness is created; its RESTARTS count is 0. Check again after a while:
[d:\k8s]$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
kubia-liveness   1/1     Running   2          4m
Now RESTARTS=2, meaning the container has been restarted twice: each probe sends an HTTP request, the service starts returning status code 500 after the fifth request, and Kubernetes restarts the container once the probe detects the failure.
[d:\k8s]$ kubectl describe po kubia-liveness
Name:         kubia-liveness
......
    State:          Running
      Started:      Mon, 23 Dec 2019 15:42:45 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Mon, 23 Dec 2019 15:41:15 +0800
      Finished:     Mon, 23 Dec 2019 15:42:42 +0800
    Ready:          True
    Restart Count:  2
    Liveness:       http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3
......
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  ......
  Warning  Unhealthy  85s (x9 over 5m5s)   kubelet, minikube  Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Killing    85s (x3 over 4m45s)  kubelet, minikube  Container kubia failed liveness probe, will be restarted
  ......
State: the current state is Running;
Last State: the last state was Terminated, because an error occurred. The exit code 137 has a special meaning: it indicates that the process was killed by an external signal. 137 is the sum of two numbers, 128 + x, where x is the number of the signal that terminated the process; here x = 9, the signal number of SIGKILL, meaning the process was killed forcibly;
Restart Count: the number of times the container has been restarted;
Liveness: additional information about the liveness probe: delay, timeout, and period. The probe starts with a delay of 0 seconds, each probe times out after 1 second, a probe runs every 10 seconds, and the container is restarted after three consecutive failures. These parameters can be customized when defining the probe, for example initialDelaySeconds sets the initial delay (a sketch with these fields set explicitly follows this list);
Events: lists the events that occurred, such as the probe failing, the container's process being killed, and the container being restarted.
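Here is a minimal sketch of the same probe with those knobs set explicitly (the values are illustrative assumptions, not the defaults shown above):

livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 15   # wait 15s before the first probe
  timeoutSeconds: 1         # each probe must respond within 1s
  periodSeconds: 10         # probe every 10s
  failureThreshold: 3       # restart the container after 3 consecutive failures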
A few guidelines: first, pods running in production should always have a probe configured; second, the probe should check only the internals of the application, unaffected by external factors such as external services or databases; finally, the probe should be lightweight.
For a pod created this way, when Kubernetes detects through the probe that the service is unavailable, it restarts the service. This task is performed by the Kubelet on the node hosting the pod; the Kubernetes Control Plane components running on the master are not involved. But if the node itself crashes, the Kubelet, which runs on that node, cannot do anything about it; in that case you need a ReplicationController or a similar mechanism to manage the pods.
A ReplicationController is a Kubernetes resource that ensures its pods are always kept running; if a pod disappears for any reason (including a node crash), the ReplicationController creates a replacement pod.
A ReplicationController constantly monitors the list of running pods and makes sure the number of pods always matches its label selector. A ReplicationController has three essential parts: a label selector, which determines which pods fall into its scope; a replica count, which specifies the desired number of pods; and a pod template, used when creating new pod replicas.
All three parts can be changed at any time, but only a change to the replica count affects existing pods; if the replica count is reduced, for example, some current pods may be deleted. The benefits a ReplicationController provides: it keeps pods running by starting a replacement whenever one goes missing, it creates replacement replicas when a whole cluster node fails, and it makes horizontal scaling of pods easy. Here is a ReplicationController descriptor:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: ksfzhaohui/kubia
        ports:
        - containerPort: 8080
The kind is ReplicationController and the name is kubia; replicas sets the replica count to 3, selector is the label selector, and template is the template used to create pods. With all three elements specified, run the create command:
[d:\k8s]$ kubectl create -f kubia-rc.yaml
replicationcontroller/kubia created

[d:\k8s]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kubia-dssvz   1/1     Running   0          73s
kubia-krlcr   1/1     Running   0          73s
kubia-tg29c   1/1     Running   0          73s
A little while after creation, listing the pods shows that three pods have been created; delete one of them and look again:
[d:\k8s]$ kubectl delete pod kubia-dssvz
pod "kubia-dssvz" deleted

[d:\k8s]$ kubectl get pods
NAME          READY   STATUS        RESTARTS   AGE
kubia-dssvz   1/1     Terminating   0          2m2s
kubia-krlcr   1/1     Running       0          2m2s
kubia-mgz64   1/1     Running       0          11s
kubia-tg29c   1/1     Running       0          2m2s
While the deleted pod is terminating, a new pod has already started. Get information about the ReplicationController:
[d:\k8s]$ kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       4m20s
Three replicas are desired, three exist currently, and three are ready; for more detail, use the describe command:
[d:\k8s]$ kubectl describe rc kubia
Name:         kubia
Namespace:    default
Selector:     app=kubia
Labels:       app=kubia
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
......
Events:
  Type    Reason            Age    From                    Message
  ----    ------            ----   ----                    -------
  Normal  SuccessfulCreate  5m20s  replication-controller  Created pod: kubia-dssvz
  Normal  SuccessfulCreate  5m20s  replication-controller  Created pod: kubia-tg29c
  Normal  SuccessfulCreate  5m20s  replication-controller  Created pod: kubia-krlcr
  Normal  SuccessfulCreate  3m29s  replication-controller  Created pod: kubia-mgz64
  Normal  SuccessfulCreate  75s    replication-controller  Created pod: kubia-vwnmf
Replicas shows the desired and current replica counts, Pods Status shows how many replicas are in each state, and Events at the end lists what happened. Two pods were deleted in total during the test, so you can see a total of five pods were created.
Note: since Minikube is used, there is only one node acting as both master and worker, so a node failure cannot be simulated.
By changing a pod's labels, the pod can be removed from (or added to) the scope of a ReplicationController:
[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
kubia-mgz64   1/1     Running   0          27m   app=kubia
kubia-tg29c   1/1     Running   0          28m   app=kubia
kubia-vwnmf   1/1     Running   0          24m   app=kubia

[d:\k8s]$ kubectl label pod kubia-mgz64 app=foo --overwrite
pod/kubia-mgz64 labeled

[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS              RESTARTS   AGE   LABELS
kubia-4dzw8   0/1     ContainerCreating   0          2s    app=kubia
kubia-mgz64   1/1     Running             0          27m   app=foo
kubia-tg29c   1/1     Running             0          29m   app=kubia
kubia-vwnmf   1/1     Running             0          25m   app=kubia
Initially all three pods carry the label app=kubia. After kubia-mgz64's label is changed to app=foo, it no longer falls under the ReplicationController's control, leaving the controller with only 2 replicas, so it creates a new pod to get back to 3. The pod that escaped its control keeps running as usual unless we delete it manually:
[d:\k8s]$ kubectl delete pod kubia-mgz64
pod "kubia-mgz64" deleted

[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
kubia-4dzw8   1/1     Running   0          20h   app=kubia
kubia-tg29c   1/1     Running   0          21h   app=kubia
kubia-vwnmf   1/1     Running   0          21h   app=kubia
The ReplicationController's pod template can be modified at any time:
[d:\k8s]$ kubectl edit rc kubia
......
replicationcontroller/kubia edited
The command above opens a text editor; modify the pod template's labels as shown below:
template:
  metadata:
    creationTimestamp: null
    labels:
      app: kubia
      type: special
Add the new label type: special, then save and exit. Changing the pod template does not affect existing pods; it only affects pods created afterwards:
[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
kubia-4dzw8   1/1     Running   0          21h   app=kubia
kubia-tg29c   1/1     Running   0          21h   app=kubia
kubia-vwnmf   1/1     Running   0          21h   app=kubia

[d:\k8s]$ kubectl delete pod kubia-4dzw8
pod "kubia-4dzw8" deleted

[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE     LABELS
kubia-6qrxj   1/1     Running   0          2m12s   app=kubia,type=special
kubia-tg29c   1/1     Running   0          21h     app=kubia
kubia-vwnmf   1/1     Running   0          21h     app=kubia
After deleting a pod, the newly created replacement carries the new label.
The replica count can be changed through the text editor as well: set spec.replicas to 5.
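In the editor, the fragment to change looks roughly like this:

spec:
  replicas: 5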
[d:\k8s]$ kubectl edit rc kubia
replicationcontroller/kubia edited

[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS              RESTARTS   AGE     LABELS
kubia-6qrxj   1/1     Running             0          9m49s   app=kubia,type=special
kubia-9crmf   0/1     ContainerCreating   0          4s      app=kubia,type=special
kubia-qpwbl   0/1     ContainerCreating   0          4s      app=kubia,type=special
kubia-tg29c   1/1     Running             0          21h     app=kubia
kubia-vwnmf   1/1     Running             0          21h     app=kubia
Two pods are created automatically, bringing the replica count to 5; scale back to 3 with kubectl scale:
[d:\k8s]$ kubectl scale rc kubia --replicas=3
replicationcontroller/kubia scaled

[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
kubia-6qrxj   1/1     Running   0          15m   app=kubia,type=special
kubia-tg29c   1/1     Running   0          22h   app=kubia
kubia-vwnmf   1/1     Running   0          21h   app=kubia
Deleting a ReplicationController with kubectl delete deletes its pods by default, but you can also tell it not to:
[d:\k8s]$ kubectl delete rc kubia --cascade=false
replicationcontroller "kubia" deleted

[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE    LABELS
kubia-6qrxj   1/1     Running   0          103m   app=kubia,type=special
kubia-tg29c   1/1     Running   0          23h    app=kubia
kubia-vwnmf   1/1     Running   0          23h    app=kubia

[d:\k8s]$ kubectl get rc kubia
Error from server (NotFound): replicationcontrollers "kubia" not found
With --cascade=false the pods are left running and only the ReplicationController is deleted (in newer kubectl versions the equivalent flag is --cascade=orphan).
A ReplicaSet is the next-generation ReplicationController and will eventually replace it completely; a ReplicaSet behaves exactly like a ReplicationController, but its pod selector is more expressive. Here is the ReplicaSet version of the earlier ReplicationController:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: ksfzhaohui/kubia
apiVersion is set to apps/v1: apps is the API group and v1 the actual API version. For resources in the core API group the group can be omitted, which is why the earlier ReplicationController only specified v1.
The rest of the definition is essentially the same as for the ReplicationController, except that the selector uses matchLabels.
[d:\k8s]$ kubectl create -f kubia-replicaset.yaml
replicaset.apps/kubia created

[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE    LABELS
kubia-6qrxj   1/1     Running   0          150m   app=kubia,type=special
kubia-tg29c   1/1     Running   0          24h    app=kubia
kubia-vwnmf   1/1     Running   0          24h    app=kubia

[d:\k8s]$ kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       49s
After the ReplicaSet is created, it takes over the original 3 pods; for more detail, use the describe command:
[d:\k8s]$ kubectl describe rs
Name:         kubia
Namespace:    default
Selector:     app=kubia
Labels:       <none>
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=kubia
  Containers:
   kubia:
    Image:        ksfzhaohui/kubia
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:           <none>
The Events list is empty: the current 3 pods were all taken over from the ones created earlier.
The main improvement of the ReplicaSet over the ReplicationController is its more expressive label selector:
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - kubia
Besides matchLabels, a ReplicaSet can use the more powerful matchExpressions; each expression must contain a key, an operator, and possibly a list of values. Four operators are allowed: In (the label's value must match one of the specified values), NotIn (the label's value must not match any of the specified values), Exists (the pod must include a label with the specified key; no values list may be given), and DoesNotExist (the pod must not include a label with the specified key).
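For instance, here is a sketch combining several operators (the env and type labels are illustrative assumptions); all the expressions must evaluate to true for the selector to match a pod:

selector:
  matchExpressions:
  - key: app
    operator: In        # the value of the "app" label must be one of the listed values
    values:
    - kubia
  - key: env
    operator: NotIn     # the value of the "env" label must not be any of the listed values
    values:
    - production
  - key: type
    operator: Exists    # the pod just has to carry a "type" label; no values list allowed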
[d:\k8s]$ kubectl delete rs kubia
replicaset.apps "kubia" deleted

[d:\k8s]$ kubectl get pods --show-labels
No resources found in default namespace.
Deleting the ReplicaSet also deletes the pods it manages.
Both ReplicationControllers and ReplicaSets run a specific number of pods deployed anywhere in the Kubernetes cluster; a DaemonSet, by contrast, runs exactly one pod on every cluster node, which is useful when you want, say, a log collector or a resource monitor on each node. A node selector can also restrict which nodes run the pod:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: main
        image: ksfzhaohui/kubia
The YAML above creates the DaemonSet; its properties are largely the same as a ReplicaSet's, except for nodeSelector, the node selector, which here matches nodes carrying the disk=ssd node label.
[d:\k8s]$ kubectl create -f ssd-monitor-daemonset.yaml
daemonset.apps/ssd-monitor created

[d:\k8s]$ kubectl get ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ssd-monitor   0         0         0       0            0           disk=ssd        24s

[d:\k8s]$ kubectl get pods --show-labels
No resources found in default namespace.
After creation, no pod is scheduled on the current node, because the node doesn't carry the disk=ssd label yet:
[d:\k8s]$ kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   8d    v1.17.0

[d:\k8s]$ kubectl label node minikube disk=ssd
node/minikube labeled

[d:\k8s]$ kubectl get node --show-labels
NAME       STATUS   ROLES    AGE   VERSION   LABELS
minikube   Ready    master   8d    v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,gpu=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=minikube,kubernetes.io/os=linux,node-role.kubernetes.io/master=

[d:\k8s]$ kubectl get pods --show-labels
NAME                READY   STATUS    RESTARTS   AGE   LABELS
ssd-monitor-84hxd   1/1     Running   0          31s   app=ssd-monitor,controller-revision-hash=5dc77f567d,pod-template-generation=1
First get the current node's name, minikube, then set the disk=ssd label on it; a pod is then automatically created on that node. Since Minikube has only one node, the behavior across multiple nodes can't easily be simulated.
[d:\k8s]$ kubectl label node minikube disk=hdd --overwrite
node/minikube labeled

[d:\k8s]$ kubectl get pods --show-labels
No resources found in default namespace.
Change the minikube node's label and the pod on the node is automatically deleted, since the node no longer satisfies the node selector.
[d:\k8s]$ kubectl delete ds ssd-monitor
daemonset.apps "ssd-monitor" deleted

[d:\k8s]$ kubectl get ds
No resources found in default namespace.
Deleting the DaemonSet deletes its pods along with it.
ReplicationControllers, ReplicaSets, and DaemonSets run continuous tasks that never reach a completed state; the processes in their pods are restarted when they exit. Kubernetes' Job resource lets you run a pod whose container is not restarted when its internal process finishes successfully; once the task completes, the pod is considered to be in the completed state.
If a node fails, pods on that node managed by a Job are rescheduled to other nodes; if the process itself exits abnormally, the Job can be configured to restart the container.
Before creating the Job, prepare an image built on busybox; its container invokes the sleep command for two minutes:
FROM busybox
ENTRYPOINT echo "$(date) Batch job starting"; sleep 120; echo "$(date) Finished succesfully"
This image has already been pushed to Docker Hub. Now define the Job in a YAML descriptor:
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: ksfzhaohui/batch-job
A Job belongs to the batch API group. The important property here is restartPolicy: it defaults to Always, which means the pod runs indefinitely, so a Job must set one of the other options, OnFailure or Never, which restart the container when the process fails or never restart it.
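Related to the restart behavior, a failing Job is also bounded by spec.backoffLimit, which caps how many times the Job is retried before being marked as failed (it defaults to 6); a minimal fragment, with an illustrative value:

spec:
  backoffLimit: 4              # mark the Job as failed after 4 retries
  template:
    spec:
      restartPolicy: OnFailure # first restart the failed container in place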
[d:\k8s]$ kubectl create -f exporter.yaml
job.batch/batch-job created

[d:\k8s]$ kubectl get job
NAME        COMPLETIONS   DURATION   AGE
batch-job   0/1           7s         8s

[d:\k8s]$ kubectl get pod
NAME              READY   STATUS    RESTARTS   AGE
batch-job-7sw68   1/1     Running   0          25s
Creating the Job automatically creates a pod; the process in the pod finishes after running for 2 minutes:
[d:\k8s]$ kubectl get pod
NAME              READY   STATUS      RESTARTS   AGE
batch-job-7sw68   0/1     Completed   0          3m1s

[d:\k8s]$ kubectl get job
NAME        COMPLETIONS   DURATION   AGE
batch-job   1/1           2m11s      3m12s
The pod's status is Completed, and the Job's COMPLETIONS column likewise shows it as complete.
A Job can be configured to create multiple pod instances and run them in parallel or sequentially, by setting the completions and parallelism properties:
apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-batch-job
spec:
  completions: 3
  template:
    metadata:
      labels:
        app: multi-completion-batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: ksfzhaohui/batch-job
completions is set to 3, so three pods run one after another; the Job is complete once all of them finish:
[d:\k8s]$ kubectl get pod
NAME                               READY   STATUS      RESTARTS   AGE
multi-completion-batch-job-h75j8   0/1     Completed   0          2m19s
multi-completion-batch-job-wdhnj   1/1     Running     0          15s

[d:\k8s]$ kubectl get job
NAME                         COMPLETIONS   DURATION   AGE
multi-completion-batch-job   1/3           2m28s      2m28s
You can see the second pod starts only after the first one completes; once all of them have run, the output looks like this:
[d:\k8s]$ kubectl get pod
NAME                               READY   STATUS      RESTARTS   AGE
multi-completion-batch-job-4vjff   0/1     Completed   0          2m7s
multi-completion-batch-job-h75j8   0/1     Completed   0          6m16s
multi-completion-batch-job-wdhnj   0/1     Completed   0          4m12s

[d:\k8s]$ kubectl get job
NAME                         COMPLETIONS   DURATION   AGE
multi-completion-batch-job   3/3           6m13s      6m18s
To run the pods in parallel instead, set parallelism as well:

apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-parallel-batch-job
spec:
  completions: 3
  parallelism: 2
  template:
    metadata:
      labels:
        app: multi-completion-parallel-batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: ksfzhaohui/batch-job
With both completions and parallelism set, the Job may run two pods at the same time; as soon as either one finishes, the third pod can start:
[d:\k8s]$ kubectl create -f multi-completion-parallel-batch-job.yaml
job.batch/multi-completion-parallel-batch-job created

[d:\k8s]$ kubectl get pod
NAME                                        READY   STATUS              RESTARTS   AGE
multi-completion-parallel-batch-job-f7wn8   0/1     ContainerCreating   0          3s
multi-completion-parallel-batch-job-h9s29   0/1     ContainerCreating   0          3s
By setting the activeDeadlineSeconds property in the Job spec, you can limit how long the pod may run; if the pod runs longer than this, the system tries to terminate it and marks the Job as failed:
apiVersion: batch/v1
kind: Job
metadata:
  name: time-limited-batch-job
spec:
  activeDeadlineSeconds: 30
  template:
    metadata:
      labels:
        app: time-limited-batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: ksfzhaohui/batch-job
activeDeadlineSeconds is set to 30 seconds, so the Job automatically fails after 30 seconds:
[d:\k8s]$ kubectl create -f time-limited-batch-job.yaml
job.batch/time-limited-batch-job created

[d:\k8s]$ kubectl get job
NAME                     COMPLETIONS   DURATION   AGE
time-limited-batch-job   0/1           3s         3s

[d:\k8s]$ kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
time-limited-batch-job-jgmm6   1/1     Running   0          29s

[d:\k8s]$ kubectl get pod
NAME                           READY   STATUS        RESTARTS   AGE
time-limited-batch-job-jgmm6   1/1     Terminating   0          30s

[d:\k8s]$ kubectl get pod
No resources found in default namespace.

[d:\k8s]$ kubectl get job
NAME                     COMPLETIONS   DURATION   AGE
time-limited-batch-job   0/1           101s       101s
Watching the elapsed time under the AGE column, the pod's status changes to Terminating after 30 seconds.
Jobs can also be run periodically, a bit like Quartz, using a similar cron-style expression:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: corn-batch-job
spec:
  schedule: "0-59 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: corn-batch-job
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: ksfzhaohui/batch-job
The schedule field holds the cron expression; its fields are, from left to right: minute, hour, day of month, month, and day of week. The configuration above runs a job every minute.
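For reference, a couple of other schedule values and what they mean (illustrative examples, not used in this post):

schedule: "0,15,30,45 * * * *"   # at minutes 0, 15, 30 and 45 of every hour
# schedule: "0 3 * * 0"          # at 3:00 AM every Sunday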
[d:\k8s]$ kubectl create -f cronjob.yaml
cronjob.batch/corn-batch-job created

[d:\k8s]$ kubectl get pod
NAME                              READY   STATUS              RESTARTS   AGE
corn-batch-job-1577263560-w2fq2   0/1     Completed           0          3m3s
corn-batch-job-1577263620-92pc7   1/1     Running             0          2m2s
corn-batch-job-1577263680-tmr8p   1/1     Running             0          62s
corn-batch-job-1577263740-jmzqk   0/1     ContainerCreating   0          2s

[d:\k8s]$ kubectl get job
NAME                        COMPLETIONS   DURATION   AGE
corn-batch-job-1577263560   1/1           2m5s       3m48s
corn-batch-job-1577263620   1/1           2m4s       2m47s
corn-batch-job-1577263680   0/1           107s       107s
corn-batch-job-1577263740   0/1           47s        47s
A new job runs every minute; the CronJob can be deleted with:
[d:\k8s]$ kubectl delete CronJob corn-batch-job
cronjob.batch "corn-batch-job" deleted
This post continues my hands-on notes from reading Kubernetes in Action; it covered the replication-related mechanisms: probes, ReplicationControllers, ReplicaSets, DaemonSets, and Jobs.
Kubernetes in Action