The Container Orchestration System Kubernetes: DaemonSet, Job, and CronJob Controllers

  In the previous post we looked at ReplicaSet and Deployment, the two most commonly used pod controllers in Kubernetes; for a review, see http://www.javashuo.com/article/p-gwvhllkr-ny.html. Today we will look at the DaemonSet, Job, and CronJob controllers.

  1. The DaemonSet controller

  As the name suggests, this controller manages daemon-like pods. A DaemonSet is typically used when exactly one copy of a pod must run on every node, for example an agent that collects logs and ships them to Elasticsearch. A DaemonSet is similar to a Deployment, except that a ds (short for DaemonSet) does not take a replica count: the number of pods follows the number of nodes in the cluster. When a new node joins, a pod is automatically created on it; when a node is removed, its pod is not rescheduled onto other nodes. In short, each node runs at most one pod of a given DaemonSet. A DaemonSet also supports node selectors for selective scheduling: pods are created only on nodes that carry a matching label, and not on any others. Updates otherwise work much like they do for a Deployment.

  Example: creating a DaemonSet

[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: ds-demo
  namespace: default
spec:
  selector: 
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
  minReadySeconds: 5
[root@master01 ~]# 

  提示:對於ds控制器來講,它在spec中最主要定義選擇器和pod模板,這個定義和deploy控制器同樣;上述配置文件主要使用ds控制器運行一個nginx pod,其標籤名爲ngx-ds;docker

  Apply the manifest

[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
daemonset.apps/ds-demo created
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   3         3         3       3            3           <none>          14s   nginx        nginx:1.14-alpine   app=ngx-ds

  Note: we did not specify a pod count; the controller creates one pod on each node based on the number of nodes in the cluster.

  Verify: list the pods — was exactly one pod scheduled onto each node?

[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-fm9cb   1/1     Running   0          27s   10.244.1.57   node01.k8s.org   <none>           <none>
ds-demo-pspbk   1/1     Running   0          27s   10.244.3.57   node03.k8s.org   <none>           <none>
ds-demo-zvpbb   1/1     Running   0          27s   10.244.2.69   node02.k8s.org   <none>           <none>
[root@master01 ~]#

  Note: each node is running exactly one of these pods.

  Defining a node selector

[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: ds-demo
  namespace: default
spec:
  selector: 
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
      nodeSelector:
        app: nginx-1.14-alpine
  minReadySeconds: 5
[root@master01 ~]# 

  Note: a node selector is defined with the nodeSelector field under spec in the pod template; its value is a map of labels. The configuration above creates pods only on nodes labeled app=nginx-1.14-alpine; on all other nodes no pod is created.
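  A related scheduling knob: control-plane nodes carry a NoSchedule taint by default, so even without a node selector a DaemonSet will skip them. For agents that must also run on masters, a tolerations entry can be added to the pod template. This is a hedged sketch; the taint key shown is the kubeadm default for this cluster version, so verify it with `kubectl describe node` on your control-plane node:

# Sketch: pod-template spec fragment letting DaemonSet pods tolerate the
# control-plane taint so they are also scheduled onto master nodes
# (kubeadm's default taint key assumed; confirm on your cluster).
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule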

  Apply the manifest

[root@master01 ~]# kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds-demo   3         3         3       3            3           <none>          14m
[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
daemonset.apps/ds-demo configured
[root@master01 ~]# kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE
ds-demo   0         0         0       0            0           app=nginx-1.14-alpine   14m
[root@master01 ~]# kubectl get pod
NAME            READY   STATUS        RESTARTS   AGE
ds-demo-pspbk   0/1     Terminating   0          14m
[root@master01 ~]# kubectl get pod
No resources found in default namespace.
[root@master01 ~]# 

  Note: after adding the node selector, all pods were deleted. The reason is that no node in the cluster carries the label required by the selector, so no node satisfies the scheduling constraint, and the controller removes the pods.

  Test: add the label app=nginx-1.14-alpine to node01.k8s.org and see whether a pod gets created on that node.

[root@master01 ~]# kubectl label node node01.k8s.org app=nginx-1.14-alpine
node/node01.k8s.org labeled
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   1         1         1       1            1           app=nginx-1.14-alpine   20m   nginx        nginx:1.14-alpine   app=ngx-ds
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8hfnq   1/1     Running   0          18s   10.244.1.58   node01.k8s.org   <none>           <none>
[root@master01 ~]# 

  Note: as soon as a node carries a label matching the node selector, a pod is scheduled precisely onto that node.

  Remove the node selector, re-apply the manifest, then add a new node to the cluster and see whether a pod is created on it automatically.

  Remove the node selector and apply the manifest

[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: ds-demo
  namespace: default
spec:
  selector: 
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
  minReadySeconds: 5
[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
daemonset.apps/ds-demo configured
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   3         3         3       3            3           <none>          26m   nginx        nginx:1.14-alpine   app=ngx-ds
[root@master01 ~]# 

  Prepare a new node with the hostname node04.k8s.org; for the preparation steps see https://www.cnblogs.com/qiuhom-1874/p/14126750.html

  On the master, generate the command for joining a node to the cluster

[root@master01 ~]# kubeadm token create --print-join-command 
kubeadm join 192.168.0.41:6443 --token 8rdaut.qeeyf9cw5e1dur8f     --discovery-token-ca-cert-hash sha256:330db1e5abff4d0e62150596f3e989cde40e61bdc73d6477170d786fcc1cfc67 
[root@master01 ~]# 

  Copy the command and run it on node04

[root@node04 ~]# kubeadm join 192.168.0.41:6443 --token 8rdaut.qeeyf9cw5e1dur8f     --discovery-token-ca-cert-hash sha256:330db1e5abff4d0e62150596f3e989cde40e61bdc73d6477170d786fcc1cfc67 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node04 ~]# 

  Note: if swap is enabled on the node, append the --ignore-preflight-errors=Swap option to the command;

  On the master, check the node status and verify that node04 has joined the cluster

[root@master01 ~]# kubectl get node
NAME               STATUS     ROLES                  AGE    VERSION
master01.k8s.org   Ready      control-plane,master   10d    v1.20.0
node01.k8s.org     Ready      <none>                 10d    v1.20.0
node02.k8s.org     Ready      <none>                 10d    v1.20.0
node03.k8s.org     Ready      <none>                 10d    v1.20.0
node04.k8s.org     NotReady   <none>                 117s   v1.20.0
[root@master01 ~]# 

  Note: node04 has joined the cluster but is not yet Ready. Once it becomes Ready, check whether the DaemonSet's pod count has grown and whether an nginx pod is running on node04 automatically.

  Check the DaemonSet to see how many pods are now running

[root@master01 ~]# kubectl get node
NAME               STATUS   ROLES                  AGE     VERSION
master01.k8s.org   Ready    control-plane,master   10d     v1.20.0
node01.k8s.org     Ready    <none>                 10d     v1.20.0
node02.k8s.org     Ready    <none>                 10d     v1.20.0
node03.k8s.org     Ready    <none>                 10d     v1.20.0
node04.k8s.org     Ready    <none>                 8m10s   v1.20.0
[root@master01 ~]# kubectl get ds
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ds-demo   4         4         4       4            4           <none>          53m
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-g74s8   1/1     Running   0          72s   10.244.4.2    node04.k8s.org   <none>           <none>
ds-demo-h4b77   1/1     Running   0          27m   10.244.2.70   node02.k8s.org   <none>           <none>
ds-demo-hpmrg   1/1     Running   0          27m   10.244.3.58   node03.k8s.org   <none>           <none>
ds-demo-kjf6f   1/1     Running   0          27m   10.244.1.59   node01.k8s.org   <none>           <none>
[root@master01 ~]# 

  Note: once the new node became Ready, a pod was created on it automatically.

  Updating the pod image

[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: ds-demo
  namespace: default
spec:
  selector: 
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
  minReadySeconds: 5
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   4         4         4       4            4           <none>          55m   nginx        nginx:1.14-alpine   app=ngx-ds
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-g74s8   1/1     Running   0          3m31s   10.244.4.2    node04.k8s.org   <none>           <none>
ds-demo-h4b77   1/1     Running   0          30m     10.244.2.70   node02.k8s.org   <none>           <none>
ds-demo-hpmrg   1/1     Running   0          30m     10.244.3.58   node03.k8s.org   <none>           <none>
ds-demo-kjf6f   1/1     Running   0          30m     10.244.1.59   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
daemonset.apps/ds-demo configured
[root@master01 ~]# kubectl get ds -o wide                  
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   4         4         3       0            3           <none>          56m   nginx        nginx:1.16-alpine   app=ngx-ds
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS              RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-47gtq   0/1     ContainerCreating   0          7s    <none>        node04.k8s.org   <none>           <none>
ds-demo-h4b77   1/1     Running             0          31m   10.244.2.70   node02.k8s.org   <none>           <none>
ds-demo-jp9dz   1/1     Running             0          38s   10.244.1.60   node01.k8s.org   <none>           <none>
ds-demo-t4njt   1/1     Running             0          21s   10.244.3.59   node03.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-47gtq   1/1     Running   0          37s   10.244.4.3    node04.k8s.org   <none>           <none>
ds-demo-8txr9   1/1     Running   0          14s   10.244.2.71   node02.k8s.org   <none>           <none>
ds-demo-jp9dz   1/1     Running   0          68s   10.244.1.60   node01.k8s.org   <none>           <none>
ds-demo-t4njt   1/1     Running   0          51s   10.244.3.59   node03.k8s.org   <none>           <none>
[root@master01 ~]# 

  Note: after changing the image version in the pod template and applying the manifest, the pods are updated one by one.

  View the DaemonSet details

[root@master01 ~]# kubectl describe ds ds-demo
Name:           ds-demo
Selector:       app=ngx-ds
Node-Selector:  <none>
Labels:         <none>
Annotations:    deprecated.daemonset.template.generation: 4
Desired Number of Nodes Scheduled: 4
Current Number of Nodes Scheduled: 4
Number of Nodes Scheduled with Up-to-date Pods: 4
Number of Nodes Scheduled with Available Pods: 4
Number of Nodes Misscheduled: 0
Pods Status:  4 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=ngx-ds
  Containers:
   nginx:
    Image:        nginx:1.16-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age                From                  Message
  ----    ------            ----               ----                  -------
  Normal  SuccessfulCreate  59m                daemonset-controller  Created pod: ds-demo-fm9cb
  Normal  SuccessfulCreate  59m                daemonset-controller  Created pod: ds-demo-zvpbb
  Normal  SuccessfulCreate  59m                daemonset-controller  Created pod: ds-demo-pspbk
  Normal  SuccessfulDelete  44m (x2 over 44m)  daemonset-controller  Deleted pod: ds-demo-fm9cb
  Normal  SuccessfulDelete  44m (x2 over 44m)  daemonset-controller  Deleted pod: ds-demo-pspbk
  Normal  SuccessfulDelete  44m (x2 over 44m)  daemonset-controller  Deleted pod: ds-demo-zvpbb
  Normal  SuccessfulCreate  38m                daemonset-controller  Created pod: ds-demo-8hfnq
  Normal  SuccessfulCreate  33m                daemonset-controller  Created pod: ds-demo-h4b77
  Normal  SuccessfulCreate  33m                daemonset-controller  Created pod: ds-demo-hpmrg
  Normal  SuccessfulDelete  33m                daemonset-controller  Deleted pod: ds-demo-8hfnq
  Normal  SuccessfulCreate  33m                daemonset-controller  Created pod: ds-demo-kjf6f
  Normal  SuccessfulCreate  6m57s              daemonset-controller  Created pod: ds-demo-g74s8
  Normal  SuccessfulDelete  3m8s               daemonset-controller  Deleted pod: ds-demo-kjf6f
  Normal  SuccessfulCreate  2m58s              daemonset-controller  Created pod: ds-demo-jp9dz
  Normal  SuccessfulDelete  2m52s              daemonset-controller  Deleted pod: ds-demo-hpmrg
  Normal  SuccessfulCreate  2m41s              daemonset-controller  Created pod: ds-demo-t4njt
  Normal  SuccessfulDelete  2m35s              daemonset-controller  Deleted pod: ds-demo-g74s8
  Normal  SuccessfulCreate  2m27s              daemonset-controller  Created pod: ds-demo-47gtq
  Normal  SuccessfulDelete  2m13s              daemonset-controller  Deleted pod: ds-demo-h4b77
  Normal  SuccessfulCreate  2m4s               daemonset-controller  Created pod: ds-demo-8txr9
[root@master01 ~]# 

  Updating the image with kubectl set image

[root@master01 ~]# kubectl set image ds ds-demo nginx=nginx:1.18-alpine --record
daemonset.apps/ds-demo image updated
[root@master01 ~]# kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
ds-demo   4         4         3       0            3           <none>          84m   nginx        nginx:1.18-alpine   app=ngx-ds
[root@master01 ~]# kubectl rollout status ds/ds-demo
Waiting for daemon set "ds-demo" rollout to finish: 1 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
Waiting for daemon set "ds-demo" rollout to finish: 3 of 4 updated pods are available...
Waiting for daemon set "ds-demo" rollout to finish: 3 of 4 updated pods are available...
daemon set "ds-demo" successfully rolled out
[root@master01 ~]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-6qr6g   1/1     Running   0          70s   10.244.2.77   node02.k8s.org   <none>           <none>
ds-demo-7gnxd   1/1     Running   0          57s   10.244.3.66   node03.k8s.org   <none>           <none>
ds-demo-g44bd   1/1     Running   0          24s   10.244.1.66   node01.k8s.org   <none>           <none>
ds-demo-hb8vl   1/1     Running   0          43s   10.244.4.10   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod/ds-demo-6qr6g |grep Image
    Image:          nginx:1.18-alpine
    Image ID:       docker-pullable://nginx@sha256:a7bdf9e789a40bf112c87672a2495fc49de7c89f184a252d59061c1ae800ee52
[root@master01 ~]# 

  Note: the default update strategy deletes one pod, then creates its replacement;

  Defining an update strategy

[root@master01 ~]# cat ds-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: ds-demo
  namespace: default
spec:
  selector: 
    matchLabels:
      app: ngx-ds
  template:
    metadata:
      labels:
        app: ngx-ds
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
  minReadySeconds: 5
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
[root@master01 ~]# 

  Note: a DaemonSet's update strategy is defined with the updateStrategy field under spec; its value is an object. The type field selects the update type and takes one of two values, OnDelete or RollingUpdate; the rollingUpdate field holds the rolling-update parameters and is only meaningful when type is RollingUpdate. Within it, maxUnavailable sets how many pods may be deleted at once (the maximum number of pods allowed to be unavailable). A DaemonSet can only update delete-first, never create-first, because each node may run only one pod of the set; a replacement can be created only after the old pod is deleted. The default is one at a time; the configuration above deletes two at a time.
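  For comparison, here is a sketch of the other type. With OnDelete, applying a new template does not touch any running pod; each pod is replaced with the new version only after you delete the old one yourself, which allows a fully operator-driven, node-by-node rollout:

# Sketch: OnDelete update strategy for a DaemonSet. Running pods keep the
# old template until deleted manually (e.g. kubectl delete pod <name>),
# at which point the controller recreates them from the new template.
  updateStrategy:
    type: OnDelete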

  Apply the manifest and watch the update

[root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml && kubectl get pod -w
daemonset.apps/ds-demo configured
NAME            READY   STATUS        RESTARTS   AGE
ds-demo-4k2x7   1/1     Terminating   0          15m
ds-demo-b9djn   1/1     Running       0          16m
ds-demo-bxkj7   1/1     Running       0          15m
ds-demo-cg49r   1/1     Terminating   0          16m
ds-demo-cg49r   0/1     Terminating   0          16m
ds-demo-4k2x7   0/1     Terminating   0          15m
ds-demo-cg49r   0/1     Terminating   0          16m
ds-demo-cg49r   0/1     Terminating   0          16m
ds-demo-dtsgc   0/1     Pending       0          0s
ds-demo-dtsgc   0/1     Pending       0          0s
ds-demo-dtsgc   0/1     ContainerCreating   0          0s
ds-demo-dtsgc   1/1     Running             0          2s
ds-demo-4k2x7   0/1     Terminating         0          15m
ds-demo-4k2x7   0/1     Terminating         0          15m
ds-demo-8d7g9   0/1     Pending             0          0s
ds-demo-8d7g9   0/1     Pending             0          0s
ds-demo-8d7g9   0/1     ContainerCreating   0          0s
ds-demo-8d7g9   1/1     Running             0          1s
ds-demo-b9djn   1/1     Terminating         0          16m
ds-demo-b9djn   0/1     Terminating         0          16m
ds-demo-bxkj7   1/1     Terminating         0          16m
ds-demo-bxkj7   0/1     Terminating         0          16m
ds-demo-b9djn   0/1     Terminating         0          16m
ds-demo-b9djn   0/1     Terminating         0          16m
ds-demo-dkxfs   0/1     Pending             0          0s
ds-demo-dkxfs   0/1     Pending             0          0s
ds-demo-dkxfs   0/1     ContainerCreating   0          0s
ds-demo-dkxfs   1/1     Running             0          2s
ds-demo-bxkj7   0/1     Terminating         0          16m
ds-demo-bxkj7   0/1     Terminating         0          16m
ds-demo-q6b5f   0/1     Pending             0          0s
ds-demo-q6b5f   0/1     Pending             0          0s
ds-demo-q6b5f   0/1     ContainerCreating   0          0s
ds-demo-q6b5f   1/1     Running             0          1s

  Note: with this strategy the update now deletes two pods at a time and then creates two new ones.

   2. The Job controller

  A Job controller runs one or more pods to perform a task; when the task completes, the pods exit on their own. If a pod fails while the task is running, the Job controller handles it according to the restart policy until the task completes and the pod exits normally. With restartPolicy set to Never, a failed pod is not restarted; instead the controller creates a brand-new pod to run the task again, repeating until the task finally completes and the pod exits normally.
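  With restartPolicy: Never this retry loop could in principle keep creating pods, so the Job spec offers two limits to bound it. A hedged sketch of the relevant fields (both are standard batch/v1 Job fields):

# Sketch: Job spec fragment bounding retries and total runtime.
spec:
  backoffLimit: 4              # mark the Job as failed after 4 failed attempts
  activeDeadlineSeconds: 300   # terminate the Job if it runs longer than 5 minutes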

  Job controller pod states

  Note: the figure above shows the possible states of pods created by a Job controller. Normally a pod finishes its task and exits with the status Completed. If a pod exits abnormally (non-zero exit code) and the restart policy is Never, the pod is not restarted and its status becomes Failed; the task itself still remains, though, so under Never the Job controller creates a new pod to run it again. If a pod exits abnormally and the restart policy is OnFailure, the pod is restarted and runs the task again, until it finally completes and exits with the status Completed.

  Job execution modes

  Serial execution

  Note: with serial execution only one pod is created at a time; the next pod is created only after the previous pod has finished its task.

  Parallel execution

  Note: with parallel execution several pods can be started and work on the task at the same time.

   Example: defining a Job

[root@master01 ~]# cat job-demo.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
spec:
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh",  "-c", "sleep 10"]
      restartPolicy: Never
[root@master01 ~]# 

  Note: the essential part of a Job is the pod template, defined the same way as for the other controllers, with the template field under spec.

  Apply the manifest

[root@master01 ~]# kubectl apply -f job-demo.yaml
job.batch/job-demo created
[root@master01 ~]# kubectl get jobs -o wide
NAME       COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES   SELECTOR
job-demo   0/1           7s         7s    myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
[root@master01 ~]# kubectl get pod 
NAME             READY   STATUS    RESTARTS   AGE
ds-demo-8d7g9    1/1     Running   0          91m
ds-demo-dkxfs    1/1     Running   0          91m
ds-demo-dtsgc    1/1     Running   0          91m
ds-demo-q6b5f    1/1     Running   0          91m
job-demo-4h9gb   1/1     Running   0          16s
[root@master01 ~]# kubectl get pod 
NAME             READY   STATUS      RESTARTS   AGE
ds-demo-8d7g9    1/1     Running     0          91m
ds-demo-dkxfs    1/1     Running     0          91m
ds-demo-dtsgc    1/1     Running     0          91m
ds-demo-q6b5f    1/1     Running     0          91m
job-demo-4h9gb   0/1     Completed   0          30s
[root@master01 ~]# 

  Note: after the Job was created, its pod ran the task, exited normally, and ended up in the Completed state.
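  Completed pods like this one linger until the Job is deleted. As a hedged aside, the batch API also has a ttlSecondsAfterFinished field for automatic cleanup; note that in v1.20 it still sits behind the TTLAfterFinished feature gate, so treat this purely as a sketch:

# Sketch: delete the Job (and its pods) automatically 100 seconds after it
# finishes; requires the TTLAfterFinished feature gate in Kubernetes v1.20.
spec:
  ttlSecondsAfterFinished: 100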

  Defining a Job with multiple completions

[root@master01 ~]# cat job-multi.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo
spec:
  completions: 6
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh",  "-c", "sleep 10"]
      restartPolicy: Never
[root@master01 ~]# 

  Note: the completions field under spec sets how many pods must complete for the Job to finish; the configuration above means the job-multi-demo Job needs 6 pods to complete its task.

  Apply the manifest

[root@master01 ~]# kubectl apply -f job-multi.yaml
job.batch/job-multi-demo created
[root@master01 ~]# kubectl get jobs -o wide
NAME             COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
job-demo         1/1           18s        9m49s   myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
job-multi-demo   0/6           6s         6s      myjob        alpine   controller-uid=80f44cbd-f7e5-4eb4-a286-39bfcfc9fe39
[root@master01 ~]# kubectl get pods -o wide
NAME                   READY   STATUS      RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8d7g9          1/1     Running     0          101m   10.244.3.69   node03.k8s.org   <none>           <none>
ds-demo-dkxfs          1/1     Running     0          101m   10.244.1.69   node01.k8s.org   <none>           <none>
ds-demo-dtsgc          1/1     Running     0          101m   10.244.4.13   node04.k8s.org   <none>           <none>
ds-demo-q6b5f          1/1     Running     0          100m   10.244.2.80   node02.k8s.org   <none>           <none>
job-demo-4h9gb         0/1     Completed   0          10m    10.244.3.70   node03.k8s.org   <none>           <none>
job-multi-demo-rbw7d   1/1     Running     0          21s    10.244.1.70   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pods -o wide
NAME                   READY   STATUS      RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8d7g9          1/1     Running     0          101m   10.244.3.69   node03.k8s.org   <none>           <none>
ds-demo-dkxfs          1/1     Running     0          101m   10.244.1.69   node01.k8s.org   <none>           <none>
ds-demo-dtsgc          1/1     Running     0          101m   10.244.4.13   node04.k8s.org   <none>           <none>
ds-demo-q6b5f          1/1     Running     0          101m   10.244.2.80   node02.k8s.org   <none>           <none>
job-demo-4h9gb         0/1     Completed   0          10m    10.244.3.70   node03.k8s.org   <none>           <none>
job-multi-demo-f7rz4   1/1     Running     0          21s    10.244.3.71   node03.k8s.org   <none>           <none>
job-multi-demo-rbw7d   0/1     Completed   0          43s    10.244.1.70   node01.k8s.org   <none>           <none>
[root@master01 ~]#

  Note: when no parallelism is specified it defaults to 1, so the pods run their tasks one after another, serially.

  Defining parallelism

[root@master01 ~]# cat job-multi.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-multi-demo2
spec:
  completions: 6
  parallelism: 2
  template:
    metadata:
      labels:
        app: myjob
    spec:
      containers:
      - name: myjob
        image: alpine
        command: ["/bin/sh",  "-c", "sleep 10"]
      restartPolicy: Never
[root@master01 ~]# 

  Note: parallelism is set with the parallelism field under spec; it is the number of pods that run at the same time. The configuration above runs 2 pods at a time, i.e. 2 pods work in parallel.

  Apply the manifest

[root@master01 ~]# kubectl apply -f job-multi.yaml
job.batch/job-multi-demo2 created
[root@master01 ~]# kubectl get jobs -o wide
NAME              COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
job-demo          1/1           18s        18m     myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
job-multi-demo    6/6           116s       8m49s   myjob        alpine   controller-uid=80f44cbd-f7e5-4eb4-a286-39bfcfc9fe39
job-multi-demo2   0/6           8s         8s      myjob        alpine   controller-uid=d40f47ea-e58d-4424-97bd-7fda6bdf4e43
[root@master01 ~]# kubectl get pod -o wide
NAME                    READY   STATUS      RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8d7g9           1/1     Running     0          109m    10.244.3.69   node03.k8s.org   <none>           <none>
ds-demo-dkxfs           1/1     Running     0          109m    10.244.1.69   node01.k8s.org   <none>           <none>
ds-demo-dtsgc           1/1     Running     0          110m    10.244.4.13   node04.k8s.org   <none>           <none>
ds-demo-q6b5f           1/1     Running     0          109m    10.244.2.80   node02.k8s.org   <none>           <none>
job-demo-4h9gb          0/1     Completed   0          18m     10.244.3.70   node03.k8s.org   <none>           <none>
job-multi-demo-f7rz4    0/1     Completed   0          8m44s   10.244.3.71   node03.k8s.org   <none>           <none>
job-multi-demo-hhcrm    0/1     Completed   0          7m23s   10.244.3.72   node03.k8s.org   <none>           <none>
job-multi-demo-kjmld    0/1     Completed   0          8m20s   10.244.2.81   node02.k8s.org   <none>           <none>
job-multi-demo-lfzrj    0/1     Completed   0          8m1s    10.244.2.82   node02.k8s.org   <none>           <none>
job-multi-demo-rbw7d    0/1     Completed   0          9m6s    10.244.1.70   node01.k8s.org   <none>           <none>
job-multi-demo-vdkrm    0/1     Completed   0          7m41s   10.244.2.83   node02.k8s.org   <none>           <none>
job-multi-demo2-66tdd   0/1     Completed   0          25s     10.244.2.84   node02.k8s.org   <none>           <none>
job-multi-demo2-fsl9r   0/1     Completed   0          25s     10.244.3.73   node03.k8s.org   <none>           <none>
job-multi-demo2-js7qs   1/1     Running     0          9s      10.244.2.85   node02.k8s.org   <none>           <none>
job-multi-demo2-nqmps   1/1     Running     0          12s     10.244.1.71   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pod -o wide
NAME                    READY   STATUS      RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
ds-demo-8d7g9           1/1     Running     0          110m    10.244.3.69   node03.k8s.org   <none>           <none>
ds-demo-dkxfs           1/1     Running     0          109m    10.244.1.69   node01.k8s.org   <none>           <none>
ds-demo-dtsgc           1/1     Running     0          110m    10.244.4.13   node04.k8s.org   <none>           <none>
ds-demo-q6b5f           1/1     Running     0          109m    10.244.2.80   node02.k8s.org   <none>           <none>
job-demo-4h9gb          0/1     Completed   0          19m     10.244.3.70   node03.k8s.org   <none>           <none>
job-multi-demo-f7rz4    0/1     Completed   0          8m57s   10.244.3.71   node03.k8s.org   <none>           <none>
job-multi-demo-hhcrm    0/1     Completed   0          7m36s   10.244.3.72   node03.k8s.org   <none>           <none>
job-multi-demo-kjmld    0/1     Completed   0          8m33s   10.244.2.81   node02.k8s.org   <none>           <none>
job-multi-demo-lfzrj    0/1     Completed   0          8m14s   10.244.2.82   node02.k8s.org   <none>           <none>
job-multi-demo-rbw7d    0/1     Completed   0          9m19s   10.244.1.70   node01.k8s.org   <none>           <none>
job-multi-demo-vdkrm    0/1     Completed   0          7m54s   10.244.2.83   node02.k8s.org   <none>           <none>
job-multi-demo2-5f5tn   1/1     Running     0          9s      10.244.1.72   node01.k8s.org   <none>           <none>
job-multi-demo2-66tdd   0/1     Completed   0          38s     10.244.2.84   node02.k8s.org   <none>           <none>
job-multi-demo2-fsl9r   0/1     Completed   0          38s     10.244.3.73   node03.k8s.org   <none>           <none>
job-multi-demo2-js7qs   0/1     Completed   0          22s     10.244.2.85   node02.k8s.org   <none>           <none>
job-multi-demo2-md84p   1/1     Running     0          9s      10.244.3.74   node03.k8s.org   <none>           <none>
job-multi-demo2-nqmps   0/1     Completed   0          25s     10.244.1.71   node01.k8s.org   <none>           <none>
[root@master01 ~]# 

  Note: now two pods run at a time.

  3. The CronJob controller

  This controller creates pods for periodic, scheduled tasks.

  Example: defining a CronJob

[root@master01 ~]# cat cronjob-demo.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-demo
  labels:
    app: mycronjob
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    metadata:
      labels:
        app: mycronjob-jobs
    spec:
      parallelism: 2
      template:
        spec:
          containers:
          - name: myjob
            image: alpine
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster; sleep 10
          restartPolicy: OnFailure
[root@master01 ~]# 

  Note: the essential part of a CronJob is the Job template. A CronJob actually manages its pods through Job controllers, much as a Deployment controls pods through ReplicaSets. The schedule field sets the periodic schedule, using the same five-field syntax as crontab on Linux; the Job template is defined exactly like a standalone Job. The configuration above runs the Job defined in the template every 2 minutes, with 2 pods in parallel on each run.
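  Beyond schedule, the CronJob spec has a few fields worth knowing once runs can overlap or pile up. A hedged sketch of the common ones (field names are from the batch/v1beta1 API used above):

# Sketch: CronJob spec fragment controlling overlap and job history.
spec:
  schedule: "*/2 * * * *"          # minute hour day-of-month month day-of-week
  concurrencyPolicy: Forbid        # skip a run if the previous one is still active
  startingDeadlineSeconds: 60      # count a run as missed if not started within 60s
  successfulJobsHistoryLimit: 3    # keep at most 3 completed Jobs around
  failedJobsHistoryLimit: 1        # keep at most 1 failed Job around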

  Apply the manifest

[root@master01 ~]# kubectl apply -f cronjob-demo.yaml
cronjob.batch/cronjob-demo created
[root@master01 ~]# kubectl get cronjob -o wide
NAME           SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE   CONTAINERS   IMAGES   SELECTOR
cronjob-demo   */2 * * * *   False     0        <none>          12s   myjob        alpine   <none>
[root@master01 ~]# kubectl get pod 
NAME                            READY   STATUS      RESTARTS   AGE
cronjob-demo-1608307560-5hwmb   1/1     Running     0          9s
cronjob-demo-1608307560-rgkkr   1/1     Running     0          9s
ds-demo-8d7g9                   1/1     Running     0          125m
ds-demo-dkxfs                   1/1     Running     0          125m
ds-demo-dtsgc                   1/1     Running     0          125m
ds-demo-q6b5f                   1/1     Running     0          125m
job-demo-4h9gb                  0/1     Completed   0          34m
job-multi-demo-f7rz4            0/1     Completed   0          24m
job-multi-demo-hhcrm            0/1     Completed   0          23m
job-multi-demo-kjmld            0/1     Completed   0          24m
job-multi-demo-lfzrj            0/1     Completed   0          23m
job-multi-demo-rbw7d            0/1     Completed   0          25m
job-multi-demo-vdkrm            0/1     Completed   0          23m
job-multi-demo2-5f5tn           0/1     Completed   0          15m
job-multi-demo2-66tdd           0/1     Completed   0          16m
job-multi-demo2-fsl9r           0/1     Completed   0          16m
job-multi-demo2-js7qs           0/1     Completed   0          16m
job-multi-demo2-md84p           0/1     Completed   0          15m
job-multi-demo2-nqmps           0/1     Completed   0          16m
[root@master01 ~]#

  Note: two pods are indeed running.

  Were Job controllers created as well?

[root@master01 ~]# kubectl get job -o wide
NAME                      COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
cronjob-demo-1608307560   2/1 of 2      15s        3m18s   myjob        alpine   controller-uid=4a84b474-b890-4dd2-80d4-a6115130785a
cronjob-demo-1608307680   2/1 of 2      17s        77s     myjob        alpine   controller-uid=affecad9-03e6-430c-8c58-c845773c8ff7
job-demo                  1/1           18s        37m     myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
job-multi-demo            6/6           116s       28m     myjob        alpine   controller-uid=80f44cbd-f7e5-4eb4-a286-39bfcfc9fe39
job-multi-demo2           6/6           46s        19m     myjob        alpine   controller-uid=d40f47ea-e58d-4424-97bd-7fda6bdf4e43
[root@master01 ~]# 

  Note: there are two Job controllers, both named after the CronJob. From the output above it is easy to see that every time the CronJob fires, it creates a new Job, which in turn creates the new pods.
