(11) Kubernetes StatefulSet Controller

Introduction to StatefulSet

A Pod created earlier with a Deployment is stateless: even with a volume mounted, if the Pod dies the controller starts a new Pod to maintain availability, but because the Pod is stateless, the relationship with the previous volume is broken and the newly created Pod cannot find its predecessor. Users have no awareness of the underlying Pod dying, yet once it dies the previously mounted storage volume can no longer be used. To solve this problem, StatefulSet was introduced to preserve a Pod's state information.

StatefulSet is an implementation of the Pod controller used to deploy and scale Pods for stateful applications, guaranteeing their startup order and the uniqueness of each Pod. Its use cases include:

  • Stable persistent storage: a Pod can still reach the same persistent data after being rescheduled, implemented with PVCs.

  • Stable network identity: a Pod's PodName and HostName stay the same after it is rescheduled, implemented with a Headless Service (a Service without a Cluster IP).

  • Ordered deployment and ordered scale-up: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; all preceding Pods must be Running and Ready before the next Pod starts), implemented with init containers.

  • Ordered scale-down and ordered deletion (from N-1 down to 0).

A StatefulSet consists of the following parts:

  • A Headless Service that defines the network identity (DNS domain).

  • volumeClaimTemplates used to create PersistentVolumes.

  • The StatefulSet itself, which defines the application.

The DNS name of each Pod in a StatefulSet takes the form statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local, where (see the example after this list):

  • serviceName: the name of the Headless Service

  • 0..N-1: the ordinal of the Pod, running from 0 to N-1

  • statefulSetName: the name of the StatefulSet

  • namespace: the namespace the Service lives in; the Headless Service and the StatefulSet must be in the same namespace

  • .cluster.local: the cluster domain
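As a quick illustration (the names here are hypothetical), a StatefulSet named web with 3 replicas, governed by a Headless Service named nginx in the default namespace, would give its Pods the following DNS names:

web-0.nginx.default.svc.cluster.local
web-1.nginx.default.svc.cluster.local
web-2.nginx.default.svc.cluster.local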

Why a headless Service?

Deployment中,每個pod是沒有名稱,是隨機字符串,是無序的。而statefulSet中是要求有序的,每個Pod的名稱必須是固定的。當節點掛了,重建以後的標識符是不變的,每個節點的節點名稱是不會改變的。Pod名稱是做爲Pod識別的惟一標識符,必須保證其標識符的穩定而且惟一。

爲了實現標識符的穩定,這時候就須要一個headless service解析直達到Pod,還須要給Pod配置一個惟一的名稱。
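A minimal sketch of such a headless Service (using the same names as the full example later in this article); setting clusterIP to None is what makes it headless:

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc        #referenced by the StatefulSet's serviceName field
spec:
  clusterIP: None        #None means no Cluster IP, i.e. a headless Service
  selector:
    app: nginx-pod       #must match the labels of the StatefulSet's Pods
  ports:
  - name: http
    port: 80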

Why volumeClaimTemplates?

Most stateful replica sets use persistent storage. In a distributed system, for example, the data on each node is different, so every node needs its own dedicated storage. Volumes created from a Deployment's Pod template are shared, with multiple Pods using the same volume, but in a StatefulSet the Pods must not share one volume, so creating Pods purely from a Pod template does not fit. This is why volumeClaimTemplates are introduced: when a StatefulSet creates a Pod, a PVC is generated automatically for it, which requests and binds a PV, giving that Pod its own dedicated volume.

The relationship between Pod names, PVCs, and PVs follows a fixed pattern, illustrated below:
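Each Pod owns exactly one PVC, named volumeClaimTemplateName-podName, and each of those PVCs binds to its own PV. With the names used in the example later in this article, the mapping is:

nginx-statefulset-0  -->  nginxdata-nginx-statefulset-0  -->  a dedicated PV
nginx-statefulset-1  -->  nginxdata-nginx-statefulset-1  -->  a dedicated PV
nginx-statefulset-2  -->  nginxdata-nginx-statefulset-2  -->  a dedicated PV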

Defining a StatefulSet

Before creating a StatefulSet, the following objects need to be prepared, and the creation order matters:

1. Volume

2. Persistent Volume

3. Persistent Volume Claim

4. Service

5. StatefulSet

Volumes come in many types, such as nfs, glusterfs and so on; nfs is used below.

StatefulSet field descriptions:

[root@k8s-master ~]# kubectl explain statefulset
KIND:     StatefulSet
VERSION:  apps/v1

DESCRIPTION:
     StatefulSet represents a set of pods with consistent identities. Identities
     are defined as: - Network: A single stable DNS and hostname. - Storage: As
     many VolumeClaims as requested. The StatefulSet guarantees that a given
     network identity will always map to the same storage identity.
FIELDS:
   apiVersion    <string>
   kind    <string>
   metadata    <Object>
   spec    <Object>
   status    <Object>

[root@k8s-master ~]# kubectl explain statefulset.spec
podManagementPolicy    <string>    #Pod management policy
replicas    <integer>    #number of Pod replicas
revisionHistoryLimit    <integer>    #revision history limit
selector    <Object> -required-    #label selector used to pick the Pods to manage; required
serviceName    <string> -required-    #name of the governing Service; required
template    <Object> -required-    #template defining the Pod resources; required
updateStrategy    <Object>    #update strategy
volumeClaimTemplates    <[]Object>    #volume claim templates, a list of objects
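kubectl explain can also be pointed at nested paths when more detail on a particular field is needed, for example:

[root@k8s-master ~]# kubectl explain statefulset.spec.updateStrategy
[root@k8s-master ~]# kubectl explain statefulset.spec.volumeClaimTemplates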

Example: defining a StatefulSet with a manifest

Based on the description above, the example below defines a StatefulSet resource. Before defining it, the PV resources have to be prepared; NFS is again used as the backing storage.

1) Prepare NFS (software installation omitted; see the earlier reference)

(1) Create the directories backing the volumes
[root@storage ~]# mkdir /data/volumes/v{1..5} -p

(2) Edit the NFS exports file
[root@storage ~]# vim /etc/exports
/data/volumes/v1  192.168.1.0/24(rw,no_root_squash)
/data/volumes/v2  192.168.1.0/24(rw,no_root_squash)
/data/volumes/v3  192.168.1.0/24(rw,no_root_squash)
/data/volumes/v4  192.168.1.0/24(rw,no_root_squash)
/data/volumes/v5  192.168.1.0/24(rw,no_root_squash)

(3) Apply the export configuration
[root@storage ~]# exportfs -arv
exporting 192.168.1.0/24:/data/volumes/v5
exporting 192.168.1.0/24:/data/volumes/v4
exporting 192.168.1.0/24:/data/volumes/v3
exporting 192.168.1.0/24:/data/volumes/v2
exporting 192.168.1.0/24:/data/volumes/v1

(4) Verify the exports
[root@storage ~]# showmount -e
Export list for storage:
/data/volumes/v5 192.168.1.0/24
/data/volumes/v4 192.168.1.0/24
/data/volumes/v3 192.168.1.0/24
/data/volumes/v2 192.168.1.0/24
/data/volumes/v1 192.168.1.0/24
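Optionally, before creating the PVs, you can check from each Kubernetes node that the exports are reachable. This is only a sketch; it assumes the nfs-utils package is installed on the nodes and that the storage server's address is 192.168.1.34, the address used in the PV manifests below:

[root@k8s-node1 ~]# showmount -e 192.168.1.34                          #list the exports from a node
[root@k8s-node1 ~]# mount -t nfs 192.168.1.34:/data/volumes/v1 /mnt    #test mount
[root@k8s-node1 ~]# umount /mnt                                        #clean up the test mount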

2) Create the PVs. Five PVs are created here; their capacities are not all equal and their access modes differ. A new directory is created to hold all the StatefulSet manifests and related files.

[root@k8s-master ~]# mkdir statefulset && cd statefulset

(1) Write the PV manifest
[root@k8s-master statefulset]# vim pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: 192.168.1.34
    readOnly: false 
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: 192.168.1.34
    readOnly: false 
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: 192.168.1.34
    readOnly: false 
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: 192.168.1.34
    readOnly: false 
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: 192.168.1.34
    readOnly: false 
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain

(2) Create the PVs
[root@k8s-master statefulset]# kubectl apply -f pv-nfs.yaml  
persistentvolume/pv-nfs-001 created
persistentvolume/pv-nfs-002 created
persistentvolume/pv-nfs-003 created
persistentvolume/pv-nfs-004 created
persistentvolume/pv-nfs-005 created

(3) View the PVs
[root@k8s-master statefulset]# kubectl get pv 
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                   3s
pv-nfs-002   5Gi        RWO            Retain           Available                                   3s
pv-nfs-003   5Gi        RWO,RWX        Retain           Available                                   3s
pv-nfs-004   5Gi        RWO,RWX        Retain           Available                                   3s
pv-nfs-005   5Gi        RWO,RWX        Retain           Available                                   3s

3) Write the StatefulSet manifest. A Headless Service has to be defined first; here the headless Service and the StatefulSet are written in the same file.

[root@k8s-master statefulset]# vim statefulset-demo.yaml
#Define a Headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  ports:
  - name: http
    port: 80
  clusterIP: None
  selector:
    app: nginx-pod
---
#Define the StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  serviceName: nginx-svc    #the governing Service, matching the Service defined above
  replicas: 3    #number of replicas
  selector:    #label selector, matching the Pod labels below
    matchLabels:
      app: nginx-pod
  template:    #template defining the Pods
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.12
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: nginxdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:    #volume claim templates
  - metadata: 
      name: nginxdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi

A walk-through of the manifest above: because a StatefulSet depends on a pre-existing Service, a Headless Service named nginx-svc is defined first, which is used to create DNS records for each Pod. A StatefulSet named nginx-statefulset is then defined; it creates 3 Pod replicas from the Pod template, and through volumeClaimTemplates each Pod requests a dedicated 5Gi volume from the PVs created earlier.
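If you want to catch syntax mistakes before creating anything, the manifest can optionally be validated with a dry run first (depending on the kubectl version this is --dry-run=client or the older plain --dry-run flag):

[root@k8s-master statefulset]# kubectl apply -f statefulset-demo.yaml --dry-run=client    #validate only, create nothing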

4) Create the StatefulSet resources; open another terminal to watch the Pods in real time.

[root@k8s-master statefulset]# kubectl apply -f statefulset-demo.yaml 
service/nginx-svc created
statefulset.apps/nginx-statefulset created

[root@k8s-master statefulset]# kubectl get svc   #view the headless Service nginx-svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5d19h
nginx-svc    ClusterIP   None         <none>        80/TCP    29s

[root@k8s-master statefulset]# kubectl get pv     #view PV bindings
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS   REASON   AGE
pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                                                   3m49s
pv-nfs-002   5Gi        RWO            Retain           Bound       default/nginxdata-nginx-statefulset-0                           3m49s
pv-nfs-003   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-1                           3m49s
pv-nfs-004   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-2                           3m49s
pv-nfs-005   5Gi        RWO,RWX        Retain           Available                                                                   3m48s
[root@k8s-master statefulset]# kubectl get pvc     #view PVC bindings
NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginxdata-nginx-statefulset-0   Bound    pv-nfs-002   5Gi        RWO                           21s
nginxdata-nginx-statefulset-1   Bound    pv-nfs-003   5Gi        RWO,RWX                       18s
nginxdata-nginx-statefulset-2   Bound    pv-nfs-004   5Gi        RWO,RWX                       15s
[root@k8s-master statefulset]# kubectl get statefulset    #view the StatefulSet
NAME                READY   AGE
nginx-statefulset   3/3     58s

[root@k8s-master statefulset]# kubectl get pods    #view Pod information
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          78s
nginx-statefulset-1   1/1     Running   0          75s
nginx-statefulset-2   1/1     Running   0          72s


[root@k8s-master ~]# kubectl get pods -w    #watch the Pod creation; Pods are created in order from 0 to N-1
nginx-statefulset-0   0/1   Pending   0     0s
nginx-statefulset-0   0/1   Pending   0     0s
nginx-statefulset-0   0/1   Pending   0     1s
nginx-statefulset-0   0/1   ContainerCreating   0     1s
nginx-statefulset-0   1/1   Running             0     3s
nginx-statefulset-1   0/1   Pending             0     0s
nginx-statefulset-1   0/1   Pending             0     0s
nginx-statefulset-1   0/1   Pending             0     1s
nginx-statefulset-1   0/1   ContainerCreating   0     1s
nginx-statefulset-1   1/1   Running             0     3s
nginx-statefulset-2   0/1   Pending             0     0s
nginx-statefulset-2   0/1   Pending             0     0s
nginx-statefulset-2   0/1   Pending             0     2s
nginx-statefulset-2   0/1   ContainerCreating   0     2s
nginx-statefulset-2   1/1   Running             0     4s
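The strict 0 to N-1 startup order seen above comes from the default podManagementPolicy of OrderedReady. If ordered startup is not required, the policy can be switched to Parallel so that Pods are created and deleted without waiting on one another. A minimal sketch of the relevant spec field:

spec:
  podManagementPolicy: Parallel    #default is OrderedReady; Parallel creates and deletes Pods without ordering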

5) Deletion test; again watch the Pods from another terminal.

[root@k8s-master statefulset]# kubectl delete -f statefulset-demo.yaml 
service "nginx-svc" deleted
statefulset.apps "nginx-statefulset" deleted

[root@k8s-master ~]# kubectl get pods -w     #watch the deletion process as the Pods are terminated and removed
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          18m
nginx-statefulset-1   1/1     Running   0          18m
nginx-statefulset-2   1/1     Running   0          18m
nginx-statefulset-2   1/1     Terminating   0          18m
nginx-statefulset-0   1/1     Terminating   0          18m
nginx-statefulset-1   1/1     Terminating   0          18m
nginx-statefulset-2   0/1     Terminating   0          18m
nginx-statefulset-0   0/1     Terminating   0          18m
nginx-statefulset-1   0/1     Terminating   0          18m
nginx-statefulset-2   0/1     Terminating   0          18m
nginx-statefulset-2   0/1     Terminating   0          18m
nginx-statefulset-2   0/1     Terminating   0          18m
nginx-statefulset-1   0/1     Terminating   0          18m
nginx-statefulset-1   0/1     Terminating   0          18m
nginx-statefulset-0   0/1     Terminating   0          18m
nginx-statefulset-0   0/1     Terminating   0          18m


The PVCs still exist at this point; when the Pods are recreated they re-bind to the original PVCs.
[root@k8s-master statefulset]# kubectl apply -f statefulset-demo.yaml 
service/nginx-svc created
statefulset.apps/nginx-statefulset created

[root@k8s-master statefulset]# kubectl get pvc     #view PVC bindings
NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginxdata-nginx-statefulset-0   Bound    pv-nfs-002   5Gi        RWO                           30m
nginxdata-nginx-statefulset-1   Bound    pv-nfs-003   5Gi        RWO,RWX                       30m
nginxdata-nginx-statefulset-2   Bound    pv-nfs-004   5Gi        RWO,RWX                       30m

6) Name resolution: the name of every created Pod can itself be resolved, as shown below:

[root@k8s-master statefulset]# kubectl get pods -o wide 
NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-statefulset-0   1/1     Running   0          12m   10.244.2.96   k8s-node2   <none>           <none>
nginx-statefulset-1   1/1     Running   0          12m   10.244.1.96   k8s-node1   <none>           <none>
nginx-statefulset-2   1/1     Running   0          12m   10.244.2.97   k8s-node2   <none>           <none>


[root@k8s-master statefulset]# dig -t A nginx-statefulset-0.nginx-svc.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx-statefulset-0.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.2.96

[root@k8s-master statefulset]# dig -t A nginx-statefulset-1.nginx-svc.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx-statefulset-1.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.1.96

[root@k8s-master statefulset]# dig -t A nginx-statefulset-2.nginx-svc.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx-statefulset-2.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.2.97

Resolution can also be done from inside a container; resolving the Pod name returns its IP:
# pod_name.service_name.ns_name.svc.cluster.local
eg: nginx-statefulset-0.nginx-svc.default.svc.cluster.local
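If dig is not available on the host, resolution can also be tested from a temporary Pod inside the cluster. This is a sketch; it assumes the busybox:1.28 image can be pulled:

[root@k8s-master statefulset]# kubectl run -it --rm busybox --image=busybox:1.28 --restart=Never -- nslookup nginx-statefulset-0.nginx-svc.default.svc.cluster.local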

Scaling a StatefulSet

Scaling a StatefulSet is similar to scaling a Deployment: the number of target Pods is changed by modifying the resource's replica count. For a StatefulSet, both kubectl scale and kubectl patch can do this; kubectl edit can also be used to change the replica count directly, or the manifest file can be modified and re-declared with kubectl apply.

1) Scale nginx-statefulset up to 4 replicas with kubectl scale

[root@k8s-master statefulset]# kubectl scale statefulset/nginx-statefulset --replicas=4   #scale up to 4 replicas
statefulset.apps/nginx-statefulset scaled
[root@k8s-master statefulset]# kubectl get pods     #view the Pods
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          16m
nginx-statefulset-1   1/1     Running   0          16m
nginx-statefulset-2   1/1     Running   0          16m
nginx-statefulset-3   1/1     Running   0          3s

[root@k8s-master statefulset]# kubectl get pv   #view PV bindings
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS   REASON   AGE
pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                                                   21m
pv-nfs-002   5Gi        RWO            Retain           Bound       default/nginxdata-nginx-statefulset-0                           21m
pv-nfs-003   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-1                           21m
pv-nfs-004   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-2                           21m
pv-nfs-005   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-3                           21m

2) Scale nginx-statefulset down to 2 replicas with kubectl patch

[root@k8s-master statefulset]# kubectl patch sts/nginx-statefulset -p '{"spec":{"replicas":2}}'    #scale down by patching
statefulset.apps/nginx-statefulset patched

[root@k8s-master ~]# kubectl get pods -w    #watch the scale-down; Pods are removed in reverse order
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          17m
nginx-statefulset-1   1/1     Running   0          17m
nginx-statefulset-2   1/1     Running   0          17m
nginx-statefulset-3   1/1     Running   0          1m
nginx-statefulset-3   1/1     Terminating   0          20s
nginx-statefulset-3   0/1     Terminating   0          20s
nginx-statefulset-3   0/1     Terminating   0          22s
nginx-statefulset-3   0/1     Terminating   0          22s
nginx-statefulset-2   1/1     Terminating   0          24s
nginx-statefulset-2   0/1     Terminating   0          24s
nginx-statefulset-2   0/1     Terminating   0          36s
nginx-statefulset-2   0/1     Terminating   0          36s
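Note that scaling down does not delete the PVCs of the removed Pods; they stay Bound and will be reused if the StatefulSet is scaled back up, which can be confirmed with:

[root@k8s-master statefulset]# kubectl get pvc    #the PVCs of the removed Pods are still listed as Bound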

Update strategy

The default update strategy of a StatefulSet is a rolling update; the rollout can also be staged (partitioned), as sketched below.
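The strategy lives under .spec.updateStrategy; a minimal sketch of what it looks like in the manifest (RollingUpdate is the default type, and partition is used in the staged-update example further below):

spec:
  updateStrategy:
    type: RollingUpdate        #default update strategy type
    rollingUpdate:
      partition: 0             #0 = update all Pods; see the partitioned example below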

Rolling update example:

[root@k8s-master statefulset]# kubectl patch sts/nginx-statefulset -p '{"spec":{"replicas":4}}'    #first scale back up to 4 replicas to make the test easier to follow

[root@k8s-master ~]# kubectl set image statefulset nginx-statefulset nginx=nginx:1.14    #update the image version
statefulset.apps/nginx-statefulset image updated

[root@k8s-master ~]# kubectl get pods -w    #watch the update
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          18m
nginx-statefulset-1   1/1     Running   0          18m
nginx-statefulset-2   1/1     Running   0          13m
nginx-statefulset-3   1/1     Running   0          13m
nginx-statefulset-3   1/1     Terminating   0          13m
nginx-statefulset-3   0/1     Terminating   0          13m
nginx-statefulset-3   0/1     Terminating   0          13m
nginx-statefulset-3   0/1     Terminating   0          13m
nginx-statefulset-3   0/1     Pending       0          0s
nginx-statefulset-3   0/1     Pending       0          0s
nginx-statefulset-3   0/1     ContainerCreating   0          0s
nginx-statefulset-3   1/1     Running             0          2s
nginx-statefulset-2   1/1     Terminating         0          13m
nginx-statefulset-2   0/1     Terminating         0          13m
nginx-statefulset-2   0/1     Terminating         0          14m
nginx-statefulset-2   0/1     Terminating         0          14m
nginx-statefulset-2   0/1     Pending             0          0s
nginx-statefulset-2   0/1     Pending             0          0s
nginx-statefulset-2   0/1     ContainerCreating   0          0s
nginx-statefulset-2   1/1     Running             0          1s
nginx-statefulset-1   1/1     Terminating         0          18m
nginx-statefulset-1   0/1     Terminating         0          18m
nginx-statefulset-1   0/1     Terminating         0          18m
nginx-statefulset-1   0/1     Terminating         0          18m
nginx-statefulset-1   0/1     Pending             0          0s
nginx-statefulset-1   0/1     Pending             0          0s
nginx-statefulset-1   0/1     ContainerCreating   0          0s
nginx-statefulset-1   1/1     Running             0          2s
nginx-statefulset-0   1/1     Terminating         0          18m
nginx-statefulset-0   0/1     Terminating         0          18m
nginx-statefulset-0   0/1     Terminating         0          18m
nginx-statefulset-0   0/1     Terminating         0          18m
nginx-statefulset-0   0/1     Pending             0          0s
nginx-statefulset-0   0/1     Pending             0          0s
nginx-statefulset-0   0/1     ContainerCreating   0          0s
nginx-statefulset-0   1/1     Running             0          2s

[root@k8s-master statefulset]# kubectl get pods -l app=nginx-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image    #check the image versions after the update
NAME                  IMAGE
nginx-statefulset-0   nginx:1.14
nginx-statefulset-1   nginx:1.14
nginx-statefulset-2   nginx:1.14
nginx-statefulset-3   nginx:1.14   

As the example above shows, the default is a rolling update performed in reverse order, moving on to the next Pod only after the previous one has finished updating.
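The progress and history of an update can also be followed with the rollout subcommands, which work for StatefulSets as well:

[root@k8s-master statefulset]# kubectl rollout status sts/nginx-statefulset     #wait for the rolling update to finish
[root@k8s-master statefulset]# kubectl rollout history sts/nginx-statefulset    #list the recorded revisions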

Staged (partitioned) update example

Sometimes you do not want an update to roll out everywhere at once: update a few Pods first, check that they are stable, and then update the rest. This is done by changing the .spec.updateStrategy.rollingUpdate.partition field (its default is 0, which is why everything was updated in the example above). If the field is set to 2, only Pods with an ordinal greater than or equal to 2 are updated, similar to a canary release. Example:

[root@k8s-master ~]# kubectl patch sts/nginx-statefulset -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'     #set partition to 2
statefulset.apps/nginx-statefulset patched

[root@k8s-master ~]# kubectl set image statefulset nginx-statefulset nginx=nginx:1.12    #update the image version
statefulset.apps/nginx-statefulset image updated

[root@k8s-master ~]# kubectl get pods -w     #watch the update
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          11m
nginx-statefulset-1   1/1     Running   0          11m
nginx-statefulset-2   1/1     Running   0          11m
nginx-statefulset-3   1/1     Running   0          11m
nginx-statefulset-3   1/1     Terminating   0          12m
nginx-statefulset-3   0/1     Terminating   0          12m
nginx-statefulset-3   0/1     Terminating   0          12m
nginx-statefulset-3   0/1     Terminating   0          12m
nginx-statefulset-3   0/1     Pending       0          0s
nginx-statefulset-3   0/1     Pending       0          0s
nginx-statefulset-3   0/1     ContainerCreating   0          0s
nginx-statefulset-3   1/1     Running             0          2s
nginx-statefulset-2   1/1     Terminating         0          11m
nginx-statefulset-2   0/1     Terminating         0          11m
nginx-statefulset-2   0/1     Terminating         0          12m
nginx-statefulset-2   0/1     Terminating         0          12m
nginx-statefulset-2   0/1     Pending             0          0s
nginx-statefulset-2   0/1     Pending             0          0s
nginx-statefulset-2   0/1     ContainerCreating   0          0s
nginx-statefulset-2   1/1     Running             0          2s

[root@k8s-master statefulset]# kubectl get pods -l app=nginx-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image    #check the image versions after the update; only Pods with an ordinal greater than or equal to 2 were updated
NAME                  IMAGE
nginx-statefulset-0   nginx:1.14
nginx-statefulset-1   nginx:1.14
nginx-statefulset-2   nginx:1.12
nginx-statefulset-3   nginx:1.12



To update the remaining Pods as well, simply set partition back to 0:
[root@k8s-master ~]# kubectl patch sts/nginx-statefulset -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'    #set partition back to 0
statefulset.apps/nginx-statefulset patched
[root@k8s-master ~]# kubectl get pods -w    #watch the update
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          18m
nginx-statefulset-1   1/1     Running   0          18m
nginx-statefulset-2   1/1     Running   0          6m44s
nginx-statefulset-3   1/1     Running   0          6m59s
nginx-statefulset-1   1/1     Terminating   0          19m
nginx-statefulset-1   0/1     Terminating   0          19m
nginx-statefulset-1   0/1     Terminating   0          19m
nginx-statefulset-1   0/1     Terminating   0          19m
nginx-statefulset-1   0/1     Pending       0          0s
nginx-statefulset-1   0/1     Pending       0          0s
nginx-statefulset-1   0/1     ContainerCreating   0          0s
nginx-statefulset-1   1/1     Running             0          2s
nginx-statefulset-0   1/1     Terminating         0          19m
nginx-statefulset-0   0/1     Terminating         0          19m
nginx-statefulset-0   0/1     Terminating         0          19m
nginx-statefulset-0   0/1     Terminating         0          19m
nginx-statefulset-0   0/1     Pending             0          0s
nginx-statefulset-0   0/1     Pending             0          0s
nginx-statefulset-0   0/1     ContainerCreating   0          0s
nginx-statefulset-0   1/1     Running             0          2s