The Pods created earlier with a Deployment are stateless. If such a Pod mounts a volume and then dies, the controller starts a replacement Pod to keep the application available, but because the Pod is stateless, the replacement loses its association with the previous volume and has no way to find the old Pod's data. Users have no awareness that the underlying Pod died, yet once it does, the previously mounted storage volume can no longer be used. To solve this problem, StatefulSet was introduced to preserve a Pod's state information.
StatefulSet is one implementation of a Pod resource controller. It is used to deploy and scale the Pods of stateful applications, guaranteeing their startup order and the uniqueness of each Pod resource. Its use cases include:
- Stable, persistent storage: after a Pod is rescheduled it can still access the same persistent data, implemented with PVCs.
- Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP).
- Ordered deployment and ordered scaling: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; before the next Pod runs, all preceding Pods must be Running and Ready), implemented with init containers.
- Ordered scale-down and ordered deletion (from N-1 down to 0).
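The ordered create and delete behavior above can be sketched as a small model (illustrative Python, not Kubernetes code): scale-up walks the Pod ordinals forward, scale-down walks them backward.

```python
def rollout_order(replicas):
    """Order in which a StatefulSet starts Pods: ordinal 0 up to N-1.
    Each Pod must be Running and Ready before the next one is created."""
    return list(range(replicas))

def teardown_order(replicas):
    """Order in which Pods are removed on scale-down: N-1 down to 0."""
    return list(range(replicas - 1, -1, -1))

print(rollout_order(3))   # [0, 1, 2]
print(teardown_order(3))  # [2, 1, 0]
```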
A StatefulSet is made up of the following parts:
- A Headless Service, used to define the network identity (DNS domain).
- volumeClaimTemplates, used to create PersistentVolumes.
- The StatefulSet definition itself, describing the concrete application.
The DNS name of each Pod in a StatefulSet has the format statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local, where:
- serviceName: the name of the Headless Service
- 0..N-1: the Pod's ordinal, from 0 to N-1
- statefulSetName: the name of the StatefulSet
- namespace: the namespace the service runs in; the Headless Service and the StatefulSet must be in the same namespace
- .cluster.local: the Cluster Domain
To achieve stable identifiers, a headless service is needed whose DNS resolution reaches the Pods directly, and each Pod must be given a unique name.
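The naming scheme above is mechanical, so it can be captured in a tiny helper (an illustrative sketch; the names in the example match the demo later in this article):

```python
def pod_fqdn(statefulset_name, ordinal, service_name,
             namespace="default", cluster_domain="cluster.local"):
    """Build the stable DNS name a StatefulSet Pod gets via its Headless Service."""
    return (f"{statefulset_name}-{ordinal}.{service_name}"
            f".{namespace}.svc.{cluster_domain}")

print(pod_fqdn("nginx-statefulset", 0, "nginx-svc"))
# nginx-statefulset-0.nginx-svc.default.svc.cluster.local
```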
1. Volume
2. Persistent Volume
3. Persistent Volume Claim
4. Service
5. StatefulSet
Volumes come in many types, such as nfs, gluster, and so on; nfs is used below.
StatefulSet field reference:
[root@k8s-master ~]# kubectl explain statefulset
KIND:     StatefulSet
VERSION:  apps/v1

DESCRIPTION:
     StatefulSet represents a set of pods with consistent identities.
     Identities are defined as:
      - Network: A single stable DNS and hostname.
      - Storage: As many VolumeClaims as requested.
     The StatefulSet guarantees that a given network identity will always
     map to the same storage identity.

FIELDS:
   apiVersion   <string>
   kind         <string>
   metadata     <Object>
   spec         <Object>
   status       <Object>

[root@k8s-master ~]# kubectl explain statefulset.spec
   podManagementPolicy   <string>              # Pod management policy
   replicas              <integer>             # number of Pod replicas
   revisionHistoryLimit  <integer>             # how many old revisions to keep
   selector              <Object> -required-   # label selector for the managed Pods; required
   serviceName           <string> -required-   # name of the governing Service; required
   template              <Object> -required-   # template defining the Pod resource; required
   updateStrategy        <Object>              # update strategy
   volumeClaimTemplates  <[]Object>            # storage volume claim templates, a list of objects
Based on the description above, the following example defines a StatefulSet resource. Before defining it, the PV resource objects must be prepared. NFS is again used as the backend storage.
1) Prepare NFS (software installation omitted; see reference)
(1) Create the directories backing the storage volumes
[root@storage ~]# mkdir /data/volumes/v{1..5} -p
(2) Edit the NFS exports file
[root@storage ~]# vim /etc/exports
/data/volumes/v1 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v4 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v5 192.168.1.0/24(rw,no_root_squash)
(3) Re-export to apply the configuration
[root@storage ~]# exportfs -arv
exporting 192.168.1.0/24:/data/volumes/v5
exporting 192.168.1.0/24:/data/volumes/v4
exporting 192.168.1.0/24:/data/volumes/v3
exporting 192.168.1.0/24:/data/volumes/v2
exporting 192.168.1.0/24:/data/volumes/v1
(4) Verify the export list
[root@storage ~]# showmount -e
Export list for storage:
/data/volumes/v5 192.168.1.0/24
/data/volumes/v4 192.168.1.0/24
/data/volumes/v3 192.168.1.0/24
/data/volumes/v2 192.168.1.0/24
/data/volumes/v1 192.168.1.0/24
2) Prepare the PVs
[root@k8s-master ~]# mkdir statefulset && cd statefulset
(1) Write the manifest that creates the PVs
[root@k8s-master statefulset]# vim pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: 192.168.1.34
    readOnly: false
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
(2) Create the PVs
[root@k8s-master statefulset]# kubectl apply -f pv-nfs.yaml
persistentvolume/pv-nfs-001 created
persistentvolume/pv-nfs-002 created
persistentvolume/pv-nfs-003 created
persistentvolume/pv-nfs-004 created
persistentvolume/pv-nfs-005 created
(3) View the PVs
[root@k8s-master statefulset]# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                   3s
pv-nfs-002   5Gi        RWO            Retain           Available                                   3s
pv-nfs-003   5Gi        RWO,RWX        Retain           Available                                   3s
pv-nfs-004   5Gi        RWO,RWX        Retain           Available                                   3s
pv-nfs-005   5Gi        RWO,RWX        Retain           Available                                   3s
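The binding behavior seen in the following steps, where the 2Gi PV stays Available while each Pod requests 5Gi, can be modeled roughly in Python. This is a deliberately simplified sketch: real Kubernetes binding also weighs storage classes, label selectors, and best-fit sizing.

```python
def can_bind(pv, request_gi, requested_modes):
    """Very simplified PVC->PV match: the PV must have enough capacity
    and offer every requested access mode."""
    return (pv["capacity_gi"] >= request_gi
            and set(requested_modes) <= set(pv["modes"]))

pvs = [
    {"name": "pv-nfs-001", "capacity_gi": 2, "modes": ["RWO", "RWX"]},
    {"name": "pv-nfs-002", "capacity_gi": 5, "modes": ["RWO"]},
    {"name": "pv-nfs-003", "capacity_gi": 5, "modes": ["RWO", "RWX"]},
]
eligible = [pv["name"] for pv in pvs if can_bind(pv, 5, ["RWO"])]
print(eligible)  # pv-nfs-001 is too small for a 5Gi claim
```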
[root@k8s-master statefulset]# vim statefulset-demo.yaml
# Define a Headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  ports:
  - name: http
    port: 80
  clusterIP: None
  selector:
    app: nginx-pod
---
# Define the StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  serviceName: nginx-svc   # the Service; must match the one defined above
  replicas: 3              # number of replicas
  selector:                # label selector; must match the Pod labels below
    matchLabels:
      app: nginx-pod
  template:                # template for the backend Pods
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.12
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: nginxdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:    # storage volume claim template
  - metadata:
      name: nginxdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi

Reading this manifest: a StatefulSet resource depends on a pre-existing Service resource, so a Headless Service named nginx-svc is defined first; it is used to create a DNS record for each Pod. Then a StatefulSet named nginx-statefulset is defined. Its Pod template creates 3 Pod replicas, and through volumeClaimTemplates each Pod claims a dedicated 5Gi volume from the PVs created earlier.
[root@k8s-master statefulset]# kubectl apply -f statefulset-demo.yaml
service/nginx-svc created
statefulset.apps/nginx-statefulset created
[root@k8s-master statefulset]# kubectl get svc    # view the headless service nginx-svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5d19h
nginx-svc    ClusterIP   None         <none>        80/TCP    29s
[root@k8s-master statefulset]# kubectl get pv    # view the PV bindings
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS   REASON   AGE
pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                                                    3m49s
pv-nfs-002   5Gi        RWO            Retain           Bound       default/nginxdata-nginx-statefulset-0                            3m49s
pv-nfs-003   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-1                            3m49s
pv-nfs-004   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-2                            3m49s
pv-nfs-005   5Gi        RWO,RWX        Retain           Available                                                                    3m48s
[root@k8s-master statefulset]# kubectl get pvc   # view the PVC bindings
NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginxdata-nginx-statefulset-0   Bound    pv-nfs-002   5Gi        RWO                           21s
nginxdata-nginx-statefulset-1   Bound    pv-nfs-003   5Gi        RWO,RWX                       18s
nginxdata-nginx-statefulset-2   Bound    pv-nfs-004   5Gi        RWO,RWX                       15s
[root@k8s-master statefulset]# kubectl get statefulset   # view the StatefulSet
NAME                READY   AGE
nginx-statefulset   3/3     58s
[root@k8s-master statefulset]# kubectl get pods   # view Pod information
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          78s
nginx-statefulset-1   1/1     Running   0          75s
nginx-statefulset-2   1/1     Running   0          72s
[root@k8s-master ~]# kubectl get pods -w   # watch the creation: Pods are created in order, from 0 to N-1
nginx-statefulset-0   0/1     Pending             0          0s
nginx-statefulset-0   0/1     Pending             0          0s
nginx-statefulset-0   0/1     Pending             0          1s
nginx-statefulset-0   0/1     ContainerCreating   0          1s
nginx-statefulset-0   1/1     Running             0          3s
nginx-statefulset-1   0/1     Pending             0          0s
nginx-statefulset-1   0/1     Pending             0          0s
nginx-statefulset-1   0/1     Pending             0          1s
nginx-statefulset-1   0/1     ContainerCreating   0          1s
nginx-statefulset-1   1/1     Running             0          3s
nginx-statefulset-2   0/1     Pending             0          0s
nginx-statefulset-2   0/1     Pending             0          0s
nginx-statefulset-2   0/1     Pending             0          2s
nginx-statefulset-2   0/1     ContainerCreating   0          2s
nginx-statefulset-2   1/1     Running             0          4s
[root@k8s-master statefulset]# kubectl delete -f statefulset-demo.yaml
service "nginx-svc" deleted
statefulset.apps "nginx-statefulset" deleted
[root@k8s-master ~]# kubectl get pods -w   # watch the deletion; deleting the StatefulSet object itself removes its Pods together (reverse-order termination applies to scale-down)
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running       0     18m
nginx-statefulset-1   1/1     Running       0     18m
nginx-statefulset-2   1/1     Running       0     18m
nginx-statefulset-2   1/1     Terminating   0     18m
nginx-statefulset-0   1/1     Terminating   0     18m
nginx-statefulset-1   1/1     Terminating   0     18m
nginx-statefulset-2   0/1     Terminating   0     18m
nginx-statefulset-0   0/1     Terminating   0     18m
nginx-statefulset-1   0/1     Terminating   0     18m
nginx-statefulset-2   0/1     Terminating   0     18m
nginx-statefulset-2   0/1     Terminating   0     18m
nginx-statefulset-2   0/1     Terminating   0     18m
nginx-statefulset-1   0/1     Terminating   0     18m
nginx-statefulset-1   0/1     Terminating   0     18m
nginx-statefulset-0   0/1     Terminating   0     18m
nginx-statefulset-0   0/1     Terminating   0     18m

The PVCs still exist at this point; when the Pods are recreated, they re-bind to their original PVCs.
[root@k8s-master statefulset]# kubectl apply -f statefulset-demo.yaml
service/nginx-svc created
statefulset.apps/nginx-statefulset created
[root@k8s-master statefulset]# kubectl get pvc   # view the PVC bindings
NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginxdata-nginx-statefulset-0   Bound    pv-nfs-002   5Gi        RWO                           30m
nginxdata-nginx-statefulset-1   Bound    pv-nfs-003   5Gi        RWO,RWX                       30m
nginxdata-nginx-statefulset-2   Bound    pv-nfs-004   5Gi        RWO,RWX                       30m
[root@k8s-master statefulset]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-statefulset-0   1/1     Running   0          12m   10.244.2.96   k8s-node2   <none>           <none>
nginx-statefulset-1   1/1     Running   0          12m   10.244.1.96   k8s-node1   <none>           <none>
nginx-statefulset-2   1/1     Running   0          12m   10.244.2.97   k8s-node2   <none>           <none>
[root@k8s-master statefulset]# dig -t A nginx-statefulset-0.nginx-svc.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx-statefulset-0.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.2.96
[root@k8s-master statefulset]# dig -t A nginx-statefulset-1.nginx-svc.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx-statefulset-1.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.1.96
[root@k8s-master statefulset]# dig -t A nginx-statefulset-2.nginx-svc.default.svc.cluster.local @10.96.0.10
......
;; ANSWER SECTION:
nginx-statefulset-2.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.2.97

You can also resolve from inside a container, obtaining the IP from the Pod's name:
# pod_name.service_name.ns_name.svc.cluster.local
e.g.: nginx-statefulset-0.nginx-svc.default.svc.cluster.local
Scaling a StatefulSet resource works like scaling a Deployment: change the resource's replica count to change the number of target Pods. For a StatefulSet, both kubectl scale and kubectl patch can do this; you can also edit the replica count directly with kubectl edit, or modify the manifest file and re-declare it with kubectl apply.
[root@k8s-master statefulset]# kubectl scale statefulset/nginx-statefulset --replicas=4   # scale up to 4 replicas
statefulset.apps/nginx-statefulset scaled
[root@k8s-master statefulset]# kubectl get pods   # view the Pods
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running   0          16m
nginx-statefulset-1   1/1     Running   0          16m
nginx-statefulset-2   1/1     Running   0          16m
nginx-statefulset-3   1/1     Running   0          3s
[root@k8s-master statefulset]# kubectl get pv   # view the PV bindings
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS   REASON   AGE
pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                                                    21m
pv-nfs-002   5Gi        RWO            Retain           Bound       default/nginxdata-nginx-statefulset-0                            21m
pv-nfs-003   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-1                            21m
pv-nfs-004   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-2                            21m
pv-nfs-005   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-3                            21m
[root@k8s-master statefulset]# kubectl patch sts/nginx-statefulset -p '{"spec":{"replicas":2}}'   # scale down by patching
statefulset.apps/nginx-statefulset patched
[root@k8s-master ~]# kubectl get pods -w   # watch the scale-down
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running       0     17m
nginx-statefulset-1   1/1     Running       0     17m
nginx-statefulset-2   1/1     Running       0     17m
nginx-statefulset-3   1/1     Running       0     1m
nginx-statefulset-3   1/1     Terminating   0     20s
nginx-statefulset-3   0/1     Terminating   0     20s
nginx-statefulset-3   0/1     Terminating   0     22s
nginx-statefulset-3   0/1     Terminating   0     22s
nginx-statefulset-2   1/1     Terminating   0     24s
nginx-statefulset-2   0/1     Terminating   0     24s
nginx-statefulset-2   0/1     Terminating   0     36s
nginx-statefulset-2   0/1     Terminating   0     36s
The default update strategy of a StatefulSet is the rolling update; an update can also be paused partway through.
Rolling update example:
[root@k8s-master statefulset]# kubectl patch sts/nginx-statefulset -p '{"spec":{"replicas":4}}'   # first scale back up to 4 replicas to make the test easier to follow
[root@k8s-master ~]# kubectl set image statefulset nginx-statefulset nginx=nginx:1.14   # update the image version
statefulset.apps/nginx-statefulset image updated
[root@k8s-master ~]# kubectl get pods -w   # watch the update
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running             0     18m
nginx-statefulset-1   1/1     Running             0     18m
nginx-statefulset-2   1/1     Running             0     13m
nginx-statefulset-3   1/1     Running             0     13m
nginx-statefulset-3   1/1     Terminating         0     13m
nginx-statefulset-3   0/1     Terminating         0     13m
nginx-statefulset-3   0/1     Terminating         0     13m
nginx-statefulset-3   0/1     Terminating         0     13m
nginx-statefulset-3   0/1     Pending             0     0s
nginx-statefulset-3   0/1     Pending             0     0s
nginx-statefulset-3   0/1     ContainerCreating   0     0s
nginx-statefulset-3   1/1     Running             0     2s
nginx-statefulset-2   1/1     Terminating         0     13m
nginx-statefulset-2   0/1     Terminating         0     13m
nginx-statefulset-2   0/1     Terminating         0     14m
nginx-statefulset-2   0/1     Terminating         0     14m
nginx-statefulset-2   0/1     Pending             0     0s
nginx-statefulset-2   0/1     Pending             0     0s
nginx-statefulset-2   0/1     ContainerCreating   0     0s
nginx-statefulset-2   1/1     Running             0     1s
nginx-statefulset-1   1/1     Terminating         0     18m
nginx-statefulset-1   0/1     Terminating         0     18m
nginx-statefulset-1   0/1     Terminating         0     18m
nginx-statefulset-1   0/1     Terminating         0     18m
nginx-statefulset-1   0/1     Pending             0     0s
nginx-statefulset-1   0/1     Pending             0     0s
nginx-statefulset-1   0/1     ContainerCreating   0     0s
nginx-statefulset-1   1/1     Running             0     2s
nginx-statefulset-0   1/1     Terminating         0     18m
nginx-statefulset-0   0/1     Terminating         0     18m
nginx-statefulset-0   0/1     Terminating         0     18m
nginx-statefulset-0   0/1     Terminating         0     18m
nginx-statefulset-0   0/1     Pending             0     0s
nginx-statefulset-0   0/1     Pending             0     0s
nginx-statefulset-0   0/1     ContainerCreating   0     0s
nginx-statefulset-0   1/1     Running             0     2s
[root@k8s-master statefulset]# kubectl get pods -l app=nginx-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image   # view the image versions after the update
NAME                  IMAGE
nginx-statefulset-0   nginx:1.14
nginx-statefulset-1   nginx:1.14
nginx-statefulset-2   nginx:1.14
nginx-statefulset-3   nginx:1.14
The example above shows that the default rolling update proceeds in reverse ordinal order: one Pod is updated at a time, and the next begins only after the previous one has finished.
Paused (partitioned) update example
Sometimes you want to start an update without rolling it out to every Pod at once: update a few first, watch them for stability, and only then update the rest. To do this, change the .spec.updateStrategy.rollingUpdate.partition field. (Its default value is 0, which is why the previous example updated everything.) If the field is set to 2, only Pods whose ordinal is greater than or equal to 2 are updated, similar to a canary release. For example:
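The partition rule is simple enough to express as a model (an illustrative Python sketch, not Kubernetes code): given a replica count and a partition value, only the ordinals at or above the partition are rolled, highest first.

```python
def pods_to_update(replicas, partition):
    """Ordinals a partitioned RollingUpdate touches, in update order:
    only Pods with ordinal >= partition, highest ordinal first."""
    return [i for i in range(replicas - 1, -1, -1) if i >= partition]

print(pods_to_update(4, 2))  # a canary of the two highest ordinals
print(pods_to_update(4, 0))  # the default partition of 0 updates every Pod
```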
[root@k8s-master ~]# kubectl patch sts/nginx-statefulset -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'   # set partition to 2
statefulset.apps/nginx-statefulset patched
[root@k8s-master ~]# kubectl set image statefulset nginx-statefulset nginx=nginx:1.12   # update the image version
statefulset.apps/nginx-statefulset image updated
[root@k8s-master ~]# kubectl get pods -w   # watch the update
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running             0     11m
nginx-statefulset-1   1/1     Running             0     11m
nginx-statefulset-2   1/1     Running             0     11m
nginx-statefulset-3   1/1     Running             0     11m
nginx-statefulset-3   1/1     Terminating         0     12m
nginx-statefulset-3   0/1     Terminating         0     12m
nginx-statefulset-3   0/1     Terminating         0     12m
nginx-statefulset-3   0/1     Terminating         0     12m
nginx-statefulset-3   0/1     Pending             0     0s
nginx-statefulset-3   0/1     Pending             0     0s
nginx-statefulset-3   0/1     ContainerCreating   0     0s
nginx-statefulset-3   1/1     Running             0     2s
nginx-statefulset-2   1/1     Terminating         0     11m
nginx-statefulset-2   0/1     Terminating         0     11m
nginx-statefulset-2   0/1     Terminating         0     12m
nginx-statefulset-2   0/1     Terminating         0     12m
nginx-statefulset-2   0/1     Pending             0     0s
nginx-statefulset-2   0/1     Pending             0     0s
nginx-statefulset-2   0/1     ContainerCreating   0     0s
nginx-statefulset-2   1/1     Running             0     2s
[root@k8s-master statefulset]# kubectl get pods -l app=nginx-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image   # view the image versions: only Pods with ordinal >= 2 were updated
NAME                  IMAGE
nginx-statefulset-0   nginx:1.14
nginx-statefulset-1   nginx:1.14
nginx-statefulset-2   nginx:1.12
nginx-statefulset-3   nginx:1.12

To roll out the rest, simply set the update strategy's partition back to 0, as follows:
[root@k8s-master ~]# kubectl patch sts/nginx-statefulset -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'   # set partition to 0
statefulset.apps/nginx-statefulset patched
[root@k8s-master ~]# kubectl get pods -w   # watch the update
NAME                  READY   STATUS    RESTARTS   AGE
nginx-statefulset-0   1/1     Running             0     18m
nginx-statefulset-1   1/1     Running             0     18m
nginx-statefulset-2   1/1     Running             0     6m44s
nginx-statefulset-3   1/1     Running             0     6m59s
nginx-statefulset-1   1/1     Terminating         0     19m
nginx-statefulset-1   0/1     Terminating         0     19m
nginx-statefulset-1   0/1     Terminating         0     19m
nginx-statefulset-1   0/1     Terminating         0     19m
nginx-statefulset-1   0/1     Pending             0     0s
nginx-statefulset-1   0/1     Pending             0     0s
nginx-statefulset-1   0/1     ContainerCreating   0     0s
nginx-statefulset-1   1/1     Running             0     2s
nginx-statefulset-0   1/1     Terminating         0     19m
nginx-statefulset-0   0/1     Terminating         0     19m
nginx-statefulset-0   0/1     Terminating         0     19m
nginx-statefulset-0   0/1     Terminating         0     19m
nginx-statefulset-0   0/1     Pending             0     0s
nginx-statefulset-0   0/1     Pending             0     0s
nginx-statefulset-0   0/1     ContainerCreating   0     0s
nginx-statefulset-0   1/1     Running             0     2s