From the earlier chapters we know that Pods created by a Deployment are stateless. If a Pod with a mounted Volume dies, the Replication Controller starts a new Pod to keep the service available, but because Pods are stateless, the dead Pod's association with its Volume is lost and the newly created Pod cannot find the previous Pod's data. Users have no visibility into the underlying Pod being replaced, yet once it is, the previously mounted storage volume can no longer be used.
To solve this problem, StatefulSet was introduced to preserve a Pod's state information.
StatefulSet is designed to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include:
- stable, dedicated persistent storage
- stable, unique network identifiers
- ordered, graceful deployment and scaling
- ordered, graceful shutdown and deletion
From the use cases above, we can see that a StatefulSet is made up of the following parts:
- Headless Service: generates resolvable DNS records for the Pod resource identifiers.
- volumeClaimTemplates (volume claim templates): provide each Pod with its own dedicated, fixed storage through static or dynamic PV provisioning.
- StatefulSet: the controller that manages the Pod resources.
In a Deployment, Pods have no fixed names; the names are random strings and the Pods are unordered. A StatefulSet, by contrast, requires order, and each Pod's name must be fixed: when a Pod goes down, the rebuilt Pod keeps the same identifier, and no Pod's name may ever change. The Pod name serves as the Pod's unique identifier, so it must be stable and unique.
To achieve stable identifiers, a headless Service is needed so that DNS resolution reaches the Pod directly, and each Pod must be given a unique name.
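For reference, a headless Service differs from a normal Service only in that its clusterIP is explicitly set to None. A minimal sketch, consistent with the myapp-svc Service defined in full later in this article:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc           # referenced by the StatefulSet's serviceName field
spec:
  clusterIP: None           # "None" makes this headless: no VIP, DNS resolves straight to Pod IPs
  selector:
    app: myapp-pod          # Pods matching this selector get per-Pod DNS records
  ports:
  - port: 80
    name: web
```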
Most stateful replica sets use persistent storage. In a distributed system, for example, each node holds different data, so each needs its own dedicated storage. In a Deployment, the volume defined in the Pod template is shared: multiple Pods use the same volume. In a StatefulSet, no two Pods may share a volume, so creating Pods from a plain Pod template is unsuitable. This is why volumeClaimTemplates are introduced: when the StatefulSet creates a Pod, it automatically generates a PVC (named <template name>-<pod name>, e.g. myappdata-myapp-0), which requests and binds a PV, giving the Pod its own dedicated volume. The relationship between Pod names, PVCs, and PVs is illustrated below:
(figure: relationship between Pod names, PVCs, and PVs)
Several things must be prepared before creating a StatefulSet. Note that the creation order is critical; it is as follows (see the command sketch after the list):
1. Volume
2. Persistent Volume
3. Persistent Volume Claim
4. Service
5. StatefulSet
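Expressed as commands, the order might look like this (the file names are the manifests applied later in this walkthrough):

```bash
# 1-2. Backing volumes + PersistentVolume objects (pv-demo.yaml is applied below)
kubectl apply -f pv-demo.yaml
# 3. PVCs: with a StatefulSet these are generated automatically from
#    volumeClaimTemplates, so no separate PVC manifest is needed here
# 4-5. Headless Service and StatefulSet (both defined in stateful-demo.yaml)
kubectl apply -f stateful-demo.yaml
```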
Volumes come in many types, such as nfs and glusterfs; here we use Ceph RBD to create them.
```
[root@k8s-master ~]# kubectl explain statefulset
KIND:     StatefulSet
VERSION:  apps/v1

DESCRIPTION:
     StatefulSet represents a set of pods with consistent identities.
     Identities are defined as:
      - Network: A single stable DNS and hostname.
      - Storage: As many VolumeClaims as requested.
     The StatefulSet guarantees that a given network identity will always map
     to the same storage identity.

FIELDS:
   apiVersion   <string>
   kind         <string>
   metadata     <Object>
   spec         <Object>
   status       <Object>

[root@k8s-master ~]# kubectl explain statefulset.spec
KIND:     StatefulSet
VERSION:  apps/v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines the desired identities of pods in this set.
     A StatefulSetSpec is the specification of a StatefulSet.

FIELDS:
   podManagementPolicy  <string>               # Pod management policy
   replicas             <integer>              # number of replicas
   revisionHistoryLimit <integer>              # revision history limit
   selector             <Object> -required-    # label selector, required
   serviceName          <string> -required-    # governing service name, required
   template             <Object> -required-    # Pod template, required
   updateStrategy       <Object>               # update strategy
   volumeClaimTemplates <[]Object>             # volume claim templates, a list of objects
```
As described above, a complete StatefulSet is composed of a Headless Service, a StatefulSet, and a volumeClaimTemplate, as defined in the following resource manifest:
```
[root@k8s-master mainfests]# vim stateful-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi
```
Breaking the example down: because a StatefulSet resource depends on a pre-existing Headless Service, a Headless Service named myapp-svc is defined first, to create DNS resource records for each associated Pod. Then a StatefulSet named myapp is defined; it creates three Pod replicas from the Pod template and, through volumeClaimTemplates, requests a dedicated 2Gi volume from the PVs created earlier.
First, clean up the Pods, Deployments, PVCs, and PVs left over from earlier experiments:

```
[root@k8s-master mainfests]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON    AGE
pv001     1Gi        RWO,RWX        Retain           Available                                            23h
pv002     2Gi        RWO            Retain           Available                                            23h
pv003     2Gi        RWO,RWX        Retain           Bound       default/mypvc                            23h
pv004     4Gi        RWO,RWX        Retain           Available                                            23h
pv005     5Gi        RWO,RWX        Retain           Available                                            23h
[root@k8s-master mainfests]# kubectl delete pods pod-vol-pvc
pod "pod-vol-pvc" deleted
[root@k8s-master mainfests]# kubectl delete pods/pod-cm-3 pods/pod-secret-env pods/pod-vol-hostpath
pod "pod-cm-3" deleted
pod "pod-secret-env" deleted
pod "pod-vol-hostpath" deleted
[root@k8s-master mainfests]# kubectl delete deploy/myapp-backend-pod deploy/tomcat-deploy
deployment.extensions "myapp-backend-pod" deleted
deployment.extensions "tomcat-deploy" deleted
[root@k8s-master mainfests]# kubectl delete pvc mypvc
persistentvolumeclaim "mypvc" deleted
[root@k8s-master mainfests]# kubectl delete pv --all
persistentvolume "pv001" deleted
persistentvolume "pv002" deleted
persistentvolume "pv003" deleted
persistentvolume "pv004" deleted
persistentvolume "pv005" deleted
```
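The pv-demo.yaml used next was written in the earlier volumes chapter, so its contents are not repeated here. A minimal sketch consistent with the `kubectl get pv` output below, assuming NFS-backed volumes (the export paths and server address are hypothetical; a Ceph RBD backend, as mentioned above, would use an rbd block instead of nfs):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1        # hypothetical export path
    server: nfs.example.com       # hypothetical NFS server
  accessModes: ["ReadWriteOnce", "ReadWriteMany"]   # shows up below as RWO,RWX
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: nfs.example.com
  accessModes: ["ReadWriteOnce"]                    # RWO only
  capacity:
    storage: 2Gi
# ...pv003 through pv005 follow the same pattern, each 2Gi with RWO,RWX
```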
```
[root@k8s-master ~]# cd mainfests/volumes
[root@k8s-master volumes]# vim pv-demo.yaml
[root@k8s-master volumes]# kubectl apply -f pv-demo.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@k8s-master volumes]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
pv001     1Gi        RWO,RWX        Retain           Available                                      5s
pv002     2Gi        RWO            Retain           Available                                      5s
pv003     2Gi        RWO,RWX        Retain           Available                                      5s
pv004     2Gi        RWO,RWX        Retain           Available                                      5s
pv005     2Gi        RWO,RWX        Retain           Available                                      5s
```
```
[root@k8s-master mainfests]# kubectl apply -f stateful-demo.yaml
service/myapp-svc created
statefulset.apps/myapp created
[root@k8s-master mainfests]# kubectl get svc    # check the headless Service myapp-svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   50d
myapp-svc    ClusterIP   None         <none>        80/TCP    38s
[root@k8s-master mainfests]# kubectl get sts    # check the StatefulSet
NAME      DESIRED   CURRENT   AGE
myapp     3         3         55s
[root@k8s-master mainfests]# kubectl get pvc    # check PVC bindings
NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound     pv002     2Gi        RWO                           1m
myappdata-myapp-1   Bound     pv003     2Gi        RWO,RWX                       1m
myappdata-myapp-2   Bound     pv004     2Gi        RWO,RWX                       1m
[root@k8s-master mainfests]# kubectl get pv    # check PV bindings
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON    AGE
pv001     1Gi        RWO,RWX        Retain           Available                                                        6m
pv002     2Gi        RWO            Retain           Bound       default/myappdata-myapp-0                            6m
pv003     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                            6m
pv004     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-2                            6m
pv005     2Gi        RWO,RWX        Retain           Available                                                        6m
[root@k8s-master mainfests]# kubectl get pods    # check Pod info
NAME                     READY     STATUS    RESTARTS   AGE
myapp-0                  1/1       Running   0          2m
myapp-1                  1/1       Running   0          2m
myapp-2                  1/1       Running   0          2m
pod-vol-demo             2/2       Running   0          1d
redis-5b5d6fbbbd-q8ppz   1/1       Running   1          2d
```
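Pod creation by a StatefulSet is ordered: myapp-1 is not started until myapp-0 is Running and Ready, and myapp-2 waits for myapp-1 in turn. If the Pods are recreated, this can be observed with a watch (a sketch using the selector from the manifest above):

```bash
# watch the StatefulSet's Pods come up strictly in ordinal order (0, then 1, then 2)
kubectl get pods -w -l app=myapp-pod
```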
When the StatefulSet is deleted, deletion begins with myapp-2: shutdown proceeds in reverse ordinal order.
```
[root@k8s-master mainfests]# kubectl delete -f stateful-demo.yaml
service "myapp-svc" deleted
statefulset.apps "myapp" deleted
[root@k8s-master ~]# kubectl get pods -w
NAME                     READY     STATUS    RESTARTS   AGE
filebeat-ds-hxgdx        1/1       Running   1          33d
filebeat-ds-s466l        1/1       Running   2          33d
myapp-0                  1/1       Running   0          3m
myapp-1                  1/1       Running   0          3m
myapp-2                  1/1       Running   0          3m
pod-vol-demo             2/2       Running   0          1d
redis-5b5d6fbbbd-q8ppz   1/1       Running   1          2d
myapp-0                  1/1       Terminating   0         3m
myapp-2                  1/1       Terminating   0         3m
myapp-1                  1/1       Terminating   0         3m
myapp-1                  0/1       Terminating   0         3m
myapp-0                  0/1       Terminating   0         3m
myapp-2                  0/1       Terminating   0         3m
myapp-1                  0/1       Terminating   0         3m
myapp-1                  0/1       Terminating   0         3m
myapp-0                  0/1       Terminating   0         4m
myapp-0                  0/1       Terminating   0         4m
myapp-2                  0/1       Terminating   0         3m
myapp-2                  0/1       Terminating   0         3m
```

The PVCs still exist at this point; when the Pods are recreated, each one re-binds to its original PVC:

```
[root@k8s-master mainfests]# kubectl apply -f stateful-demo.yaml
service/myapp-svc created
statefulset.apps/myapp created
[root@k8s-master mainfests]# kubectl get pvc
NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound     pv002     2Gi        RWO                           5m
myappdata-myapp-1   Bound     pv003     2Gi        RWO,RWX                       5m
myappdata-myapp-2   Bound     pv004     2Gi        RWO,RWX                       5m
```
The RollingUpdate strategy implements automatic rolling updates of a StatefulSet's Pods, and it is the default value of .spec.updateStrategy.type. When the type is set to RollingUpdate, the StatefulSet controller deletes and recreates each Pod in the StatefulSet, proceeding in the same order as Pod termination (largest ordinal to smallest) and updating one Pod at a time. It waits for an updated Pod to become Running and Ready before updating its predecessor. The rolling update in the following operation proceeds in order from 2 down to 0.
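In manifest form, the default strategy would be written as follows (a sketch showing only the relevant spec fields of the apps/v1 StatefulSet):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate      # default; Pods are recreated from the highest ordinal down
    rollingUpdate:
      partition: 0           # default; Pods with ordinal >= partition are updated
```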
```
[root@k8s-master mainfests]# vim stateful-demo.yaml    # change the image version to v2
.....
        image: ikubernetes/myapp:v2
.....
[root@k8s-master mainfests]# kubectl apply -f stateful-demo.yaml
service/myapp-svc unchanged
statefulset.apps/myapp configured
[root@k8s-master ~]# kubectl get pods -w    # watch the rolling update
NAME      READY     STATUS    RESTARTS   AGE
myapp-0   1/1       Running   0          36m
myapp-1   1/1       Running   0          36m
myapp-2   1/1       Running   0          36m
myapp-2   1/1       Terminating   0         36m
myapp-2   0/1       Terminating   0         36m
myapp-2   0/1       Terminating   0         36m
myapp-2   0/1       Terminating   0         36m
myapp-2   0/1       Pending   0         0s
myapp-2   0/1       Pending   0         0s
myapp-2   0/1       ContainerCreating   0         0s
myapp-2   1/1       Running   0         2s
myapp-1   1/1       Terminating   0         36m
myapp-1   0/1       Terminating   0         36m
myapp-1   0/1       Terminating   0         36m
myapp-1   0/1       Terminating   0         36m
myapp-1   0/1       Pending   0         0s
myapp-1   0/1       Pending   0         0s
myapp-1   0/1       ContainerCreating   0         0s
myapp-1   1/1       Running   0         1s
myapp-0   1/1       Terminating   0         37m
myapp-0   0/1       Terminating   0         37m
myapp-0   0/1       Terminating   0         37m
myapp-0   0/1       Terminating   0         37m
```
Within the created Pods, each Pod's own name is DNS-resolvable, as shown below:
```
[root@k8s-master ~]# kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-0   1/1       Running   0          8m        10.244.1.62   k8s-node01
myapp-1   1/1       Running   0          8m        10.244.2.49   k8s-node02
myapp-2   1/1       Running   0          8m        10.244.1.61   k8s-node01
[root@k8s-master mainfests]# kubectl exec -it myapp-0 -- /bin/sh
/ # nslookup myapp-0.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      myapp-0.myapp-svc.default.svc.cluster.local
Address 1: 10.244.1.62 myapp-0.myapp-svc.default.svc.cluster.local
/ # nslookup myapp-1.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      myapp-1.myapp-svc.default.svc.cluster.local
Address 1: 10.244.2.49 myapp-1.myapp-svc.default.svc.cluster.local
/ # nslookup myapp-2.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      myapp-2.myapp-svc.default.svc.cluster.local
Address 1: 10.244.1.61 myapp-2.myapp-svc.default.svc.cluster.local
```

From the lookups above, we can see that inside a container a Pod's name resolves to its IP. The resolved domain name has the following format:

pod_name.service_name.ns_name.svc.cluster.local

e.g. myapp-0.myapp-svc.default.svc.cluster.local
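Because the Service is headless, resolving the Service name itself returns the addresses of all member Pods rather than a single cluster IP. A sketch, run from inside one of the Pods (the answer lines are illustrative, built from the Pod IPs above):

```sh
/ # nslookup myapp-svc.default.svc.cluster.local
# a headless Service has no ClusterIP, so the answer lists every Pod IP, e.g.:
#   Address 1: 10.244.1.62 myapp-0.myapp-svc.default.svc.cluster.local
#   Address 2: 10.244.2.49 myapp-1.myapp-svc.default.svc.cluster.local
#   Address 3: 10.244.1.61 myapp-2.myapp-svc.default.svc.cluster.local
```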
Scaling is ordered as well: scale-out adds Pods in ascending ordinal order, and scale-in removes them from the highest ordinal down:

```
[root@k8s-master mainfests]# kubectl scale sts myapp --replicas=4    # scale out to 4 replicas
statefulset.apps/myapp scaled
[root@k8s-master ~]# kubectl get pods -w    # watch the scale-out
NAME      READY     STATUS    RESTARTS   AGE
myapp-0   1/1       Running   0          23m
myapp-1   1/1       Running   0          23m
myapp-2   1/1       Running   0          23m
myapp-3   0/1       Pending   0         0s
myapp-3   0/1       Pending   0         0s
myapp-3   0/1       ContainerCreating   0         0s
myapp-3   1/1       Running   0         1s
[root@k8s-master mainfests]# kubectl get pv    # check PV bindings
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON    AGE
pv001     1Gi        RWO,RWX        Retain           Available                                                        1h
pv002     2Gi        RWO            Retain           Bound       default/myappdata-myapp-0                            1h
pv003     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                            1h
pv004     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-2                            1h
pv005     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-3                            1h
[root@k8s-master mainfests]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'    # scale in by patching
statefulset.apps/myapp patched
[root@k8s-master ~]# kubectl get pods -w    # watch the scale-in
NAME      READY     STATUS    RESTARTS   AGE
myapp-0   1/1       Running   0          25m
myapp-1   1/1       Running   0          25m
myapp-2   1/1       Running   0          25m
myapp-3   1/1       Running   0          1m
myapp-3   1/1       Terminating   0         2m
myapp-3   0/1       Terminating   0         2m
myapp-3   0/1       Terminating   0         2m
myapp-3   0/1       Terminating   0         2m
myapp-2   1/1       Terminating   0         26m
myapp-2   0/1       Terminating   0         26m
myapp-2   0/1       Terminating   0         27m
myapp-2   0/1       Terminating   0         27m
```
Modify the update strategy to update by partition, setting the partition value to 2: only Pods whose ordinal is greater than or equal to 2 are updated. This works like a canary deployment.
```
[root@k8s-master mainfests]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
statefulset.apps/myapp patched
[root@k8s-master ~]# kubectl get sts myapp
NAME      DESIRED   CURRENT   AGE
myapp     4         4         1h
[root@k8s-master ~]# kubectl describe sts myapp
Name:               myapp
Namespace:          default
CreationTimestamp:  Wed, 10 Oct 2018 21:58:24 -0400
Selector:           app=myapp-pod
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"replicas":3,"selector":{"match...
Replicas:           4 desired | 4 total
Update Strategy:    RollingUpdate
  Partition:        2
......
```
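The same canary setting can also be written directly in the manifest rather than applied as a patch; a sketch of just the relevant fields:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2    # only Pods with ordinal >= 2 (myapp-2, myapp-3) roll to the new image
```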
Upgrade the image to v3. After the upgrade, comparing myapp-2 and myapp-1 shows different image versions, which achieves the canary-release effect.
```
[root@k8s-master mainfests]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v3
statefulset.apps/myapp image updated
[root@k8s-master ~]# kubectl get sts -o wide
NAME      DESIRED   CURRENT   AGE       CONTAINERS   IMAGES
myapp     4         4         1h        myapp        ikubernetes/myapp:v3
[root@k8s-master ~]# kubectl get pods myapp-2 -o yaml |grep image
  - image: ikubernetes/myapp:v3
    imagePullPolicy: IfNotPresent
    image: ikubernetes/myapp:v3
    imageID: docker-pullable://ikubernetes/myapp@sha256:b8d74db2515d3c1391c78c5768272b9344428035ef6d72158fd9f6c4239b2c69
[root@k8s-master ~]# kubectl get pods myapp-1 -o yaml |grep image
  - image: ikubernetes/myapp:v2
    imagePullPolicy: IfNotPresent
    image: ikubernetes/myapp:v2
    imageID: docker-pullable://ikubernetes/myapp@sha256:85a2b81a62f09a414ea33b74fb8aa686ed9b168294b26b4c819df0be0712d358
```
To update the remaining Pods as well, simply set the update strategy's partition value back to 0, as follows:
```
[root@k8s-master mainfests]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
statefulset.apps/myapp patched
[root@k8s-master ~]# kubectl get pods -w
NAME      READY     STATUS    RESTARTS   AGE
myapp-0   1/1       Running   0          58m
myapp-1   1/1       Running   0          58m
myapp-2   1/1       Running   0          13m
myapp-3   1/1       Running   0          13m
myapp-1   1/1       Terminating   0         58m
myapp-1   0/1       Terminating   0         58m
myapp-1   0/1       Terminating   0         58m
myapp-1   0/1       Terminating   0         58m
myapp-1   0/1       Pending   0         0s
myapp-1   0/1       Pending   0         0s
myapp-1   0/1       ContainerCreating   0         0s
myapp-1   1/1       Running   0         2s
myapp-0   1/1       Terminating   0         58m
myapp-0   0/1       Terminating   0         58m
myapp-0   0/1       Terminating   0         58m
myapp-0   0/1       Terminating   0         58m
myapp-0   0/1       Pending   0         0s
myapp-0   0/1       Pending   0         0s
myapp-0   0/1       ContainerCreating   0         0s
myapp-0   1/1       Running   0         2s
```
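To confirm that every Pod is now running v3, one could list each Pod's image with a jsonpath query (a sketch; the label comes from the Pod template above):

```bash
kubectl get pods -l app=myapp-pod \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
# expected output once the rollout completes:
# myapp-0    ikubernetes/myapp:v3
# myapp-1    ikubernetes/myapp:v3
# ...
```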