A StatefulSet must contain three components: a headless Service, the StatefulSet itself, and the volumeClaimTemplates.
We created PVs in the PVC and PV section; now we can use some of these PVs as StatefulSet volumes. This assumes you have also set up an NFS service on k8snode2. If not, see the previous sections...
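As a reminder, a minimal NFS setup on k8snode2 could look like the sketch below. The export paths /data/v1 to /data/v3 match the PV templates that follow; the network range and export options are assumptions, so adjust them to your cluster:

```sh
# On k8snode2 (assumed setup; adjust the CIDR to your cluster network)
mkdir -p /data/v1 /data/v2 /data/v3
cat >> /etc/exports <<'EOF'
/data/v1 192.168.0.0/24(rw,no_root_squash)
/data/v2 192.168.0.0/24(rw,no_root_squash)
/data/v3 192.168.0.0/24(rw,no_root_squash)
EOF
exportfs -arv                        # re-export all entries and show the result
systemctl enable --now nfs-server    # start NFS now and enable it at boot
```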
Create the PVs with the following template:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv01-demo
  labels:
    name: pv01   # a PV doesn't need a namespace, but a PVC does!
spec:
  nfs:
    path: /data/v1   # for pv01
    server: k8snode2
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  capacity:
    storage: 20Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv02-demo
  labels:
    name: pv02
spec:
  nfs:
    path: /data/v2   # for pv02
    server: k8snode2
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  capacity:
    storage: 20Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv03-demo
  labels:
    name: pv03
spec:
  nfs:
    path: /data/v3   # for pv03
    server: k8snode2
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  capacity:
    storage: 20Mi
```
Validate the PVs:
```sh
[root@k8smaster statefulset]# kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv01-demo   20Mi       RWO,RWX        Retain           Available                                   16m
nfs-pv02-demo   20Mi       RWO,RWX        Retain           Available                                   16m
nfs-pv03-demo   20Mi       RWO,RWX        Retain           Available                                   16m
```
As you can see, we have three PVs, each with 20Mi of space.
Now deploy a StatefulSet with the template below:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: stateful-svc-myapp
  labels:
    app: myapp-stateful-svc
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None   # headless service
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp-statefulset
spec:
  selector:
    matchLabels:
      app: myapp-pod
  serviceName: stateful-svc-myapp   # must match the headless Service name above
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
        - name: myapp-container
          image: ikubernetes/myapp:v1
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: myappdata
  volumeClaimTemplates:
    - metadata:
        name: myappdata
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20Mi
```
As we discussed at the beginning, a StatefulSet needs three components, and all of them appear in this template: the headless Service (stateful-svc-myapp), the StatefulSet itself (myapp-statefulset), and the volumeClaimTemplates (myappdata).
Now deploy this StatefulSet template:
```sh
[root@k8smaster statefulset]# kubectl apply -f statefulset-demo.yaml
service/stateful-svc-myapp created
statefulset.apps/myapp-statefulset created

[root@k8smaster statefulset]# kubectl get svc
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP   25d
stateful-svc-myapp   ClusterIP   None         <none>        80/TCP    17s

[root@k8smaster statefulset]# kubectl get sts
NAME                DESIRED   CURRENT   AGE
myapp-statefulset   2         2         36s

[root@k8smaster statefulset]# kubectl get pvc
NAME                            STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-statefulset-0   Bound    nfs-pv01-demo   20Mi       RWO,RWX                       48s
myappdata-myapp-statefulset-1   Bound    nfs-pv03-demo   20Mi       RWO,RWX                       45s

[root@k8smaster statefulset]# kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                    STORAGECLASS   REASON   AGE
nfs-pv01-demo   20Mi       RWO,RWX        Retain           Bound       default/myappdata-myapp-statefulset-0                            26m
nfs-pv02-demo   20Mi       RWO,RWX        Retain           Available                                                                    26m
nfs-pv03-demo   20Mi       RWO,RWX        Retain           Bound       default/myappdata-myapp-statefulset-1                            26m

[root@k8smaster statefulset]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
myapp-statefulset-0       1/1     Running   0          1m
myapp-statefulset-1       1/1     Running   0          1m
pod-secret-volumes-demo   1/1     Running   0          3h
```
As the command-line output above shows, we got: a headless Service (stateful-svc-myapp), a StatefulSet whose pods were created in order (myapp-statefulset-0 first, then myapp-statefulset-1), and one PVC per pod, each bound to one of the available PVs.
If we delete this StatefulSet, the pods are deleted from the highest ordinal down to the lowest.
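The ordered termination is easiest to watch on a scale-down; a quick sketch (output omitted):

```sh
# Watch the pods in a second terminal; the highest ordinal terminates first
kubectl get pod -l app=myapp-pod -w

# Scale down to zero in the first terminal
kubectl scale sts myapp-statefulset --replicas=0
```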
PS: Deleting the StatefulSet with `kubectl delete sts <sts-name>` removes the StatefulSet and all of its pods (the headless Service has to be deleted separately). But the PVCs will stay! If we recreate the StatefulSet from the same template, the new pods will bind to the same PVCs again!
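You can verify this yourself; a short session sketch with our names (output omitted):

```sh
kubectl delete sts myapp-statefulset
kubectl get pvc                          # the PVCs are still Bound
kubectl apply -f statefulset-demo.yaml   # recreate from the same template
kubectl get pvc                          # same claims, reused by the new pods
```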
Inside the container you can use nslookup to resolve a pod's IP address.
By default, every StatefulSet pod gets a stable DNS name of the form below.
Use `kubectl exec -it <pod-name> -- /bin/sh` to go into the container and then perform:
nslookup pod_name.service_name.ns_name.svc.cluster.local
This name, pod_name.service_name.ns_name.svc.cluster.local, is resolvable anywhere inside the k8s cluster!
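With the names from our demo (pod myapp-statefulset-1, headless Service stateful-svc-myapp, namespace default), the lookup would look like this (a sketch; output omitted, and /bin/sh is assumed to exist in the image):

```sh
kubectl exec -it myapp-statefulset-0 -- /bin/sh
# inside the container:
nslookup myapp-statefulset-1.stateful-svc-myapp.default.svc.cluster.local
```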
You can also use `kubectl scale sts myapp-statefulset --replicas=3` to scale out the StatefulSet.
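After scaling out, a new PVC is created for the new pod; in our setup it should bind the one remaining PV (a sketch, output omitted):

```sh
kubectl scale sts myapp-statefulset --replicas=3
kubectl get pod    # myapp-statefulset-2 starts only after 0 and 1 are Ready
kubectl get pvc    # myappdata-myapp-statefulset-2 should bind nfs-pv02-demo
```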
You can use the StatefulSet's `sts.spec.updateStrategy.rollingUpdate.partition` field (a number) to perform a canary release with a rolling update.
For example, assume that we have 3 pods:
myapp-statefulset-0, myapp-statefulset-1, myapp-statefulset-2
1. Set the StatefulSet myapp-statefulset's partition to 2 (0 by default):
kubectl patch sts myapp-statefulset -p '{"spec": {"updateStrategy": {"rollingUpdate": {"partition:"2}}}}
This means that during the rolling update, k8s will only update pods with an ordinal >= 2 (myapp-statefulset-2 and onward: 3, 4, ... if they existed).
Because myapp-statefulset-2 is our last pod, the update is performed only on this pod; myapp-statefulset-0 and myapp-statefulset-1 stay unchanged for the moment.
2. Perform the update. This command changes the image version from v1 to v2:
kubectl patch sts myapp-statefulset -p '{"template": {"spec": {"containers[0]": {"image": "ikubernetes/myapp:v2}}}}'
* container[0] means that we take the first container image in the template file to patch...
3. Set the partition back to 0.
Once we have seen that the myapp-statefulset-2 update has no issues, we can set the partition back to 0:
kubectl patch sts myapp-statefulset -p '{"spec": {"updateStrategy": {"rollingUpdate": {"partition:0"}}}}
4. After the partition is set back to 0, the remaining pods are updated automatically.
k8s will update myapp-statefulset-1 and then myapp-statefulset-0; in the end, all three pods run the new version.
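To check the canary and the final state, these read-only commands work at any step (a sketch; the label app=myapp-pod comes from our template):

```sh
# Show each pod's current image
kubectl get pod -l app=myapp-pod \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image

# Wait until the rolling update has finished
kubectl rollout status sts/myapp-statefulset
```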