http://blog.itpub.net/28916011/viewspace-2215046/
Applications can be divided into stateful and stateless applications.
Stateless applications care about the group: any one member can be replaced by another.
Stateful applications care about the individual.
The nginx, myapp, and similar workloads we managed earlier with a Deployment controller are all stateless applications.
Applications such as MySQL, Redis, and ZooKeeper are stateful; some of them additionally distinguish master from slave roles or require a specific startup order.
The StatefulSet controller can manage stateful applications, but doing so is quite involved: you have to encode your operational procedures in scripts and inject them into the StatefulSet before it becomes usable. Although ready-made StatefulSet scripts for common software can be found on the internet, it is still advisable not to casually migrate stateful applications such as Redis or MySQL onto Kubernetes.
In Kubernetes, a StatefulSet mainly manages applications with the following characteristics:
a) each pod has a stable and unique network identifier;
b) stable and persistent storage;
c) ordered, graceful deployment and scaling;
d) ordered, graceful termination and deletion;
e) ordered rolling updates: slaves should be updated first, the master last.
A StatefulSet consists of three components:
a) a headless Service (a Service without a cluster IP);
b) the StatefulSet controller itself;
c) a volumeClaimTemplate (a PersistentVolumeClaim template, because each pod needs its own dedicated volume rather than a shared one).
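For reference, what makes a Service "headless" is a single field: clusterIP set to None. A minimal sketch (the names here are illustrative, not from the example below):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-headless      # illustrative name
spec:
  clusterIP: None          # "None" is what makes the Service headless
  selector:
    app: demo              # illustrative pod label
  ports:
  - port: 80
```

Because there is no cluster IP, DNS resolves the Service to the individual pod addresses, which is what gives each StatefulSet pod its own stable name.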
[root@master ~]# kubectl explain sts    # sts is the short name for statefulset
Before creating anything, delete the leftover pods and services created earlier to avoid conflicts. You can also keep them, but some definitions in the YAML below conflict with them, so you would have to rename things yourself.
kubectl delete pods pod-vol-pvc
kubectl delete pod pod-cm-3
kubectl delete pods pod-secret-1
kubectl delete deploy myapp-deploy
kubectl delete deploy tomcat-deploy
kubectl delete pvc mypvc
kubectl delete pv --all
kubectl delete svc myapp
kubectl delete svc tomcat
Then recreate the PVs:
[root@master volumes]# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: 172.16.100.64
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: 172.16.100.64
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: 172.16.100.64
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: 172.16.100.64
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: 172.16.100.64
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 9Gi
[root@master stateful]# cat stateful-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:    # PVC template: defines a dedicated volume for each pod; a matching PVC is created automatically in the pod's namespace
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      #storageClassName: "gluster-dynamic"
      resources:
        requests:
          storage: 5Gi     # size requested for each PVC
[root@master stateful]# kubectl apply -f stateful-demo.yaml
service/myapp-svc unchanged
statefulset.apps/myapp created
[root@master stateful]# kubectl get svc
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
myapp-svc   ClusterIP   None         <none>        80/TCP    12m
You can see that myapp-svc is a headless Service (its CLUSTER-IP is None).
[root@master stateful]# kubectl get sts
NAME    DESIRED   CURRENT   AGE
myapp   2         2         6m
[root@master stateful]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    2Gi        RWO                           3s
myappdata-myapp-1   Bound    pv003    1Gi        RWO,RWX                       1s
[root@master stateful]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                                       1d
pv002   2Gi        RWO            Retain           Bound       default/myappdata-myapp-0                           1d
pv003   1Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                           1d
pv004   1Gi        RWO,RWX        Retain           Bound       default/mypvc                                       1d
pv005   1Gi        RWO,RWX        Retain           Available
[root@master stateful]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          4m
myapp-1   1/1     Running   0          4m
[root@master stateful]# kubectl delete -f stateful-demo.yaml
service "myapp-svc" deleted
statefulset.apps "myapp" deleted
The delete above removes the pods and the Service, but the PVCs are not deleted, so the data can still be recovered.
[root@master stateful]# kubectl exec -it myapp-0 -- /bin/sh
/ # nslookup myapp-0.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      myapp-0.myapp-svc.default.svc.cluster.local
Address 1: 10.244.1.110 myapp-0.myapp-svc.default.svc.cluster.local
/ # nslookup myapp-1.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      myapp-1.myapp-svc.default.svc.cluster.local
Address 1: 10.244.2.97 myapp-1.myapp-svc.default.svc.cluster.local
myapp-0.myapp-svc.default.svc.cluster.local
The format is: pod_name.service_name.namespace.svc.cluster.local
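As a quick sanity check of that pattern, the stable name can be assembled with a few lines of plain shell (a sketch; `pod_fqdn` is just an illustrative helper, not a kubectl feature):

```shell
#!/bin/sh
# Assemble the stable DNS name of a StatefulSet pod:
#   <pod_name>.<service_name>.<namespace>.svc.cluster.local
pod_fqdn() {
  printf '%s.%s.%s.svc.cluster.local\n' "$1" "$2" "$3"
}

pod_fqdn myapp-0 myapp-svc default   # -> myapp-0.myapp-svc.default.svc.cluster.local
pod_fqdn myapp-1 myapp-svc default   # -> myapp-1.myapp-svc.default.svc.cluster.local
```

These are exactly the names resolved by nslookup above: the name survives pod rescheduling even though the pod IP changes.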
Now scale myapp out to 5 pods:
[root@master stateful]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled
[root@master stateful]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
client    0/1     Error     0          17d
myapp-0   1/1     Running   0          37m
myapp-1   1/1     Running   0          37m
myapp-2   1/1     Running   0          46s
myapp-3   1/1     Running   0          43s
myapp-4   0/1     Pending   0          41s
[root@master stateful]# kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound     pv002    2Gi        RWO                           52m
myappdata-myapp-1   Bound     pv003    1Gi        RWO,RWX                       52m
myappdata-myapp-2   Bound     pv005    1Gi        RWO,RWX                       2m
myappdata-myapp-3   Bound     pv001    1Gi        RWO,RWX                       2m
myappdata-myapp-4   Pending                                                    2m
myappdata-myapp-4 stays Pending because no Available PV is left to satisfy it, which is also why pod myapp-4 above is stuck in Pending.
Scaling out and in can also be done by patching the StatefulSet:
[root@master stateful]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'
statefulset.apps/myapp patched
Next, let's look at rolling updates.
[root@master stateful]# kubectl explain sts.spec.updateStrategy.rollingUpdate
Suppose there are 4 pods (pod0, pod1, pod2, pod3). Only pods whose ordinal is greater than or equal to partition are updated. If partition is set to 4 (or higher), no ordinal reaches it, so none of the four pods is updated. If partition is 3, only pod3 is updated. If partition is 2, pod2 and pod3 are updated while the others stay untouched.
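The ordinal comparison the controller performs can be mimicked in a few lines of shell (illustrative only; `updated_pods` is a hypothetical helper, not part of kubectl):

```shell
#!/bin/sh
# List which pod ordinals a RollingUpdate with the given partition touches:
# pods whose ordinal is >= partition are updated, the rest are left alone.
updated_pods() {
  replicas=$1
  partition=$2
  i=0
  while [ "$i" -lt "$replicas" ]; do
    if [ "$i" -ge "$partition" ]; then
      printf 'pod%s\n' "$i"
    fi
    i=$((i + 1))
  done
}

updated_pods 4 4   # prints nothing: no ordinal reaches 4
updated_pods 4 3   # prints: pod3
updated_pods 4 2   # prints: pod2 and pod3
```

Setting a high partition therefore acts as a canary gate: lower it step by step (eventually to 0) to roll the update out to the remaining pods.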
[root@master stateful]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
statefulset.apps/myapp patched
Note: the displayed partition differs between Kubernetes versions. In the version used in the video (1.11?) it shows as 4, whereas on 1.13 it shows up as a long string of digits no matter how it is changed; the update nevertheless follows the strategy described above.
[root@master stateful]# kubectl describe sts myapp
Update Strategy:  RollingUpdate
  Partition:      4
Now upgrade myapp to the v2 image:
[root@master stateful]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
[root@master ~]# kubectl get sts -o wide
NAME    DESIRED   CURRENT   AGE   CONTAINERS   IMAGES
myapp   2         2         1h    myapp        ikubernetes/myapp:v2
[root@master ~]# kubectl get pods myapp-4 -o yaml
  containerStatuses:
  - containerID: docker://898714f2e5bf4f642e2a908e7da67eebf6d3074c89bbd0d798d191a2061a3115
    image: ikubernetes/myapp:v2
You can see that pod myapp-4 is now running the v2 image.