In k8s, pod resources fall into two groups: stateful (data-type containers) and stateless (service-type containers).
Data in k8s is usually stored by mounting a volume. In a multi-node cluster, the best approach is to provide a shared storage system (selection criteria: remote access, concurrent read/write, and so on). k8s supports NFS, GlusterFS, Ceph, and others; see kubectl explain pod.spec for the details. A typical stateless service simply keeps its files under a fixed directory in the storage system and mounts that directory when the pod is created. But some services cannot use this model, for example Redis clusters and Elasticsearch, where each instance stores its own data. When a pod restarts, its IP and hostname have changed, so for stateful services an ordinary volume is not suitable.
You can create a StatefulSet resource instead of a ReplicaSet to run such pods. StatefulSets are tailored to applications in which every instance is an irreplaceable individual with a stable name and state.
Comparing StatefulSet with ReplicaSet / ReplicationController
Pods managed by an RS or RC are like cattle: they are stateless, and at any moment any of them can be replaced by a brand-new pod. Stateful pods need a different approach: when a stateful pod dies, the instance rebuilt on another node must have the same name, network identity, and state as the one it replaces. This is how a StatefulSet manages pods.
A StatefulSet guarantees that pods keep their identity and state after rescheduling, and it makes scaling up and down convenient. Like an RS, a StatefulSet specifies a desired replica count, which determines how many "pets" run at the same time, and its pods are created from a pod template. Unlike an RS, however, the replicas a StatefulSet creates are not identical: each pod can own an independent set of data volumes (persistent state), and pod names are fixed and follow a pattern rather than each new pod receiving a random name.
Providing a stable network identity
Pods created by a StatefulSet are named with a zero-based ordinal index. The index appears in the pod name and hostname, and likewise in the fixed storage associated with each pod.
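Together with the governing headless Service, this naming convention gives each replica a stable per-pod DNS name of the form &lt;statefulset&gt;-&lt;ordinal&gt;.&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local. A minimal sketch of the convention (plain Python, purely illustrative; the names match the example built later in this article):

```python
def stateful_pod_dns(statefulset: str, ordinal: int,
                     service: str, namespace: str = "default") -> str:
    """Stable DNS name a headless Service gives a StatefulSet pod:
    <statefulset>-<ordinal>.<service>.<namespace>.svc.cluster.local"""
    return f"{statefulset}-{ordinal}.{service}.{namespace}.svc.cluster.local"

# The two replicas created later would resolve as:
for i in range(2):
    print(stateful_pod_dns("nginx", i, "headless-svc"))
```

Because the name embeds the ordinal rather than the pod IP, it stays valid across pod restarts and rescheduling.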
Creating a StatefulSet service backed by NFS storage
First create PVs based on NFS. nfs-utils must be installed on the master and on every node, otherwise the mount will fail.
List the directories exported by NFS:
[root@k8s-3 ~]# showmount -e 192.168.191.50
Export list for 192.168.191.50:
/data/nfs/04 192.168.191.0/24
/data/nfs/03 192.168.191.0/24
/data/nfs/02 192.168.191.0/24
/data/nfs/01 192.168.191.0/24
/data/nfs    192.168.191.0/24
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
  labels:
    app: pv02
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/02
    server: zy.nfs.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
  labels:
    app: pv03
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/03
    server: zy.nfs.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv04
  labels:
    app: pv04
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/04
    server: zy.nfs.com
Check the PVs:
[root@k8s-3 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv02   2Mi        RWX            Retain           Available           nfs                     5h27m
pv03   2Mi        RWX            Retain           Available           nfs                     5h27m
pv04   2Mi        RWX            Retain           Available           nfs                     5h27m
* ACCESS MODES — how the volume may be mounted:
ReadWriteOnce – the volume can be mounted read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted read-write by many nodes
* RECLAIM POLICY — the PV's reclaim policy:
Retain – after the PVC is deleted, the PV is kept and the data is not lost
Delete – after the PVC is deleted, the PV is deleted automatically
* STORAGECLASS — the user-defined storage class name
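In the kubectl get pv output above, the access modes are shown abbreviated (for example RWX). A small sketch of the mapping kubectl uses (plain Python, for illustration only):

```python
# Abbreviations kubectl prints for PersistentVolume access modes.
ACCESS_MODE_ABBREV = {
    "ReadWriteOnce": "RWO",   # read-write, single node
    "ReadOnlyMany":  "ROX",   # read-only, many nodes
    "ReadWriteMany": "RWX",   # read-write, many nodes
}

def abbreviate(modes):
    """Return the abbreviated form kubectl prints for a list of access modes."""
    return ",".join(ACCESS_MODE_ABBREV[m] for m in modes)

print(abbreviate(["ReadWriteMany"]))  # RWX
```

This is why the PVs created above, declared with accessModes: ["ReadWriteMany"], show up as RWX.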
Create the headless Service
[root@k8s-3 statefulset]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
spec:
  clusterIP: None
  selector:
    app: sfs
  ports:
  - name: http
    port: 80
    protocol: TCP
Check the headless Service; note clusterIP: None.
[root@k8s-3 ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
headless-svc   ClusterIP   None         <none>        80/TCP    11m
Create the StatefulSet
[root@k8s-3 statefulset]# cat statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: headless-svc   # must match the headless Service's name for stable per-pod DNS
  replicas: 2
  selector:
    matchLabels:
      app: sfs
  template:
    metadata:
      name: sfs
      labels:
        app: sfs
    spec:
      containers:
      - name: sfs
        image: nginx:latest
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: nfs
      resources:
        requests:
          storage: 2Mi
After applying, watch the pods being created and check the PVs and PVCs.
The pods are created one at a time, in order, and named name-0, name-1, name-2, …:
[root@k8s-3 ~]# kubectl get pod -w
NAME      READY   STATUS              RESTARTS   AGE
nginx-0   0/1     Pending             0          0s
nginx-0   0/1     Pending             0          0s
nginx-0   0/1     Pending             0          1s
nginx-0   0/1     ContainerCreating   0          1s
nginx-0   1/1     Running             0          22s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     ContainerCreating   0          0s
nginx-1   1/1     Running             0          25s

# final state
[root@k8s-3 ~]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
nginx-0   1/1     Running   0          2m53s
nginx-1   1/1     Running   0          2m31s
PVC creation and binding to PVs:
[root@k8s-3 ~]# kubectl get pvc -w
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-nginx-0   Bound     pv04     2Mi        RWX            nfs            16s
www-nginx-1   Pending                                      nfs            0s
www-nginx-1   Pending   pv02     0                         nfs            0s
www-nginx-1   Bound     pv02     2Mi        RWX            nfs            0s
[root@k8s-3 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                 STORAGECLASS   REASON   AGE
pv02   2Mi        RWX            Retain           Bound       default/www-nginx-1   nfs                     10m
pv03   2Mi        RWX            Retain           Available                         nfs                     10m
pv04   2Mi        RWX            Retain           Bound       default/www-nginx-0   nfs                     10m
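The PVC names in the output above are not arbitrary: a StatefulSet derives one claim per replica using the fixed pattern &lt;claim-template-name&gt;-&lt;statefulset-name&gt;-&lt;ordinal&gt;, which is why we see www-nginx-0 and www-nginx-1. A minimal sketch of the convention:

```python
def pvc_name(claim_template: str, statefulset: str, ordinal: int) -> str:
    """PVC name a StatefulSet derives for each replica:
    <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>."""
    return f"{claim_template}-{statefulset}-{ordinal}"

# The two replicas in this example get stable, predictable claims:
print([pvc_name("www", "nginx", i) for i in range(2)])
# ['www-nginx-0', 'www-nginx-1']
```

Because the claim name is stable, a rescheduled pod re-binds to the same PVC, and therefore to the same data.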
Check the relationship between the headless Service and its backing pods:
[root@k8s-3 ~]# kubectl describe svc headless-svc
Name:              headless-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"headless-svc","namespace":"default"},"spec":{"clusterIP":"None","...
Selector:          app=sfs
Type:              ClusterIP
IP:                None
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.50:80,10.244.3.86:80
Session Affinity:  None
Events:            <none>
Because a headless service has no cluster IP, it cannot be reached from outside the cluster. For a quick test from a browser you could add entries resolving to the pod endpoints (10.244.1.50:80, 10.244.3.86:80) to your Windows hosts file; we skip that here and simply curl from a node:
[root@k8s3-1 ~]# curl 10.244.1.50:80
this is 02
[root@k8s3-1 ~]# curl 10.244.3.86:80
this is 04
The content in the NFS shared directories was set up as follows:
[root@zy nfs]# echo "this is 02" > 02/index.html
[root@zy nfs]# echo "this is 03" > 03/index.html
[root@zy nfs]# echo "this is 04" > 04/index.html
This completes the StatefulSet service and storage test. Scaling a StatefulSet up or down can also be watched with kubectl get pod -w: scaling up adds a pod with the next ordinal (highest existing + 1), and scaling down removes the pod with the highest ordinal first. That is not demonstrated here; verify it yourself if you are interested.
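The ordinal-based scaling behaviour just described can be sketched in plain Python (names illustrative; in reality the StatefulSet controller enforces this ordering, creating or deleting one pod at a time):

```python
def scale(pods, name, replicas):
    """Simulate StatefulSet scaling for pods named <name>-0 .. <name>-(N-1).
    Scaling up appends the next ordinal; scaling down removes the
    highest ordinal first."""
    pods = list(pods)                    # do not mutate the caller's list
    while len(pods) < replicas:          # scale up: next ordinal
        pods.append(f"{name}-{len(pods)}")
    while len(pods) > replicas:          # scale down: highest ordinal first
        pods.pop()
    return pods

pods = scale([], "nginx", 2)
print(pods)                     # ['nginx-0', 'nginx-1']
print(scale(pods, "nginx", 3))  # ['nginx-0', 'nginx-1', 'nginx-2']
print(scale(pods, "nginx", 1))  # ['nginx-0']
```

In a real cluster the equivalent operation is kubectl scale statefulset nginx --replicas=3, and the controller additionally waits for each pod to be Running and Ready before touching the next ordinal.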