StatefulSet
StatefulSet exists to solve the problem of stateful services, whereas Deployment and ReplicaSet are designed for stateless services. Its use cases include:
1. Stable persistent storage: after a Pod is rescheduled it can still access the same persisted data; implemented with PVCs.
2. Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same; implemented with a Headless Service (a Service without a Cluster IP).
3. Ordered deployment and ordered scaling: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; all preceding Pods must be Running and Ready before the next Pod starts); implemented with init containers.
4. Ordered scale-down and ordered deletion (from N-1 to 0). Because a StatefulSet requires Pod names to be ordered, no Pod can be replaced arbitrarily: even after a Pod is rebuilt, its name stays the same. In effect, each backend Pod gets a stable name of its own.
From the use cases above, a StatefulSet is made up of the following parts:
1. A Headless Service that defines the network identity (a headless service has no Cluster IP, so it provides no load balancing).
2. volumeClaimTemplates, which create the PersistentVolumeClaims (and, via a StorageClass, the PersistentVolumes behind them).
3. The StatefulSet itself, which defines the application.
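The stable network identity that the Headless Service provides follows a fixed DNS pattern: `<pod-name>.<service-name>.<namespace>.svc.cluster.local`, where the Pod names are `<statefulset-name>-0` through `<statefulset-name>-(N-1)`. A minimal sketch of the names this yields; the StatefulSet name `web`, Service name `web-svc`, and namespace `demo` are made-up example values, not part of the walkthrough below:

```shell
# Print the stable DNS name each Pod of a 3-replica StatefulSet would get.
# sts, svc, and ns are hypothetical example values.
sts=web
svc=web-svc
ns=demo
for i in 0 1 2; do
  echo "${sts}-${i}.${svc}.${ns}.svc.cluster.local"
done
```

These names stay the same across Pod rebuilds, which is what lets clients address a specific replica.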
StatefulSet is a Pod controller.
RC, RS, Deployment, and DS serve stateless services.
template: Pods created from the template are identical in state (apart from their name, IP, and domain name); any one of them can be deleted and replaced by a newly generated Pod.
Stateful services need to record state from one or more previous interactions and use it when handling the next one, for example database services such as MySQL. (A Pod's name must not change arbitrarily, and the data-persistence directories also differ: each Pod has its own dedicated persistent storage directory.)
Each Pod maps to one PVC, and each PVC maps to one PV.
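Concretely, the volumeClaimTemplates mechanism derives each PVC name as `<template-name>-<statefulset-name>-<ordinal>`. Using the names from the walkthrough below (template `test`, StatefulSet `statefulset-test`), this little sketch reproduces the three PVC names that show up later in `kubectl get pvc`:

```shell
# Reproduce the PVC names a volumeClaimTemplate called "test" generates
# for a 3-replica StatefulSet called "statefulset-test".
tmpl=test
sts=statefulset-test
for i in 0 1 2; do
  echo "${tmpl}-${sts}-${i}"
done
```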
Test requirements:
1. Create a namespace named after yourself; all of the resources below run in that namespace.
2. Run an httpd web service with a StatefulSet: 3 Pods, each Pod serving a different home page, and each with its own dedicated persistent storage. Then delete one of the Pods and check whether the newly generated Pod still serves the same data as before.
1. Set up the NFS service.
```shell
[root@master ~]# yum -y install nfs-utils rpcbind
[root@master ~]# mkdir /nfsdata
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start nfs-server.service
[root@master ~]# systemctl start rpcbind
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
```
2. Create the RBAC permissions.

[root@master yaml]# vim rbac-rolebind.yaml
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: lbs-test
---
apiVersion: v1
kind: ServiceAccount          # create the RBAC user and define its permissions
metadata:
  name: nfs-provisioner
  namespace: lbs-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: lbs-test     # if you did not create a namespace, use "default" here, otherwise this errors
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```
Apply the YAML file:

```shell
[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
namespace/lbs-test created
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
```
3. Create the Deployment resource object.

[root@master yaml]# vim nfs-deployment.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: lbs-test
spec:
  replicas: 1                        # one replica
  strategy:
    type: Recreate                   # recreate Pods on update
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner      # the ServiceAccount created above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner   # provisioner image
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes    # mount point inside the container
          env:
            - name: PROVISIONER_NAME     # built-in variable of the container
              value: lbs-test            # must match the provisioner field of the StorageClass below
            - name: NFS_SERVER
              value: 192.168.1.1
            - name: NFS_PATH             # the NFS shared directory
              value: /nfsdata
      volumes:                           # the NFS server IP and path mounted into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.1.1
            path: /nfsdata
```
Apply the YAML file and check the Pod:

```shell
[root@master yaml]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created
[root@master yaml]# kubectl get pod -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          13s
```
4. Create the StorageClass resource object (sc):

[root@master yaml]# vim sc.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-nfs
  namespace: lbs-test    # note: StorageClass is cluster-scoped, so this field is effectively ignored
provisioner: lbs-test    # must equal the PROVISIONER_NAME env value in the Deployment
reclaimPolicy: Retain    # reclaim policy
```
Apply the YAML file and check the SC:

```shell
[root@master yaml]# kubectl apply -f sc.yaml
storageclass.storage.k8s.io/sc-nfs created
[root@master yaml]# kubectl get sc -n lbs-test
NAME     PROVISIONER   AGE
sc-nfs   lbs-test      8s
```
5. Create the StatefulSet resource object; it creates the PVCs automatically.

[root@master yaml]# vim statefulset.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: lbs-test
  labels:
    app: headless-svc
spec:
  ports:
    - port: 80
      name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  namespace: lbs-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
        - image: httpd
          name: myhttpd
          ports:
            - containerPort: 80
              name: httpd
          volumeMounts:
            - mountPath: /usr/local/apache2/htdocs   # httpd's document root, so each Pod serves the files on its own PVC
              name: test
  volumeClaimTemplates:            # this field creates the PVCs automatically
    - metadata:
        name: test
        annotations:               # points at the StorageClass; the name must match
          volume.beta.kubernetes.io/storage-class: sc-nfs
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
```
Apply the YAML file and check the Pods:

```shell
[root@master yaml]# kubectl apply -f statefulset.yaml
service/headless-svc created
statefulset.apps/statefulset-test created
[root@master yaml]# kubectl get pod -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          22m
statefulset-test-0                        1/1     Running   0          8m59s
statefulset-test-1                        1/1     Running   0          2m30s
statefulset-test-2                        1/1     Running   0          109s
```
**Check whether the PVs and PVCs were created automatically.**

PV:

```shell
[root@master yaml]# kubectl get pv -n lbs-test
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE
pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-2   sc-nfs                  4m23s
pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-0   sc-nfs                  11m
pvc-99137753-ccd0-4524-bf40-f3576fc97eba   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-1   sc-nfs                  5m4s
```

PVC:

```shell
[root@master yaml]# kubectl get pvc -n lbs-test
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-statefulset-test-0   Bound    pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5   100Mi      RWO            sc-nfs         13m
test-statefulset-test-1   Bound    pvc-99137753-ccd0-4524-bf40-f3576fc97eba   100Mi      RWO            sc-nfs         6m42s
test-statefulset-test-2   Bound    pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5   100Mi      RWO            sc-nfs         6m1s
```
Check whether the persistent directories were created:

```shell
[root@master yaml]# ls /nfsdata/
lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5
lbs-test-test-statefulset-test-1-pvc-99137753-ccd0-4524-bf40-f3576fc97eba
lbs-test-test-statefulset-test-2-pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5
```
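The directory names are not arbitrary: the nfs-client-provisioner creates one directory per volume named `<namespace>-<pvc-name>-<pv-name>`. A quick sketch that rebuilds the first entry in the listing from its three components:

```shell
# Rebuild the provisioner's directory name from namespace, PVC name, and PV name.
ns=lbs-test
pvc=test-statefulset-test-0
pv=pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5
echo "${ns}-${pvc}-${pv}"
```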
6. Create data inside the Pods' volumes and test access.

```shell
[root@master yaml]# cd /nfsdata/
[root@master nfsdata]# echo 111 > lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5/index.html
[root@master nfsdata]# echo 222 > lbs-test-test-statefulset-test-1-pvc-99137753-ccd0-4524-bf40-f3576fc97eba/index.html
[root@master nfsdata]# echo 333 > lbs-test-test-statefulset-test-2-pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5/index.html
[root@master nfsdata]# kubectl get pod -o wide -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          30m     10.244.2.2   node02   <none>           <none>
statefulset-test-0                        1/1     Running   0          17m     10.244.1.2   node01   <none>           <none>
statefulset-test-1                        1/1     Running   0          10m     10.244.2.3   node02   <none>           <none>
statefulset-test-2                        1/1     Running   0          9m57s   10.244.1.3   node01   <none>           <none>
[root@master nfsdata]# curl 10.244.1.2
111
[root@master nfsdata]# curl 10.244.2.3
222
[root@master nfsdata]# curl 10.244.1.3
333
```
7. Delete one of the Pods and check whether its data is **recreated and still present.**

```shell
[root@master ~]# kubectl get pod -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          33m
statefulset-test-0                        1/1     Running   0          20m
statefulset-test-1                        1/1     Running   0          13m
statefulset-test-2                        1/1     Running   0          13m
[root@master ~]# kubectl delete pod -n lbs-test statefulset-test-0
pod "statefulset-test-0" deleted
```

**The Pod is recreated after the deletion:**

```shell
[root@master ~]# kubectl get pod -n lbs-test -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          35m   10.244.2.2   node02   <none>           <none>
statefulset-test-0                        1/1     Running   0          51s   10.244.1.4   node01   <none>           <none>
statefulset-test-1                        1/1     Running   0          15m   10.244.2.3   node02   <none>           <none>
statefulset-test-2                        1/1     Running   0          14m   10.244.1.3   node01   <none>           <none>
```

**The data still exists:**

```shell
[root@master ~]# curl 10.244.1.4
111
[root@master ~]# cat /nfsdata/lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5/index.html
111
```
This completes the data-persistence test of the StatefulSet resource object for stateful services. The test shows that even after a Pod is deleted and rescheduled, the previously persisted data is still accessible.