k8s: Implementing Data Persistence for Stateful Services

Preface

1. What are stateful and stateless services?

For a server program, whether it is a stateful or a stateless service comes down to whether two requests from the same client have a contextual relationship on the server side. For stateful requests, the server generally stores information about earlier requests, and each new request can implicitly build on that earlier information. For stateless requests, everything the server needs must come from the request itself, plus whatever public information the server keeps that is available to all requests.
The best-known stateless server program is the web server. Each HTTP request is independent of the ones before it: it simply fetches the target URI, and once the content is returned the connection is torn down without a trace. Over time, state was layered onto this stateless model, most notably with cookies. When the server responds to a client, it can push down a cookie recording some server-side information; the client carries that cookie on subsequent requests, and the server uses it to reconstruct the request's context. The cookie is a transitional mechanism from stateless to stateful: it maintains context through an external extension.
Stateful servers have a broader range of applications, for example instant-messaging servers such as MSN, or game servers. The server maintains state for every connection, and when a request arrives on a connection it can rebuild the context from locally stored information. The client can then rely on defaults, and the server can manage state easily. For example, once a user logs in, the server can look up previously registered information such as a birthday by username, and it can easily find that user's history in later processing.
Stateful servers are more powerful in terms of functionality, but because they must maintain a large amount of information and state, they perform somewhat worse than stateless servers. Stateless servers excel at simple services but have many drawbacks for complex features; implementing an instant-messaging server on a stateless design, for instance, would be a nightmare.

2. How does data persistence differ between stateful and stateless services in K8s?

In k8s, a stateless service such as a web server can be persisted using the approach from my earlier post, "K8s data persistence: automatically creating PVs". But if you apply that same approach to a stateful service such as a database, you hit a serious problem: on writes, only one of the backend containers actually receives the data. The written data does land in the NFS directory, but the other database instances cannot read it, because databases depend on many instance-specific factors such as server_id and partition-table metadata.

Of course, databases are not the only stateful services that cannot use the persistence approach above.

3. How to implement the persistence: StatefulSet

StatefulSet is also a resource object (before Kubernetes 1.5 it was called PetSet). Like RS, RC, and Deployment, it is a Pod controller.

In Kubernetes, most Pod management is based on a stateless, disposable model. A Replication Controller, for example, simply guarantees the number of Pods available to serve. If a Pod is deemed unhealthy, Kubernetes treats it like cattle: delete it and recreate it. By contrast, a PetSet ("pet" application) is a group of stateful Pods, where each Pod has its own special, immutable ID and its own unique, non-deletable data.

As is well known, managing stateful applications is far harder than managing stateless ones. Stateful applications need fixed IDs, have internal communication logic not visible from the outside, and tolerate heavy container churn poorly. Traditionally, stateful applications were managed with fixed machines, static IPs, and persistent storage. Kubernetes uses the PetSet resource to weaken the coupling between a stateful Pet and the underlying physical infrastructure: a PetSet guarantees that at any moment a fixed number of Pets are running, each with its own unique identity.

A Pet "with an identity" means its Pod has the following properties:

  • Stable storage;
  • A fixed hostname that is addressable via DNS (a stable network identity, implemented through a special kind of Service called a Headless Service. Unlike a normal Service, a Headless Service has no Cluster IP; it gives every member of the cluster a unique DNS name, used for communication between members.);
  • An ordered index (if the PetSet is named mysql, the first Pet to start is mysql-0, the second mysql-1, and so on. When a Pet goes down, its replacement is given the same name as the original, and through that name it is matched back to the original storage, preserving state.)
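The stable network identity means each member is reachable at a predictable DNS name. A minimal sketch of the naming pattern behind a Headless Service (the `mysql` names follow the example above; the `default` namespace is an assumption):

```shell
# Headless Service DNS pattern: <pod>.<service>.<namespace>.svc.cluster.local
pod="mysql-0"
svc="mysql"       # the Headless Service name
ns="default"      # assumed namespace
echo "${pod}.${svc}.${ns}.svc.cluster.local"
```

Because the Pod name is stable across restarts, this DNS name is stable too, which is what cluster members use to find each other.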

1. Example applications:

  • Database applications such as MySQL and PostgreSQL, which need a fixed ID (for data synchronization) plus an attached NFS volume for persistent storage.
  • Clustered software such as ZooKeeper and Etcd, which needs fixed membership.

2. Usage restrictions:

  • New in 1.4; unavailable in 1.3 and earlier;
  • DNS: requires the DNS add-on from 1.4 or later; earlier add-ons can only resolve the IP of a Service, not the domain name of a Pod (hostname);
  • Requires persistent data volumes (PVs). For network storage such as NFS that cannot be provisioned through API calls, the volumes must be created statically before the PetSet. For virtual storage that can be provisioned dynamically through API calls, such as AWS EBS, vSphere, or OpenStack Cinder, volumes can be created either statically or dynamically via a StorageClass. Note that dynamically created PVs default to the Delete reclaim policy, i.e. deleting the data also deletes the underlying virtual volume;
  • Deleting or scaling down a PetSet does not delete the associated persistent volumes, a deliberate choice for data safety;
  • A PetSet can only be upgraded manually.

Configuration example

This approach shares a lot with "K8s data persistence: automatically creating PVs": both need backing NFS storage, an RBAC service account, an nfs-client-provisioner to supply storage, and an SC (StorageClass). The one difference is that for stateful services we do not need to create PVs by hand.

Set up a private registry:

[root@master ~]# docker run -tid --name registry -p 5000:5000 -v /data/registry:/var/lib/registry --restart always registry
[root@master ~]# vim /usr/lib/systemd/system/docker.service   #edit docker's unit file to trust the private registry
ExecStart=/usr/bin/dockerd -H unix:// --insecure-registry 192.168.20.6:5000
[root@master ~]# scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/
[root@master ~]# scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
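The StatefulSet below references `192.168.20.6:5000/ljz:v1`, so the custom image must be tagged with the registry prefix and pushed first. A hedged sketch of that step (that the image was built from `httpd` is an assumption; the docker commands are left as comments since they need the daemon and registry above):

```shell
REGISTRY="192.168.20.6:5000"   # the private registry started above
IMG="${REGISTRY}/ljz:v1"       # the reference later used in the StatefulSet spec
echo "${IMG}"
# docker tag httpd:latest "${IMG}"   # assumed local source image
# docker push "${IMG}"
```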

Set up the NFS service:

[root@master ~]# yum -y install nfs-utils
[root@master ~]# systemctl enable rpcbind
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs
[root@master ~]# showmount -e
Export list for master:
/nfsdata *

Everything above is preparation.

1. Using a custom image, create a StatefulSet resource object in which every replica persists its data. The replica count is 6, and the persisted directory is /usr/local/apache2/htdocs.

Create the RBAC authorization

[root@master ljz]# vim rbac.yaml  #write the yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master ljz]# kubectl apply -f rbac.yaml       #apply the yaml file

Create the nfs-client-provisioner

[root@master ljz]# vim nfs-deploymnet.yaml   #write the yaml file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: ljz
            - name: NFS_SERVER
              value: 192.168.20.6
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.20.6
            path: /nfsdata
[root@master ljz]# kubectl apply -f nfs-deploymnet.yaml      #apply the yaml file

Create the SC (StorageClass)

[root@master ljz]# vim sc.yaml       #write the yaml file

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
provisioner: ljz
reclaimPolicy: Retain
[root@master ljz]# kubectl apply -f sc.yaml         #apply the yaml file

Create the Pods

[root@master ljz]# vim statefulset.yaml        #write the yaml file

apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: testweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 6
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: testhttpd
        image: 192.168.20.6:5000/ljz:v1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: test
          mountPath: /usr/local/apache2/htdocs
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: test-sc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

[root@master ljz]# kubectl apply -f statefulset.yaml 
[root@master ljz]# kubectl get pod -w
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6649749f97-cl92m   1/1     Running   0          7m57s
statefulset-0                             1/1     Running   0          26s
statefulset-1                             1/1     Running   0          24s
statefulset-2                             1/1     Running   0          20s
statefulset-3                             1/1     Running   0          16s
statefulset-4                             1/1     Running   0          13s
statefulset-5                             1/1     Running   0          9s
[root@master ljz]# kubectl get pv,pvc
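For each replica, the `volumeClaimTemplates` section generates a PVC named `<template-name>-<pod-name>`, which is why the claims appear as `test-statefulset-0` through `test-statefulset-5`. A minimal sketch of the naming rule:

```shell
template="test"       # volumeClaimTemplates metadata.name
sts="statefulset"     # StatefulSet name; pod names are <sts>-<ordinal>
for i in 0 1 2 3 4 5; do
  echo "${template}-${sts}-${i}"   # PVC name bound to replica i
done
```

Because the name is derived from the Pod's ordinal, a recreated Pod reattaches to the same PVC, and through it the same PV.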

2. Once that is done, Pods 0 through 5 should each serve a home page reading: Version: --v1.
Then scale the service out: update the replica count to 10 and verify that persistent PVs and PVCs are created for the new Pods as well.

[root@master ljz]# vim a.sh   #script that writes the home pages

#!/bin/bash
for i in `ls /nfsdata`
do
  echo "Version: --v1" > /nfsdata/${i}/index.html
done
[root@master ljz]# kubectl get pod -o wide      #get the Pod IPs and spot-check a few home pages
[root@master ljz]# curl 10.244.1.3
Version: --v1
[root@master ljz]# curl 10.244.1.5
Version: --v1
[root@master ljz]# curl 10.244.2.4
Version: --v1
#scale out and update
[root@master ljz]# vim statefulset.yaml 

apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: testweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  updateStrategy:
    rollingUpdate:
      partition: 4
  serviceName: headless-svc
  replicas: 10
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: testhttpd
        image: 192.168.20.6:5000/ljz:v2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: test
          mountPath: /usr/local/apache2/htdocs
  volumeClaimTemplates:
  - metadata:
      name: test
      annotations:
        volume.beta.kubernetes.io/storage-class: test-sc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
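The `updateStrategy.rollingUpdate.partition: 4` field above is why, in the watch output that follows, only statefulset-5 and statefulset-4 are recreated among the original Pods (the new ordinals 6 to 9 start directly on the new spec): Pods with an ordinal below the partition keep the old spec. A minimal sketch of that rule:

```shell
partition=4
for i in 0 1 2 3 4 5; do
  if [ "$i" -ge "$partition" ]; then
    echo "statefulset-$i: rolled to v2"    # ordinal >= partition: updated
  else
    echo "statefulset-$i: kept on v1"      # ordinal < partition: untouched
  fi
done
```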

[root@master ljz]# kubectl get pod -w #watch the update progress
NAME                                      READY   STATUS             RESTARTS   AGE
nfs-client-provisioner-6649749f97-cl92m   1/1     Running            0          40m
statefulset-0                             1/1     Running            0          33m
statefulset-1                             1/1     Running            0          33m
statefulset-2                             1/1     Running            0          33m
statefulset-3                             1/1     Running            0          33m
statefulset-4                             1/1     Running            0          33m
statefulset-5                             1/1     Running            0          33m
statefulset-6                             0/1     ImagePullBackOff   0          5m9s
statefulset-6                             1/1     Running            0          5m41s
statefulset-7                             0/1     Pending            0          0s
statefulset-7                             0/1     Pending            0          0s
statefulset-7                             0/1     Pending            0          2s
statefulset-7                             0/1     ContainerCreating   0          2s
statefulset-7                             1/1     Running             0          4s
statefulset-8                             0/1     Pending             0          0s
statefulset-8                             0/1     Pending             0          0s
statefulset-8                             0/1     Pending             0          1s
statefulset-8                             0/1     ContainerCreating   0          1s
statefulset-8                             1/1     Running             0          3s
statefulset-9                             0/1     Pending             0          0s
statefulset-9                             0/1     Pending             0          0s
statefulset-9                             0/1     Pending             0          1s
statefulset-9                             0/1     ContainerCreating   0          1s
statefulset-9                             1/1     Running             0          3s
statefulset-5                             1/1     Terminating         0          33m
statefulset-5                             0/1     Terminating         0          33m
statefulset-5                             0/1     Terminating         0          33m
statefulset-5                             0/1     Terminating         0          33m
statefulset-5                             0/1     Pending             0          0s
statefulset-5                             0/1     Pending             0          0s
statefulset-5                             0/1     ContainerCreating   0          0s
statefulset-5                             1/1     Running             0          1s
statefulset-4                             1/1     Terminating         0          33m
statefulset-4                             0/1     Terminating         0          34m
statefulset-4                             0/1     Terminating         0          34m
statefulset-4                             0/1     Terminating         0          34m
statefulset-4                             0/1     Pending             0          0s
statefulset-4                             0/1     Pending             0          0s
statefulset-4                             0/1     ContainerCreating   0          0s
statefulset-4                             1/1     Running             0          1s
[root@master ljz]# kubectl get pv,pvc                #view the PVs and PVCs created for the scaled-out Pods
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/pvc-161fc655-7601-4996-99c8-a13cabaa4ad1   100Mi      RWO            Delete           Bound    default/test-statefulset-0   test-sc                 38m
persistentvolume/pvc-1d1b0cfd-83cc-4cd6-a380-f23b9eac0411   100Mi      RWO            Delete           Bound    default/test-statefulset-4   test-sc                 37m
persistentvolume/pvc-297495a8-2117-4232-8e9a-61019c03f0d0   100Mi      RWO            Delete           Bound    default/test-statefulset-7   test-sc                 3m41s
persistentvolume/pvc-2e48a292-cb30-488e-90a9-5184811b9eb8   100Mi      RWO            Delete           Bound    default/test-statefulset-5   test-sc                 37m
persistentvolume/pvc-407e2c0e-209d-4b5a-a3fa-454787f617a7   100Mi      RWO            Delete           Bound    default/test-statefulset-2   test-sc                 37m
persistentvolume/pvc-56ac09a0-e51d-42a9-843b-f0a3a0c60a08   100Mi      RWO            Delete           Bound    default/test-statefulset-9   test-sc                 3m34s
persistentvolume/pvc-90a05d1b-f555-44df-9bb3-73284001dda3   100Mi      RWO            Delete           Bound    default/test-statefulset-3   test-sc                 37m
persistentvolume/pvc-9e2fd35e-5151-4790-b248-6545815d8c06   100Mi      RWO            Delete           Bound    default/test-statefulset-8   test-sc                 3m37s
persistentvolume/pvc-9f60aab0-4491-4422-9514-1a945151909d   100Mi      RWO            Delete           Bound    default/test-statefulset-6   test-sc                 9m22s
persistentvolume/pvc-ab64f8b8-737e-49a5-ae5d-e3b33188ce39   100Mi      RWO            Delete           Bound    default/test-statefulset-1   test-sc                 37m

NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-statefulset-0   Bound    pvc-161fc655-7601-4996-99c8-a13cabaa4ad1   100Mi      RWO            test-sc        38m
persistentvolumeclaim/test-statefulset-1   Bound    pvc-ab64f8b8-737e-49a5-ae5d-e3b33188ce39   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-2   Bound    pvc-407e2c0e-209d-4b5a-a3fa-454787f617a7   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-3   Bound    pvc-90a05d1b-f555-44df-9bb3-73284001dda3   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-4   Bound    pvc-1d1b0cfd-83cc-4cd6-a380-f23b9eac0411   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-5   Bound    pvc-2e48a292-cb30-488e-90a9-5184811b9eb8   100Mi      RWO            test-sc        37m
persistentvolumeclaim/test-statefulset-6   Bound    pvc-9f60aab0-4491-4422-9514-1a945151909d   100Mi      RWO            test-sc        9m22s
persistentvolumeclaim/test-statefulset-7   Bound    pvc-297495a8-2117-4232-8e9a-61019c03f0d0   100Mi      RWO            test-sc        3m41s
persistentvolumeclaim/test-statefulset-8   Bound    pvc-9e2fd35e-5151-4790-b248-6545815d8c06   100Mi      RWO            test-sc        3m37s
persistentvolumeclaim/test-statefulset-9   Bound    pvc-56ac09a0-e51d-42a9-843b-f0a3a0c60a08   100Mi      RWO            test-sc        3m34s
[root@master nfsdata]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-6649749f97-cl92m   1/1     Running   0          54m   10.244.1.2   node01   <none>           <none>
statefulset-0                             1/1     Running   0          47m   10.244.1.3   node01   <none>           <none>
statefulset-1                             1/1     Running   0          47m   10.244.2.3   node02   <none>           <none>
statefulset-2                             1/1     Running   0          47m   10.244.2.4   node02   <none>           <none>
statefulset-3                             1/1     Running   0          47m   10.244.1.4   node01   <none>           <none>
statefulset-4                             1/1     Running   0          12m   10.244.2.8   node02   <none>           <none>
statefulset-5                             1/1     Running   0          13m   10.244.1.8   node01   <none>           <none>
statefulset-6                             1/1     Running   0          18m   10.244.2.6   node02   <none>           <none>
statefulset-7                             1/1     Running   0          13m   10.244.1.6   node01   <none>           <none>
statefulset-8                             1/1     Running   0          13m   10.244.2.7   node02   <none>           <none>
statefulset-9                             1/1     Running   0          13m   10.244.1.7   node01   <none>           <none>
#check the home pages (statefulset-6 started after a.sh ran, so its directory has no index.html yet and Apache serves its default directory listing)
[root@master nfsdata]# curl 10.244.2.6
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
 <head>
  <title>Index of /</title>
 </head>
 <body>
<h1>Index of /</h1>
<ul></ul>
</body></html>
[root@master nfsdata]# curl 10.244.1.8
Version: --v1

Update the service: during the update, all Pods with an ordinal greater than 3 should be updated to Version: --v2

[root@master ljz]# vim a.sh      #write the new home pages

#!/bin/bash
for i in `ls /nfsdata/`
do
  if [ `echo $i | awk -F - '{print $4}'` -gt 3 ]
  then
    echo "Version: --v2" > /nfsdata/${i}/index.html
  fi
done
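The `awk -F - '{print $4}'` above relies on the provisioner's directory naming, which in this setup looks like `default-test-statefulset-7-pvc-<uid>` (namespace, then PVC name, then the PV name): split on `-`, field 4 is the Pod ordinal. A quick check of that assumption on one of the directory names seen earlier:

```shell
dir="default-test-statefulset-7-pvc-297495a8"   # example directory under /nfsdata
ordinal=$(echo "$dir" | awk -F - '{print $4}')
echo "$ordinal"
```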
[root@master ljz]# sh a.sh        #run the script
[root@master ljz]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-6649749f97-cl92m   1/1     Running   0          68m   10.244.1.2   node01   <none>           <none>
statefulset-0                             1/1     Running   0          60m   10.244.1.3   node01   <none>           <none>
statefulset-1                             1/1     Running   0          60m   10.244.2.3   node02   <none>           <none>
statefulset-2                             1/1     Running   0          60m   10.244.2.4   node02   <none>           <none>
statefulset-3                             1/1     Running   0          60m   10.244.1.4   node01   <none>           <none>
statefulset-4                             1/1     Running   0          26m   10.244.2.8   node02   <none>           <none>
statefulset-5                             1/1     Running   0          26m   10.244.1.8   node01   <none>           <none>
statefulset-6                             1/1     Running   0          32m   10.244.2.6   node02   <none>           <none>
statefulset-7                             1/1     Running   0          26m   10.244.1.6   node01   <none>           <none>
statefulset-8                             1/1     Running   0          26m   10.244.2.7   node02   <none>           <none>
statefulset-9                             1/1     Running   0          26m   10.244.1.7   node01   <none>           <none>
#confirm the content
[root@master ljz]# curl 10.244.1.4
Version: --v1
[root@master ljz]# curl 10.244.2.8
Version: --v2