Pods are stateless by themselves, so stateful applications need to persist their data somewhere.
1: Mount the data onto the host. But after a restart the Pod may be scheduled to a different node, so although the data is not lost, the Pod may no longer be able to find it.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: busybox
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    volumeMounts:
    - mountPath: /busybox-data
      name: data
  volumes:
  - hostPath:
      path: /tmp/data
    name: data
```
2: Mount external storage, such as NFS.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: web01
  template:
    metadata:
      name: nginx
      labels:
        app: web01
    spec:
      containers:
      - name: nginx
        image: reg.docker.tb/harbor/nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          readOnly: false
          name: nginx-data
      volumes:
      - name: nginx-data
        nfs:
          server: 10.0.10.31
          path: "/data/www-data"
```
The approaches above are the simple ones: the storage is defined directly inside the workload manifest. This raises several problems.
1: Access control — any Pod can mount any path.
2: Size limits — there is no way to cap the capacity of a given storage block.
3: If the NFS address changes, every manifest that references it has to be updated.
To solve these problems, Kubernetes introduces the PV/PVC model.
First create a PersistentVolume (PV). A PV does not belong to any namespace, and it can declare capacity and access modes.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  labels:
    app: "my-nfs"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/disk1"
    server: 192.168.20.47
    readOnly: false
```
Then create a PersistentVolumeClaim (PVC) in the target namespace.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      app: "my-nfs"
```
Then run `kubectl apply` to create the PV and PVC.
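A minimal sketch of that step, assuming the PV and PVC manifests above have been saved as `pv.yaml` and `pvc.yaml` (hypothetical filenames) and a cluster is reachable:

```shell
# Create the PersistentVolume and the PersistentVolumeClaim.
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml

# Verify the binding: the PVC's STATUS column should read "Bound",
# and it should reference pv0001 as its volume.
kubectl get pv pv0001
kubectl get pvc nfs-pvc
```

Binding is automatic: the controller matches the claim's requested size, access mode, and label selector against available PVs.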
Finally, reference the PVC in the application:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pvc
  labels:
    name: test-nfs-pvc
spec:
  containers:
  - name: test-nfs-pvc
    image: registry:5000/back_demon:1.0
    ports:
    - name: backdemon
      containerPort: 80
    command:
    - /run.sh
    volumeMounts:
    - name: nfs-vol
      mountPath: /home/laizy/test/nfs-pvc
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-pvc
```
This makes it easy to confine each PVC to its own subdirectory, and if the NFS server is ever migrated, only the address in the PV needs to change.
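To illustrate that last point, here is a sketch of the PV after an NFS migration, assuming a hypothetical new server address of 192.168.20.48. Only the `nfs` section of the PV changes; the PVC and every Pod mounting `nfs-pvc` stay untouched:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  labels:
    app: "my-nfs"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/disk1"
    server: 192.168.20.48   # only this line changed after the migration
    readOnly: false
```

This indirection is the whole point of the PV/PVC split: applications bind to a claim name, not to a storage address.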