1. How k8s Storage Works
The underlying storage layer supports many backends; NAS, cloud disks, and Ceph are among the ones we use most often. On top of this lowest hardware layer, Kubernetes carves out PVs (PersistentVolumes), and workloads request that storage through PVCs (PersistentVolumeClaims). Provisioning comes in two flavors: static, where you define a PV first and then a PVC that binds to it; and dynamic, where creating a PVC triggers automatic provisioning of a matching PV.
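Dynamic provisioning is driven by a StorageClass. A minimal sketch follows; the provisioner string and all names here are placeholders for illustration, not a real driver:

```yaml
# Sketch of dynamic provisioning. "example.com/nfs" and the object
# names are hypothetical; substitute your installed provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage            # hypothetical name
provisioner: example.com/nfs   # placeholder provisioner
reclaimPolicy: Retain
---
# A PVC that requests this class; the PV is created automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic            # hypothetical name
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```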
2. Volume Access Modes
1) RWO, ROX, RWX
ReadWriteOnce (RWO): the volume can be mounted read-write by a single node. ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes. ReadWriteMany (RWX): the volume can be mounted read-write by many nodes.
2) PV reclaim policies
Retain keeps the data and requires manual reclamation; Recycle scrubs the volume for reuse (deprecated in current Kubernetes); Delete removes the backing storage along with the PV.
3) The four phases of a PV's lifecycle
Available: free, not yet bound to any PVC. Bound: bound to a PVC. Released: the bound PVC has been deleted and the resource is released, but the cluster has not yet reclaimed it. Failed: automatic reclamation failed.
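The phase appears in the STATUS column of `kubectl get pv`. A quick way to watch it, assuming kubectl access to the cluster (`pv-test` refers to the PV defined later in this article):

```shell
# List PVs; STATUS reads Available/Bound/Released/Failed.
kubectl get pv
# Inspect one PV in detail, including its claim and reclaim policy.
kubectl describe pv pv-test
```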
3. Directory Mount Types in Kubernetes
0) emptyDir
An emptyDir shares its lifecycle with the Pod that owns it: they live and die together. The emptyDir volume is created, as an automatically allocated empty directory, when the Pod is assigned to a Node; when the Pod is deleted from the Node or migrated, the data in the emptyDir is permanently deleted.
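A minimal Pod using an emptyDir looks like this (the Pod and volume names are illustrative):

```yaml
# Minimal emptyDir example; names and mount path are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-emptydir
spec:
  containers:
  - image: nginx:1.14.2
    name: test-container
    volumeMounts:
    - mountPath: /cache     # scratch space, wiped when the Pod dies
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```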
1) hostPath mode: the directory lives on the Node, so its lifecycle differs from the Pod's
Mount /data inside the nginx container to /datak8s on the host. The YAML is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-hostpath
spec:
  containers:
  - image: nginx:1.14.2
    name: test-container
    volumeMounts:
    - mountPath: /data/
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /datak8s
      type: DirectoryOrCreate
The supported hostPath type values are: DirectoryOrCreate, Directory, FileOrCreate, File, Socket, CharDevice, and BlockDevice (an empty string skips the check).
Create a file under /datak8s on the node.
Delete test-pd-hostpath, modify the YAML file, and apply it again:
kubectl apply -f hostpath.yaml
Check the mounted directory inside the container.
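The round trip can be verified with commands like these (assuming the Pod name from the YAML above and shell access to the node):

```shell
# On the node: create a file in the host directory.
echo 'hello from node' > /datak8s/test.txt
# From the cluster: confirm it is visible inside the container at /data.
kubectl exec test-pd-hostpath -- ls /data
kubectl exec test-pd-hostpath -- cat /data/test.txt
```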
On an older version of one cloud vendor's Kubernetes offering, we used exactly this host/container mapping for log collection: every node uses the same host directory, and logback in the application inside the Pod creates the log files.
4. Using an NFS PVC as Data Storage (in the cloud you can mount cloud storage instead, but cloud disks are limited to RWO)
Create the rbac.yaml that the NFS client provisioner needs in k8s:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# Create the PV and PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/k8s
    server: 10.0.0.181
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
# Create the Deployment YAML and mount the PVC
vim nginx2-pvc-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2-deployment
  labels:
    app: nginx2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx2
        image: nginx:1.14.2
        command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx2-pvc-storageclass
          mountPath: /logs
      volumes:
      - name: nginx2-pvc-storageclass
        persistentVolumeClaim:
          claimName: pvc-test
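The container command appends "Hello" to /logs/hello.txt every 10 seconds. A bounded local dry run of the same pipeline, with /tmp/logs standing in for the PVC-backed /logs mount (an assumption for illustration):

```shell
# Bounded stand-in for the container's infinite loop; /tmp/logs
# substitutes for the PVC-backed /logs mount (illustrative only).
mkdir -p /tmp/logs
rm -f /tmp/logs/hello.txt
for i in 1 2 3; do echo 'Hello'; done | tee -a /tmp/logs/hello.txt
# hello.txt now contains three "Hello" lines
```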
# Deploy the nginx2 Deployment
kubectl apply -f nginx2-pvc-deployment.yaml
# After deleting nginx2, hello.txt still exists
[root@k8s01 storage]# kubectl delete -f nginx2-pvc-deployment.yaml
deployment.apps "nginx2-deployment" deleted
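Persistence can be confirmed on the NFS server itself (10.0.0.181, export /nfs/k8s, per the PV above), since the file lives on the export rather than on any node:

```shell
# On the NFS server: the file written by the Pods survives
# deletion of the Deployment.
ls /nfs/k8s
tail /nfs/k8s/hello.txt
```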