In the previous post we talked about Ingress resources on Kubernetes; for a refresher see: https://www.cnblogs.com/qiuhom-1874/p/14167581.html. Today let's talk about volumes on Kubernetes.
Before discussing volumes on Kubernetes, let's first review volumes in Docker. A Docker image is built in layers, and every layer is read-only, which means its data cannot be modified. Only when an image runs as a container is a writable layer added on top; once that container is deleted, the data in the writable layer is deleted with it. To persist container data, Docker uses volumes. Docker manages volumes in two ways: in the first, the user explicitly mounts a directory on the host (which may itself be a mount point of some storage system) into a directory in the container; this is called a bind mount. In the second, Docker itself maintains the host directory that gets mounted into the container; these are Docker-managed volumes. Either way, a volume directly associates the container with a directory or file on the host. Docker volumes solve the problem of keeping data produced during a container's lifetime after the container terminates. Kubernetes has the same concern, except that it deals with pods. We know a pod is the smallest schedulable unit in Kubernetes: when a pod is deleted, the containers running inside it are deleted too. So how can the data those containers produce be persisted? To answer that, let's first look at how a pod is composed.
Note: a pod on Kubernetes can run one or more containers; when there are several, one is the main container and the others, which assist it, are called sidecars. No matter how many containers a pod runs, at the bottom there is always a pause container, whose main job is to provide the pod's infrastructure: all containers in the same pod share the pause container's network namespace as well as its IPC and UTS namespaces. So to provide a storage volume to a pod's containers, the volume is first attached to the pause container, and the other containers then mount it from there, as shown in the figure below.
Note: as the figure above shows, the pause container can attach storage A or storage B; once it attaches some storage, the other containers in the same pod can mount the directories or files it has attached. Storage is not a built-in Kubernetes component but an external system, which means that for Kubernetes to use an external storage system, the pause container's node must have a driver for that storage system. Since all containers on a host share the host's kernel, if the host kernel carries the driver for a given storage system, pause can use that driver to talk to the corresponding storage.
Volume types
We know that to use a storage volume on Kubernetes, the node must provide the driver for the corresponding storage system, and every pod running on that node can then use it. But how does a pod actually use the storage system, and how are parameters passed to its driver? In Kubernetes everything is an object, so to use a storage volume, the driver is abstracted into a Kubernetes resource; when needed, we simply instantiate that resource as an object. To reduce the complexity of using storage volumes, Kubernetes ships a number of built-in storage interfaces; different storage types use different interfaces and take different parameters. Beyond these, Kubernetes also supports user-defined storage via the CSI interface.
View the volume interfaces supported by Kubernetes
[root@master01 ~]# kubectl explain pod.spec.volumes
KIND:     Pod
VERSION:  v1

RESOURCE: volumes <[]Object>

DESCRIPTION:
     List of volumes that can be mounted by containers belonging to the pod.
     More info: https://kubernetes.io/docs/concepts/storage/volumes

     Volume represents a named volume in a pod that may be accessed by any
     container in the pod.

FIELDS:
   awsElasticBlockStore <Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

   azureDisk <Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount
     to the pod.

   azureFile <Object>
     AzureFile represents an Azure File Service mount on the host and bind
     mount to the pod.

   cephfs <Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's lifetime

   cinder <Object>
     Cinder represents a cinder volume attached and mounted on kubelets host
     machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

   configMap <Object>
     ConfigMap represents a configMap that should populate this volume

   csi <Object>
     CSI (Container Storage Interface) represents ephemeral storage that is
     handled by certain external CSI drivers (Beta feature).

   downwardAPI <Object>
     DownwardAPI represents downward API about the pod that should populate
     this volume

   emptyDir <Object>
     EmptyDir represents a temporary directory that shares a pod's lifetime.
     More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

   ephemeral <Object>
     Ephemeral represents a volume that is handled by a cluster storage driver
     (Alpha feature). The volume's lifecycle is tied to the pod that defines
     it - it will be created before the pod starts, and deleted when the pod
     is removed.

     Use this if: a) the volume is only needed while the pod runs, b) features
     of normal volumes like restoring from snapshot or capacity tracking are
     needed, c) the storage driver is specified through a storage class, and
     d) the storage driver supports dynamic volume provisioning through a
     PersistentVolumeClaim (see EphemeralVolumeSource for more information on
     the connection between this volume type and PersistentVolumeClaim).

     Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes
     that persist for longer than the lifecycle of an individual pod.

     Use CSI for light-weight local ephemeral volumes if the CSI driver is
     meant to be used that way - see the documentation of the driver for more
     information.

     A pod can use both types of ephemeral volumes and persistent volumes at
     the same time.

   fc <Object>
     FC represents a Fibre Channel resource that is attached to a kubelet's
     host machine and then exposed to the pod.

   flexVolume <Object>
     FlexVolume represents a generic volume resource that is
     provisioned/attached using an exec based plugin.

   flocker <Object>
     Flocker represents a Flocker volume attached to a kubelet's host machine.
     This depends on the Flocker control service being running

   gcePersistentDisk <Object>
     GCEPersistentDisk represents a GCE Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

   gitRepo <Object>
     GitRepo represents a git repository at a particular revision. DEPRECATED:
     GitRepo is deprecated. To provision a container with a git repo, mount an
     EmptyDir into an InitContainer that clones the repo using git, then mount
     the EmptyDir into the Pod's container.

   glusterfs <Object>
     Glusterfs represents a Glusterfs mount on the host that shares a pod's
     lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md

   hostPath <Object>
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

   iscsi <Object>
     ISCSI represents an ISCSI Disk resource that is attached to a kubelet's
     host machine and then exposed to the pod. More info:
     https://examples.k8s.io/volumes/iscsi/README.md

   name <string> -required-
     Volume's name. Must be a DNS_LABEL and unique within the pod. More info:
     https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

   nfs <Object>
     NFS represents an NFS mount on the host that shares a pod's lifetime More
     info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

   persistentVolumeClaim <Object>
     PersistentVolumeClaimVolumeSource represents a reference to a
     PersistentVolumeClaim in the same namespace. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

   photonPersistentDisk <Object>
     PhotonPersistentDisk represents a PhotonController persistent disk
     attached and mounted on kubelets host machine

   portworxVolume <Object>
     PortworxVolume represents a portworx volume attached and mounted on
     kubelets host machine

   projected <Object>
     Items for all in one resources secrets, configmaps, and downward API

   quobyte <Object>
     Quobyte represents a Quobyte mount on the host that shares a pod's
     lifetime

   rbd <Object>
     RBD represents a Rados Block Device mount on the host that shares a pod's
     lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

   scaleIO <Object>
     ScaleIO represents a ScaleIO persistent volume attached and mounted on
     Kubernetes nodes.

   secret <Object>
     Secret represents a secret that should populate this volume. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#secret

   storageos <Object>
     StorageOS represents a StorageOS volume attached and mounted on
     Kubernetes nodes.

   vsphereVolume <Object>
     VsphereVolume represents a vSphere volume attached and mounted on
     kubelets host machine
[root@master01 ~]#
Note: from the help output above you can see Kubernetes supports quite a few storage interfaces, each a distinct type. They can be roughly grouped into cloud storage, distributed storage, network storage, ephemeral storage, node-local storage, special-purpose storage, user-defined storage and so on. For example, awsElasticBlockStore, azureDisk, azureFile, gcePersistentDisk, vsphereVolume and cinder count as cloud storage; cephfs, glusterfs and rbd as distributed storage; nfs, iscsi and fc as network storage; emptyDir as ephemeral storage; hostPath and local as node-local storage; csi as user-defined storage; configMap, secret and downwardAPI as special-purpose storage; and persistentVolumeClaim as the persistent volume claim.
Using volumes
Example: create a pod that uses a hostPath volume
[root@master01 ~]# cat hostPath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    hostPath:
      path: /vol/html/
      type: DirectoryOrCreate
[root@master01 ~]#
Note: this manifest creates a pod named vol-hostpath-demo running an nginx container from the nginx:1.14-alpine image, and defines one volume, named webhtml, of type hostPath. Volumes are declared with the volumes field under spec, whose value is a list of objects. name is required: it names the volume so that containers can reference it when mounting. Next comes a field named after the volume type, which selects the corresponding storage interface; hostPath selects the hostPath interface. This type takes two parameters: path specifies a directory or file path on the host, and type says what to do when that path does not exist. type has seven named values: DirectoryOrCreate means path must be a directory, created if missing; Directory means path must be an existing directory; FileOrCreate means path must be a file, created if missing; File means path must be an existing file; Socket means path must be an existing UNIX socket; CharDevice means path must be an existing character device; BlockDevice means path must be an existing block device (an empty string, the default, performs no check). Defining volumes attaches the external storage to the pod's pause container; whether and how the pod's other containers use it depends on whether volumeMounts is defined. spec.containers.volumeMounts configures a container's volume mounts: name and mountPath are required, name referencing the volume by its name and mountPath giving the mount point inside the container; readOnly says whether the mount is read-only and defaults to false, i.e. read-write.
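For instance (a sketch only; the file path here is hypothetical), a hostPath volume can also map a single host file rather than a directory, using the File/FileOrCreate types:

```yaml
# Hypothetical fragment of a pod spec: expose one file from the node.
volumes:
- name: nginx-conf
  hostPath:
    path: /etc/nginx/conf.d/demo.conf   # hypothetical file on the node
    type: FileOrCreate                  # must be a file; created empty if absent
```

A container mounting this volume would point its mountPath at a file path, not a directory.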
Apply the manifest
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
vol-hostpath-demo        1/1     Running   0          11s
[root@master01 ~]# kubectl describe pod/vol-hostpath-demo
Name:         vol-hostpath-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Wed, 23 Dec 2020 23:14:35 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.92
IPs:
  IP:  10.244.3.92
Containers:
  nginx:
    Container ID:   docker://eb8666714b8697457ce2a88271a4615f836873b4729b6a0938776e3d527c6536
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 23 Dec 2020 23:14:37 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from webhtml (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhtml:
    Type:          HostPath (bare host directory volume)
    Path:          /vol/html/
    HostPathType:  DirectoryOrCreate
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  43s   default-scheduler  Successfully assigned default/vol-hostpath-demo to node03.k8s.org
  Normal  Pulled     42s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    41s   kubelet            Created container nginx
  Normal  Started    41s   kubelet            Started container nginx
[root@master01 ~]#
Note: you can see the pod mounts the webhtml volume read-only; the webhtml volume's type is HostPath and its path is /vol/html/.
Check which node the pod is running on
[root@master01 ~]# kubectl get pod vol-hostpath-demo -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
vol-hostpath-demo   1/1     Running   0          3m39s   10.244.3.92   node03.k8s.org   <none>           <none>
[root@master01 ~]#
On node03, check whether the corresponding directory was created
[root@node03 ~]# ll /
total 16
lrwxrwxrwx.   1 root root    7 Sep 15 20:33 bin -> usr/bin
dr-xr-xr-x.   5 root root 4096 Sep 15 20:39 boot
drwxr-xr-x   20 root root 3180 Dec 23 23:10 dev
drwxr-xr-x.  80 root root 8192 Dec 23 23:10 etc
drwxr-xr-x.   2 root root    6 Nov  5  2016 home
lrwxrwxrwx.   1 root root    7 Sep 15 20:33 lib -> usr/lib
lrwxrwxrwx.   1 root root    9 Sep 15 20:33 lib64 -> usr/lib64
drwxr-xr-x.   2 root root    6 Nov  5  2016 media
drwxr-xr-x.   2 root root    6 Nov  5  2016 mnt
drwxr-xr-x.   4 root root   35 Dec  8 14:25 opt
dr-xr-xr-x  141 root root    0 Dec 23 23:09 proc
dr-xr-x---.   4 root root  213 Dec 21 22:46 root
drwxr-xr-x   26 root root  780 Dec 23 23:13 run
lrwxrwxrwx.   1 root root    8 Sep 15 20:33 sbin -> usr/sbin
drwxr-xr-x.   2 root root    6 Nov  5  2016 srv
dr-xr-xr-x   13 root root    0 Dec 23 23:09 sys
drwxrwxrwt.   9 root root  251 Dec 23 23:11 tmp
drwxr-xr-x.  13 root root  155 Sep 15 20:33 usr
drwxr-xr-x.  19 root root  267 Sep 15 20:38 var
drwxr-xr-x    3 root root   18 Dec 23 23:14 vol
[root@node03 ~]# ll /vol
total 0
drwxr-xr-x 2 root root 6 Dec 23 23:14 html
[root@node03 ~]# ll /vol/html/
total 0
[root@node03 ~]#
Note: the /vol/html/ directory has been created on the node, and it contains no files yet.
Create a page file in that directory on the node, then access the pod to see whether the page can be served
[root@node03 ~]# echo "this is test page from node03 /vol/html/test.html" > /vol/html/test.html
[root@node03 ~]# cat /vol/html/test.html
this is test page from node03 /vol/html/test.html
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h     10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h     10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          7m45s   10.244.3.92    node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.92/test.html
this is test page from node03 /vol/html/test.html
[root@master01 ~]#
Note: after creating the page file on the node, it can be served normally through the pod.
Test: delete the pod and check whether the directory on the node is removed
[root@master01 ~]# kubectl delete -f hostPath-demo.yaml
pod "vol-hostpath-demo" deleted
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
[root@master01 ~]# ssh node03
Last login: Wed Dec 23 23:18:51 2020 from master01
[root@node03 ~]# ll /vol/html/
total 4
-rw-r--r-- 1 root root 50 Dec 23 23:22 test.html
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@master01 ~]#
Note: after deleting the pod, the directory on the node is not removed; the page file is still intact.
Test: re-apply the manifest and access the pod again to see whether the page content is still served
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          7s    10.244.3.93    node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.93/test.html
this is test page from node03 /vol/html/test.html
[root@master01 ~]#
Note: the pod was scheduled onto node03 again, and accessing it returns the page we created. But what if we explicitly run this pod on node02: would it still be able to serve the page file?
Test: bind the pod to run on node02.k8s.org
[root@master01 ~]# cat hostPath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-demo
  namespace: default
spec:
  nodeName: node02.k8s.org
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    hostPath:
      path: /vol/html/
      type: DirectoryOrCreate
[root@master01 ~]#
Note: to bind a pod to a particular node, specify that node's hostname with the nodeName field under spec.
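As an aside (a sketch only; the disk=ssd label is hypothetical and would have to be applied to the node first), the same pinning can be expressed more flexibly with nodeSelector, which matches node labels instead of hard-coding a hostname:

```yaml
# Hypothetical fragment of a pod spec: schedule onto any node
# labeled disk=ssd rather than onto one fixed node.
spec:
  nodeSelector:
    disk: ssd
```

nodeName bypasses the scheduler entirely, while nodeSelector still lets the scheduler pick among all matching nodes.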
Delete the old pod and apply the new manifest
[root@master01 ~]# kubectl delete pod/vol-hostpath-demo
pod "vol-hostpath-demo" deleted
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          8s    10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]#
Note: after applying the new manifest, the pod now runs on node02.
Access the pod and see whether test.html can still be reached
[root@master01 ~]# curl 10.244.2.100/test.html
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
[root@master01 ~]#
Note: now the page file can no longer be reached through the pod. The reason is simple: a hostPath volume maps a directory or file on the pod's own node into the pause container, which the pod's containers then mount, so this volume type cannot cross nodes. A pod created on node02 naturally cannot see a file that exists only on node03. Therefore, to use hostPath volumes we must either pin the pod to a node, or else create the same files or directories on every node in the cluster.
Example: create a pod that uses an emptyDir volume
[root@master01 ~]# cat emptyDir-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-emptydir-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: web-cache-dir
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: alpine
    image: alpine
    volumeMounts:
    - name: web-cache-dir
      mountPath: /nginx/html
    command: ["/bin/sh", "-c"]
    args:
    - while true; do echo $(hostname) $(date) >> /nginx/html/index.html; sleep 10; done
  volumes:
  - name: web-cache-dir
    emptyDir:
      medium: Memory
      sizeLimit: "10Mi"
[root@master01 ~]#
Note: this manifest runs a pod named vol-emptydir-demo with two containers, one named nginx and one named alpine; both mount the same volume, web-cache-dir, whose type is emptyDir, as shown in the figure below. To define an emptyDir volume, under spec.volumes we use name to name the volume and emptyDir to select its type. emptyDir has two properties: medium selects the storage medium, where Memory means back the volume with RAM and the default value "" means use the node's default medium; sizeLimit caps the volume's size and defaults to empty, meaning no limit.
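By contrast (a minimal sketch of the default behavior described above), an emptyDir backed by the node's default medium simply omits both properties:

```yaml
# Fragment of a pod spec: a disk-backed, uncapped scratch volume.
# With no medium the data lives on the node's default storage,
# and with no sizeLimit its size is unbounded.
volumes:
- name: web-cache-dir
  emptyDir: {}
```

Either way, the volume is created empty when the pod starts and is removed together with the pod.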
Note: as the figure above shows, the pod has two containers. The alpine container appends the hostname and current time to /nginx/html/index.html every 10 seconds, while the nginx container mounts the same emptyDir volume at its local web root. Put simply: alpine writes data into /nginx/html/index.html, and nginx mounts that same file under its web directory and serves it.
Apply the manifest
[root@master01 ~]# kubectl apply -f emptyDir-demo.yaml
pod/vol-emptydir-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS              RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running             1          2d    10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running             1          2d    10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        0/2     ContainerCreating   0          8s    <none>         node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running             0          72m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d    10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d    10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        2/2     Running   0          16s   10.244.3.94    node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          72m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod vol-emptydir-demo
Name:         vol-emptydir-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Thu, 24 Dec 2020 00:46:56 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.94
IPs:
  IP:  10.244.3.94
Containers:
  nginx:
    Container ID:   docker://58af9ef80800fb22543d1c80be58849f45f3d62f3b44101dbca024e0761cead5
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 24 Dec 2020 00:46:57 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from web-cache-dir (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
  alpine:
    Container ID:  docker://327f110a10e8ef9edb5f86b5cb3dad53e824010b52b1c2a71d5dbecab6f49f05
    Image:         alpine
    Image ID:      docker-pullable://alpine@sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
    Args:
      while true; do echo $(hostname) $(date) >> /nginx/html/index.html; sleep 10; done
    State:          Running
      Started:      Thu, 24 Dec 2020 00:47:07 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /nginx/html from web-cache-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  web-cache-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  10Mi
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  51s   default-scheduler  Successfully assigned default/vol-emptydir-demo to node03.k8s.org
  Normal  Pulled     51s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    51s   kubelet            Created container nginx
  Normal  Started    50s   kubelet            Started container nginx
  Normal  Pulling    50s   kubelet            Pulling image "alpine"
  Normal  Pulled     40s   kubelet            Successfully pulled image "alpine" in 10.163157508s
  Normal  Created    40s   kubelet            Created container alpine
  Normal  Started    40s   kubelet            Started container alpine
[root@master01 ~]#
Note: the pod is running normally with its two containers; nginx mounts the web-cache-dir volume read-only, alpine mounts it read-write, and the volume's type is emptyDir.
Access the pod and see whether the index.html content in the volume is served
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d      10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d      10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        2/2     Running   0          4m38s   10.244.3.94    node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          77m     10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.94
vol-emptydir-demo Wed Dec 23 16:47:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:47 UTC 2020
[root@master01 ~]#
Note: the index.html content is served, and it is being generated dynamically by the alpine container. This example makes it easy to see that containers inside the same pod can share the same volume.
Example: create a pod that uses an nfs volume
[root@master01 ~]# cat nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    nfs:
      path: /data/html/
      server: 192.168.0.99
[root@master01 ~]#
Note: when defining an nfs volume, the path field under spec.volumes.nfs is required and specifies the exported path on the NFS file system; the server field gives the NFS server's address. Before using NFS as the pod's backing store, we first have to prepare an NFS server and export the corresponding directory.
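For reference (a sketch reusing the server and path from this example), the nfs volume source also accepts a readOnly field, which forces a read-only NFS mount for every container using the volume, regardless of each container's volumeMounts setting:

```yaml
# Fragment of a pod spec: mount the NFS export read-only at the
# volume level, not just at the per-container mount level.
volumes:
- name: webhtml
  nfs:
    server: 192.168.0.99
    path: /data/html/
    readOnly: true
```

Setting readOnly here is a useful safety net when several pods share one export and none of them should write to it.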
Prepare the NFS server: install the nfs-utils package on the 192.168.0.99 server
[root@docker_registry ~]# ip a|grep 192.168.0.99
    inet 192.168.0.99/24 brd 192.168.0.255 scope global enp3s0
[root@docker_registry ~]# yum install nfs-utils -y
Loaded plugins: fastestmirror, langpacks
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
base                                                        | 3.6 kB  00:00:00
docker-ce-stable                                            | 3.5 kB  00:00:00
epel                                                        | 4.7 kB  00:00:00
extras                                                      | 2.9 kB  00:00:00
kubernetes/signature                                        |  844 B  00:00:00
kubernetes/signature                                        | 1.4 kB  00:00:00 !!!
mariadb-main                                                | 2.9 kB  00:00:00
mariadb-maxscale                                            | 2.4 kB  00:00:00
mariadb-tools                                               | 2.9 kB  00:00:00
mongodb-org                                                 | 2.5 kB  00:00:00
proxysql_repo                                               | 2.9 kB  00:00:00
updates                                                     | 2.9 kB  00:00:00
(1/6): docker-ce-stable/x86_64/primary_db                   |  51 kB  00:00:00
(2/6): kubernetes/primary                                   |  83 kB  00:00:01
(3/6): mongodb-org/primary_db                               |  26 kB  00:00:01
(4/6): epel/x86_64/updateinfo                               | 1.0 MB  00:00:02
(5/6): updates/7/x86_64/primary_db                          | 4.7 MB  00:00:01
(6/6): epel/x86_64/primary_db                               | 6.9 MB  00:00:02
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
kubernetes                                                               612/612
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.3.0-0.66.el7_8 will be updated
---> Package nfs-utils.x86_64 1:1.3.0-0.68.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================
 Package                        Arch                        Version                                 Repository                 Size
=============================================================================================================================================
Updating:
 nfs-utils                      x86_64                      1:1.3.0-0.68.el7                        base                      412 k

Transaction Summary
=============================================================================================================================================
Upgrade  1 Package

Total download size: 412 k
Downloading packages:
No Presto metadata available for base
nfs-utils-1.3.0-0.68.el7.x86_64.rpm                         | 412 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : 1:nfs-utils-1.3.0-0.68.el7.x86_64                        1/2
  Cleanup    : 1:nfs-utils-1.3.0-0.66.el7_8.x86_64                      2/2
  Verifying  : 1:nfs-utils-1.3.0-0.68.el7.x86_64                        1/2
  Verifying  : 1:nfs-utils-1.3.0-0.66.el7_8.x86_64                      2/2

Updated:
  nfs-utils.x86_64 1:1.3.0-0.68.el7

Complete!
[root@docker_registry ~]#
Create the /data/html directory
[root@docker_registry ~]# mkdir /data/html -pv
mkdir: created directory ‘/data/html’
[root@docker_registry ~]#
Export the directory so the k8s cluster nodes can access it
[root@docker_registry ~]# cat /etc/exports
/data/html 192.168.0.0/24(rw,no_root_squash)
[root@docker_registry ~]#
Note: this configuration exports the /data/html directory read-write, without squashing root privileges, to every host in the 192.168.0.0/24 network.
Start NFS
[root@docker_registry ~]# systemctl start nfs
[root@docker_registry ~]# ss -tnl
State      Recv-Q Send-Q     Local Address:Port        Peer Address:Port
LISTEN     0      128            127.0.0.1:1514                   *:*
LISTEN     0      128                    *:111                    *:*
LISTEN     0      128                    *:20048                  *:*
LISTEN     0      64                     *:42837                  *:*
LISTEN     0      5          192.168.122.1:53                     *:*
LISTEN     0      128                    *:22                     *:*
LISTEN     0      128         192.168.0.99:631                    *:*
LISTEN     0      100            127.0.0.1:25                     *:*
LISTEN     0      64                     *:2049                   *:*
LISTEN     0      128                    *:59396                  *:*
LISTEN     0      128                   :::34922                 :::*
LISTEN     0      128                   :::111                   :::*
LISTEN     0      128                   :::20048                 :::*
LISTEN     0      128                   :::80                    :::*
LISTEN     0      128                   :::22                    :::*
LISTEN     0      100                  ::1:25                    :::*
LISTEN     0      128                   :::443                   :::*
LISTEN     0      128                   :::4443                  :::*
LISTEN     0      64                    :::2049                  :::*
LISTEN     0      64                    :::36997                 :::*
[root@docker_registry ~]#
Note: NFS listens on TCP port 2049; after starting the service, make sure that port is in the listening state. With that, the NFS server is ready.
Install the nfs-utils package on the k8s nodes, providing the client support they need to mount NFS
yum install nfs-utils -y
Verify: on node01, check whether the directory exported by the NFS server can be mounted
[root@node01 ~]# showmount -e 192.168.0.99
Export list for 192.168.0.99:
/data/html 192.168.0.0/24
[root@node01 ~]# mount -t nfs 192.168.0.99:/data/html /mnt
[root@node01 ~]# mount |grep /data/html
192.168.0.99:/data/html on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.44,local_lock=none,addr=192.168.0.99)
[root@node01 ~]# umount /mnt
[root@node01 ~]# mount |grep /data/html
[root@node01 ~]#
Note: node01 can see the directory exported by the NFS server and can mount and use it normally. Once the remaining nodes have finished installing nfs-utils, we can apply the manifest from the master.
Apply the manifest
[root@master01 ~]# kubectl apply -f nfs-demo.yaml
pod/vol-nfs-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d1h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d1h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          141m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          10s    10.244.3.101   node03.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod vol-nfs-demo
Name:         vol-nfs-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Thu, 24 Dec 2020 01:55:51 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.101
IPs:
  IP:  10.244.3.101
Containers:
  nginx:
    Container ID:   docker://72227e3a94622a4ea032a1ab0d7d353aef167d5a0e80c3739e774050eaea3914
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 24 Dec 2020 01:55:52 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from webhtml (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhtml:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.0.99
    Path:      /data/html/
    ReadOnly:  false
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  28s   default-scheduler  Successfully assigned default/vol-nfs-demo to node03.k8s.org
  Normal  Pulled     27s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    27s   kubelet            Created container nginx
  Normal  Started    27s   kubelet            Started container nginx
[root@master01 ~]#
Note: the pod is running normally, and its container has mounted the exported directory.
On the NFS server, create an index.html file in the exported directory
[root@docker_registry ~]# cd /data/html
[root@docker_registry html]# echo "this is test file from nfs server ip addr is 192.168.0.99" > index.html
[root@docker_registry html]# cat index.html
this is test file from nfs server ip addr is 192.168.0.99
[root@docker_registry html]#
Access the pod and see whether the file content is served
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          145m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          4m6s   10.244.3.101   node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.101
this is test file from nfs server ip addr is 192.168.0.99
[root@master01 ~]#
Note: the file content is reachable through the pod.
Delete the pod
[root@master01 ~]# kubectl delete -f nfs-demo.yaml
pod "vol-nfs-demo" deleted
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          149m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]#
Bind the pod to node02.k8s.org, re-apply the manifest to create the pod, then access it again: is the file still served?
[root@master01 ~]# cat nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs-demo
  namespace: default
spec:
  nodeName: node02.k8s.org
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    nfs:
      path: /data/html/
      server: 192.168.0.99
[root@master01 ~]# kubectl apply -f nfs-demo.yaml
pod/vol-nfs-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          151m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          8s     10.244.2.101   node02.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.2.101
this is test file from nfs server ip addr is 192.168.0.99
[root@master01 ~]#
Note: with the pod bound to node02, the file on the NFS server is still served. The tests above show that an nfs volume outlives the pod's lifecycle and persists the data produced by the pod's containers to the NFS server across nodes. Of course, NFS here is a single point of failure: if the NFS server crashes, the data the pods produce at runtime is all at risk. So for external storage we should choose a system that both provides data redundancy and is supported by the Kubernetes cluster, such as cephfs or glusterfs.
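As a pointer in that direction (a sketch only: the monitor addresses, CephFS path and Secret name are hypothetical, and the nodes would need Ceph client support), a cephfs pod volume is declared much like nfs:

```yaml
# Hypothetical fragment of a pod spec: mount a CephFS subtree.
volumes:
- name: webhtml
  cephfs:
    monitors:                      # hypothetical Ceph monitor addresses
    - 192.168.0.201:6789
    - 192.168.0.202:6789
    path: /html                    # subpath inside the CephFS tree
    user: admin
    secretRef:
      name: ceph-client-secret     # hypothetical Secret holding the Ceph key
    readOnly: false
```

Because Ceph replicates data across multiple monitors and OSDs, losing one storage node does not take the pods' data with it, which is exactly the redundancy NFS alone lacks.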