Kubernetes (k8s) Volume Data Management

Files on a container's disk are ephemeral, which creates two problems. First, when a container crashes, the kubelet restarts it, but the data is lost along with the dead container. Second, when multiple containers run in the same Pod, they often need to share data. Kubernetes Volumes solve both problems.

The four types of Kubernetes volumes covered here:

  1. emptyDir
  2. hostPath
  3. NFS
  4. pv/pvc

https://www.kubernetes.org.cn/kubernetes-volumes


emptyDir

Step 1: write the YAML file

╭─root@node1 ~  
╰─➤  vim nginx-empty.yml

apiVersion: v1
kind: Pod
metadata:
   name: nginx
spec:
   containers:
   - name: nginx
     image: nginx
     imagePullPolicy: IfNotPresent
     volumeMounts:
     - name: du   # must match the volume name below
       mountPath: /usr/share/nginx/html
   volumes:
   - name: du  # must match the volumeMount name above
     emptyDir: {}
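
emptyDir also takes optional fields for the backing medium and a size cap. As a sketch (both fields exist in the Pod API; the values here are illustrative, not from the original experiment):

```yaml
   volumes:
   - name: du
     emptyDir:
       medium: Memory    # back the volume with tmpfs (RAM) instead of node disk
       sizeLimit: 64Mi   # the kubelet evicts the Pod if usage exceeds this
```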

Step 2: apply the YAML file

╭─root@node1 ~  
╰─➤  kubectl apply -f nginx-empty.yml

Step 3: check the Pod

╭─root@node1 ~  
╰─➤  kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          7m18s   10.244.2.14   node3   <none>           <none>

Step 4: inspect the container's details on node3

# first find the container ID: docker ps

╭─root@node3 ~  
╰─➤  docker inspect 9c3ed074fb29 | grep "Mounts" -A 8
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du",
                "Destination": "/usr/share/nginx/html",
                "Mode": "Z",
                "RW": true,
                "Propagation": "rprivate"
            },

Step 5: write content into the volume

╭─root@node3 ~  
╰─➤  cd /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du 
╭─root@node3 /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du  
╰─➤  ls
╭─root@node3 /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du  
╰─➤  echo "empty test" >> index.html

Step 6: access it with curl

╭─root@node1 ~  
╰─➤  curl 10.244.2.14
empty test

Step 7: stop the container

╭─root@node3 ~  
╰─➤  docker stop 9c3ed074fb29
9c3ed074fb29

Step 8: check the newly started container

╭─root@node3 ~  
╰─➤  docker ps
CONTAINER ID        IMAGE                                                COMMAND                  CREATED              STATUS              PORTS               NAMES
14ca410ad737        5a3221f0137b                                         "nginx -g 'daemon of…"   About a minute ago   Up About a minute                       k8s_nginx_nginx_default_2ab6183c-eddd-44eb-9e62-ded5106d1d1a_1

Step 9: check the Pod info and access it again

╭─root@node1 ~  
╰─➤  kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   1          40m   10.244.2.14   node3   <none>           <none>
╭─root@node1 ~  
╰─➤  curl 10.244.2.14       
empty test

Step 10: delete the Pod

╭─root@node1 ~  
╰─➤  kubectl delete pod nginx
pod "nginx" deleted

Step 11: check the emptyDir directory

╭─root@node3 ~  
╰─➤  ls /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du

ls: cannot access /var/lib/kubelet/pods/2ab6183c-eddd-44eb-9e62-ded5106d1d1a/volumes/kubernetes.io~empty-dir/du: No such file or directory

emptyDir experiment summary

  1. The file written on the host was served by nginx, so the directory is shared into the container.
  2. After the original container was stopped, the kubelet started a new one.
  3. The page was still reachable, so the data did not die with the container.
  4. An emptyDir's lifetime is tied to its Pod: it disappears when the Pod is deleted, not when a container restarts.

hostPath

The effect is equivalent to running: docker run -v /tmp:/usr/share/nginx/html

Step 1: write the YAML file

apiVersion: v1
kind: Pod
metadata:
   name: nginx2
spec:
   containers:
   - name: nginx
     image: nginx
     imagePullPolicy: IfNotPresent
     volumeMounts:
     - name: du   # must match the volume name below
       mountPath: /usr/share/nginx/html
   volumes:
   - name: du  # must match the volumeMount name above
     hostPath:
       path: /tmp
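
hostPath also supports an optional `type` field that validates (or creates) the host path before mounting. A sketch of the same volume with it set:

```yaml
   volumes:
   - name: du
     hostPath:
       path: /tmp
       type: DirectoryOrCreate   # create the directory on the node if missing
```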

Step 2: apply the YAML file

╭─root@node1 ~  
╰─➤  kubectl apply -f nginx-hostP.yml         
pod/nginx2 created

Step 3: check the Pod

╭─root@node1 ~  
╰─➤  kubectl get pod -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx2   1/1     Running   0          55s   10.244.2.15   node3   <none>           <none>

Step 4: write a test file on the node hosting the Pod

╭─root@node3 ~  
╰─➤  echo "hostPath test " >> /tmp/index.html

Step 5: access

╭─root@node1 ~  
╰─➤  curl 10.244.2.15
hostPath test

Step 6: delete the Pod

╭─root@node1 ~  
╰─➤  kubectl delete -f nginx-hostP.yml
pod "nginx2" deleted

Step 7: check the test file

╭─root@node3 ~  
╰─➤  cat /tmp/index.html 
hostPath test

hostPath experiment summary

  1. A hostPath volume mounts a file or directory from the host node's filesystem into the Pod.
  2. Note: Pods created from the same template may behave differently on different nodes, because the mounted path belongs to whichever node the Pod is scheduled on, and its contents can differ from node to node.
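
Because the data lives on a single node, hostPath Pods are often pinned to that node so rescheduling cannot separate them from their data. One way is a nodeSelector on the well-known hostname label (a sketch; `node3` is the node used in this walkthrough):

```yaml
spec:
   nodeSelector:
      kubernetes.io/hostname: node3   # schedule only onto node3
   containers:
   - name: nginx
     image: nginx
```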

NFS

Step 1: deploy NFS (install the NFS utilities on every node)

╭─root@node1 ~  
╰─➤  yum install nfs-utils rpcbind -y
╭─root@node1 ~  
╰─➤  cat /etc/exports
/tmp *(rw)
╭─root@node1 ~  
╰─➤  chown -R nfsnobody: /tmp    
╭─root@node1 ~  
╰─➤  systemctl restart nfs rpcbind

----------------------------------------------
╭─root@node2 ~  
╰─➤  yum install nfs-utils -y
----------------------------------------------
╭─root@node3 ~  
╰─➤  yum install nfs-utils -y

Step 2: write the YAML file

apiVersion: v1
kind: Pod
metadata:
   name: nginx2
spec:
   containers:
   - name: nginx
     image: nginx
     imagePullPolicy: IfNotPresent
     volumeMounts:
     - name: du   # must match the volume name below
       mountPath: /usr/share/nginx/html
   volumes:
   - name: du  # must match the volumeMount name above
     nfs:
       path: /tmp
       server: 192.168.137.3
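
The nfs volume source additionally accepts a `readOnly` flag for Pods that should not be able to write to the share. A sketch:

```yaml
   volumes:
   - name: du
     nfs:
       path: /tmp
       server: 192.168.137.3
       readOnly: true   # mount the share read-only inside the Pod
```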

Step 3: apply the YAML file

╭─root@node1 ~  
╰─➤  kubectl apply -f nginx-nfs.yml 
pod/nginx2 created

Step 4: check the Pod info

╭─root@node1 ~  
╰─➤  kubectl get po -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx2   1/1     Running   0          5m46s   10.244.2.16   node3   <none>           <none>

Step 5: write a test file (on node1, the NFS server)

╭─root@node1 ~  
╰─➤  echo "nfs-test" >> /tmp/index.html

Step 6: access

╭─root@node1 ~  
╰─➤  curl 10.244.2.16
nfs-test

pv/pvc

Step 1: deploy NFS (same as in the previous section)

Step 2: write the PV/PVC YAML file

apiVersion: v1
kind: PersistentVolume
metadata:
   name: mypv
spec:
   capacity:
      storage: 1Gi
   accessModes:
   -  ReadWriteMany
   nfs:
      path: /tmp
      server: 192.168.137.3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: mypvc1
spec:
   accessModes:
   -  ReadWriteMany
   volumeName: mypv
   resources:
      requests:
         storage: 1Gi
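
One caveat: if the cluster has a default StorageClass, a claim that leaves `storageClassName` unset may be dynamically provisioned instead of binding to `mypv`. Setting it to the empty string restricts the claim to statically created PVs. A sketch of the PVC spec with this added:

```yaml
spec:
   storageClassName: ""   # "" = bind only to PVs that have no storage class
   volumeName: mypv
   accessModes:
   -  ReadWriteMany
   resources:
      requests:
         storage: 1Gi
```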

There are three access modes:

  • ReadWriteOnce – the volume can be mounted read-write by a single node
  • ReadOnlyMany – the volume can be mounted read-only by many nodes
  • ReadWriteMany – the volume can be mounted read-write by many nodes

There are three reclaim policies:

  • Retain – the volume and its data are kept after the PVC is deleted, pending manual reclamation
  • Recycle – the volume is scrubbed after the PVC is deleted (deprecated)
  • Delete – the underlying storage is deleted along with the PVC
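
The policy is set on the PV through `persistentVolumeReclaimPolicy`; manually created PVs default to Retain, which matches the `kubectl get pv` output in this walkthrough. A sketch of the field added to the PV spec:

```yaml
spec:
   capacity:
      storage: 1Gi
   accessModes:
   -  ReadWriteMany
   persistentVolumeReclaimPolicy: Retain   # explicit; Retain is the static-PV default
   nfs:
      path: /tmp
      server: 192.168.137.3
```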

Step 3: apply the YAML file to create the PV and PVC

╭─root@node1 ~  
╰─➤  vim pv-pvc.yml
╭─root@node1 ~  
╰─➤  kubectl apply -f pv-pvc.yml 
persistentvolume/mypv created
persistentvolumeclaim/mypvc1 created

Step 4: check

╭─root@node1 ~  
╰─➤  kubectl get pv        
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
mypv   1Gi        RWX            Retain           Bound    default/mypvc1                           68s
╭─root@node1 ~  
╰─➤  kubectl get pvc 
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc1   Bound    mypv     1Gi        RWX                           78s

Using the PVC

Step 5: write the nginx Pod YAML file

apiVersion: v1
kind: Pod
metadata:
   name: nginx3
spec:
   containers:
   - name: nginx
     image: nginx
     imagePullPolicy: IfNotPresent
     volumeMounts:
     - name: du   # must match the volume name below
       mountPath: /usr/share/nginx/html
   volumes:
   - name: du  # must match the volumeMount name above
     persistentVolumeClaim:
        claimName: mypvc1

Step 6: apply the YAML file

╭─root@node1 ~  
╰─➤  kubectl apply -f nginx-pv.yml
pod/nginx3 created

Step 7: check the Pod

╭─root@node1 ~  
╰─➤  kubectl get pod -o wide  
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx3   1/1     Running   0          3m45s   10.244.2.17   node3   <none>           <none>

Step 8: write the test file and access it

╭─root@node1 ~  
╰─➤  echo "pv test"  >  /tmp/index.html

╭─root@node1 ~  
╰─➤  curl 10.244.2.17                 
pv test

Deleting a Pod stuck in the Terminating state

Here the Pod depended on NFS; a broken NFS mount left the Pod stuck in the Terminating state after the delete command finished.

In this situation you can force-delete it:

kubectl delete pod [pod name] --force --grace-period=0 -n [namespace]

# Note: pass -n to specify the namespace explicitly, otherwise you may get a "pod not found" error
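
If the force delete still hangs, the Pod is usually held back by a finalizer; clearing it is a last resort. A command sketch, assuming the stuck `nginx3` Pod from this walkthrough:

```
kubectl patch pod nginx3 -n default -p '{"metadata":{"finalizers":null}}'
```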

Demo:

╭─root@node1 ~  
╰─➤  kubectl get pod
NAME        READY   STATUS        RESTARTS   AGE
nginx3      0/1     Terminating   0          7d17h

╭─root@node1 ~  
╰─➤  kubectl delete pod nginx3 --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "nginx3" force deleted
