k8s storage volumes

To view the volume spec: kubectl explain pods.spec.volumes
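Each volume type has its own sub-fields, which can be explored further with explain, for example:

kubectl explain pods.spec.volumes.emptyDir
kubectl explain pods.spec.volumes.hostPath
kubectl explain pods.spec.volumes.nfs
kubectl explain pods.spec.volumes.persistentVolumeClaim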

1. Simple storage methods

1) Sharing storage between two containers in a pod (the data disappears when the pod is deleted)

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels: 
    app: myapp
    tier: frontend
  annotations:
    magedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /data/web/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  volumes:
  - name: html
    emptyDir: {}
pod-vol-demo.yaml

Create the pod

[root@master volume]# kubectl apply -f pod-vol-demo.yaml 
pod/pod-demo created

Exec into the containers to test

Exec into one of the containers and create some data
[root@master volume]# kubectl exec -it pod-demo  -c busybox -- /bin/sh
/ # 
/ # echo $(date) >> /data/index.html
/ # echo $(date) >> /data/index.html
/ # cat /data/index.html
Sun Jun 9 03:48:49 UTC 2019
Sun Jun 9 03:49:10 UTC 2019
Exec into the second container and view the data
[root@master volume]# kubectl exec -it pod-demo  -c myapp -- /bin/sh
/ # cat /data/web/html/index.html 
Sun Jun 9 03:48:49 UTC 2019
Sun Jun 9 03:49:10 UTC 2019
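Since an emptyDir volume only lives as long as the pod, it can also be backed by memory (tmpfs) and capped in size; a minimal sketch of an alternative volumes section, keeping the same volume name html:

  volumes:
  - name: html
    emptyDir:
      medium: Memory      # back the volume with tmpfs on the node instead of disk
      sizeLimit: 64Mi     # limit how much the volume may consume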

2) Shared storage between two containers: one container writes the data, the other reads it (the data disappears when the pod is deleted)

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels: 
    app: myapp
    tier: frontend
  annotations:
    magedu.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date) >> /data/index.html; sleep 2; done"]
  volumes:
  - name: html
    emptyDir: {}
pod-vol-demo.yaml

Read test

[root@master volume]# kubectl apply -f pod-vol-demo.yaml
pod/pod-demo created
[root@master volume]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod-demo   2/2     Running   0          10s   10.244.2.8   node01   <none>           <none>
[root@master volume]# curl 10.244.2.8
Sun Jun 9 04:09:46 UTC 2019
Sun Jun 9 04:09:48 UTC 2019
Sun Jun 9 04:09:50 UTC 2019
Sun Jun 9 04:09:52 UTC 2019
Sun Jun 9 04:09:54 UTC 2019
Sun Jun 9 04:09:56 UTC 2019
Sun Jun 9 04:09:58 UTC 2019

3) Storing the content on the node machine (hostPath)

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: DirectoryOrCreate
pod-hostpath-vol.yaml
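The type field controls what the kubelet expects on the node: DirectoryOrCreate creates the directory if it is missing, Directory and File require the path to already exist, and FileOrCreate creates an empty file if it is missing (Socket, CharDevice and BlockDevice are also accepted). As a sketch, a single existing file can be mounted read-only like this (the volume name localtime is only for illustration):

    volumeMounts:
    - name: localtime
      mountPath: /etc/localtime
      readOnly: true
  volumes:
  - name: localtime
    hostPath:
      path: /etc/localtime
      type: File          # the file must already exist on the node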

Create the pod

[root@master volume]# kubectl apply -f pod-hostpath-vol.yaml 
pod/pod-vol-hostpath created
[root@master volume]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod-vol-hostpath   1/1     Running   0          51s   10.244.2.9   node01   <none>           <none>

Exec into the container and create some data; the content can then be seen in the volume on the node

[root@master volume]# kubectl exec -it pod-vol-hostpath /bin/sh
/ # cd /usr/share/nginx/html
/usr/share/nginx/html # echo "hello world" >> index.html
[root@master volume]# curl 10.244.2.9
hello world

Log in to the node01 server
[root@node01 ~]# cat /data/pod/volume1/index.html 
hello world

3.1) Test whether the content survives deleting the pod

[root@master volume]# kubectl delete -f pod-hostpath-vol.yaml 
pod "pod-vol-hostpath" deleted
[root@master volume]# kubectl get pods
No resources found.
[root@master volume]# kubectl apply -f pod-hostpath-vol.yaml 
pod/pod-vol-hostpath created
[root@master volume]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-vol-hostpath   1/1     Running   0          39s   10.244.2.10   node01   <none>           <none>
[root@master volume]# curl 10.244.2.10
hello world
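The content survives here because the new pod happened to be scheduled onto node01 again; hostPath data stays on that particular node, so a pod rescheduled elsewhere would see an empty directory. One way to avoid that is to pin the pod to the node, e.g. with a nodeSelector (a sketch; kubernetes.io/hostname is a label the kubelet sets on every node):

spec:
  nodeSelector:
    kubernetes.io/hostname: node01   # keep the pod on node01 where the hostPath data lives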

2. Using NFS as a storage volume

1) First test that the NFS shared storage works

1.1) Edit /etc/hosts on the node machines and the pv machine for hostname resolution

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.5 master
192.168.1.6 node01 n1
192.168.1.7 node02 n2
192.168.1.8 pv01 p1
cat /etc/hosts

1.2) Install NFS on the pv machine and start it

[root@pvz01 ~]# yum install -y nfs-utils
[root@pvz01 ~]# mkdir -pv /data/volumes
mkdir: created directory ‘/data’
mkdir: created directory ‘/data/volumes’
[root@pvz01 ~]# cat /etc/exports
/data/volumes 192.168.1.0/24(rw,no_root_squash)
[root@pvz01 ~]# systemctl start nfs
[root@pvz01 ~]# ss -tnl|grep 2049
LISTEN     0      64           *:2049                     *:*                  
LISTEN     0      64          :::2049                    :::*
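To keep the export available across reboots, the related services can also be enabled (a sketch; service names as on CentOS 7):

[root@pvz01 ~]# systemctl enable rpcbind nfs-server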

The node machines also need nfs-utils installed; test mounting the share

[root@node01 ~]# yum install -y nfs-utils
[root@node01 ~]# mount -t nfs pv01:/data/volumes /mnt
[root@node01 ~]# df -h|grep mnt
pv01:/data/volumes        19G  1.1G   18G   6% /mnt

1.3) The mount works correctly; unmount it

[root@node01 ~]# umount /mnt
[root@node01 ~]# df -h|grep mnt

2) Use Kubernetes so the nodes consume the NFS storage

[root@master ~]# cat /etc/hosts|grep 192.168.1.8
192.168.1.8 pv01 p1 pv01.test.com

2.1) Edit the corresponding YAML file on the master machine

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: pv01.test.com
pod-vol-nfs.yaml
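Note that every node which may run this pod needs nfs-utils installed, because the mount is performed by the kubelet on the node. The nfs volume source also accepts a readOnly flag; a sketch of the volumes section with it set:

  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: pv01.test.com
      readOnly: true      # mount the share read-only inside the pod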

2.2) Create the pod

[root@master nfs_volume]# kubectl apply -f pod-vol-nfs.yaml 
pod/pod-vol-nfs created
[root@master nfs_volume]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-vol-nfs   1/1     Running   0          10s   10.244.2.11   node01   <none>           <none>

2.3) On the pv server, create some content under the exported directory, then test access

[root@pvz01 ~]# echo $(date) >  /data/volumes/index.html 
[root@pvz01 ~]# echo $(date) >>  /data/volumes/index.html 
[root@pvz01 ~]# echo $(date) >>  /data/volumes/index.html 

[root@master nfs_volume]# curl 10.244.2.11
Sun Jun 9 17:42:25 CST 2019
Sun Jun 9 17:42:29 CST 2019
Sun Jun 9 17:42:38 CST 2019
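Because the data now lives on the NFS server rather than on the node, it also survives deleting and recreating the pod; a quick check could look like this (the new pod IP has to be looked up again):

[root@master nfs_volume]# kubectl delete -f pod-vol-nfs.yaml
[root@master nfs_volume]# kubectl apply -f pod-vol-nfs.yaml
[root@master nfs_volume]# kubectl get pods -o wide
[root@master nfs_volume]# curl <new-pod-ip>     # the same index.html content is returned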

3. Using PV and PVC

1) Create the storage directories on the pv machine

[root@pvz01 volumes]# ls
index.html
[root@pvz01 volumes]# mkdir v{1,2,3,4,5}
[root@pvz01 volumes]# ls
index.html  v1  v2  v3  v4  v5
[root@pvz01 volumes]# cat /etc/exports
/data/volumes/v1 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v4 192.168.1.0/24(rw,no_root_squash)
/data/volumes/v5 192.168.1.0/24(rw,no_root_squash)
[root@pvz01 volumes]# exportfs -arv
exporting 192.168.1.0/24:/data/volumes/v5
exporting 192.168.1.0/24:/data/volumes/v4
exporting 192.168.1.0/24:/data/volumes/v3
exporting 192.168.1.0/24:/data/volumes/v2
exporting 192.168.1.0/24:/data/volumes/v1
[root@pvz01 volumes]# showmount -e
Export list for pvz01:
/data/volumes/v5 192.168.1.0/24
/data/volumes/v4 192.168.1.0/24
/data/volumes/v3 192.168.1.0/24
/data/volumes/v2 192.168.1.0/24
/data/volumes/v1 192.168.1.0/24
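Before creating the PVs, the exports can also be verified from a node machine:

[root@node01 ~]# showmount -e pv01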

2) Create PVs bound to the storage directories on the pv machine

[root@master ~]# kubectl explain pv    # PV-related fields

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: pv01.test.com
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: pv01.test.com
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: pv01.test.com
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: pv01.test.com
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: pv01.test.com
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
pv-demo.yaml

Create the usable PVs

[root@master volume]# kubectl apply -f pv-demo.yaml 
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volume]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   2Gi        RWO,RWX        Retain           Available                                   37s
pv002   2Gi        RWO,RWX        Retain           Available                                   37s
pv003   1Gi        RWO,RWX        Retain           Available                                   37s
pv004   5Gi        RWO            Retain           Available                                   37s
pv005   5Gi        RWO,RWX        Retain           Available                                   37s
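The RECLAIM POLICY column shows Retain, the default for manually created PVs: the data is kept after a claim releases the volume. The policy can also be set explicitly per PV with an extra spec field (Delete and the deprecated Recycle are the other accepted values), for example:

spec:
  persistentVolumeReclaimPolicy: Retain   # keep the data on the NFS export when the PVC is deleted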

3) Define a Pod resource that uses a PVC to claim one of the available PVs

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc
pod-vol-pvc.yaml

Create the Pod resource and check the state of the PVs

[root@master volume]# kubectl apply -f pod-vol-pvc.yaml 
persistentvolumeclaim/mypvc unchanged
pod/pod-vol-pvc created
[root@master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
pod-vol-pvc   1/1     Running   0          6m16s
[root@master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   2Gi        RWO,RWX        Retain           Available                                           22m
pv002   2Gi        RWO,RWX        Retain           Available                                           22m
pv003   1Gi        RWO,RWX        Retain           Available                                           22m
pv004   5Gi        RWO            Retain           Available                                           22m
pv005   5Gi        RWO,RWX        Retain           Bound       default/mypvc                           22m
[root@master ~]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv005    5Gi        RWO,RWX                       8m33s

At this point the claim has been bound to pv005: mypvc requested ReadWriteMany access and at least 4Gi of storage, and pv005 (5Gi, RWO/RWX) is the smallest available PV that satisfies both requirements, so the controller selected it.
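The binding and the end-to-end data path can be verified like this (a sketch; the pod IP comes from kubectl get pods -o wide):

[root@master ~]# kubectl describe pvc mypvc                  # Volume: pv005
[root@pvz01 ~]# echo $(date) > /data/volumes/v5/index.html
[root@master ~]# kubectl get pods -o wide
[root@master ~]# curl <pod-ip>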
