Files written inside a container are ephemeral: if the process in the container crashes, the kubelet restarts the container and the files in it are lost, so persistent storage is a critical piece for running stateful applications in containers. In addition, a Pod often contains more than one container, and those containers may need to share data; Kubernetes can also scale a Pod's replicas and move a Pod to another node on failure. So that a Pod can reach the same persistent storage no matter which node it lands on, and so that containers can share data, Kubernetes defines the Volume abstraction. In essence, a Volume is just a directory, possibly containing some data, that the containers in a Pod can access. What form that directory takes is determined by the Volume type being used.
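For instance, two containers in the same Pod can share data through a single Volume. A minimal sketch using an emptyDir (the container names, images, and commands are illustrative, not from the original article):

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]   # writes into the shared directory
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]    # can read /data/msg written by the other container
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}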
Kubernetes supports many Volume types (https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes).
If you run on a public cloud, you can choose from types such as the following, depending on what your cloud vendor offers:
awsElasticBlockStore
azureDisk
azureFile
The following sections introduce some commonly used types.
emptyDir
Generally used for temporary files, such as stream files generated at runtime when handling image uploads. All containers in the Pod can read and write the volume, but if the Pod is removed the data is deleted; a container crash does not remove the Pod, so it does not cause data loss. To use a Volume, define its type under .spec.volumes in the Pod, then define the mount with .spec.containers.volumeMounts.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
hostPath
A hostPath Volume mounts a directory or file from the host node into the Pod, letting containers use the host's fast local filesystem for storage. The drawback is that Pods are scheduled dynamically across nodes: after a Pod starts on one node and writes files locally through hostPath, it cannot access those files if it is later scheduled onto a different node.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: test-container
    name: test-name
    volumeMounts:
    - name: test-volume
      mountPath: /cache
  volumes:
  - name: test-volume
    hostPath:
      path: /data
nfs
The hostPath and emptyDir Volumes used above may be cleaned up by the kubelet and cannot be "migrated" to other nodes, so they are not truly persistent. To use NFS (Network File System), an NFS service must already be set up and is then shared into the Pod. Unlike an emptyDir, whose contents are deleted when the Pod is removed, the contents of an NFS volume are preserved and can be reused the next time the Pod runs. It suits scenarios without demanding I/O or network requirements.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  flexVolume:
    driver: "k8s/nfs"
    fsType: "nfs"
    options:
      server: "192.168.10.100"   # NFS server address
      path: "/"
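Kubernetes also ships a built-in nfs volume type that mounts an export directly into a Pod without a FlexVolume driver. A minimal sketch, assuming the same NFS server at 192.168.10.100 exporting /:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: nfs-volume
      mountPath: /usr/share/nginx/html   # serve files straight from the NFS export
  volumes:
  - name: nfs-volume
    nfs:
      server: 192.168.10.100   # assumed NFS server, matching the PV example above
      path: /
      readOnly: false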
cephfs
CephFS belongs to Ceph, a distributed storage system born in 2004 as a project aiming to build the next generation of high-performance distributed file systems. The storage cluster must likewise be set up in advance; alternatively you can use Rook (which supports Ceph), currently a CNCF incubating project. Note that the StorageClass below uses the Ceph RBD provisioner rather than CephFS.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
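For CephFS itself, the in-tree cephfs volume type can mount the file system directly into a Pod. A minimal sketch, assuming a monitor at 10.16.153.105:6789 and a secret named ceph-secret holding the client key (both assumptions taken from the StorageClass above):

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cephfs-volume
      mountPath: /mnt/cephfs
  volumes:
  - name: cephfs-volume
    cephfs:
      monitors:
      - 10.16.153.105:6789   # assumed Ceph monitor address
      user: admin
      secretRef:
        name: ceph-secret    # assumed secret containing the Ceph client key
      readOnly: false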
glusterfs
GlusterFS is an open-source distributed file system with strong scale-out capability; by scaling out it can support several PB of storage capacity and thousands of clients.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
Prepare three servers
172.23.216.48 gfs_node_1
172.23.216.49 gfs_node_2
172.23.216.50 gfs_node_3
Install GlusterFS
$ yum install centos-release-gluster
$ yum install glusterfs-server
$ systemctl start glusterd.service
$ systemctl enable glusterd.service
$ systemctl status glusterd.service
Create the storage directory
$ mkdir /opt/gfs_data
Add the peer nodes
$ gluster peer probe node2
$ gluster peer probe node3
Check the peers
$ gluster peer status
Number of Peers: 2

Hostname: 172.23.216.49
Uuid: 4dcfad42-e327-4a79-8a5a-a55dc92982ba
State: Peer in Cluster (Connected)

Hostname: 172.23.216.50
Uuid: 84e90bcf-af22-4cac-a6b1-e3e0d87d7eb4
State: Peer in Cluster (Connected)
Create the data volume (the distributed mode used here is for testing only; do not use it in production)
# Replicated mode
$ gluster volume create k8s-volume replica 3 transport tcp gfs_node_1:/opt/gfs_data gfs_node_2:/opt/gfs_data gfs_node_3:/opt/gfs_data force

# Distributed mode (default)
$ gluster volume create k8s-volume transport tcp 172.23.216.48:/opt/gfs_data 172.23.216.49:/opt/gfs_data 172.23.216.50:/opt/gfs_data force
Note: for other volume modes, see "Installing GlusterFS on CentOS 7".
Start the data volume
$ gluster volume start k8s-volume
$ gluster volume info
Volume Name: k8s-volume
Type: Distribute
Volume ID: 1203a7ab-45c5-49f0-a920-cbbe8968fefa
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 172.23.216.48:/opt/gfs_data
Brick2: 172.23.216.49:/opt/gfs_data
Brick3: 172.23.216.50:/opt/gfs_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Related commands
# Add/remove a server node in the storage pool
$ gluster peer probe <server>
$ gluster peer detach <server>
$ gluster peer status

# Create/start/stop/delete a volume
$ gluster volume create <volname> [stripe <count> | replica <count>] [transport tcp | rdma | tcp,rdma] <brick> ...
$ gluster volume start <volname>
$ gluster volume stop <volname>
$ gluster volume delete <volname>
Note: a volume must be stopped before it can be deleted.

# Inspect volumes
$ gluster volume list
$ gluster volume info [all]
$ gluster volume status [all]
$ gluster volume status <volname> [detail | clients | mem | inode | fd]

# Check this node's file systems:
$ df -h [<path>]

# Check this node's disks:
$ fdisk -l
To declare a Volume in a Pod, just add a spec.volumes field to the Pod and define a concrete Volume type inside it. See the official example: https://github.com/kubernetes/examples/tree/master/staging/volumes/glusterfs
The files are modified as follows:
glusterfs-endpoints.json
{ "kind": "Endpoints", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "subsets": [ { "addresses": [ { "ip": "172.23.216.48" } ], "ports": [ { "port": 1000 } ] }, { "addresses": [ { "ip": "172.23.216.49" } ], "ports": [ { "port": 1000 } ] }, { "addresses": [ { "ip": "172.23.216.50" } ], "ports": [ { "port": 1000 } ] } ] }
glusterfs-service.json
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "spec": { "ports": [ {"port": 1000} ] } }
glusterfs-pod.json (creates a test Pod)
{ "apiVersion": "v1", "kind": "Pod", "metadata": { "name": "glusterfs" }, "spec": { "containers": [ { "name": "glusterfs", "image": "nginx", "volumeMounts": [ { "mountPath": "/mnt/glusterfs", "name": "glusterfsvol" } ] } ], "volumes": [ { "name": "glusterfsvol", "glusterfs": { "endpoints": "glusterfs-cluster", "path": "k8s-volume", "readOnly": true } } ] } }
Run in order
$ kubectl apply -f glusterfs-endpoints.json
$ kubectl get ep
$ kubectl apply -f glusterfs-service.json
$ kubectl get svc
# Check the test Pod
$ kubectl apply -f glusterfs-pod.json
$ kubectl get pods
$ kubectl describe pods/glusterfs
$ kubectl exec glusterfs -- mount | grep gluster
Kubernetes also provides a pair of API objects called PersistentVolumeClaim (PVC) and PersistentVolume (PV), which greatly lower the bar for declaring and consuming persistent Volumes. A PV is a piece of storage in the cluster that has already been provisioned by an administrator; typically the system administrator creates the Endpoints, Service, and PV. A PVC is defined by the developer: the Pod mounts the PVC, and the PVC requests a given amount of storage from a PV with a given access mode, without caring what technology backs the storage.
Define the PV
$ vi glusterfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-dev-volume
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false
Apply
$ kubectl apply -f glusterfs-pv.yaml
$ kubectl get pv
Define the PVC
$ cat glusterfs-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Apply
$ kubectl apply -f glusterfs-pvc.yaml
$ kubectl get pvc
Note: access modes
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany  – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
You can also view the storage volume's information in the Dashboard.
Test the data volume
$ wget https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/deployment.yaml
$ vi deployment.yaml

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: gluster-dev-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: gluster-dev-volume
        persistentVolumeClaim:
          claimName: glusterfs-nginx
Apply
$ kubectl apply -f deployment.yaml
$ kubectl describe deployment nginx-deployment
$ kubectl get pods -l app=nginx
$ kubectl get pods -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5c689d88bb-7rx7d   1/1     Running   0          2d21h
nginx-deployment-5c689d88bb-hfqzm   1/1     Running   0          2d21h
nginx-deployment-5c689d88bb-tlwmn   1/1     Running   0          2d21h
Create a file
$ kubectl exec -it nginx-deployment-5c689d88bb-7rx7d -- touch index.html
Finally, check whether the file shows up on the GlusterFS data volume to verify everything works.
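A quick way to check, assuming the GlusterFS client is installed on one of the machines, is to mount the k8s-volume volume and list it (a sketch; the mount point /mnt/k8s-volume is arbitrary):

$ mkdir -p /mnt/k8s-volume
$ mount -t glusterfs 172.23.216.48:/k8s-volume /mnt/k8s-volume
$ ls /mnt/k8s-volume      # the index.html created from the Pod should be listed here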
As mentioned when introducing PV and PVC, PVs are generally created by operations staff, while PVCs may be defined by developers. In a large production environment there can be a great many PVCs, which means operators would have to pre-create thousands of PVs — a very tedious job. Kubernetes therefore introduces a mechanism for creating PVs automatically: Dynamic Provisioning. At the heart of Dynamic Provisioning is an object called StorageClass. Administrators define StorageClasses to describe the kinds of storage they offer, and PVs are then allocated automatically according to the parameters set in the StorageClass object. For example, you might define two StorageClasses, slow and fast: slow backed by sc1 (spinning disks) and fast backed by gp2 (SSDs), and applications pick whichever matches their performance needs. Below we create a StorageClass that connects to Heketi and creates GlusterFS volumes automatically on demand. The StorageClass is still created by the system administrator, and multiple PVCs can share the same StorageClass.
Note: not every storage backend supports Dynamic Provisioning. The official documentation lists the in-tree storage plugins that support it by default; third-party storage can also be integrated through kubernetes-incubator/external-storage.
Configure Heketi
GlusterFS is an open-source distributed file system, and Heketi provides a REST-style API on top of it; together they give Kubernetes the ability to provision storage volumes automatically. Following the official persistent-volume-provisioning example, Heketi is configured here as a RESTful service for managing the GlusterFS cluster, exposing an API for Kubernetes to call.
$ yum install epel-release
$ yum install heketi heketi-client
Check the version
$ heketi --version
Heketi 7.0.0

$ heketi --help
Heketi is a restful volume management server

Usage:
  heketi [flags]
  heketi [command]

Examples:
  heketi --config=/config/file/path/

Available Commands:
  db          heketi db management
  help        Help about any command

Flags:
      --config string   Configuration file
  -h, --help            help for heketi
  -v, --version         Show version

Use "heketi [command] --help" for more information about a command.
Edit the Heketi configuration file
vi /etc/heketi/heketi.json

# Change the port; the default is 8080 (8080 was already taken on this server)
"port": "8000",
......
# Enable authentication
"use_auth": true,
......
# Change the admin user's key
"key": "testtoken"
......
# Configure the ssh credentials for password-less login to the cluster machines
"executor": "ssh",
"sshexec": {
  "keyfile": "/etc/heketi/heketi_key",
  "user": "root",
  "port": "22",
  "fstab": "/etc/fstab"
},
......
# Location of the heketi database file
"db": "/var/lib/heketi/heketi.db"
......
# Change the log level
"loglevel" : "info"
Configure the SSH key
# Generate an rsa key pair
ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
chmod 700 /etc/heketi/heketi_key.pub

# Copy the ssh public key to the three GlusterFS servers (heketi can also be deployed separately)
ssh-copy-id -i /etc/heketi/heketi_key.pub root@172.23.216.48
ssh-copy-id -i /etc/heketi/heketi_key.pub root@172.23.216.49
ssh-copy-id -i /etc/heketi/heketi_key.pub root@172.23.216.50

# Verify that the ssh key can connect to the glusterfs nodes
ssh -i /etc/heketi/heketi_key root@172.23.216.49
Start Heketi
$ nohup heketi --config=/etc/heketi/heketi.json &
nohup: ignoring input and appending output to ‘nohup.out’

$ cat nohup.out
Heketi 7.0.0
[heketi] INFO 2018/11/09 15:50:36 Loaded ssh executor
[heketi] INFO 2018/11/09 15:50:36 GlusterFS Application Loaded
[heketi] INFO 2018/11/09 15:50:36 Started Node Health Cache Monitor
Authorization loaded
Listening on port 8000
Test the Heketi server

$ curl http://localhost:8000/hello
Hello from Heketi
Heketi requires a raw block device on every GlusterFS node; devices that already carry a file system are not supported. A typical layout, which can be inspected with fdisk -l, is: system disk /dev/vda, data disk /dev/vdb, cloud disk /dev/vdc.
Check the disks

$ fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d3387

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      411647      204800   83  Linux
/dev/sda2          411648     8800255     4194304   82  Linux swap / Solaris
/dev/sda3         8800256   104857599    48028672   83  Linux

$ df -lh
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        46G  5.1G   41G  11% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  172M  3.7G   5% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1       197M  167M   30M  85% /boot
overlay          46G  5.1G   41G  11% /var/lib/docker/containers/1c3c53802122a9ce7e3044e83f22934bb700baeda1bedc249558e9a068e892a7/mounts/shm
overlay          46G  5.1G   41G  11% /var/lib/docker/overlay2/bbda116e3a230e59710afd2d9ec92817d65d71b82ccebf4d71bfc589c3605b75/merged
tmpfs           3.9G   12K  3.9G   1% /var/lib/kubelet/pods/fb62839c-dc19-11e8-90ea-0050569f4a19/volumes/kubernetes.io~secret/coredns-token-v245h
tmpfs           3.9G   12K  3.9G   1% /var/lib/kubelet/pods/fb638fce-dc19-11e8-90ea-0050569f4a19/volumes/kubernetes.io~secret/coredns-token-v245h
overlay          46G  5.1G   41G  11% /var/lib/docker/overlay2/a85cbca8be37d9e00565d83350721091105b74e1609d399a0bb1bb91a2c56e09/merged
shm              64M     0   64M   0%
tmpfs           783M     0  783M   0% /run/user/0
vdb             3.9G     0  3.9G   0% /mnt/disks/vdb
vdc             3.9G     0  3.9G   0% /mnt/disks/vdc
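If a candidate data disk already has a file-system or partition signature on it, Heketi will refuse to add it. One way to clear the signatures is sketched below (an assumption, not a step from the original setup — and it destroys data, so double-check the device name first):

$ wipefs -a /dev/vdb    # wipe file-system/partition signatures so Heketi can claim the raw device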
Configure topology-sample.json
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "172.23.216.48" ], "storage": [ "172.23.216.48" ] }, "zone": 1 }, "devices": [ "/dev/vdb" ] }, { "node": { "hostnames": { "manage": [ "172.23.216.49" ], "storage": [ "172.23.216.49" ] }, "zone": 1 }, "devices": [ "/dev/vdb" ] }, { "node": { "hostnames": { "manage": [ "172.23.216.50" ], "storage": [ "172.23.216.50" ] }, "zone": 1 }, "devices": [ "/dev/vdb" ] } ] } ] }
Add the nodes
$ heketi-cli --server http://localhost:8000 --user admin --secret "testtoken" topology load --json=topology-sample.json Creating cluster ... ID: c2834ba9a3b5b6975150ad396b5ed7ca Allowing file volumes on cluster. Allowing block volumes on cluster. Creating node 172.23.216.48 ... ID: 8c5cbad748520b529ea20f5296921928 Adding device /dev/vdb ... Unable to add device: Device /dev/vdb not found. Found node 172.23.216.49 on cluster c13ecf0a70808a3dc8abcd8de908c1ea Adding device /dev/vdb ... Unable to add device: Device /dev/vdb not found. Found node 172.23.216.50 on cluster c13ecf0a70808a3dc8abcd8de908c1ea Adding device /dev/vdb ... Unable to add device: Device /dev/vdb not found.
Other commands
# Load the topology (create the cluster)
heketi-cli --server http://localhost:8000 --user admin --secret "testtoken" topology load --json=topology-sample.json

# Create a volume
heketi-cli --server http://localhost:8000 --user admin --secret "testtoken" volume create --size=3 --replica=2

# List the nodes
heketi-cli --server http://localhost:8000 --user admin --secret "testtoken" node list
!!! Since there is no spare raw disk to attach here, please refer to other articles for the remaining steps.
Create the StorageClass
vi glusterfs-storageclass.yaml

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-sc
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://172.23.216.48:8000"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "testtoken"
  volumetype: "replicate:2"
The provisioner: kubernetes.io/glusterfs above is the name of a built-in Kubernetes storage plugin; each storage backend uses a different provisioner.
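For comparison, a StorageClass for another backend just swaps the provisioner and its parameters. A minimal sketch for AWS EBS (the name and parameter value are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ebs
provisioner: kubernetes.io/aws-ebs   # in-tree AWS EBS provisioner
parameters:
  type: gp2                          # EBS volume type (general-purpose SSD)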
Apply
$ kubectl apply -f glusterfs-storageclass.yaml
$ kubectl get sc
NAME           PROVISIONER               AGE
glusterfs-sc   kubernetes.io/glusterfs   59s
The example above, from the Gluster GitHub repository, uses restuserkey. The approach recommended by Kubernetes is to store the key in a Secret:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
  volumeoptions: "client.ssl on, server.ssl on"
  volumenameprefix: "dept-dev"
  snapfactor: "10"
---
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==
type: kubernetes.io/glusterfs
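The same Secret can also be created directly from the command line instead of applying YAML (a sketch, assuming the admin key is testtoken as set in heketi.json):

$ kubectl create secret generic heketi-secret \
    --type="kubernetes.io/glusterfs" \
    --from-literal=key='testtoken' \
    --namespace=default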
Example:
Create the PVC

vi glusterfs-mysql-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-mysql-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-sc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Apply
$ kubectl apply -f glusterfs-mysql-pvc.yaml
persistentvolumeclaim/glusterfs-mysql-pvc created
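Once the claim is bound, any Pod can mount it by claimName. A minimal sketch of a MySQL Pod using this claim (the image tag and password are illustrative, not from the original article):

apiVersion: v1
kind: Pod
metadata:
  name: mysql-glusterfs
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "mypassword"          # illustrative only; use a Secret in practice
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql    # MySQL data directory backed by the GlusterFS volume
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: glusterfs-mysql-pvc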
With Dynamic Provisioning, operators only need to create a limited number of StorageClass objects in the cluster — in effect a catalogue of PV templates. When a developer submits a PVC that specifies a StorageClass, Kubernetes creates the corresponding PV from that StorageClass automatically.
There is also a special scenario in practice: running a database in containers (master-slave replication, frequent writes) demands high I/O and network performance, and users want Kubernetes to use a local disk directory on the host directly, without depending on a remote storage service, to provide a "persistent" container Volume. The benefit is obvious: because the Volume sits on a local disk, especially an SSD, read/write performance is far better than most remote storage. Compared with distributed storage, the downside is that once the data is damaged it cannot be recovered from replicas, so it must be backed up elsewhere regularly.
There are two ways to meet this requirement; the example below uses a Local Persistent Volume.
Simulate creating two data disks (vdb and vdc)
$ mkdir /mnt/disks
$ for vol in vdb vdc; do
    mkdir /mnt/disks/$vol
    mount -t tmpfs $vol /mnt/disks/$vol
done
vi local-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/vdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kubernetes-node-1   # pin to a specific node
Check
$ kubectl create -f local-pv.yaml
persistentvolume/example-pv created

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
example-pv   2Gi        RWO            Delete           Available           local-storage            12s

$ kubectl describe pv example-pv
Name:              example-pv
Labels:            <none>
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Available
Claim:
Reclaim Policy:    Delete
Access Modes:      RWO
Capacity:          2Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [kubernetes-node-1]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt/disks/vdb
Events:    <none>
The local field in the PV above marks it as a Local Persistent Volume, and the path field points to the local disk path backing the PV. This means that any Pod that wants to use this PV must run on the kubernetes-node-1 node.
Create the StorageClass
vi local-sc.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Apply
$ kubectl create -f local-sc.yaml
$ kubectl get sc
NAME PROVISIONER AGE
local-storage kubernetes.io/no-provisioner 6s
Create the PVC (declaring storageClassName: local-storage)
vi local-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage   # specify the StorageClass
Apply
$ kubectl apply -f local-pvc.yaml
persistentvolumeclaim/example-local-claim created

$ kubectl get pvc
NAME                  STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS    AGE
example-local-claim   Bound    example-pv   2Gi        RWO            local-storage   2s
The output above shows that the PV and PVC are now Bound. I have two Nodes, but the simulated disk exists only on Node-1, so the Pod has to run on that node; by labeling the node, Kubernetes can be made to schedule the Pod onto the specified Node.
Label the node
$ kubectl label nodes kubernetes-node-1 zone=node-1
$ kubectl get nodes --show-labels
NAME                STATUS   ROLES    AGE   VERSION   LABELS
kubernetes-master   Ready    master   10d   v1.12.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=kubernetes-master,node-role.kubernetes.io/master=
kubernetes-node-1   Ready    <none>   10d   v1.12.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=kubernetes-node-1,zone=node-1
kubernetes-node-2   Ready    <none>   10d   v1.12.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=kubernetes-node-2
Deploy Nginx to test
vi nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      #nodeSelector:
      #  zone: node-1
      nodeName: kubernetes-node-1   # schedule onto the kubernetes-node-1 node
      containers:
      - name: nginx-pv-container
        image: nginx:1.10.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: example-pv-storage
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: example-pv-storage
        persistentVolumeClaim:
          claimName: example-local-claim
Apply
$ kubectl create -f nginx-deployment.yaml
$ kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP          NODE                NOMINATED NODE
nginx-deployment-56bc98977b-jqg44   1/1     Running   0          104s   10.40.0.4   kubernetes-node-1   <none>
nginx-deployment-56bc98977b-tbkxr   1/1     Running   0          56s    10.40.0.5   kubernetes-node-1   <none>
Create a file
$ kubectl exec -it nginx-deployment-56bc98977b-jqg44 -- /bin/sh
# cd /usr/share/nginx/html
# touch test.html
[root@kubernetes-node-1 vdb]# ll
total 0
-rw-r--r--. 1 root root 0 Nov 10 02:17 test.html
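Because both replicas are pinned to kubernetes-node-1 and mount the same local PV, the file should also be visible from the other Pod. A quick check, using the second Pod name from the listing above:

$ kubectl exec nginx-deployment-56bc98977b-tbkxr -- ls /usr/share/nginx/html   # should list test.html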
REFER:
https://kubernetes.io/docs/concepts/storage/volumes/
https://github.com/kubernetes/examples/tree/master/staging/volumes
https://www.ibm.com/developerworks/cn/opensource/os-cn-glusterfs-docker-volume/index.html
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
https://docs.gluster.org/en/latest/
https://jimmysong.io/posts/kubernetes-with-glusterfs/