k8s volumes (storage volumes)

Introduction

A volume is a shared directory inside a Pod that multiple containers can access. The Kubernetes volume concept is similar in purpose and intent to Docker's volumes, but the two are not equivalent. First, a Kubernetes volume is defined on the Pod and is then mounted to specific directories by the containers in that Pod. Second, a Kubernetes volume shares its lifecycle with the Pod rather than with any container: when a container terminates or restarts, the data in the volume is not lost. Finally, volumes support many storage types, such as advanced distributed filesystems like GlusterFS and Ceph.

 

emptyDir

An emptyDir volume is created when the Pod is assigned to a node. As the name suggests, its initial content is empty, and there is no need to specify a corresponding file or directory on the host, because Kubernetes allocates the directory automatically. When the Pod is removed from the node, the data in the emptyDir is deleted permanently. Typical uses of emptyDir include:

  • Scratch space, e.g. a temporary directory an application needs at runtime that does not have to be kept permanently
  • A temporary directory for saving intermediate checkpoints of a long-running task
  • A directory from which one container fetches data produced by another container (a directory shared by multiple containers)

Using emptyDir is fairly simple. In most cases we first declare a volume on the Pod, then reference it in the containers and mount it at some directory inside them. For example, here we define a Pod with two containers, one running nginx and one running busybox, and declare a shared storage volume on the Pod whose content both containers can see; the topology is simply both containers mounting the same emptyDir volume.

One thing to note in the manifest below: the shared volume's name must be identical everywhere it is referenced.

[root@master ~]# cat test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: serivce-mynginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: mynginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy
  namespace: default
spec:
  replicas: 1
  selector: 
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: mynginx
        image: lizhaoqwe/nginx:v1
        volumeMounts:
        - mountPath: /usr/share/nginx/html/
          name: share
        ports:
        - name: nginx
          containerPort: 80
      - name: busybox
        image: busybox
        command:
        - "/bin/sh"
        - "-c"
        - "sleep 4444"
        volumeMounts:
        - mountPath: /data/
          name: share
      volumes:
      - name: share
        emptyDir: {}

 

Create the Pod

[root@master ~]# kubectl create -f test.yaml

Check the Pod

[root@master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
deploy-5cd657dd46-sx287   2/2     Running   0          2m1s

Check the Service

[root@master ~]# kubectl get svc
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes        ClusterIP   10.96.0.1      <none>        443/TCP        6d10h
serivce-mynginx   NodePort    10.99.110.43   <none>        80:30080/TCP   2m27s

Exec into the busybox container and create an index.html

[root@master ~]# kubectl exec -it deploy-5cd657dd46-sx287 -c busybox -- /bin/sh

Inside the container:
/data # cd /data
/data # echo "fengzi" > index.html

Open a browser against the NodePort (30080) and verify; the page serves the "fengzi" content we just wrote.

Now check in the nginx container whether the index.html file is there

[root@master ~]# kubectl exec -it deploy-5cd657dd46-sx287 -c mynginx -- /bin/sh  
Inside the container:
# cd /usr/share/nginx/html
# ls -ltr
total 4
-rw-r--r-- 1 root root 7 Sep  9 17:06 index.html

OK, the file written from busybox is visible to nginx!
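As a quicker spot check, the same file can be read without an interactive shell (same pod as above; the nginx container is named mynginx in the manifest):

[root@master ~]# kubectl exec deploy-5cd657dd46-sx287 -c mynginx -- cat /usr/share/nginx/html/index.html
fengzi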

 

hostPath

A hostPath volume mounts a file or directory from the host node's filesystem into the Pod. It is typically used for:

  • Persisting log files generated by a containerized application, using the host's high-speed filesystem for storage
  • Giving a containerized application access to the Docker engine's internal data structures on the host, e.g. by defining a hostPath of /var/lib/docker so the application can reach the Docker filesystem directly

When using this type of volume, note the following:

  • Pods with identical configuration on different nodes may see different results when accessing the volume's directories and files, because the directories and files on each host can differ
  • If resource quotas are configured, Kubernetes cannot bring the resources a hostPath consumes on the host under its management

 

hostPath volume architecture: each container mounts a file or directory directly from the filesystem of the node its Pod runs on.
Now let's define a hostPath volume and see the effect:

[root@master ~]# cat test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  namespace: default
spec:
  selector:
    app: mynginx
  type: NodePort
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 31111

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      name: web
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: lizhaoqwe/nginx:v1
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: persistent-storage
        ports:
        - containerPort: 80
      volumes:
      - name: persistent-storage
        hostPath:
          type: DirectoryOrCreate
          path: /mydata

Pay attention to the type field under hostPath; we can check the built-in help for it:

[root@master data]# kubectl explain deploy.spec.template.spec.volumes.hostPath.type
KIND:     Deployment
VERSION:  extensions/v1beta1

FIELD:    type <string>

DESCRIPTION:
     Type for HostPath Volume Defaults to "" More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

The help text itself doesn't say much, but it leaves a reference URL. Opening that page shows the allowed values of type for a hostPath volume: "" (the default, which performs no checks before mounting), DirectoryOrCreate, Directory, FileOrCreate, File, Socket, CharDevice, and BlockDevice. Their meanings won't be repeated here; we pick DirectoryOrCreate, which creates the directory on the host if it does not already exist.
Apply the yaml file

[root@master ~]# kubectl create -f test.yaml 
service/nginx-deploy created
deployment.apps/mydeploy created

Then go to both nodes and check whether the /mydata directory exists.

The mydata directory has been created on both nodes. Next, write something into the directory on each node (for example an index.html with different content per node), and then verify.

Access works, and requests are load-balanced across the nodes.
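For a terminal-only version of that verification (the node IP here is hypothetical; substitute your own nodes' addresses):

#hit the NodePort a few times; responses should alternate between the
#files written under /mydata on each node
[root@master ~]# for i in 1 2 3 4; do curl -s 192.168.254.12:31111; done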

 

NFS

NFS should already be familiar, so I won't explain what NFS is here; I'll only cover how to mount an NFS filesystem in a k8s cluster.

Architecture for volumes backed by NFS: Pods on every node mount the same directory exported by the NFS server.

Start another VM outside the cluster and install the nfs-utils package.

note: the nfs-utils package must also be installed on every node in the cluster, otherwise mounting will fail!

[root@master mnt]# yum install nfs-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.cn99.com
 * extras: mirrors.cn99.com
 * updates: mirrors.cn99.com
Package 1:nfs-utils-1.3.0-0.61.el7.x86_64 already installed and latest version
Nothing to do

Edit /etc/exports and add the following:

[root@localhost share]# vim /etc/exports
    /share  192.168.254.0/24(insecure,rw,no_root_squash)

Restart the NFS service

[root@localhost share]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service

Write an index.html with some content into the /share directory

[root@localhost share]# echo "nfs server" > /share/index.html
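Before pointing Kubernetes at the export, it is worth a quick sanity check from one of the cluster nodes (a sketch; node1 is one of the cluster nodes, and 192.168.254.11 is the NFS server address used below):

#list what the server exports (requires nfs-utils on the node)
[root@node1 ~]# showmount -e 192.168.254.11

#optionally do a manual test mount, then clean up again
[root@node1 ~]# mount -t nfs 192.168.254.11:/share /mnt
[root@node1 ~]# cat /mnt/index.html
[root@node1 ~]# umount /mnt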

On the master node of the Kubernetes cluster, create the yaml file:

[root@master ~]# cat test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  namespace: default
spec:
  selector:
    app: mynginx
  type: NodePort
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 31111

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      name: web
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: lizhaoqwe/nginx:v1
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nfs
        ports:
        - containerPort: 80
      volumes:
      - name: nfs
        nfs:
          server: 192.168.254.11       #NFS server address
          path: /share                 #directory exported by the NFS server

Apply the yaml file

[root@master ~]# kubectl create -f test.yaml               
service/nginx-deploy created
deployment.apps/mydeploy created

Verify by opening a browser against the NodePort (31111); the page serves the "nfs server" content written on the NFS server.

OK, no problem!!!

pvc

The volumes above are defined on the Pod and belong to the compute resource, whereas network storage is really an entity that exists independently of compute. For example, with virtual machines we usually first define networked storage, then carve a disk out of it and attach it to the VM. PersistentVolume (pv) and its companion PersistentVolumeClaim (pvc) play a similar role.

A pv can be understood as a piece of storage corresponding to some networked storage in the Kubernetes cluster. It is similar to a volume, with these differences:

  • A pv can only be network storage; it does not belong to any node, but it can be accessed from every node
  • A pv is not defined on a Pod, but independently of Pods


On the NFS server, add the NFS export entries and restart the service

[root@localhost ~]# cat /etc/exports
/share_v1  192.168.254.0/24(insecure,rw,no_root_squash)
/share_v2  192.168.254.0/24(insecure,rw,no_root_squash)
/share_v3  192.168.254.0/24(insecure,rw,no_root_squash)
/share_v4  192.168.254.0/24(insecure,rw,no_root_squash)
/share_v5  192.168.254.0/24(insecure,rw,no_root_squash)
[root@localhost ~]# service nfs restart

Create the corresponding directories on the NFS server

[root@localhost /]# mkdir /share_v{1,2,3,4,5}

 

On the master node of the Kubernetes cluster, create the pv's. Here I create 5 pv's, corresponding to the 5 directories exported from the NFS server:

[root@master ~]# cat createpv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  nfs:                       #storage type
    path: /share_v1          #directory on the NFS server to mount
    server: 192.168.254.11   #NFS server address; a resolvable domain name also works
  accessModes:               #access modes -- ReadWriteMany: read/write, mountable by multiple Nodes | ReadWriteOnce: read/write, mountable by a single Node only | ReadOnlyMany: read-only, mountable by multiple Nodes
  - ReadWriteMany
  - ReadWriteOnce
  capacity:                  #storage capacity
    storage: 10Gi            #this pv provides 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
spec:
  nfs:
    path: /share_v2
    server: 192.168.254.11
  accessModes: 
  - ReadWriteMany
  capacity:
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
spec:
  nfs:
    path: /share_v3
    server: 192.168.254.11
  accessModes: 
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 30Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv04
spec:
  nfs:
    path: /share_v4
    server: 192.168.254.11
  accessModes: 
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 40Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv05
spec:
  nfs:
    path: /share_v5
    server: 192.168.254.11
  accessModes: 
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 50Gi

 

Apply the yaml file

[root@master ~]# kubectl create -f createpv.yaml 
persistentvolume/pv01 created
persistentvolume/pv02 created
persistentvolume/pv03 created
persistentvolume/pv04 created
persistentvolume/pv05 created

Check the pv's

[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01   10Gi       RWO,RWX        Retain           Available                                   5m10s
pv02   20Gi       RWX            Retain           Available                                   5m10s
pv03   30Gi       RWO,RWX        Retain           Available                                   5m9s
pv04   40Gi       RWO,RWX        Retain           Available                                   5m9s
pv05   50Gi       RWO,RWX        Retain           Available                                   5m9s


A quick rundown:
ACCESS MODES:
  RWO: ReadWriteOnce
  RWX: ReadWriteMany
  ROX: ReadOnlyMany
RECLAIM POLICY:
  Retain: keep the pv and the data on it after its pvc is released; it will not be bound by another pvc
  Recycle: keep the pv but scrub the data
  Delete: delete the pv released by the pvc along with the backend storage volume
STATUS:
  Available: idle, not yet bound
  Bound: already bound to some pvc
  Released: the bound pvc has been deleted, but the resource has not yet been reclaimed by the cluster
  Failed: automatic reclamation of the pv failed
CLAIM:
  which pvc the pv is bound to, in the format NAMESPACE/PVC_NAME
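The reclaim policy can also be set explicitly when defining a pv. A minimal sketch (the pv name and export path are made up for illustration; for manually created pv's, persistentVolumeReclaimPolicy defaults to Retain):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                            #hypothetical name
spec:
  persistentVolumeReclaimPolicy: Recycle   #Retain | Recycle | Delete
  nfs:
    path: /share_demo                      #hypothetical export
    server: 192.168.254.11
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi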

With the pv's in place, we can create a pvc:

[root@master ~]# cat test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  namespace: default
spec:
  selector:
    app: mynginx
  type: NodePort
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 31111

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      name: web
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: html
        ports:
        - containerPort: 80
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: mypvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Apply the yaml file

[root@master ~]# kubectl create -f test.yaml 
service/nginx-deploy created
deployment.apps/mydeploy created
persistentvolumeclaim/mypvc created

Check the pv's again; the pvc is now shown as bound to pv02

[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv01   10Gi       RWO,RWX        Retain           Available                                           22m
pv02   20Gi       RWX            Retain           Bound       default/mypvc                           22m
pv03   30Gi       RWO,RWX        Retain           Available                                           22m
pv04   40Gi       RWO,RWX        Retain           Available                                           22m
pv05   50Gi       RWO,RWX        Retain           Available                                           22m

Check the pvc

[root@master ~]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv02     20Gi       RWX                           113s

 

Verify

On the NFS server, go to the corresponding directory and run the following:

[root@localhost share_v1]# echo 'test pvc' > index.html

Then open a browser; the page serves the "test pvc" content.

OK, no problem!

configMap 

A deployment best practice is to separate an application's configuration from the program itself, so the application can be reused more easily and different configurations can enable more flexible behavior. After packaging an application as a container image, configuration can be injected at container creation time through environment variables or mounted files. In a large container cluster, however, configuring many containers differently becomes very complex, so starting with Kubernetes 1.2 a unified application configuration management scheme is provided: ConfigMap.

Typical ways for containers to consume a ConfigMap:

  • Generated as environment variables inside the container
  • Used as startup parameters of the container's start command (they must be set as environment variables first; see the sketch after this list)
  • Mounted as a file or directory inside the container in the form of a volume
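A minimal sketch of the second use (the pod name is made up, and nginx-var is the ConfigMap created in the next part): expose a key as an env var, then reference it in the startup arguments via $(VAR) expansion:

apiVersion: v1
kind: Pod
metadata:
  name: cm-args-demo              #hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c"]
    args: ["echo listening on $(APP_PORT); sleep 3600"]   #$(APP_PORT) is expanded by the kubelet before the command runs
    env:
    - name: APP_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx-var         #ConfigMap defined below
          key: nginx_port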

 

Injecting configMap variables into a pod as environment variables

For example, use a configmap to create two variables: nginx_port=80 and nginx_server=192.168.254.13

[root@master ~]# kubectl create configmap nginx-var --from-literal=nginx_port=80 --from-literal=nginx_server=192.168.254.13
configmap/nginx-var created

Check the configmap

[root@master ~]# kubectl get cm
NAME        DATA   AGE
nginx-var   2      5s


[root@master ~]# kubectl describe cm nginx-var
Name:         nginx-var
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
nginx_port:
----
80
nginx_server:
----
192.168.254.13
Events:  <none>

Then create a pod and inject these two variables into its environment:

[root@master ~]# cat test2.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        env:
        - name: TEST_PORT
          valueFrom:
            configMapKeyRef:
              name: nginx-var
              key: nginx_port
        - name: TEST_HOST
          valueFrom:
            configMapKeyRef:
              name: nginx-var
              key: nginx_server
      volumes:
      - name: html
        emptyDir: {}

Apply the pod file

[root@master ~]# kubectl create -f test2.yaml 
service/service-nginx created

Check the pods

[root@master ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
mydeploy-d975ff774-fzv7g   1/1     Running   0          19s
mydeploy-d975ff774-nmmqt   1/1     Running   0          19s

進入到容器中查看環境變量

[root@master ~]# kubectl exec -it mydeploy-d975ff774-fzv7g -- /bin/sh


# printenv
SERVICE_NGINX_PORT_80_TCP_PORT=80
KUBERNETES_PORT=tcp://10.96.0.1:443
SERVICE_NGINX_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT=443
HOSTNAME=mydeploy-d975ff774-fzv7g
SERVICE_NGINX_SERVICE_PORT_NGINX=80
HOME=/root
PKG_RELEASE=1~buster
SERVICE_NGINX_PORT_80_TCP=tcp://10.99.184.186:80
TEST_HOST=192.168.254.13
TEST_PORT=80
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
NGINX_VERSION=1.17.3
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
NJS_VERSION=0.3.5
KUBERNETES_PORT_443_TCP_PROTO=tcp
SERVICE_NGINX_SERVICE_HOST=10.99.184.186
SERVICE_NGINX_PORT=tcp://10.99.184.186:80
SERVICE_NGINX_SERVICE_PORT=80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
SERVICE_NGINX_PORT_80_TCP_ADDR=10.99.184.186

The environment variables from the configMap have been injected into the pod's container (TEST_HOST and TEST_PORT above).

Note: with this kind of environment-variable injection, once the pod has started, later changes to the configMap's values have no effect on the pod. If the configMap is mounted as a volume instead, updates do reach the pod in near real time. Keep this distinction in mind.
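As an aside (a sketch, not part of the walkthrough): instead of referencing keys one by one with configMapKeyRef, envFrom imports every key of a ConfigMap at once; the same no-live-update caveat applies. A container fragment:

      containers:
      - name: nginx
        image: nginx
        envFrom:
        - configMapRef:
            name: nginx-var      #each key becomes an env var with the same name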

Mounting a configMap into a pod as a volume

As noted above, configMap values injected as environment variables do not update a running pod. If you want the configuration inside the pod to track the configMap in real time, mount the configMap as a volume:

[root@master ~]# cat test2.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
          - name: html-config
            mountPath: /nginx/vars/
            readOnly: true
      volumes:
      - name: html-config
        configMap:
          name: nginx-var

Apply the yaml file

[root@master ~]# kubectl create -f test2.yaml 
service/service-nginx created
deployment.apps/mydeploy created

Check the pods

[root@master ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-6f6b6c8d9d-pfzjs   1/1     Running   0          90s
mydeploy-6f6b6c8d9d-r9rz4   1/1     Running   0          90s

Exec into the container

[root@master ~]# kubectl exec -it mydeploy-6f6b6c8d9d-pfzjs -- /bin/bash

Inside the container, look at the files backing the configMap

root@mydeploy-6f6b6c8d9d-pfzjs:/# cd /nginx/vars
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# ls
nginx_port  nginx_server
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# cat nginx_port 
80
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# 

Modify the configMap, changing the port from 80 to 8080

[root@master ~]# kubectl edit cm nginx-var
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  nginx_port: "8080"
  nginx_server: 192.168.254.13
kind: ConfigMap
metadata:
  creationTimestamp: "2019-09-13T14:22:20Z"
  name: nginx-var
  namespace: default
  resourceVersion: "248779"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-var
  uid: dfce8730-f028-4c57-b497-89b8f1854630

A short while after the edit, check the file's value again; it has been updated to 8080

root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# cat nginx_port 
8080
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# 
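One related caveat (from the Kubernetes docs, not demonstrated here): this live update only works for directory-style mounts like the one above; if you mount a single key as a file using subPath, the kubelet does not refresh it. You can still pick out individual keys while keeping a directory-style mount by using items, as in this sketch (the file name port.conf is made up):

      volumes:
      - name: html-config
        configMap:
          name: nginx-var
          items:                  #only expose selected keys
          - key: nginx_port
            path: port.conf       #hypothetical file name inside the mount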

Injecting a configuration file into a pod via configMap

Using an nginx configuration file as the example: prepare the nginx config file on the host, create a configmap from it, and inject it into the container through that configmap.

Create the nginx configuration file

[root@master ~]# vim www.conf 
server {
    server_name 192.168.254.13;
    listen 80;
    root /data/web/html/;
}

Create the configMap

[root@master ~]# kubectl create configmap nginx-config --from-file=/root/www.conf 
configmap/nginx-config created

Check the configMap

[root@master ~]# kubectl get cm
NAME           DATA   AGE
nginx-config   1      3m3s
nginx-var      2      63m

Create the pod and mount the configMap volume

[root@master ~]# cat test2.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
          - name: html-config
            mountPath: /etc/nginx/conf.d/
            readOnly: true
      volumes:
      - name: html-config
        configMap:
          name: nginx-config

Start the containers; they load the configuration from the configMap at startup:

[root@master ~]# kubectl create -f test2.yaml 
service/service-nginx created
deployment.apps/mydeploy created

Check the pod

[root@master ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
mydeploy-fd46f76d6-jkq52   1/1     Running   0          22s   10.244.1.46   node1   <none>           <none>

Access the web page served by the container: port 80 works, while port 8888 is unreachable:

[root@master ~]# curl 10.244.1.46
this is test web


[root@master ~]# curl 10.244.1.46:8888
curl: (7) Failed connect to 10.244.1.46:8888; Connection refused

Next, edit the content of the configMap, changing port 80 to 8888:

[root@master ~]# kubectl edit cm nginx-config
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  www.conf: |
    server {
        server_name 192.168.254.13;
        listen 8888;
        root /data/web/html/;
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-09-13T15:22:22Z"
  name: nginx-config
  namespace: default
  resourceVersion: "252615"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-config
  uid: f1881f87-5a91-4b8e-ab39-11a2f45733c2

Exec into the container and check the config file; it has indeed been updated

root@mydeploy-fd46f76d6-jkq52:/usr/bin# cat /etc/nginx/conf.d/www.conf 
server {
    server_name 192.168.254.13;
    listen 8888;
    root /data/web/html/;
}

Test access again and it still fails: although the config file has changed, nginx has not reloaded it. We reload manually here; this could be automated with a script (see the sketch at the end of this section).

[root@master ~]# curl 10.244.1.46
this is test web
[root@master ~]# curl 10.244.1.46:8888
curl: (7) Failed connect to 10.244.1.46:8888; Connection refused

Reload the configuration manually inside the container

root@mydeploy-fd46f76d6-jkq52:/usr/bin# nginx -s reload
2019/09/13 16:04:12 [notice] 34#34: signal process started

Test once more: port 80 is now refused, while the modified port 8888 works:

[root@master ~]# curl 10.244.1.46
curl: (7) Failed connect to 10.244.1.46:80; Connection refused
[root@master ~]# curl 10.244.1.46:8888
this is test web
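Finally, the manual nginx -s reload above could be automated, as mentioned earlier. A rough sketch of one approach, run as an extra shell process (or sidecar) inside the pod; the md5sum check and the 5-second poll interval are arbitrary choices:

#watch the mounted config for changes and reload nginx when it changes
old=$(md5sum /etc/nginx/conf.d/www.conf)
while true; do
  new=$(md5sum /etc/nginx/conf.d/www.conf)
  if [ "$new" != "$old" ]; then
    nginx -s reload          #pick up the updated ConfigMap content
    old=$new
  fi
  sleep 5
done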

Done!!
