Kubernetes Theory 02

Reposted from: https://blog.51cto.com/hmtk520/2428519

1. Pod overview
2. Labels
3. Pod controllers: Deployment, ReplicationController, StatefulSet, DaemonSet, Job, CronJob, etc.
4. Service
5. Ingress
6. ServiceAccount / UserAccount
7. Secret & ConfigMap

 

1. Pod Overview

Common k8s resource types:

Workloads (serve traffic): Pod, ReplicaSet, Deployment, StatefulSet, Job, CronJob...
Service discovery and load balancing: Service, Ingress
Configuration and storage: Volume, CSI (storage interface),
      ConfigMap, Secret, DownwardAPI
Cluster-level resources: Namespace, Node, Role, ClusterRole, RoleBinding, ClusterRoleBinding
Metadata resources: HPA, PodTemplate, LimitRange

A Pod (think of a pod of whales, or a pea pod) is essentially a group of containers sharing one context; within that context the applications may still have their own cgroup isolation. A Pod is a "logical host" in a container environment: it may contain one or more tightly coupled applications, which end up on the same physical or virtual machine.

A Pod's context can be understood as the union of several Linux namespaces:
  PID namespace (applications in the same Pod can see each other's processes)
  Network namespace (applications in the same Pod share the same IP address and port space)
  IPC namespace (applications in the same Pod can communicate via SysV IPC or POSIX message queues)
  UTS namespace (applications in the same Pod share one hostname)

1) Pod types:

  Self-managed pods: not under any controller; once deleted they are not re-created.
  Controller-managed pods: re-created automatically after deletion; to change the replica count, use kubectl scale.
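As a sketch, a minimal self-managed Pod might look like this (the name is hypothetical); since no controller owns it, deleting it leaves nothing behind:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: standalone-demo      # hypothetical name; no controller will re-create this pod
spec:
  containers:
  - name: app
    image: nginx:latest
```

For controller-managed pods, the replica count is adjusted on the controller instead, e.g. kubectl scale deployment nginx-deloy --replicas=3.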

2) Resource manifest format:

[root@registry metrics-server]# kubectl get pods $pod_name -o yaml #view the running pod's YAML definition
Basic YAML layout: apiVersion (group/version), kind, metadata, spec, status

[root@registry metrics-server]# kubectl explain pod  #look up how a pod's YAML is defined
apiVersion: the API version to use
kind: resource type, e.g. Pod
metadata: object metadata
    name: name
    annotations <map[string]string>: annotations
    labels  <map[string]string>: labels, a map of strings
    namespace   <string>: namespace name
status: the pod's current state, generated at runtime
spec:
    containers: container definitions for the pod
    restartPolicy: restart policy: OnFailure, Never; Default to Always
    nodeSelector: schedule the pod by node labels
    nodeName: pin the pod to a specific node
    hostname: the pod's hostname
    hostPID: use the host's PID namespace
    hostNetwork: use the host's network namespace
    affinity: affinity scheduling //nodeAffinity, podAffinity, podAntiAffinity
    serviceAccountName: the ServiceAccount the pod uses; detailed in the ServiceAccount section
    volumes: volumes to create; detailed under PV
    tolerations: tolerations; taints and tolerations work together to keep pods off unsuitable nodes
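A sketch tying several of the spec fields above together (the label values and the toleration are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spec-demo            # hypothetical name
spec:
  nodeSelector:
    disktype: ssd            # only schedule onto nodes carrying this label (assumed label)
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"     # tolerate the master taint
  hostNetwork: true          # share the node's network namespace
  restartPolicy: OnFailure
  containers:
  - name: app
    image: nginx:latest
```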

 

3) containers

 

[root@registry metrics-server]# kubectl explain pods.spec.containers
args    <[]string>  #arguments to the entrypoint
command <[]string> #entrypoint array; the docker image's ENTRYPOINT is used if this is not provided
env <[]Object> #environment variables
envFrom <[]Object> #
image   <string>
imagePullPolicy <string> #image pull policy: Always, Never (never download; the user must pull by hand), IfNotPresent. If the image tag is latest, the default is Always (because latest may move to point at a new image); for any other tag the default is IfNotPresent. The policy cannot be changed afterwards.
lifecycle   <Object>    
livenessProbe   <Object>
name    <string> -required-
ports   <[]Object>  #kubectl explain pods.spec.containers.ports
    containerPort
    hostIP  #you cannot know in advance which node a container lands on, so if you really must bind a host IP, 0.0.0.0 is recommended
    hostPort  #the corresponding port on the host
    protocol //Must be UDP, TCP, or SCTP. Defaults to "TCP"
readinessProbe  <Object>  #readiness check
resources   <Object>  #resource limits
    limits  #maximum resources the container may use
    requests #minimum resources the container needs
securityContext <Object>  #security context
In a Dockerfile, if only CMD is given, CMD is run; if both CMD and ENTRYPOINT are given, CMD is passed as arguments to ENTRYPOINT.
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/

============================================    
Description                          Docker field name    Kubernetes field name
The command run by the container     Entrypoint           command
The arguments passed to the command  Cmd                  args
============================================    
    If you do not supply command or args for a Container, the defaults defined in the Docker image are used.
    If you supply a command but no args for a Container, only the supplied command is used. The default EntryPoint and the default Cmd defined in the Docker image are ignored.
    If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied.
    If you supply a command and args, the default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args.

Image   Entrypoint  Image Cmd       Container command   Container args  Command run
[/ep-1]             [foo bar]       <not set>           <not set>       [ep-1 foo bar]
[/ep-1]             [foo bar]       [/ep-2]             <not set>       [ep-2]
[/ep-1]             [foo bar]       <not set>           [zoo boo]       [ep-1 zoo boo]
[/ep-1]             [foo bar]       [/ep-2]             [zoo boo]       [ep-2 zoo boo]
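For example, row 3 of the table could be expressed as a Pod like this (the image name is hypothetical): only args is supplied, so the image's default Entrypoint runs with the supplied arguments.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: args-only-demo
spec:
  containers:
  - name: demo
    image: example/ep-image   # hypothetical image; assume its ENTRYPOINT is [/ep-1]
    args: ["zoo", "boo"]      # effective command line: /ep-1 zoo boo
```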
====================================

 

Example:

[root@registry work]# cat broker.yaml
apiVersion: v1
kind: Pod
metadata:
  name: appv1
  namespace: default
  labels:
    name: broker
    version: latest
spec:
  containers:
  - name: broker
    args:
    env: 
    - name: verbase
      value: testv1
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef: 
          fieldPath: spec.nodeName
    command:
    - "ping"
    - "-i 2"
    - "127.0.0.1" 
    image: "192.168.192.234:888/broker:latest"
    imagePullPolicy: Always
  - name: nginx
    image: 192.168.192.234:888/nginx:latest
    imagePullPolicy: IfNotPresent 
    ports: 
    - containerPort: 80
      hostPort: 8888
      protocol: TCP
  dnsPolicy: Default
  restartPolicy: Always

 


4) lifecycle

Container Lifecycle Hooks listen for specific events in a container's lifecycle and run registered handlers when those events fire.
Two hooks are supported:
  postStart: runs right after the container starts. Note it is asynchronous, so there is no guarantee it runs after the ENTRYPOINT. If it fails, the container is killed and the RestartPolicy decides whether to restart it.
  preStop: runs before the container stops, commonly for resource cleanup. If it fails, the container is likewise killed.
Two handler types are supported:
  exec: run a command inside the container
  httpGet: issue a GET request to a given URL
Example:

[root@registry work]#  cat v2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/usr/sbin/nginx","-s","quit"]

 


5) Container lifecycle probes

k8s supports two kinds of pod lifecycle checks:
  1) livenessProbe: whether the container is still alive
  2) readinessProbe: whether the service is ready to serve

Pod phases: Pending, Running, Failed, Succeeded, Unknown
Three probe handler types: exec, tcpSocket, httpGet

[root@registry work]# kubectl explain pods.spec.containers.readinessProbe
[root@registry work]#   kubectl explain pods.spec.containers.livenessProbe
exec    <Object>
failureThreshold    <integer>       after having succeeded, minimum consecutive failures for the probe to be considered failed; default 3
httpGet <Object>
initialDelaySeconds <integer>   seconds after container start before the first probe runs
periodSeconds   <integer>       how often to probe; default 10s
successThreshold    <integer>   after having failed, minimum consecutive successes for the probe to be considered successful again; default 1
tcpSocket   <Object>
timeoutSeconds  <integer>   probe timeout; default 1s

 

apiVersion: v1 
kind: Pod
metadata:
  name: liveness-exec-pod 
  namespace: default 
spec:
  containers:
  - name: live-ness-container 
    image: 192.168.192.234:888/nginx
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/healthy ;sleep 60;rm -rf /tmp/healthy; sleep 30"]
    livenessProbe:
      exec: 
        command: ["test","-e","/tmp/healthy"]
      initialDelaySeconds: 1
      periodSeconds: 3
[root@master1 yaml]# kubectl create -f livenessl.yaml 
[root@master1 yaml]# cat httpget.yaml 
apiVersion: v1 
kind: Pod
metadata:
  name: liveness-httpget-pod 
  namespace: default 
spec:
  containers:
  - name: live-ness-container 
    image: 192.168.192.234:888/nginx
    imagePullPolicy: IfNotPresent
    livenessProbe:
      httpGet: 
        port: 80
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3

[root@master1 ~]# kubectl exec -it  liveness-httpget-pod  -- /bin/bash
root@liveness-httpget-pod:/# 
root@liveness-httpget-pod:/usr/share/nginx/html# mv index.html index.html.bak
[root@master1 yaml]# kubectl describe pods liveness-httpget-pod 
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  5m56s                default-scheduler  Successfully assigned default/liveness-httpget-pod to master2
  Normal   Pulled     13s (x2 over 5m56s)  kubelet, master2   Container image "192.168.192.234:888/nginx" already present on machine
  Normal   Created    13s (x2 over 5m56s)  kubelet, master2   Created container live-ness-container
  Warning  Unhealthy  13s (x3 over 19s)    kubelet, master2   Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    13s                  kubelet, master2   Container live-ness-container failed liveness probe, will be restarted
  Normal   Started    12s (x2 over 5m56s)  kubelet, master2   Started container live-ness-container

 

 

Why have readinessProbe and livenessProbe at all?
A Service associates with containers via labels. A newly created container may need, say, 10s before its service is up, yet the Service picks it up as soon as it is created,
so requests routed there would fail. Better to serve traffic only once the workload is actually ready.

[root@master1 ~]# kubectl explain pods.spec.containers.readinessProbe
[root@master1 yaml]# cat readness.yaml
apiVersion: v1 
kind: Pod
metadata:
  name: readiness-exec-pod 
  namespace: default 
spec:
  containers:
  - name: readiness-container 
    image: 192.168.192.234:888/nginx
    imagePullPolicy: IfNotPresent
    ports: 
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet: 
        port: http 
        path: /index.html 
      initialDelaySeconds: 1
      periodSeconds: 3

 


6) Resource limits

(1) resources

[root@registry work]# kubectl explain pods.spec.containers.resources
spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory
[root@registry work]# cat v3.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      resources:
        requests:
          cpu: "300m"
          memory: "56Mi"
        limits:
          cpu: "500m"
          memory: "128Mi"
Supported memory units: Ki | Mi | Gi | Ti | Pi, etc.

 

(2) Limiting network bandwidth
A Pod's network bandwidth can be capped by adding the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations.
Currently only the kubenet network plugin supports bandwidth limiting; other CNI plugins do not yet.

apiVersion: v1
kind: Pod
metadata:
  name: qos
  annotations:
    kubernetes.io/ingress-bandwidth: 3M
    kubernetes.io/egress-bandwidth: 4M
spec:
  containers:
  - name: iperf3
    image: networkstatic/iperf3
    command:
    - iperf3
    - -s

 


7) Init Containers

Init containers run to completion before any regular container starts; they are commonly used to initialize configuration.

[root@registry ~]# cat v3.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}

 

2. Labels

1) Labels

A resource may carry multiple labels, and a label may be applied to multiple resources.
Every label can be matched by a label selector.
Labels can be defined at creation time (in the YAML or on the create command line) and can also be added later by command.
key=value
key: letters, digits, _, -, .
value: may be empty; must begin and end with a letter or digit; -, _, . may appear in between

[root@master1 ~]# kubectl get pods --show-labels
NAME                           READY   STATUS    RESTARTS   AGE     LABELS
client                         1/1     Running   0          5d23h   run=client
nginx-deloy-67f8c9dc5c-bgx9k   1/1     Running   0          5d22h   pod-template-hash=67f8c9dc5c,run=nginx-deloy

[root@master1 ~]# kubectl get pods -L=run,nginx   //adds RUN and NGINX columns to the output
NAME                           READY   STATUS    RESTARTS   AGE     RUN           NGINX
client                         1/1     Running   0          5d23h   client        
nginx-deloy-67f8c9dc5c-bgx9k   1/1     Running   0          5d22h   nginx-deloy   

[root@master1 ~]# kubectl get pods -l pod-template-hash   //pods that have the pod-template-hash label
[root@master1 ~]# kubectl get pods -l run=client   //pods whose run label equals client
NAME     READY   STATUS    RESTARTS   AGE
client   1/1     Running   0          5d23h
Adding labels:

[root@master1 ~]# kubectl label pods client name=label1
[root@master1 ~]# kubectl label pods client name=label2 --overwrite  //force-overwrite an existing label
Deleting labels:

[root@master1 ~]# kubectl label pods client name-   //appending "-" to the key deletes that label

 

2) Label selectors:

Equality-based: =, ==, !=
Set-based:
KEY in (value1,value2,...)    # kubectl get pods --show-labels -l "name in (label1,label2)"
KEY notin (value1,value2,...) # kubectl get pods --show-labels -l "name notin (label1,label2)"
KEY     the key exists
!KEY    the key does not exist  # kubectl get pods --show-labels -l '!name'
Many resources embed a label selector in their definition:

[root@registry ~]# kubectl explain deployment.spec.selector
    matchLabels: literal key/value pairs
    matchExpressions: {key:"KEY",operator:"OPERATOR",values:[VAL1,VAL2,VAL3..]}
        operators:
            In, NotIn: the values field must be a non-empty list
            Exists, DoesNotExist: the values field must be empty
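A sketch of a selector mixing both forms (the label keys and values are assumptions):

```yaml
selector:
  matchLabels:
    app: myapp
  matchExpressions:
  - {key: release, operator: In, values: [beta, stable]}  # In requires a non-empty values list
  - {key: retired, operator: DoesNotExist}                # DoesNotExist takes no values
```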
Labels can go on many kinds of objects: pods, nodes, ...
annotations: unlike labels, they cannot be used to select objects; they merely attach extra "metadata" to one.
    They can be defined in the YAML file, and viewed with:
[root@registry ~]# kubectl describe pods $pod_name
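A metadata sketch showing labels next to annotations (all keys and values hypothetical); only the labels are selectable:

```yaml
metadata:
  name: annotated-pod              # hypothetical name
  labels:
    app: myapp                     # selectable, e.g. via -l app=myapp
  annotations:
    build/commit: "9f4e21a"        # free-form metadata, invisible to label selectors
    contact: "team@example.com"
```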

 


3. Pod Controllers




Pod controllers:
  ReplicaSet: keeps replicas in the user's desired state; the new-generation RC, supports dynamic scaling (for stateless pods) #core pieces: label selector, desired replica count, pod template
  Deployment: works on top of ReplicaSet; supports rolling updates and more. Deployments are a higher-level concept that manage ReplicaSets and provide declarative pod updates along with many other features, so they are generally recommended.
  DaemonSet: every node in the cluster (or every node with a given label) runs one pod replica
  Job: a run-once pod; the Job ensures the task terminates normally rather than abnormally
  CronJob: a periodic Job
  StatefulSet: stateful pods; the operational logic must be customized
  TPR: ThirdPartyResource, 1.2+, deprecated in 1.7
  CRD: CustomResourceDefinition, 1.8+

1) ReplicaSet

ReplicaSet is the next-generation replica controller; the only difference between a ReplicaSet and a ReplicationController is the selector support. Note: the ReplicationController has been superseded by the ReplicaSet.
A ReplicationController guarantees that a specific number of pod replicas is running at all times: too many, and it kills some; too few, and it creates more.

Deployments are a higher-level concept that manage ReplicaSets and provide declarative pod updates along with many other features, so using Deployments is generally recommended.

[root@master1 ~]# kubectl explain ReplicaSet  //or: kubectl explain rs
[root@master1 yaml]# kubectl create -f rs.yaml 
replicaset.apps/myrs created
[root@master1 yaml]# cat rs.yaml 
apiVersion: apps/v1
kind: ReplicaSet
spec:
  replicas: 2
  selector: 
    matchLabels: 
      app: myapp
      release: beta 
  template: 
    metadata:
      name: myapp-pod
      labels:
         app: myapp
         release: beta
         environment: dev
    spec:
      containers: 
      - name: myrs-container
        image: 192.168.192.234:888/nginx:latest
        ports: 
        - name: http 
          containerPort: 80
metadata: 
  name: myrs
  namespace: default
[root@master1 yaml]# 

[root@master1 yaml]# kubectl get pods -l app
NAME         READY   STATUS    RESTARTS   AGE
myrs-fnhwd   1/1     Running   0          61s
myrs-x5zgh   1/1     Running   0          61s
If an existing pod is given the labels app: myapp, release: beta, the controller sees one pod too many and kills one of them.

[root@master1 yaml]# kubectl get pods --show-labels
NAME                           READY   STATUS    RESTARTS   AGE     LABELS
client                         1/1     Running   0          10d     run=client
liveness-httpget-pod           1/1     Running   1          4d      <none>
myrs-fnhwd                     1/1     Running   0          5m5s    app=myapp,environment=dev,release=beta
myrs-x5zgh                     1/1     Running   0          5m5s    app=myapp,environment=dev,release=beta
nginx-deloy-67f8c9dc5c-bgx9k   1/1     Running   0          10d     pod-template-hash=67f8c9dc5c,run=nginx-deloy
readiness-exec-pod             1/1     Running   0          3d23h   <none>
[root@master1 yaml]# kubectl label pods liveness-httpget-pod app=myapp release=beta  --overwrite
pod/liveness-httpget-pod labeled

[root@master1 ~]# kubectl get pods  --show-labels -l app,release  //one of the myrs pods has been killed
NAME                   READY   STATUS    RESTARTS   AGE     LABELS
liveness-httpget-pod   1/1     Running   1          4d      app=myapp,release=beta
myrs-x5zgh             1/1     Running   0          7m28s   app=myapp,environment=dev,release=beta
Scaling a ReplicaSet:
1) the kubectl scale command
2) edit the YAML, then kubectl apply -f <file>
3) kubectl edit ReplicaSet $rs_name

Upgrading: change the container image version.
kubectl edit rs $rs_name works too, but it only changes the controller's (rs) template; pods pick up the new image only after being deleted and re-created.
Deployment is built on top of RS and supports rolling upgrades, with control over the update logic and strategy (at most / at least N pods).
Adjusting the pod count:
[root@registry ~]# cat v3.yaml 
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler
spec:
  scaleTargetRef:
    kind: ReplicaSet
    name: frontend
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
This uses the ReplicaSet as the target of a Horizontal Pod Autoscaler (HPA).

 

 


2) Deployment

(1) Updating a deployment's image

[root@registry ~]# kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
[root@registry ~]# kubectl edit deployment/nginx-deployment
[root@registry ~]# kubectl rollout status deployment/nginx-deployment   #watch the rollout status
During an upgrade a Deployment guarantees that only a bounded number of pods are down: by default, at least one fewer than the desired count stays up (at most 1 unavailable).
It also guarantees that only a bounded number of pods beyond the desired count are created: by default, at most one extra pod is up (at most 1 surge).
[root@registry work]# kubectl describe deployment $deployment_name
RollingUpdateStrategy:  1 max unavailable, 25% max surge  #view the update strategy

 


(2) Update strategy

[root@registry work]# kubectl explain deployment.spec.strategy  #image update strategy
rollingUpdate  #only honored when type is RollingUpdate
    maxSurge    #maximum number of extra pods, a percentage or an absolute count
    maxUnavailable  #maximum number of unavailable pods
type  #"Recreate" or "RollingUpdate"; defaults to RollingUpdate

[root@master1 yaml]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: dpv1
  namespace: default
spec:
  replicas: 2
  selector: 
    matchLabels:
      name: mydp
      version: dpv1
  template: 
    metadata:
      labels: 
        name: mydp
        version: dpv1 
    spec: 
      containers: 
      - name: mydp
        image: 192.168.192.234:888/nginx:latest
        ports:
          - name: dpport
            containerPort: 80
  strategy:
    rollingUpdate:
      maxSurge: 1
[root@master1 yaml]# kubectl apply -f deployment.yaml  //apply can be run repeatedly and updates in place
label: pods, ReplicaSets, and Deployments can all carry labels
[root@master1 yaml]# kubectl explain deployment.metadata.labels  //deployment can be shortened to deploy (kubectl CLI only)
[root@master1 yaml]# kubectl explain replicaSet.metadata.labels
[root@master1 yaml]# kubectl explain pods.metadata.labels

[root@master1 yaml]# kubectl get deployment --show-labels
NAME   READY   UP-TO-DATE   AVAILABLE   AGE    LABELS
dpv1   2/2     2            2           6m2s   <none>
[root@master1 yaml]# kubectl get replicaSet --show-labels
NAME              DESIRED   CURRENT   READY   AGE    LABELS
dpv1-7b7df4f86c   2         2         2       6m5s   name=mydp,pod-template-hash=7b7df4f86c,version=dpv1
[root@master1 yaml]# kubectl get pods --show-labels  -l version=dpv1
NAME                    READY   STATUS    RESTARTS   AGE     LABELS
dpv1-7b7df4f86c-9sdk8   1/1     Running   0          6m11s   name=mydp,pod-template-hash=7b7df4f86c,version=dpv1
dpv1-7b7df4f86c-nv6xv   1/1     Running   0          6m11s   name=mydp,pod-template-hash=7b7df4f86c,version=dpv1
Pod-template-hash label:
When a Deployment creates or adopts a ReplicaSet, the Deployment controller automatically adds the pod-template-hash label to its pods, to keep the pods of the Deployment's child ReplicaSets from colliding by name.
It hashes the ReplicaSet's PodTemplate and adds the resulting hash as a label value to the ReplicaSet selector, the pod template labels, and the pods the ReplicaSet manages.

 

(3) Modifying pods

Change the image version in the YAML and you can watch the whole rollout:

[root@master1 ~]# kubectl get pods --show-labels -l version -w 
[root@master1 ~]# kubectl apply -f deployment.yaml 
deployment.apps/dpv1 configured
Verify the image update took effect:
[root@master1 ~]# kubectl describe pods $pods 
[root@master1 ~]# kubectl get rs -o wide 
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS       IMAGES                            SELECTOR
dpv1-5778f9d958   2         2         2       11m   mydp             192.168.192.234:888/nginx:v2       name=mydp,pod-template-hash=5778f9d958,version=dpv1
dpv1-7b7df4f86c   0         0         0       26m   mydp             192.168.192.234:888/nginx:latest   name=mydp,pod-template-hash=7b7df4f86c,version=dpv1
You can see two templates are kept: one scaled to 0, one scaled to 2.

[root@master1 yaml]# kubectl rollout history deployment dpv1   //view revision history
deployment.extensions/dpv1 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
Changing the replica count:
method 1: scale
method 2: kubectl edit
method 3: kubectl apply -f *.yaml
method 4: kubectl patch, editing patch-style
[root@master1 yaml]# kubectl patch deployment dpv1 -p '{"spec":{"replicas":5}}'
deployment.extensions/dpv1 patched

If only the image changes you can also use: kubectl set image deployment dpv1 <container>=<image>

 

(4) pause

Pause a rollout so the user can batch several changes; once resumed, all accumulated updates are applied together.

[root@master1 yaml]# kubectl rollout pause $resource_name
[root@master1 yaml]# kubectl set image  deployment dpv1 mydp=192.168.192.234:888/nginx:v3 && kubectl rollout pause deployment dpv1
[root@master1 ~]# kubectl get pods -o wide -l version -w  //stops after one new pod has been created
NAME                    READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
dpv1-5778f9d958-2jn9b   1/1     Running   0          44m   172.30.200.5   master3   <none>           <none>
dpv1-5778f9d958-bsctb   1/1     Running   0          22m   172.30.200.3   master3   <none>           <none>
dpv1-5778f9d958-tq64f   1/1     Running   0          44m   172.30.72.5    master1   <none>           <none>
dpv1-6fb8f5799f-286bg   0/1     Pending   0          0s    <none>         <none>    <none>           <none>
dpv1-6fb8f5799f-286bg   0/1     Pending   0          0s    <none>         master2   <none>           <none>
dpv1-6fb8f5799f-286bg   0/1     ContainerCreating   0          0s    <none>         master2   <none>           <none>
dpv1-6fb8f5799f-286bg   1/1     Running             0          1s    172.30.56.4    master2   <none>           <none>
In a new terminal:  //you'll see 4 pods, while desired is 3

[root@master1 yaml]# kubectl get pods -l version
NAME                    READY   STATUS    RESTARTS   AGE
dpv1-5778f9d958-2jn9b   1/1     Running   0          46m
dpv1-5778f9d958-bsctb   1/1     Running   0          24m
dpv1-5778f9d958-tq64f   1/1     Running   0          46m
dpv1-6fb8f5799f-286bg   1/1     Running   0          78s

[root@master1 yaml]# kubectl rollout status deployment dpv1  //check the current rollout status
Waiting for deployment "dpv1" rollout to finish: 1 out of 3 new replicas have been updated...

Resume the deployment:
[root@master1 ~]# kubectl rollout resume deployment dpv1
deployment.extensions/dpv1 resumed

Rollback: undo; defaults to the previous revision
[root@master1 ~]# kubectl rollout undo deployment dpv1 --to-revision=1 #revision 1; without --to-revision it rolls back to the previous revision
[root@registry work]# kubectl explain deploy.spec.revisionHistoryLimit #how many revisions of history the deployment keeps

 

3) DaemonSet

A DaemonSet guarantees one replica of a container runs on every node and is commonly used for cluster-wide logging, monitoring, or other system agents. Typical uses include:
log collection: fluentd, logstash, etc.
node monitoring: Prometheus Node Exporter, collectd, New Relic agent, Ganglia gmond, etc.
system programs: kube-proxy, kube-dns, glusterd, ceph, etc.

[root@master1 yaml]# cat daemon.yaml 
[root@registry prometheus]# cat node-exporter-ds.yml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.15.2
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
      version: v0.15.2
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        k8s-app: node-exporter
        version: v0.15.2
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-node-critical
      containers:
        - name: prometheus-node-exporter
          image: "prom/node-exporter:v0.15.2"
          imagePullPolicy: "IfNotPresent"
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
          ports:
            - name: metrics
              containerPort: 9100
              hostPort: 9100
          volumeMounts:
            - name: proc
              mountPath: /host/proc
              readOnly:  true
            - name: sys
              mountPath: /host/sys
              readOnly: true
          resources:
            limits:
              cpu: 10m
              memory: 50Mi
            requests:
              cpu: 10m
              memory: 50Mi
      hostNetwork: true
      hostPID: true
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys

[root@registry prometheus]# kubectl explain daemonset.spec
updateStrategy  #pod update strategy
template    #pod template
selector    #label selector, via matchExpressions / matchLabels
revisionHistoryLimit  #number of history revisions to keep
minReadySeconds #minimum seconds a new pod must be ready to be considered available

 

4) Job / CronJob

A Job handles short-lived one-off tasks, i.e. tasks that execute once; it guarantees that one or more pods of the batch task end successfully.
Kubernetes supports the following Job flavors:
1) Non-parallel Job: normally one pod per Job; the pod is restarted only on failure, and the Job completes as soon as the pod ends normally.
2) Job with a fixed completion count: starts multiple pods, with .spec.parallelism controlling the parallelism, until .spec.completions pods have ended successfully.
3) Parallel Job with a work queue: set .spec.parallelism but not .spec.completions; the Job counts as successful once all pods have ended and at least one succeeded.

The Job controller creates pods according to the Job spec and keeps monitoring them until they end successfully. On failure, restartPolicy (only OnFailure and Never are supported, not Always) decides whether to create a new pod and retry the task.

(1) Job YAML definition

[root@registry work]# cat job.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 2
  parallelism: 3
  template:
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: "192.168.192.234:888/nginx:latest"
        command: ["sh","-c","echo test for nginx &&  sleep 5"]
      restartPolicy: Never
(2) CronJob

[root@registry work]# cat cronjob.yaml 
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: 192.168.192.234:888/nginx:latest
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster && sleep 5
          restartPolicy: OnFailure
Schedule format: minute hour day-of-month month day-of-week  //supported characters include "*" (any value for the field) and "/" (step interval)
spec.schedule: the run schedule, standard cron format
spec.jobTemplate: the task to run, same format as a Job
spec.startingDeadlineSeconds: deadline for starting a run
spec.concurrencyPolicy: concurrency policy: Allow, Forbid, or Replace
spec.suspend: when true, all subsequent runs are suspended
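A sketch combining the fields above (the schedule, names, and image are assumptions):

```yaml
apiVersion: batch/v2alpha1          # matches the API version used in the example above
kind: CronJob
metadata:
  name: nightly-cleanup             # hypothetical name
spec:
  schedule: "0 3 * * *"             # every day at 03:00
  startingDeadlineSeconds: 120      # give up on a run that cannot start within 2 minutes
  concurrencyPolicy: Forbid         # never start a run while the previous one is still going
  suspend: false
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cleanup
            image: busybox
            command: ["sh", "-c", "echo cleaning up"]
          restartPolicy: OnFailure
```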

 

 

 

4. Service

 

A Service gives clients a fixed access address. k8s exposes three kinds of IPs (node IP, pod IP, cluster IP), and Service DNS resolution depends heavily on CoreDNS.
kube-proxy constantly watches the api-server for any change to Service-related resources and programs rules on its node accordingly: api-server ---[watch]--- kube-proxy
The three proxy models a Service can use:
1) userspace
Client pod [user space] --> service (iptables, kernel space) --> kube-proxy --> forwarded to the kube-proxy on the serving node --> backing pod
kube-proxy performs the scheduling.
2) iptables
Client pod --> service (iptables) --> backend
No longer depends on kube-proxy for scheduling.
3) ipvs  //1.11+; ipvs modules: ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4 (connection tracking); requires the dedicated option KUBE_PROXY_MODE=ipvs
Client pod --> service (ipvs) --> backend
Service types:
ClusterIP
NodePort: client -> node_ip:node_port -> cluster_ip:cluster_port --> pod_ip:container_port
LoadBalancer  //a cloud provider's load balancer
ExternalName: a domain name outside the cluster
FQDN:
CNAME -> FQDN  //resolves an external domain name into one usable inside the cluster
Note: all of these need a cluster IP, except:
Headless services, which have no cluster IP:
headless service:
ServiceName --> PodIP

1) Service YAML example

[root@registry work]# cat v3.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      name: nginx
      department: dev
  replicas: 2
  template:
    metadata:
      name: nginx
      labels:
        name: nginx
        department: dev
    spec:
      containers:
      - name: nginx
        image: 192.168.192.234:888/nginx:latest
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: test-nginx
spec:
  ports: 
  - name: http
    port: 8080
    targetPort: 80
    protocol: TCP
  - name: ssh
    port: 2222
    targetPort: 22
    protocol: TCP

  selector:
    name: nginx
    department: dev
  type: ClusterIP
[root@registry work]# kubectl explain svc.spec

clusterIP  #the service IP, valid only inside the cluster
externalIPs  #external IPs accepted for the service
healthCheckNodePort #health-check port
ports  #nodePort: port on the node; port: the service port; targetPort: the pod port
selector #label selector
type #service type: ExternalName, ClusterIP, NodePort, or LoadBalancer
    ExternalName: make an external service usable inside the cluster
    ClusterIP: usable inside the cluster only
    NodePort: exposed on the nodes' own network
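As a sketch, a NodePort variant of the service above might look like this (the nodePort value and name are assumptions); it becomes reachable from outside the cluster at any node_ip:30080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport          # hypothetical name
spec:
  type: NodePort
  selector:
    name: nginx
    department: dev
  ports:
  - port: 8080                  # the service (cluster IP) port
    targetPort: 80              # the container port
    nodePort: 30080             # must fall in the node-port range (default 30000-32767)
```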
2) Command-line approach

[root@master1 ~]# kubectl run nginx-deloy --image=192.168.192.234:888/nginx:latest --port=80 --replicas=1
[root@master1 ~]# kubectl get pods -o wide   //the pod is visible under both the ReplicaSet and the Deployment; describe shows Controlled By: ReplicaSet
NAME                           READY   STATUS    RESTARTS   AGE    IP            NODE      NOMINATED NODE   READINESS GATES
nginx-deloy-67f8c9dc5c-4nk68   1/1     Running   0          3m5s   172.30.72.3   master1   <none>           <none>
The pod runs on master1,
and curl $pod_ip:80 works directly.
Problem 1: after kubectl delete pods $podname, the re-created pod gets a different name.
Solution: kubectl expose; the service DNATs service_ip:service_port to pod_ip:pod_port.

Usage:
  kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name]
[--name=name] [--external-ip=external-ip-of-service] [--type=type] [options]    
  --type='': Type for this service: ClusterIP, NodePort, LoadBalancer, or ExternalName. Default is 'ClusterIP'.
  (-f FILENAME | TYPE NAME)  //TYPE: controller type, NAME: controller name. After expose, the service provides a fixed IP, but only pod clients inside the cluster can reach it.
Find a pod's controller with: kubectl describe pods $pod_name

[root@master1 ~]# kubectl expose deployment nginx-deloy  --name nginx1 --port=80 --target-port=80  --protocol=TCP 
service/nginx1 exposed
[root@master1 ~]# kubectl get services
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
dnsutils-ds   NodePort    10.254.98.197   <none>        80:31707/TCP   2d5h
kubernetes    ClusterIP   10.254.0.1      <none>        443/TCP        2d9h
my-nginx      ClusterIP   10.254.1.216    <none>        80/TCP         2d5h
nginx1        ClusterIP   10.254.63.79    <none>        80/TCP         11s
From inside the pod network, curl $service_ip:80 also works  //nodes outside the cluster cannot reach it

3) DNS resolution

kubectl get svc -n kube-system  //kube-dns provides DNS for services, resolving service names directly
A newly created pod's /etc/resolv.conf contains a nameserver entry pointing at kube-dns's IP,
plus a search domain list  //incomplete names are auto-completed

[root@client /home/admin]
#cat /etc/resolv.conf 
nameserver 10.244.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
    //pinging nginx1 inside the client machine auto-completes to nginx1.default.svc.cluster.local (10.254.63.79); it resolves to the cluster IP
[root@master1 ~]# dig -t A nginx1.default.svc.cluster.local @10.254.0.2  #this works
    kubectl describe service nginx  //shows the actual state
    kubectl get pods --show-labels
    kubectl edit service $svc_name  //edit the service definition in place
[root@master1 ~]# kubectl describe svc  nginx1  //after a pod is deleted its IP changes, but access through the cluster IP keeps working
        //the svc and its pods are linked by the label selector
Name:              nginx1
Namespace:         default
Labels:            run=nginx-deloy
Annotations:       <none>
Selector:          run=nginx-deloy   //selects pods labeled run=nginx-deloy
Type:              ClusterIP
IP:                10.254.63.79
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.30.200.3:80
Session Affinity:  None
Events:            <none>

[root@master1 ~]# kubectl get pods --show-labels
NAME                           READY   STATUS    RESTARTS   AGE   LABELS
client                         1/1     Running   0          24m   run=client
nginx-deloy-67f8c9dc5c-zpv4q   1/1     Running   0          38m   pod-template-hash=67f8c9dc5c,run=nginx-deloy
service-->endpoints-->pod  //k8s has an Endpoints resource
Now curl $cluster_ip:80 is enough to reach the service.
DNS resource records:
SVC_NAME.NS_NAME.DOMAIN.LTD. #e.g. mysvc.default.svc.cluster.local.

4) Headless service

No cluster IP: from the service, a client obtains the list of pods behind the label selector and decides itself how to use that pod list.
Definition of a headless service:

[root@master1 yaml]# kubectl apply -f heaness.yaml 
service/headness created
[root@master1 yaml]# cat heaness.yaml 
apiVersion: v1
kind: Service
metadata:
  name: headness
  namespace: default
spec:
  clusterIP: ""
  ports: 
    - name: mysrvport 
      port: 80
      targetPort: 80
  selector: 
    name: mydp
    version: dpv1
[root@master1 yaml]# 
[root@master1 yaml]# cat heaness.yaml 
apiVersion: v1
kind: Service
metadata:
  name: headness
  namespace: default
spec:
  selector: 
    name: mydp
    version: dpv1
  clusterIP: "None"
  ports: 
    - name: mysrvport 
      port: 80
      targetPort: 80
[root@master1 yaml]# kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
dnsutils-ds   NodePort    10.254.98.197   <none>        80:31707/TCP   12d
headness      ClusterIP   None            <none>        80/TCP         9s   //this must literally be "None"
kubernetes    ClusterIP   10.254.0.1      <none>        443/TCP        13d
mysvc         ClusterIP   10.254.0.88     <none>        80/TCP         38m

[root@master1 yaml]# dig -t A headness.default.svc.cluster.local  @172.30.200.2  //returns three addresses
headness.default.svc.cluster.local. 5 IN A  172.30.72.6
headness.default.svc.cluster.local. 5 IN A  172.30.56.4
headness.default.svc.cluster.local. 5 IN A  172.30.200.3

[root@master1 yaml]# dig -t A mysvc.default.svc.cluster.local  @172.30.200.2  // a service with a ClusterIP resolves only to that ClusterIP
mysvc.default.svc.cluster.local. 5 IN   A   10.254.0.88
5、Ingress

1、ingress

Ways k8s exposes services: LoadBalancer Service, ExternalName, NodePort Service, Ingress
The Ingress controller talks to the Kubernetes API to watch Ingress rule changes in the cluster; it reads the rules, renders an nginx configuration from its own template, writes it into the nginx Pod, and finally reloads nginx.
A layer-4 load balancer does not terminate sessions (see the working modes: nat, dr, fullnat, tun); the client establishes the session with the backend itself.
A layer-7 load balancer: the client only connects to the proxy, and the proxy manages the session to the backend.
Ingress resources come in the following flavors: 
1、Single-Service Ingress # only spec.backend is set, with no other rules
2、Fanout by URL path # spec.rules.http.paths routes different URLs of the same site to different backends
3、Name-based virtual hosting # spec.rules.host distinguishes different sites by hostname
4、TLS Ingress # the TLS private key and certificate come from a Secret (keys named tls.crt and tls.key)
Ingress controllers # HAProxy / nginx / Traefik / Envoy (service mesh) 
Usually more than one service is exposed: URLs distinguish the virtual hosts (server blocks), and each server block is directed at a different group of Pods.

A service watches its own Pods through its label selector; when the Pods change, the service updates accordingly.
The ingress controller tracks Pod state changes through a (headless) service; the service reflects the changes to the ingress in time.
The service classifies the backend Pods (headless); when the classified Pod set changes, the ingress reacts immediately.
From the service's classification, the ingress obtains the list of Pod IPs and injects that list into the ingress configuration.

Steps to create an Ingress:
1、Deploy an ingress controller
2、Configure the frontend: server virtual hosts
3、From the Pod information collected by the service, generate the upstream servers, reflect them in the Ingress, and register them with the ingress controller
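As a sketch of the name-based virtual hosting flavor listed above: the `svc-a` and `svc-b` Services and the hostnames are hypothetical, and the `extensions/v1beta1` API matches the version used elsewhere in this post.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-vhosts
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: a.mt.com            # requests with Host: a.mt.com go to svc-a
    http:
      paths:
      - backend:
          serviceName: svc-a  # hypothetical Service
          servicePort: 80
  - host: b.mt.com            # requests with Host: b.mt.com go to svc-b
    http:
      paths:
      - backend:
          serviceName: svc-b  # hypothetical Service
          servicePort: 80
```

The controller renders one nginx server block per host, each with its own upstream group.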

2、Installation

Installation steps # https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
Overview: https://github.com/kubernetes/ingress-nginx: ingress-nginx 
By default the controller watches all namespaces; to watch only a specific one, pass --watch-namespace.
If a single host defines several paths, the ingress controller merges the configuration.
[root@master1 yaml]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
The manifest contains:
1) Namespace [ingress-nginx]
2) ConfigMap [nginx-configuration], ConfigMap [tcp-services], ConfigMap [udp-services]
3) RoleBinding [nginx-ingress-role-nisa-binding] = Role [nginx-ingress-role] + ServiceAccount [nginx-ingress-serviceaccount]
4) ClusterRoleBinding [nginx-ingress-clusterrole-nisa-binding] = ClusterRole [nginx-ingress-clusterrole] + ServiceAccount [nginx-ingress-serviceaccount]
5) Deployment [nginx-ingress-controller], which consumes ConfigMap [nginx-configuration], ConfigMap [tcp-services] and ConfigMap [udp-services] as its configuration

[root@master1 yaml]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
[root@master1 yaml]# cat service-nodeport.yaml  # after modification
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
     app.kubernetes.io/name: ingress-nginx
     app.kubernetes.io/part-of: ingress-nginx

[root@master1 yaml]# kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.254.90.250   <none>        80:30080/TCP,443:30443/TCP   10m

Verify: 
[root@master1 ingress]# kubectl describe svc  ingress-nginx -n ingress-nginx 
Name:                     ingress-nginx
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/par...
Selector:                 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type:                     NodePort
IP:                       10.254.90.250
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  30080/TCP
Endpoints:                172.30.72.7:80
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  30443/TCP
Endpoints:                172.30.72.7:443       // this Endpoints list must be non-empty
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

[root@master1 ingress]# curl $ingress_nginx_srv_Ip:80 # a 404 response means the controller is serving and forwarding correctly
To deploy the controller on all (or only some) cluster nodes, change the Deployment part of the yaml to a DaemonSet and set it to share the host's network namespace:
[root@master1 yaml]# kubectl explain DaemonSet.spec.template.spec.hostNetwork
[root@master1 yaml]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
nginx-ingress-controller-6c777cd89c-65xr4   1/1     Running   0          6m53s   172.30.72.7   master1   <none>           <none>
[root@master1 yaml]# curl -I  172.30.72.7/healthz  # returns 200
3、Verify the installation

[root@registry ~]# kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
[root@registry ~]# POD_NAMESPACE=ingress-nginx
[root@registry ~]# POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
[root@registry ~]# kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
4、Creating an Ingress

1) Create the Service and Deployment

[root@master1 ingress]# kubectl apply -f ingress.yaml 
apiVersion: v1
kind: Service
metadata:
  name: mysrv-v2
  namespace: default
spec:
  selector: 
    name: mydpv2 
    version: dpv2
  ports:
    - name: http
      port: 80 
      targetPort: 80

---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: dpv2
  namespace: default
spec:
  replicas: 2
  selector: 
    matchLabels:
      name: mydpv2
      version: dpv2
  template: 
    metadata:
      labels: 
        name: mydpv2
        version: dpv2
    spec: 
      containers: 
      - name: mydpv2
        image: 192.168.192.234:888/nginx:v2
        ports:
          - name: dpport
            containerPort: 80
  strategy:
    rollingUpdate:
      maxSurge: 1
[root@master1 ingress]# kubectl get pods -l name=mydpv2 --show-labels
NAME                    READY   STATUS    RESTARTS   AGE   LABELS
dpv2-697556c88f-pbjm9   1/1     Running   0          66s   name=mydpv2,pod-template-hash=697556c88f,version=dpv2
dpv2-697556c88f-sxr84   1/1     Running   0          66s   name=mydpv2,pod-template-hash=697556c88f,version=dpv2
2) Publish the service via an Ingress

[root@master1 ingress]# kubectl apply -f ingress-myapp.yaml
[root@master1 ingress]# cat ingress-myapp.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp   
  namespace: default    # must be in the same namespace as the Deployment it publishes
  annotations:
    kubernetes.io/ingress.class: "nginx"    # states which ingress controller this Ingress targets
spec:
  rules: 
  - host: www.mt.com
    http: 
      paths: 
        - path:
          backend: 
            serviceName: mysrv-v2 
            servicePort: 80 
Inspect the generated nginx configuration:

[root@master1 ingress]# kubectl describe ingress
Name:             ingress-myapp
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  www.mt.com  
                 mysrv-v2:80 (172.30.56.6:80,172.30.72.8:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-myapp","namespace":"default"},"spec":{"rules":[{"host":"www.mt.com","http":{"paths":[{"backend":{"serviceName":"mysrv-v2","servicePort":80},"path":null}]}}]}}

  kubernetes.io/ingress.class:  nginx
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  21s   nginx-ingress-controller  Ingress default/ingress-myapp

[root@master1 ingress]# curl www.mt.com:30080   // returns the page normally
3) HTTPS

[root@master1 ingress]# kubectl apply -f tomcat.yaml 
service/tomcat created
deployment.apps/tomcat-deploy created
[root@master1 ingress]# cat tomcat.yaml 
apiVersion: v1
kind: Service
metadata: 
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
    - name: http
      targetPort: 8080
      port: 8080
    - name: ajp
      targetPort: 8009
      port: 8009

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:  
  replicas: 3
  selector: 
    matchLabels: 
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: myapp
        image: 192.168.192.234:888/tomcat:latest
        ports: 
        - name: http
          containerPort: 8080 
        - name: ajp
          containerPort: 8009 
[root@master1 ingress]# kubectl describe svc tomcat
[root@master1 ingress]# cat ingress-tomcat.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules: 
  - host: www.mt.com
    http: 
      paths: 
        - path:
          backend: 
            serviceName: tomcat
            servicePort: 8080 

[root@master1 ingress]# curl www.mt.com:30080  # verify; 30080 is the NodePort nginx exposes
https (the certificate is injected into the pod via a Secret):

[root@master1 ssl]# openssl genrsa -out tls.key 2048
[root@master1 ssl]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=HangZhou/L=HangZhou/O=DevOps/CN=www.mt.com
[root@master1 ssl]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key 
[root@master1 ingress]# kubectl describe secret tomcat-ingress-secret 
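Before loading the pair into the Secret, it can be sanity-checked locally. A quick sketch (writes throwaway files under /tmp):

```shell
# Recreate the self-signed pair the same way as above, then confirm the CN
# that the Ingress host (www.mt.com) must match.
openssl genrsa -out /tmp/tls.key 2048
openssl req -new -x509 -key /tmp/tls.key -out /tmp/tls.crt \
  -subj /C=CN/ST=HangZhou/L=HangZhou/O=DevOps/CN=www.mt.com
openssl x509 -in /tmp/tls.crt -noout -subject   # the subject line should contain CN=www.mt.com
```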
Switch nginx to https mode:

[root@master1 ingress]# kubectl apply -f  ingress-tomcat-tls.yaml
[root@master1 ingress]# cat ingress-tomcat-tls.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - www.mt.com
    secretName: tomcat-ingress-secret
  rules: 
  - host: www.mt.com
    http: 
      paths: 
        - path:
          backend: 
            serviceName: tomcat
            servicePort: 8080 
Log into the ingress pod and check that the ssl configuration was generated correctly.
6、ServiceAccount/UserAccount

K8s has two independent account systems: User Accounts are for people, while Service Accounts are for processes running inside Pods; they serve different subjects.
User Accounts are cluster-global; Service Accounts are scoped to a namespace.
Every namespace automatically gets a default service account.
The Token controller watches service account creation and creates a secret for each one.
How a Pod client accesses the API Server's https port: 
1) controller-manager creates a token for the Pod, signed with the api-server's private key
2) when the pod calls the api server, it passes the token in an HTTP header
3) the api server verifies that the token is valid using its key pair

[root@master1 merged]# kubectl exec -it nginx-587764dd97-29n2g  -- ls -l /run/secrets/kubernetes.io/serviceaccount
total 0
lrwxrwxrwx 1 root root 13 Aug  9 10:02 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Aug  9 10:02 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Aug  9 10:02 token -> ..data/token
1、Create a ServiceAccount

[root@master1 merged]# kubectl create serviceaccount nginx
serviceaccount/nginx created
[root@master1 merged]# kubectl get serviceaccount nginx -o yaml  # a token secret was created automatically
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-08-10T13:37:38Z"
  name: nginx
  namespace: default
  resourceVersion: "1066770"
  selfLink: /api/v1/namespaces/default/serviceaccounts/nginx
  uid: 04af83c1-bb74-11e9-9c2a-00163e000999
secrets:
- name: nginx-token-pthgt

[root@master1 merged]# kubectl get secret nginx-token-pthgt -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR4akNDQXE2Z0F3SUJBZ0lVSXJ4Q2diY2lGZXNUVGxuVU1heDlla3JDSmxjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2FURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVJFd0R3WURWUVFIRXdoSQpZVzVuV21odmRURU1NQW9HQTFVRUNoTURhemh6TVJFd0R3WURWUVFMRXdoR2FYSnpkRTl1WlRFVE1CRUdBMVVFCkF4TUthM1ZpWlhKdVpYUmxjekFlRncweE9UQTRNREl4TVRNNU1EQmFGdzB5TWpBNE1ERXhNVE01TURCYU1Ha3gKQ3pBSkJnTlZCQVlUQWtOT01SRXdEd1lEVlFRSUV3aElZVzVuV21odmRURVJNQThHQTFVRUJ4TUlTR0Z1WjFwbwpiM1V4RERBS0JnTlZCQW9UQTJzNGN6RVJNQThHQTFVRUN4TUlSbWx5YzNSUGJtVXhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURKN1hueEVjWFAKS1hqZFpVY3VETVhQdFE5NzkxQjlvNDFFV05FWGNJbFZqM0NueVVuVmIxaHk1WittQzNGMGRtcGxGbTE3eTI2UQp6bThSV2RNZlh4MnlJZS9DSHJlV2o5Y3hMc0JLSm9xM0JvRmNLdmZqc1Z2dzIwaUZ1ZEEwektHREZEc01US3JYClQ3UUdGTEtveWM1N3BTS08yUGt5aG9OeDc2cHdDYllCMndpam4vRnRmMEYvTXpiTXBlR3E0WXkzK3VEUm9ubTIKeUI5TG1uOFRRT2NLb1ZWZHBPWDRIb1E5MGpkdy9EcUlzMUk4OXg3ZjNIZTBISDBGWkpzN3JJQ01TbG50QlR5RApWZkpaZ3N6YldYeHdzb25IeitzaVh3cnBTUDN6RGlmaWJLWHlmdUM1KzBreEJxWDNOQUZUVVFwVDErbTFmNEJwCnRCNTI3ZUpDdXdPakFnTUJBQUdqWmpCa01BNEdBMVVkRHdFQi93UUVBd0lCQmpBU0JnTlZIUk1CQWY4RUNEQUcKQVFIL0FnRUNNQjBHQTFVZERnUVdCQlQ3dmtPOUZOUGl2T0hhMjl2ODAzZFlrZGcvMERBZkJnTlZIU01FR0RBVwpnQlQ3dmtPOUZOUGl2T0hhMjl2ODAzZFlrZGcvMERBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQVJFdTlSZmZMCmdDYS9kS3kvTU1sNmd5dHZMMzJPMlRnNko4RGw5L0ZJanJtaXpVeC9xV256b29wTGNzY0hRcC9rcEQ1QURLTDUKcmJkTXZCcFNicUVncTlaT2hYeHg5SCtVYnZiY3IwV3h3N2xEbTdKR0lUb2FhckVrckk2ZExXcVpqczl4bTBPdAowb3RETjcwaFkxaG0rWTNqQXQzanV3WVByUFcxb2RJM013L3ArWG9DWTR3bkcycDMyc1grNzdCVUl0eHhkeGY1CkJJWHoyWURMdzNLbXNScCtXdW1DaTNwSUV3bXFHbnJDNFhQUzhXbDdleGFZZkxDRWgwRjVQU2NYb1MxdjZGYjYKYlNrM0h2ZVRLNzduNDgwUXZ0blJFOVZVSGFMRkdBUm5SRkh0NDVRbS9teUc1dXdKbW9zVnlRajVZdVRSVjdTbwpSZE1DVEVXOHFESGp1UT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkltNW5hVzU0TFhSdmEyVnVMWEIwYUdkMElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVibUZ0WlNJNkltNW5hVzU0SWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWRXbGtJam9pTURSaFpqZ3pZekV0WW1JM05DMHhNV1U1TFRsak1tRXRNREF4TmpObE1EQXdPVGs1SWl3aWMzVmlJam9pYzNsemRHVnRPbk5sY25acFkyVmhZMk52ZFc1ME9tUmxabUYxYkhRNmJtZHBibmdpZlEuVUpzV3pHTEloVHBkVWc4LTVweXlya0pkWnBJWEhpbFB6Nzk3Xy1qLW5nQVNxNFRjYkVrSFFkXzR1d1J6X2lkaGZvc0NUY2NxYW1oR1pzcU9PWldWWXBzeGdZbHY4aTRuQjJWNTJtaXN2OTZHTVFBb3FUVlE5aXBMNWN6VFY0cXlicGVyZ2w4WHpXU3B1SDhVTUVfQk5YRFZuZ0d4UGU2RUNTcU1qaEpWUDZjczVWTVFVZlp2dEl1dHRuRHhvWHlwdFN4RXNyaWJEUVNxNDQyeVYwU3hHRGpOMDZYZThtblNxamlpc0R2MmQ0c1NYNmpaczU0R3hlMlBQdUtWZjFoLUtRRzNKdDVwRTZic3FBcnJkcm83TDlJZHJRbkl6dUQ2QlJ5ZnRqTWdGcXVXVk9CeWNJcVkzaGY3Q0pmeThmNmJ0QXZrVi1XNUZFdE5TY3pKZXQ4SWZ3
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: nginx
    kubernetes.io/service-account.uid: 04af83c1-bb74-11e9-9c2a-00163e000999
  creationTimestamp: "2019-08-10T13:37:38Z"
  name: nginx-token-pthgt
  namespace: default
  resourceVersion: "1066769"
  selfLink: /api/v1/namespaces/default/secrets/nginx-token-pthgt
  uid: 04b16e50-bb74-11e9-a28c-00163e000318
type: kubernetes.io/service-account-token
[root@master1 merged]# kubectl describe secret nginx-token-pthgt 
Name:         nginx-token-pthgt
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: nginx
              kubernetes.io/service-account.uid: 04af83c1-bb74-11e9-9c2a-00163e000999

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1371 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im5naW54LXRva2VuLXB0aGd0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im5naW54Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMDRhZjgzYzEtYmI3NC0xMWU5LTljMmEtMDAxNjNlMDAwOTk5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6bmdpbngifQ.UJsWzGLIhTpdUg8-5pyyrkJdZpIXHilPz797_-j-ngASq4TcbEkHQd_4uwRz_idhfosCTccqamhGZsqOOZWVYpsxgYlv8i4nB2V52misv96GMQAoqTVQ9ipL5czTV4qybpergl8XzWSpuH8UME_BNXDVngGxPe6ECSqMjhJVP6cs5VMQUfZvtIuttnDxoXyptSxEsribDQSq442yV0SxGDjN06Xe8mnSqjiisDv2d4sSX6jZs54Gxe2PPuKVf1h-KQG3Jt5pE6bsqArrdro7L9IdrQnIzuD6BRyftjMgFquWVOBycIqY3hf7CJfy8f6btAvkV-W5FEtNSczJet8Ifw
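The token above is a JWT (header.payload.signature, each segment base64url-encoded), so its claims can be read without any Kubernetes tooling. A sketch using a shortened stand-in token (only the iss claim is kept and the signature segment is a dummy):

```shell
# Split off the payload segment of a JWT and base64url-decode it.
TOKEN='eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50In0.c2ln'
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2)
# base64url drops padding; restore it before decoding
case $(( ${#PAYLOAD} % 4 )) in 2) PAYLOAD="${PAYLOAD}==" ;; 3) PAYLOAD="${PAYLOAD}=" ;; esac
DECODED=$(echo "$PAYLOAD" | tr '_-' '/+' | base64 -d)
echo "$DECODED"   # {"iss":"kubernetes/serviceaccount"}
```

Running the same decode on the payload segment of the real token shows the namespace, secret name and service account name claims listed in the describe output.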
A Service Account provides a convenient authentication mechanism for services, but it does not address authorization; combine it with RBAC to authorize the Service Account.
A Secret belongs to a Service Account as part of it, and one Service Account can hold several different Secret objects.

Notes on those Secrets:
1) the secret named token is used to access the API Server, and is also called the Service Account Secret 
2) a secret named in imagePullSecrets authenticates pulls of container images
3) other user-defined Secrets are for the application's own processes

2、Create a UserAccount

The rough steps are:
1) generate a personal private key and a certificate signing request (CSR)
2) have the cluster CA sign the CSR
3) write the signed credentials into a kubeconfig for convenient use
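A local simulation of those steps, with a throwaway CA standing in for the cluster CA (assumption: on a real cluster you would sign with the cluster's own ca.crt/ca.key instead, then load the result with kubectl config set-credentials; the devuser name is illustrative):

```shell
# 1) personal key + CSR; the CN becomes the k8s user name, O the group
openssl genrsa -out /tmp/devuser.key 2048
openssl req -new -key /tmp/devuser.key -out /tmp/devuser.csr -subj "/CN=devuser/O=devgroup"
# 2) sign the CSR -- here with a throwaway CA instead of the real cluster CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -subj "/CN=kubernetes" -days 1
openssl x509 -req -in /tmp/devuser.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
  -CAcreateserial -out /tmp/devuser.crt -days 1
# 3) on a real cluster, record it in a kubeconfig:
#    kubectl config set-credentials devuser \
#      --client-certificate=/tmp/devuser.crt --client-key=/tmp/devuser.key --embed-certs=true
openssl verify -CAfile /tmp/ca.crt /tmp/devuser.crt
```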

Reference: https://blog.51cto.com/hmtk520/2423253

7、Secret&Configmap

 

A ConfigMap stores configuration data as key/value pairs; it can hold single properties or entire config files. A ConfigMap is very similar to a Secret, but it is more convenient for strings that contain no sensitive information.
Ways to configure a containerized application:
1、Custom command-line arguments
command 
args 
2、Bake the config file directly into the image
3、Environment variables
1) Cloud-native applications can usually load configuration directly from env vars
2) or an entrypoint script pre-processes the information in the config file
4、Volumes
configmap 
secret

[root@master1 yaml]# kubectl explain pods.spec.containers 
[root@master1 yaml]# kubectl explain pods.spec.containers.envFrom  # can come from a configMap
    configMapRef
    prefix
    secretRef
[root@master1 yaml]# kubectl explain pods.spec.containers.env.valueFrom
    configMapKeyRef  # read from a configmap
    fieldRef  # a field of the pod itself, e.g. metadata.name, metadata.namespace, ...
    resourceFieldRef    
    secretKeyRef  # read from a secret 
1、Create a ConfigMap

[root@master1 yaml]# kubectl create configmap nginx-configmap --from-literal=nginx_port=80 --from-literal=server_name=www.mt.com 
[root@master1 configmap]# kubectl create configmap configmap-1 --from-file=www=./www.conf 
configmap/configmap-1 created
[root@master1 configmap]# cat www.conf 
server {
    server_name www.mt.com;
    listen  80;
    root /data/web/html;
}
[root@master1 configmap]# kubectl create configmap special-config --from-file=config/  # create from a directory
[root@master1 configmap]# kubectl describe configmap configmap-1  # inspect
2、Using a ConfigMap

A ConfigMap can set environment variables, set container command-line arguments, or create config files in a Volume # the configmap must exist before the Pod is created

[root@registry work]# cat configmap.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
spec:
  containers:
    - name: test-container
      image: 192.168.192.234:888/nginx:latest
      env:
        - name: pod-port #env var inside the pod; it references the nginx_port key of nginx-configmap
          valueFrom:
            configMapKeyRef:
              name: nginx-configmap #the configmap being referenced
              key: nginx_port
              optional: True    #whether the key is optional
        - name: pod_name
          valueFrom:
            configMapKeyRef:
              name: nginx-configmap
              key: server_name
      envFrom:  #configmaps to import wholesale (several can be listed); this pulls in every key of nginx-configmap
        - configMapRef:
            name: nginx-configmap
  restartPolicy: Never
Modify the configmap:

[root@master1 configmap]# kubectl edit configmap nginx-configmap
[root@master1 configmap]# kubectl describe configmap nginx-configmap  # shows the modification
But inside the pod nothing changes: the environment-variable method is only evaluated at container startup. Use the volume method instead.
3、Consuming a ConfigMap as a Volume

[root@master1 configmap]# kubectl apply -f conf2.yaml 
[root@master1 configmap]# cat conf2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm1
  namespace: default
  labels: 
    app: cm1
    release: centos
  annotations:
    www.mt.com/created-by: "mt" 
spec:
  containers:
  - name: pod-cm1
    image: 192.168.192.225:80/csb-broker:latest
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/config.d/
      readOnly: True 
  volumes:
  - name: nginxconf
    configMap:
      name: configmap-1

[root@pod-cm2 /home/admin]
#cat /etc/nginx/config.d/nginx_port 
8080
[root@pod-cm2 /home/admin]
#cat /etc/nginx/config.d/server_name 
www.mt.com
With kubectl edit configmap, the files inside the container now change as well.
To mount only some of the key/value pairs in a configmap (one configmap may hold many):

[root@master1 configmap]# kubectl explain pods.spec.volumes.configMap
4、Secret

A Secret is similar to a ConfigMap, but holds sensitive configuration information.
Secrets come in three types:
Opaque: a base64-encoded Secret for passwords, keys, etc.; since the data can be recovered with base64 --decode, the protection is weak.
kubernetes.io/dockerconfigjson: stores credentials for a private docker registry.
kubernetes.io/service-account-token: referenced by a serviceaccount. Kubernetes creates one automatically when a serviceaccount is created, and if a Pod uses that serviceaccount, the secret is mounted into the Pod at /run/secrets/kubernetes.io/serviceaccount.

[root@master1 configmap]# kubectl create secret 
Create a secret using specified subcommand.
Available Commands:
  docker-registry Create a secret for use with a Docker registry // credentials for a private registry
  generic         Create a secret from a local file, directory or literal value  
  tls             Create a TLS secret  // key material
[root@master1 configmap]# kubectl create secret generic mysql-root-password --from-literal=password=password123
secret/mysql-root-password created
[root@master1 configmap]# kubectl get secret mysql-root-password
NAME                           TYPE                                  DATA   AGE
mysql-root-password            Opaque                                1      8s
[root@master1 configmap]# kubectl describe secret mysql-root-password
[root@master1 configmap]# kubectl get secret mysql-root-password -o yaml
apiVersion: v1
data:
  password: cGFzc3dvcmQxMjM=  // base64-encoded; decode with: echo cGFzc3dvcmQxMjM= | base64 -d
kind: Secret
metadata:
  creationTimestamp: "2019-07-05T08:55:48Z"
  name: mysql-root-password
  namespace: default
  resourceVersion: "2794776"
  selfLink: /api/v1/namespaces/default/secrets/mysql-root-password
  uid: af3180b4-9f02-11e9-8691-00163e000bdd
type: Opaque
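The base64 step is easy to check by hand; a quick round-trip sketch (the -n matters, otherwise a trailing newline gets encoded too):

```shell
# Encode the password exactly as kubectl stores it, then decode it back.
ENCODED=$(echo -n password123 | base64)
echo "$ENCODED"               # cGFzc3dvcmQxMjM=
echo "$ENCODED" | base64 -d   # password123
```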
Referencing a Secret: as a Volume or as environment variables; see the ConfigMap section.
Docker registry secret:

[root@registry ~]# kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD 
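A sketch of consuming the mysql-root-password Secret created earlier as an environment variable (the pod name and mysql image tag are illustrative; MYSQL_ROOT_PASSWORD is the variable the official mysql image reads):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-secret-demo
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-root-password   # the Opaque secret created above
          key: password
```

The container sees the decoded value (password123), not the base64 text.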
Reference blogs: 
https://www.kubernetes.org.cn/kubernetes-pod
http://docs.kubernetes.org.cn/317.html#Pod-template-hash_label
https://kubernetes.io/docs/concepts/configuration/secret/
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
https://kubernetes.io/docs/concept
