Kubernetes

K8s Concepts and Terminology

In k8s, hosts are divided into a master and nodes. Client requests go to the master first; the master looks at the resource state of each node, picks the best node, and that node then starts the container through Docker.

Master

  • API server: receives and processes requests
  • scheduler: runs on the master and watches node resources; based on the resources a request asks for, it picks a node on which to create the container, first running predicates (pre-selection) and then priorities (optimization)

  • Controller-Manager: runs on the master and watches every controller for health; the controller manager itself is made redundant.

Node

  • kubelet: talks to the master, receives the tasks the master schedules to this node and executes them, starts containers through the container engine (Docker), and also checks the health of the pods on the local node
  • Pod controllers: periodically probe whether the containers they manage are healthy
  • service: associates pods through a label selector and tracks pod IP addresses dynamically. Client requests first reach the service and are then scheduled to a pod; every pod is accessed through a service. A service is not a real component or application, just a set of DNAT rules (IPVS is used for scheduling since 1.11). The service is found through a DNS lookup; if the service name changes, the DNS record is updated automatically.
  • kube-proxy: keeps watching the master API server; when the pods behind a service change, it updates the iptables (or IPVS) rules on the node accordingly

Pod

  • Runs containers. A pod can run several containers; containers in the same pod share the network, IPC and mount namespaces as well as storage volumes, so a pod behaves somewhat like a virtual machine. It is the smallest unit of control in k8s and can be thought of as a shell around containers. Nodes exist mainly to run pods. In general a pod runs only one main container; any other containers in it are there to assist that main container.

label

  • Every pod should be labeled; labels identify pods and are key/value data

label selector

  • Label selector: the mechanism that filters the resource objects matching a given set of labels
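
    A quick sketch of filtering by labels from the command line (the label keys and values are only examples):

    kubectl get pods -l app=myapp,release=canary          # pods carrying both labels
    kubectl get pods -l 'app in (myapp, nginx)' --show-labels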

AddOns

  • k8s add-on components, e.g. DNS pods, flannel, Ingress Controller, Dashboard

etcd (shared storage)

  • Stores the state data of the whole cluster (should be made redundant); configure it for HTTPS communication to keep it secure (requires a CA and certificates)

The k8s network model

Client traffic is forwarded through the node network to the service network, and from the service network it reaches the pod network.

  • node network
  • service network (also called the cluster network)
  • pod network:
    1. containers within the same Pod communicate over lo
    2. pods communicate with each other over an overlay network (tunnels)
    3. Pod-to-service communication: the pod's gateway points to the docker0 bridge

k8s namespaces

  • isolate direct communication between pods, also for security; network plugins:
    1. flannel: network configuration (simple, but does not support network policies)
    2. calico: network configuration plus network policies (more complex to configure)
    3. canal: a combination of the two above

k8s network structure

Installing and deploying k8s

CentOS version information

[root@master ~]# uname -r
3.10.0-862.el7.x86_64
[root@master ~]# cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core)

Deployment method: installation with kubeadm (one master node and two worker nodes)

  1. master, nodes: install kubelet, kubeadm, docker
  2. master: kubeadm init
  3. nodes: kubeadm join (docs: https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.8.md)
  4. Disable firewalld and iptables

  5. Create yum repositories for docker-ce and kubernetes:

    [root@master ~]# cd /etc/yum.repos.d/
    [root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    [root@master ~]# cat > kubernetes.repo <<EOF
    [kubernetes]
    name=Kubernetes Repo
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    enabled=1
    EOF
  6. Install docker-ce, kubelet, kubeadm and kubectl

    [root@master ~]# yum -y install docker-ce kubelet kubeadm kubectl
    [root@master ~]# systemctl stop firewalld # stop the firewall
    [root@master ~]# systemctl disable firewalld
    [root@master ~]# systemctl enable docker kubelet
  7. Create /etc/sysctl.d/k8s.conf and configure kubelet so that it does not fail when swap is enabled

    [root@master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    [root@master ~]# cat > /etc/sysconfig/kubelet<<EOF
    KUBELET_EXTRA_ARGS=--fail-swap-on=false
    EOF

    The commands above must be run on both the master and the nodes.

  8. Because the Google container registry cannot be reached from mainland China, pull the required images from an Alibaba Cloud mirror and re-tag them; the script below downloads the needed images

    #!/bin/bash
    image_aliyun=(kube-apiserver-amd64:v1.12.1 kube-controller-manager-amd64:v1.12.1 kube-scheduler-amd64:v1.12.1 kube-proxy-amd64:v1.12.1 pause-amd64:3.1 etcd-amd64:3.2.24)
    for image in ${image_aliyun[@]}
    do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$image
    docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$image k8s.gcr.io/${image/-amd64/}
    done
  9. Initialize the master

    kubeadm init --apiserver-advertise-address=192.168.175.4 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap
    
    Save the node-join command for later use: kubeadm join 192.168.175.4:6443 --token wyy67p.9wmda1iw4o8ds0c5 --discovery-token-ca-cert-hash sha256:3de3e4401de1cdf3b4c778ad1ac3920d9f7b15ca34b4c5ebe44d92e60d1290e0
    If you lose it, recreate it with kubeadm token create --print-join-command
  10. After it finishes, do some initial setup

    [root@master ~]# mkdir -p $HOME/.kube
    [root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  11. Check the status

    kubectl get cs
    kubectl get nodes
  12. Deploy the flannel network plugin

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    If the image cannot be pulled, download it from Aliyun and re-tag it using the method shown above.
  13. Join the node machines to the cluster:

    [root@node1 ~]# systemctl enable docker kubelet
    [root@node1 ~]# kubeadm join 192.168.175.4:6443 --token wyy67p.9wmda1iw4o8ds0c5 --discovery-token-ca-cert-hash sha256:3de3e4401de1cdf3b4c778ad1ac3920d9f7b15ca34b4c5ebe44d92e60d1290e0

Ways of managing a Kubernetes cluster:

  • imperative commands: create, run, expose, delete, edit...
  • imperative configuration files: create -f /PATH/TO/RESOURCE_CONFIGURATION_FILE, delete -f, replace -f
  • declarative configuration files: apply -f, patch...

Basic commands

  • kubectl is the client program of the API server and the single management entry point of the whole Kubernetes cluster; it connects to the API server to manage every kind of cluster resource
  • objects kubectl can manage: pod, service, replicaset, deployment, statefulset, daemonset, job, cronjob, node
  • kubectl describe node node1: show detailed information about node1
  • kubectl cluster-info: show cluster information
  • deployment, job: pod controllers
  • kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1: create a deployment
  • kubectl get deployment: list the deployments that have been created
  • kubectl get pods: list pods; add -o wide for more detail, --show-labels to show label information
  • kubectl delete pod <pod name>: delete a pod
  • kubectl get svc: list services
  • kubectl delete svc nginx: delete the nginx service
  • kubectl expose deployment nginx-deploy --name nginx --port=80 --target-port=80 --protocol=TCP: create a service that exposes port 80
  • kubectl edit svc nginx: edit the nginx service
  • kubectl describe deployment nginx-deploy: show detailed information about the pod controller
  • kubectl scale --replicas=5 deployment myapp: scale myapp to five replicas
  • kubectl set image deployment myapp myapp=ikubernetes/myapp:v2: upgrade the image version
  • kubectl rollout status deployment myapp: watch myapp's rolling update
  • kubectl rollout undo deployment myapp: roll back
  • kubectl get pods --show-labels: show label information
  • kubectl label pod PODNAME app=myapp,release=canary: label a pod so that the matching controller can manage it
  • kubectl delete pv --all: delete all PVs

Defining resource manifests

Kubernetes abstracts everything as resources; an instantiated resource is called an object

  • workload resources: Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob…
  • service discovery and load balancing: Service, Ingress…
  • configuration and storage: Volume, CSI (third-party storage volumes)
    1. ConfigMap, Secret (for containerized applications)
    2. DownwardAPI
  • cluster-level resources: Namespace, Node, Role, ClusterRole, ClusterRoleBinding, RoleBinding
  • metadata resources:
    1. HPA
    2. PodTemplate (the template controllers use when creating containers)
    3. LimitRange
[root@master manifests]# kubectl get pods myapp-6946649ccd-2tjs8 -o yaml
apiVersion: v1   # declares which API group and version this object belongs to
kind: Pod # the resource kind (service, deployment, etc. are also kinds)
metadata: # metadata, a nested field
  creationTimestamp: 2018-10-22T15:08:38Z
  generateName: myapp-6946649ccd-
  labels:
    pod-template-hash: 6946649ccd
    run: myapp
  name: myapp-6946649ccd-2tjs8
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: myapp-6946649ccd
    uid: 0e9fe6e8-d608-11e8-b847-000c29e073ed
  resourceVersion: "36407"
  selfLink: /api/v1/namespaces/default/pods/myapp-6946649ccd-2tjs8
  uid: 5abff320-d60c-11e8-b847-000c29e073ed
spec: # specification, user-defined: what properties the created resource should have; controllers drive it towards that state
  containers:
  - image: ikubernetes/myapp:v1  
    imagePullPolicy: IfNotPresent
    name: myapp
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-962mh
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: node2
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-962mh
    secret:
      defaultMode: 420
      secretName: default-token-962mh
status: # the current state of this resource; if it differs from the desired state, the system moves it towards the target; read-only
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-10-22T15:08:38Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-10-22T15:08:40Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-10-22T15:08:40Z
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2018-10-22T15:08:38Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://f9a63dc33340082c3a78196f624bc52c193d3f2694c05f91ecb82aa143a9e369
    image: ikubernetes/myapp:v1
    imageID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
    lastState: {}
    name: myapp
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-10-22T15:08:39Z
  hostIP: 192.168.175.5
  phase: Running
  podIP: 10.244.2.15
  qosClass: BestEffort
  startTime: 2018-10-22T15:08:38Z
  • Ways to create resources:

    1. The API server accepts resource definitions only in JSON; imperative commands such as run generate the definition automatically
    2. A configuration manifest can be supplied in YAML; the API server converts it to JSON and then submits it
  • Most resource manifests consist of five top-level fields:

    1. apiVersion:group/version

      [root@master manifests]# kubectl api-versions
      admissionregistration.k8s.io/v1beta1
      apiextensions.k8s.io/v1beta1
      apiregistration.k8s.io/v1
      apiregistration.k8s.io/v1beta1
      apps/v1
      apps/v1beta1 # alpha = internal testing, beta = public testing, stable = stable release
      apps/v1beta2
      authentication.k8s.io/v1
      authentication.k8s.io/v1beta1
      authorization.k8s.io/v1
      authorization.k8s.io/v1beta1
      autoscaling/v1
      autoscaling/v2beta1
      autoscaling/v2beta2
      batch/v1
      batch/v1beta1
      certificates.k8s.io/v1beta1
      coordination.k8s.io/v1beta1
      events.k8s.io/v1beta1
      extensions/v1beta1
      networking.k8s.io/v1
      policy/v1beta1
      rbac.authorization.k8s.io/v1
      rbac.authorization.k8s.io/v1beta1
      scheduling.k8s.io/v1beta1
      storage.k8s.io/v1
      storage.k8s.io/v1beta1
      v1
    2. kind: the resource kind, marking what sort of resource you intend to create

    3. metadata: metadata

      • name
      • namespace
      • labels
      • annotations: resource annotations; unlike labels they cannot be used to select resource objects, they only attach metadata to an object
      • uid: unique identifier (generated automatically by the system)
      • reference PATH of each resource: /api/GROUP/namespaces/NAMESPACE/TYPE/NAME
    4. spec: the desired state; it differs from one resource type to another

      • status: the current state; this field is maintained by the Kubernetes cluster.
      • use kubectl explain pods.spec.containers to see how the fields are defined
    5. Pod lifecycle phases: Pending, Running, Failed, Succeeded, Unknown

    6. Creating a Pod:

      • Important behaviours in a Pod's lifecycle:
        1. init containers
        2. container probes: liveness, readiness
        3. restartPolicy: Always, OnFailure, Never; defaults to Always
    7. Custom resource example

      [root@master manifests]# cat pod-demo.yaml 
      apiVersion: v1
      kind: Pod # mind the capitalization
      metadata:
        name: pod-demo # pod name
        namespace: default # optional; defaults to default
        labels:
          app: myapp
          tier: frontend # the tier this pod belongs to
      spec:
        containers:
        - name: myapp
          image: ikubernetes/myapp:v1
        - name: busybox
          image: busybox:latest
          command:
          - "/bin/sh"
          - "-c"
          - "sleep 3600"
    8. Create the pod resource

      kubectl create -f pod-demo.yaml # create the resource
      kubectl describe pods pod-demo # show detailed information about the pod
      [root@master manifests]# kubectl describe pod pod-demo
      Name:               pod-demo
      Namespace:          default
      Priority:           0
      PriorityClassName:  <none>
      Node:               node2/192.168.175.5 
      Start Time:         Tue, 23 Oct 2018 02:33:51 +0800
      Labels:             app=myapp
                          tier=frontend
      Annotations:        <none>
      Status:             Running
      IP:                 10.244.2.20
      Containers: # containers inside the pod
        myapp:
          Container ID:   docker://20dabd0d998f5ebd2a7ad1b875e3517831b100f1df9340eefa9e18d89941a8ac
          Image:          ikubernetes/myapp:v1
          Image ID:       docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
          Port:           <none>
          Host Port:      <none>
          State:          Running
            Started:      Tue, 23 Oct 2018 02:33:52 +0800
          Ready:          True
          Restart Count:  0
          Environment:    <none>
          Mounts:
            /var/run/secrets/kubernetes.io/serviceaccount from default-token-962mh (ro)
        busybox:
          Container ID:  docker://d69f788cdf8772497c0afc19b469c3553167d3d5ccf03ef4876391a7ed532aa9
          Image:         busybox:latest
          Image ID:      docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
          Port:          <none>
          Host Port:     <none>
          Command:
            /bin/sh
            -c
            sleep 3600
          State:          Running
            Started:      Tue, 23 Oct 2018 02:33:56 +0800
          Ready:          True
          Restart Count:  0
          Environment:    <none>
          Mounts:
            /var/run/secrets/kubernetes.io/serviceaccount from default-token-962mh (ro)
      Conditions:
        Type              Status
        Initialized       True 
        Ready             True 
        ContainersReady   True 
        PodScheduled      True 
      Volumes:
        default-token-962mh:
          Type:        Secret (a volume populated by a Secret)
          SecretName:  default-token-962mh
          Optional:    false
      QoS Class:       BestEffort
      Node-Selectors:  <none>
      Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                       node.kubernetes.io/unreachable:NoExecute for 300s
      Events:
        Type    Reason     Age   From               Message
        ----    ------     ----  ----               -------
        Normal  Pulled     48s   kubelet, node2     Container image "ikubernetes/myapp:v1" already present on machine
        Normal  Created    48s   kubelet, node2     Created container
        Normal  Started    48s   kubelet, node2     Started container
        Normal  Pulling    48s   kubelet, node2     pulling image "busybox:latest"
        Normal  Pulled     44s   kubelet, node2     Successfully pulled image "busybox:latest"
        Normal  Created    44s   kubelet, node2     Created container
        Normal  Started    44s   kubelet, node2     Started container
        Normal  Scheduled  18s   default-scheduler  Successfully assigned default/pod-demo to node2 # successfully scheduled onto node2
      kubectl logs pod-demo myapp # view the pod's logs
      kubectl logs pod-demo busybox
      kubectl get pods -w # -w keeps watching
      kubectl exec -it pod-demo -c myapp -- /bin/sh
      kubectl delete -f pod-demo.yaml # delete the resource through its manifest file

Liveness and readiness probes

There are three probe types: ExecAction, TCPSocketAction and HTTPGetAction

liveness exec example (liveness probe)

[root@master manifests]# cat liveness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test", "-e", "/tmp/healthy"]
      initialDelaySeconds: 2
      periodSeconds: 3
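
A sketch of watching the probe take effect: once /tmp/healthy is removed about 30 seconds in, the probe starts failing and the kubelet restarts the container (pod name from the manifest above):

kubectl apply -f liveness-exec.yaml
kubectl get pods liveness-exec-pod -w        # RESTARTS keeps increasing as the probe fails
kubectl describe pod liveness-exec-pod       # the Events section shows the liveness probe failures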

liveness HTTPGetAction probe example (liveness probe)

[root@master manifests]# cat liveness-httpget.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html  
      initialDelaySeconds: 1
      periodSeconds: 3

readiness probe example:

[root@master manifests]# cat rediness-httpget.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html  
      initialDelaySeconds: 1
      periodSeconds: 3
[root@master manifests]#kubectl describe pod readiness-httpget-pod

lifecycle

postStart: run a command right after the container is created

[root@master manifests]# cat poststart-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ["mkdir", "-p", "/data/web/html"]
    command: ["/bin/sh", "-c","sleep 3600"]

Pod controllers

  • ReplicaSet: creates the requested number of pod replicas on the user's behalf and makes sure that number is maintained; supports scaling up and down. It has three main parts: the desired replica count, a label selector, and a pod resource template used to create new pods.

    Using ReplicaSet directly is not recommended.

  • Deployment: built on top of ReplicaSet; supports scaling, rolling updates, rollbacks and declarative configuration; generally used for stateless services

  • DaemonSet: ensures every node in the cluster runs exactly one copy of a particular pod; nodes newly added to the cluster automatically get one; commonly used for system-level stateless services

  • Job: runs a specific task and exits when the task completes

  • CronJob: runs a specific task periodically

  • StatefulSet: stateful, with persistent storage

Example: creating a ReplicaSet controller

[root@master manifests]# cat rs-demo.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80

You can use kubectl edit rs myapp (the controller's name) to edit the controller live and update the pod replica count and other parameters in real time.

  • Deployment relies on ReplicaSet to implement rolling updates, blue-green deployments and other update strategies

  • Deployment is built on top of ReplicaSet and controls pods by controlling ReplicaSets

Example Deployment controller configuration

[root@master manifests]# cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
          
[root@master manifests]# kubectl apply -f deploy-demo.yaml   # apply differs from create in that it can be run repeatedly: the differences are merged into etcd, so you can edit the manifest file to update the configuration and simply run kubectl apply -f deploy-demo.yaml again

Rolling back a Deployment

[root@master manifests]# kubectl rollout undo deploy myapp-deploy [--to-revision=1 to pick which revision to roll back to]
[root@master manifests]# kubectl get deployment -o wide
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                 SELECTOR
myapp-deploy   3         3         3            3           20m   myapp        ikubernetes/myapp:v2   app=myapp,release=canary  # deployment controller details; you can see that a Deployment is built on top of a ReplicaSet

kubectl get rs -o wide # view ReplicaSet controller information

Patching a controller

[root@master ]# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}' # patch the controller
[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
[root@master manifests] kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy # update the image and pause the rollout (canary)

Watching the update process

[root@master manifests]# kubectl rollout status deploy myapp-deploy
[root@master manifests]# kubectl get pods -l app=myapp -w
kubectl rollout history deployment myapp-deploy # view the revision history

DaemonSet controller example

[root@master manifests]# cat ds-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---    # separates the two resource definitions
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info

Service

  • Implementation modes:
    1. userspace
    2. iptables
    3. ipvs (add KUBE_PROXY_MODE=ipvs to the kubelet config file /etc/sysconfig/kubelet and load the kernel modules ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4; see the sketch after this list)
  • Four types:
    1. ClusterIP: reachable only from inside the cluster
    2. NodePort: client --> NodeIP:NodePort --> ClusterIP:ServicePort --> PodIP:containerPort
    3. LoadBalancer
    4. ExternalName
  • No ClusterIP: Headless Service
    • ServiceName --> PodIP
  • Ingress Controller (usually an application with layer-7 proxying and scheduling capabilities)
    1. Nginx
    2. Envoy (can watch its configuration file and reload it whenever it changes)
    3. Traefik
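
A minimal sketch of enabling IPVS mode along the lines of the bullet above (the modules file path follows the CentOS convention; run on every node, then restart kubelet/kube-proxy):

    cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
    #!/bin/bash
    # load the IPVS-related kernel modules listed above
    for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
        modprobe $mod
    done
    EOF
    chmod +x /etc/sysconfig/modules/ipvs.modules && /etc/sysconfig/modules/ipvs.modules
    lsmod | grep -e ip_vs -e nf_conntrack_ipv4     # verify the modules are loaded
    echo 'KUBE_PROXY_MODE=ipvs' >> /etc/sysconfig/kubelet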

Ingress

What is Ingress

Ingress: simply put, a set of rule definitions. For example, a rule maps a domain name to a service, so when a request for that domain arrives it is forwarded to that service. The rules are combined with an Ingress Controller, which dynamically writes them into the load balancer's configuration, giving service discovery and load balancing as a whole.

Ingress Controller

It is essentially a watcher: the Ingress Controller keeps talking to the Kubernetes API and senses changes to backend services and pods in real time, such as pods or services being added or removed. With that information and the Ingress rules it generates configuration, updates the reverse-proxy load balancer and reloads its configuration, which is what provides service discovery.

Creating an ingress

  • Defines how a frontend is built: a service groups a class of backend pods, the ingress uses that grouping to learn how many pods there are and their IP addresses, injects them into the frontend scheduler's configuration file in time, and the scheduler reloads its configuration
  • How to use a layer-7 proxy:
    1. deploy an ingress controller; the Ingress tells the ingress controller how to build the frontend scheduler and the backend server group
    2. configure Ingress resources as needed; an Ingress defines a set of forwarding rules
    3. the backend pod information collected through the service is rendered as upstream servers in the Ingress and injected dynamically into the ingress controller

Installing Ingress

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml # install the ingress controller
  • Create a backend pod service:

    [root@master ingress]# kubectl apply -f deploy-demo.yaml
    [root@master ingress]# cat deploy-demo.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
      namespace: default
    spec:
      selector:
        app: myapp
        release: canary
      ports:
      - name: http
        targetPort: 80
        port: 80
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deploy
      namespace: default
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
          release: canary
      template:
        metadata:
          labels:
            app: myapp
            release: canary
        spec:
          containers:
          - name: myapp
            image: ikubernetes/myapp:v2
            ports:
            - name: http
              containerPort: 80
  • Create a service to expose the ports (NodePort)

    [root@master baremetal]# kubectl apply -f service-nodeport.yaml
    [root@master baremetal]# cat service-nodeport.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
          nodePort: 30080
        - name: https
          port: 443
          targetPort: 443
          protocol: TCP
          nodePort: 30443
      selector:
        app.kubernetes.io/name: ingress-nginx
  • Create the Ingress resource

    [root@master ingress]# kubectl apply -f ingress-myapp.yaml
    [root@master ingress]# cat ingress-myapp.yaml 
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-myapp
      namespace: default
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: myapp.template.com
        http:
          paths:
          - path:
            backend:
              serviceName: myapp
              servicePort: 80
  • Check the result

    [root@master ingress]# kubectl get ingress
    NAME                 HOSTS                 ADDRESS   PORTS     AGE
    ingress-myapp        myapp.template.com              80        5h55
    [root@master ingress]# kubectl get svc
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    myapp        ClusterIP   10.98.30.144     <none>        80/TCP              4h7m
    [root@master ingress]# kubectl get pods
    NAME                             READY   STATUS    RESTARTS   AGE
    myapp-deploy-7b64976db9-lfnlv    1/1     Running   0          6h30m
    myapp-deploy-7b64976db9-nrfgs    1/1     Running   0          6h30m
    myapp-deploy-7b64976db9-pbqvh    1/1     Running   0          6h30m
    # access it
    [root@master ingress]# curl myapp.template.com:30080
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
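
    The curl above only works if myapp.template.com resolves to a node exposing the ingress controller's NodePort; a hedged sketch of a hosts-file entry for testing (the IP is the master node used throughout these notes and is only an example):

    echo '192.168.175.4 myapp.template.com tomcat.template.com' >> /etc/hosts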

Ingress with SSL/TLS

[root@master ingress]# cat tomcat-deploy.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  - name: ajp
    targetPort: 8009
    port: 8009
    
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5-alpine
        ports:
        - name: http
          containerPort: 8080
        - name: ajp
          containerPort: 8009
[root@master ingress]# kubectl apply -f  tomcat-deploy.yaml 

[root@master ingress]# openssl genrsa -out tls.key 2048
[root@master ingress]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=tomcat.template.com
[root@master ingress]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key
[root@master ingress]# kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-962mh     kubernetes.io/service-account-token   3      32h
tomcat-ingress-secret   kubernetes.io/tls                     2      66m

[root@master ingress]# cat ingress-tomcat-tls.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat-tls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
      - tomcat.template.com
    secretName: tomcat-ingress-secret
  rules:
  - host: tomcat.template.com
    http:
      paths:
      - path:
        backend:
          serviceName: tomcat
          servicePort: 8080
[root@master ingress]# kubectl apply -f ingress-tomcat-tls.yaml

[root@master ingress]# curl -k https://tomcat.template.com:30443 # test access

Storage volumes

  • In Kubernetes a storage volume belongs to the pod rather than to a container; the pod's containers share the volumes of the pause infrastructure container

  • emptyDir: an empty, temporary directory whose lifetime is the same as the Pod's

    Example configuration

    [root@master volumes]# cat pod-vol-demo.yaml 
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo
      namespace: default
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      - name: busybox
        image: busybox:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: html
          mountPath: /data/
        command: ["/bin/sh"]
        args: ["-c", "while true;do echo $(date) >> /data/index.html;sleep 2;done"]
      volumes:
      - name: html
        emptyDir: {}
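
    A quick sketch for checking that the two containers really share the emptyDir volume (pod name and paths come from the manifest above):

    kubectl apply -f pod-vol-demo.yaml
    kubectl exec volume-demo -c myapp -- tail -2 /usr/share/nginx/html/index.html   # shows the lines busybox keeps appending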
  • hostPath: a directory on the host; node-level storage

  • Example configuration:

    [root@master volumes]# cat pod-vol-hostpath.yaml 
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-vol-hostpath
      namespace: default
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        hostPath:
          path: /data/pod/volume1
          type: DirectoryOrCreate
  • Network storage:

    1. SAN: iSCSI

    2. NAS: nfs, cifs

      NFS example configuration:

      [root@master volumes]# cat pod-vol-nfs.yaml 
      apiVersion: v1
      kind: Pod
      metadata:
        name: pod-vol-nfs
        namespace: default
      spec:
        containers:
        - name: myapp
          image: ikubernetes/myapp:v1
          volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        volumes:
        - name: html
          nfs:
            path: /data/volumes
            server: 192.168.175.4
    3. Distributed storage: glusterfs, rbd, cephfs…

    4. Cloud storage: EBS, Azure Disk

    5. pvc (workflow: pick a storage system, create PVs, define a PVC, and bind the PVC in the pod definition)

      yum -y install nfs-utils # install NFS
      [root@master volumes]# cat /etc/exports  # the exported volumes
      /data/volumes/v1 192.168.175.0/24(rw,no_root_squash)
      /data/volumes/v2 192.168.175.0/24(rw,no_root_squash)
      /data/volumes/v3 192.168.175.0/24(rw,no_root_squash)
      /data/volumes/v4 192.168.175.0/24(rw,no_root_squash)
      /data/volumes/v5 192.168.175.0/24(rw,no_root_squash)
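      # (assumed extra steps, not in the original notes) create the directories, export them and verify before defining the PVs:
      mkdir -p /data/volumes/v{1,2,3,4,5}
      systemctl start nfs-server && systemctl enable nfs-server
      exportfs -arv
      showmount -e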
      [root@master volumes]# cat pv-demo.yaml 
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv001
        labels:
          name: pv001
      spec:
        nfs:
          path: /data/volumes/v1
          server: 192.168.175.4
        accessModes: ["ReadWriteMany", "ReadWriteOnce"]
        capacity:
          storage: 1Gi
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv002
        labels:
          name: pv002
      spec:
        nfs:
          path: /data/volumes/v2
          server: 192.168.175.4
        accessModes: ["ReadWriteOnce"]
        capacity:
          storage: 5Gi
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv003
        labels:
          name: pv003
      spec:
        nfs:
          path: /data/volumes/v3
          server: 192.168.175.4
        accessModes: ["ReadWriteMany", "ReadWriteOnce"]
        capacity:
          storage: 20Gi
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv004
        labels:
          name: pv004
      spec:
        nfs:
          path: /data/volumes/v4
          server: 192.168.175.4
        accessModes: ["ReadWriteMany", "ReadWriteOnce"]
        capacity:
          storage: 10Gi
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv005
        labels:
          name: pv005
      spec:
        nfs:
          path: /data/volumes/v5
          server: 192.168.175.4
        accessModes: ["ReadWriteMany", "ReadWriteOnce"]
        capacity:
          storage: 1Gi
      kubectl apply -f pv-demo.yaml
      NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
      pv001   1Gi        RWO,RWX        Retain (reclaim policy: keep the data)           Available                                   5m31s
      pv002   5Gi        RWO            Retain           Available                                   5m31s
      pv003   20Gi       RWO,RWX        Retain           Available                                   5m31s
      pv004   10Gi       RWO,RWX        Retain           Available                                   5m31s
      pv005   1Gi        RWO,RWX        Retain           Available                                   5m31s
    6. PVC example:

      [root@master volumes]# cat pvc-demo.yaml 
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mypvc
        namespace: default
      spec:
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 6Gi
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: pod-vol-pvc
        namespace: default
      spec:
        containers:
        - name: myapp
          image: ikubernetes/myapp:v1
          volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        volumes:
        - name: html
          persistentVolumeClaim:
            claimName: mypvc
    7. [root@master volumes]# kubectl get pv
      NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
      pv001   1Gi        RWO,RWX        Retain           Available                                           33m
      pv002   5Gi        RWO            Retain           Available                                           33m
      pv003   20Gi       RWO,RWX        Retain           Available                                           33m
      pv004   10Gi       RWO,RWX        Retain           Bound       default/mypvc                           33m
      pv005   1Gi        RWO,RWX        Retain           Available                                           33m
      [root@master volumes]# kubectl get pvc
      NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      mypvc   Bound    pv004    10Gi       RWO,RWX                       13m   # shows that it is bound

configMap

Ways to configure containerized applications:

  1. custom command-line arguments: args
  2. bake the configuration file directly into the image
  3. environment variables: (1) Cloud Native applications can usually load their configuration straight from environment variables (2) an entrypoint script can pre-process the variables into settings in a configuration file
  4. storage volumes: mount a host directory onto the directory containing the application's configuration file

Create a configmap from the command line:

kubectl create configmap nginx-config --from-literal=nginx_port=80 --from-literal=server_name=myapp.template.com
kubectl get cm

Create a pod that references the variables defined in the configmap

[root@master configmap]# cat pod-configmap.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    env:
    - name: NGINX_SERVER_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: nginx_port
    - name: NGINX_SERVER_NAME
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: server_name

You can also edit the configmap's keys directly with edit, but the change is not propagated in real time to pods that consume them as environment variables

kubectl edit cm nginx-config

When the configmap is mounted as a volume instead, configuration changes are synchronized into the pod

[root@master configmap]# cat pod-configmap2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    template.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/config.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-config
[root@master configmap]# cat pod-configmap3.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-3
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    template.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/conf.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-www
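
The pod-cm-3 manifest above mounts a configmap named nginx-www that is never created in these notes; a hedged sketch of creating it from a file (the file name and its contents are assumptions):

cat > www.conf <<EOF
server {
    server_name myapp.template.com;
    listen 80;
    root /data/web/html;
}
EOF
kubectl create configmap nginx-www --from-file=www.conf
kubectl get cm nginx-www -o yaml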

Secret

Create from the command line

kubectl create secret generic mysql-root-password --from-literal=password=myP@ss123 # create a secret on the command line (note: the value is only pseudo-encrypted)
[root@master configmap]# kubectl describe secret mysql-root-password # view it
Name:         mysql-root-password
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  9 bytes
[root@master configmap]# kubectl get secret mysql-root-password -o yaml  # the password is merely base64-encoded
apiVersion: v1
data:
  password: bXlQQHNzMTIz
kind: Secret
metadata:
  creationTimestamp: 2018-10-25T06:31:40Z
  name: mysql-root-password
  namespace: default
  resourceVersion: "193886"
  selfLink: /api/v1/namespaces/default/secrets/mysql-root-password
  uid: a1beaf36-d81f-11e8-95d7-000c29e073ed
type: Opaque
[root@master configmap]# echo bXlQQHNzMTIz | base64 -d   # it can be decoded with base64 -d
myP@ss123

Define the manifest:

[root@master configmap]# cat pod-secret.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    template.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-root-password
          key: password
# Note: once injected into the pod, the variable still shows up in plain text, so this is not secure
[root@master configmap]# kubectl apply -f pod-secret.yaml 
pod/pod-secret-1 created
[root@master configmap]# kubectl exec pod-secret-1 -- /bin/printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=pod-secret-1
MYSQL_ROOT_PASSWORD=myP@ss123
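
Because the environment-variable approach leaves the value readable in plain text, a common alternative is to mount the secret as a volume; a minimal sketch reusing the mysql-root-password secret from above (the mount path is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-vol
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: mysql-pass
      mountPath: /etc/mysql/secret/   # each key of the secret becomes a file under this path
      readOnly: true
  volumes:
  - name: mysql-pass
    secret:
      secretName: mysql-root-password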

StatefulSet (stateful application replica set)

StatefulSet is mainly used to manage applications that need the following:

  • stable, unique network identifiers;
  • stable, persistent storage;
  • ordered, graceful deployment and scaling;

  • ordered, graceful termination and deletion
  • ordered rolling updates

Three components: a headless service, the StatefulSet, and a volumeClaimTemplate

[root@master statefulset]# showmount -e
Export list for master:
/data/volumes/v5 192.168.175.0/24
/data/volumes/v4 192.168.175.0/24
/data/volumes/v3 192.168.175.0/24
/data/volumes/v2 192.168.175.0/24
/data/volumes/v1 192.168.175.0/24

[root@master statefulset]# cat stateful-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata: 
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi
[root@master statefulset]# kubectl get pods  # the pods now have ordered, stable names
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          2m21s
myapp-1   1/1     Running   0          2m18s
myapp-2   1/1     Running   0          2m15s

[root@master statefulset]# kubectl get sts
NAME    DESIRED   CURRENT   AGE
myapp   3         3         6m14s
[root@master statefulset]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           6m35s 
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       6m32s
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       6m29s
# As long as the PVC is not deleted, pods recreated by the same StatefulSet keep binding to the same volume, so the data is not lost

pod_name.service_name.ns_name.svc.cluster.local

sts also supports dynamic scaling

kubectl scale sts myapp --replicas=5
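
A hedged sketch of resolving one of those stable pod names from inside the cluster, following the pattern above (the throwaway busybox pod is only an example):

kubectl run -it --rm dns-test --image=busybox:latest --restart=Never -- nslookup myapp-0.myapp.default.svc.cluster.local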

Authentication and authorization in Kubernetes

  • token authentication
  • SSL certificate authentication
  • authorization: Node, RBAC, Webhook, ABAC
  • admission control

The Kubernetes API server is organized into API groups; a request identifies which version of which group it targets, and every request is described by a URL path

Request path:

/apis/apps/v1/namespaces/default/deployments/myapp-deploy/

HTTP request verbs:

get, post, put, delete

API request verbs:

get, list, create, update, patch, watch, proxy, redirect, delete, deletecollection

Resource:

Subresource

Namespace

API group

RBAC : Role Based Access Control

Defining a role

[root@master manifests]# cat role-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@master manifests]# kubectl create rolebinding template-read-pods --role=pod-reader --user=template --dry-run -o yaml > rolebinding-demo.yaml
[root@master manifests]# cat rolebinding-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding # binds a user to a role
metadata:
  name: template-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role  # the role being bound; the permissions are defined in the role
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: template # the user account being bound
# generate a new context
openssl genrsa -out template.key 2048
openssl req -new -key template.key -out template.csr  -subj "/CN=template"
openssl x509 -req -in template.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out template.crt -days 365
openssl x509 -in template.crt -text
kubectl config set-credentials template --client-certificate=./template.crt --client-key=./template.key --embed-certs=true
kubectl config set-context template@kubernetes --cluster=kubernetes --user=template
kubectl config use-context template@kubernetes # switch to the new context
[root@master ~] kubectl create role template --verb=list,get,watch --resource=pods  # create the role

Role-based access control: a user is made to play a role; the role holds the permissions, and through it the user gains them

role definition:

  • operations
  • objects

rolebinding: binds a user to a role

  • user account or service account
  • role

clusterrole, clusterrolebinding

ClusterRoleBinding:

clusterRole: can access information in every namespace. If it is bound through a rolebinding instead, the user can still only access the namespace the rolebinding is defined in. The advantage of binding a clusterrole with a rolebinding is that you do not have to define a role in every namespace: define a rolebinding in each namespace and bind it to the single clusterrole. Because a rolebinding is used, the user still only gets access to the bound namespace, not to every namespace in the cluster.

[root@master ~]# kubectl create clusterrolebinding template-read-all-pods --clusterrole=cluster-reader --user=template --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: template-read-all-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: template
# creating a clusterrolebinding requires the clusterrole to exist first; the role's permissions then apply across the whole cluster

[root@master ~]# cat rolebinding-clusterrole-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: template-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects: 
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: template
# binding a clusterrole with a rolebinding downgrades it: the permissions only apply within the rolebinding's namespace

A Service account exists so that processes inside a Pod can call the Kubernetes API or other external services; it differs from a User account (a pod example follows this list):

  • a User account is designed for people, while a service account is for processes inside a Pod;
  • with ServiceAccount enabled (the default), every namespace automatically gets a Service account, and the corresponding secret is mounted into every Pod
    • the default ServiceAccount is named default and is automatically associated with a Secret for accessing the Kubernetes API
    • every Pod has spec.serviceAccount set to default after creation (unless another ServiceAccount is specified)
    • every container, once started, has the corresponding token and ca.crt mounted under /var/run/secrets/kubernetes.io/serviceaccount/
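
A minimal sketch of a pod that runs under its own ServiceAccount instead of default (the account name is an assumption; create it first with kubectl create serviceaccount myapp-sa):

apiVersion: v1
kind: Pod
metadata:
  name: pod-sa-demo
  namespace: default
spec:
  serviceAccountName: myapp-sa   # that SA's token secret is mounted automatically
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1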

Dashboard authentication and delegated authorization

  • Installation and access:

    Authentication methods:

    1. token:

      (1) create a ServiceAccount and, according to what it is meant to manage, bind it to a suitable role or clusterrole with a rolebinding or clusterrolebinding;

      (2) get this ServiceAccount's secret and look at its details; the token is in there

    2. wrap the ServiceAccount's token into a kubeconfig file

      (1) create a ServiceAccount and bind it to a suitable role or clusterrole with a rolebinding or clusterrolebinding

      (2) DEF_NS_ADMIN_TOKEN=$(kubectl get secret SERVICEACCOUNT_SECRET_NAME -o jsonpath={.data.token} | base64 -d)

      (3) generate the kubeconfig file

      • kubectl config set-cluster --kubeconfig=/PATH/TO/SOMEFILE
      • kubectl config set-credentials NAME --token=$KUBE_TOKEN
      • kubectl config set-context
      • kubectl config use-context
    kubectl create sa dashboard-admin -n kube-system
    kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
    [root@master ~]# kubectl get secrets -n kube-system
    NAME                                             TYPE                                  DATA   AGE
    attachdetach-controller-token-vhr7x              kubernetes.io/service-account-token   3      4d15h
    bootstrap-signer-token-fl7bb                     kubernetes.io/service-account-token   3      4d15h
    certificate-controller-token-p4szd               kubernetes.io/service-account-token   3      4d15h
    clusterrole-aggregation-controller-token-hz2pt   kubernetes.io/service-account-token   3      4d15h
    coredns-token-g9gp6                              kubernetes.io/service-account-token   3      4d15h
    cronjob-controller-token-brhtp                   kubernetes.io/service-account-token   3      4d15h
    daemon-set-controller-token-4mmwg                kubernetes.io/service-account-token   3      4d15h
    dashboard-admin-token-kzwk9                      kubernetes.io/service-account-token   3      9
    [root@master ~]# kubectl describe secrets dashboard-admin-token-kzwk9  -n kube-system  # copy the token below for later use
    Name:         dashboard-admin-token-kzwk9  
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dashboard-admin
                  kubernetes.io/service-account.uid: dbe9eb4a-d94a-11e8-a89c-000c29e073ed
    
    Type:  kubernetes.io/service-account-token 
    
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4ta3p3azkiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZGJlOWViNGEtZDk0YS0xMWU4LWE4OWMtMDAwYzI5ZTA3M2VkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.DZ94phOCIWAxxs4l55irm1G_PhkRRilhJUMKMDheqCKOepT0NpZ07vp61q4YMmx0X0iT43R7LvhQSZ5p4fGn7ttjxGrDhox5tFvYpy6rCtdxEsYYeqWP_tHMqUMrF71TgbRBdj-LZWyec0YlshjgxhYJ4FV_hKZRAzidhlBg93fnWzDe31cSdg8H4j_5tRJU-JKajjbHXPVxGWPlN6WPPzd5iK2aDXt79k4PSgiC4czyCOTuRYj9INVGo8ZEUEkTUN3dUnXJKMMF-HUXIR67rHDapvcwjgMfVac6TpUO6HBR5ZPce3YKmstleaa2FbaMmNN-qJ0qKZoaOF245vTeqQ
    
    [root@master ~]# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
    secret/kubernetes-dashboard-certs created
    serviceaccount/kubernetes-dashboard created
    role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    deployment.apps/kubernetes-dashboard created
    service/kubernetes-dashboard created
    # if the Google registry is unreachable, download the dashboard docker image first and re-tag it
    
    [root@master ~]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system # patch the service so it can be reached from outside the host; the account used for authentication must be a ServiceAccount, which the dashboard pod hands to Kubernetes for authentication
    [root@master ~]# kubectl get svc -n kube-system       # then visit https://192.168.175.4:32767 and paste in the token copied earlier
    NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
    kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   4d15h
    kubernetes-dashboard   NodePort    10.109.188.195   <none>        443:32767/TCP   10m
    
    
    --------------------------------------------------------------------------------
    Second method: logging in to the dashboard with a kubeconfig file

[root@master pki]# kubectl get secret
NAME TYPE DATA AGE
admin-token-rkxvq kubernetes.io/service-account-token 3 35h
default-token-d75z4 kubernetes.io/service-account-token 3 29h
df-ns-admin-token-4rbwg kubernetes.io/service-account-token 3 34m
[root@master pki]# kubectl get secret df-ns-admin-token-4rbwg -o json
{
​ "apiVersion": "v1",
​ "data": {
​ "ca.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1UQXlNREV5TWpjME5sb1hEVEk0TVRBeE56RXlNamMwTmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTU9tCkVRL3l3TDdZRHVCTE9SZFZWdHl1NSs4dklIWGJEdWNmZ3N0Vy9Gck82emVLdUNXVzRQdjlPbjJwamxXRkxJdXYKZnhEMU15N3ppVzZjTW0xQkFRUjJpUEwrRE4rK0hYZ0V4ZkhDYTdJbkpHcFYzMU9lU3YzazMwMzljZVFQSUU4SQowQVljY2ZVU0w5SjMvdWpLZElPTTJGZDA2cWNUQmJhRyt0KzBGWGxrZ2NzNVRDa21lOE1xWTNVdjZJUkx6WmgzCmFEejBHVFg0VnpxWStFVXY3UHgzZ2JJeE0wR3ZqTnUvYUJvdWZrZ2RnSDRzL3hYNHVGckJsVytmUDRzRlBYYzIKbXJYd2E2NEY0ZHdLVDc5czY4NTBJMXZ3NS9URDFPRzdpcnNjUHdnMHZwUnlyKzlpTStjKzBWS3BiK1RCTnlzQQpjYkZJbWkzdnBpajliU2ZGVENzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKbjF0ZS95eW1RU3djbHh3OFFRSXhBVSs1L1EKYUN0bmMxOXRFT21jTWorQWJ4THdTS1YwNG1ubHRLRVFXSVBkRWF2RG9LeUQ0NFkzYUg5V2dXNXpra2tiQTJQSApUeDkzVWFtWXNVVkFUenFhOVZzd015dkhDM3RoUlFTRHpnYmxwK2grd1lOdTAyYUpreHJSR3ZCRjg1K282c2FoCktwUms2VHlzQWNVRUh1VHlpSVk5T3d4anBPUzVzVkJKV0NBQ1R5ZXYxRzY4SWkzd2xtY0M4UitaakpLSzh4VncKUmorYjNyeTZiL1A5WUdKYkt4Rm4wOU94eDVCNFhFVWduMjcwYjRSclNXeldOdEVFMkRoZkk1ajNnNGRkUHk3OApuQUNidHpBVUtkSzdXQVdOQXkyQzBFNDZOK3VIa3pObnYwdys1NE1HQy94N2R6TGFBampvTS8yZVRlaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
​ "namespace": "ZGVmYXVsdA==",
​ "token": "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkltUm1MVzV6TFdGa2JXbHVMWFJ2YTJWdUxUUnlZbmRuSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1SbUxXNXpMV0ZrYldsdUlpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaVptSmlOVGxtWVdFdFpEazVZaTB4TVdVNExUazBZemd0TURBd1l6STVaVEEzTTJWa0lpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVJsWm1GMWJIUTZaR1l0Ym5NdFlXUnRhVzRpZlEubXpIN2ZMUlV6Y1JzUmJuUy1FbGxROGd5OEFMZWMxdjg3THF1SFpnNnVKTllOWm5wc1RnWm5LWXhMWnNvUUhTc3RKRGFCbDJHZnNUdWRaOHh3MERtNXFYSS1fMmRKSzhHY01TUXhJVnZtRkVVNTdjS2pMV3hpWkFSdTVzNDdkZFhfeTFyU1EyS2lWVEI2X1ZLaVgtT012Zjc5RUNiR0NVR05FOGdGV2NDZzZGeWJ3NGlFaGx6a3J4aUJGOGY0OExIdTdHVUNXbEZTZS1QMzRka2lxajFDQmd0LXlBNFJkZm9UTl9CaExJamtaaEVTLVlMZWR1NVEwR0lrcmFzVUhhWjQ2S0toa2thWjZ1QnQwSm5QNGRRd0dVVklVdHhJd1JudkJONmp2NmpKY3piUXV1Y3dYSXBjVDhQQk10QVVUa21yWGRhcE9JR0ZoWU96c00xNHA3WDRB"
​ },
​ "kind": "Secret",
​ "metadata": {
​ "annotations": {
​ "kubernetes.io/service-account.name": "df-ns-admin",
​ "kubernetes.io/service-account.uid": "fbb59faa-d99b-11e8-94c8-000c29e073ed"
​ },
​ "creationTimestamp": "2018-10-27T03:54:20Z",
​ "name": "df-ns-admin-token-4rbwg",
​ "namespace": "default",
​ "resourceVersion": "303749",
​ "selfLink": "/api/v1/namespaces/default/secrets/df-ns-admin-token-4rbwg",
​ "uid": "fbc27f91-d99b-11e8-94c8-000c29e073ed"
​ },
​ "type": "kubernetes.io/service-account-token"
}

DEF_NS_ADMIN_TOKEN=$(kubectl get secret df-ns-admin-token-4rbwg -o jsonpath={.data.token} | base64 -d) # decode the token (the same base64 string shown above) and save it in a variable

[root@master ~]# cd /etc/kubernetes/pki/
[root@master pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://192.168.175.4:6443" --embed-certs=true --kubeconfig=/root/def-ns-admin.conf
Cluster "kubernetes" set.
kubectl config set-credentials def-ns-admin --token=$DEF_NS_ADMIN_TOKEN --kubeconfig=/root/def-ns-admin.conf
kubectl config set-context def-ns-admin@kubernetes --cluster=kubernetes --user=def-ns-admin --kubeconfig=/root/def-ns-admin.conf
kubectl config use-context def-ns-admin@kubernetes --kubeconfig=/root/def-ns-admin.conf
sz /root/def-ns-admin.conf # transfer the conf file to your local machine; use it to log in to the dashboard

Kubernetes network communication and configuration:

  • container-to-container: containers inside the same Pod communicate over lo
  • pod-to-pod: POD IP <--> POD IP
  • Pod-to-Service: Pod IP <--> ClusterIP
  • Service to clients outside the cluster: Ingress, NodePort

CNI: Container Network Interface
Solutions:

  • virtual bridge
  • multiplexing: MACVLAN
  • hardware switching: SR-IOV

flannel configuration parameters (see the sketch after this list):

  • Network: the CIDR network flannel uses to provide pod networking
  • SubnetLen: the prefix length used to slice Network into per-node subnets, 24 bits by default
  • SubnetMin: 10.244.10.0/24
  • SubnetMax: 10.244.100.0/24
  • Backend: vxlan, host-gw, udp
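
These parameters live in the net-conf.json key of the ConfigMap installed by kube-flannel.yml (named kube-flannel-cfg in the stock manifest); a hedged sketch of that fragment, mirroring the 10.244.0.0/16 pod CIDR used earlier in these notes:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "SubnetLen": 24,
      "Backend": {
        "Type": "vxlan"
      }
    }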

Using Calico for access control

Install Calico
Installation docs: https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/flannel

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml

Example access-policy configurations:

[root@master networkpolicy]# cat ingress-def.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:     # the policy type is Ingress; since no ingress rules are defined, inbound traffic is denied by default and outbound traffic is allowed
  - Ingress
[root@master networkpolicy]# cat ingress-def.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:                     # ingress is declared but with an empty rule, so all inbound traffic is allowed
  - {}
  policyTypes:     
  - Ingress
[root@master networkpolicy]# cat allow-netpol-demo.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - from: 
    - ipBlock:
        cidr: 10.244.0.0/16    # allow hosts in 10.244.0.0/16, except 10.244.1.2, to reach port 80 of pods labeled app=myapp
        except:
        - 10.244.1.2/32
    ports:
    - protocol: TCP
      port: 80
[root@master networkpolicy]# cat egress-def.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress

Network policy exercise: deny all inbound and outbound traffic in a namespace, then allow all outbound traffic whose destination is a Pod inside the same namespace.
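
A hedged sketch of that combined policy (the namespace name is an assumption):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-allow-egress-within-ns
  namespace: dev
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:           # both directions are governed; traffic not matched by a rule below is denied
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}    # allow egress only when the destination is a pod in this same namespace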

Helm

  • helm mainly defines all the manifest files an application needs when it is deployed
  • chart: a helm package (it contains no images; images live in a docker registry)
  • Repository: a charts repository, an https/http server
  • Release: an instance of a particular chart deployed onto the target cluster
  • Chart --> Config --> Release

  • Architecture:
    1. helm: the client; manages local charts, interacts with the Tiller server, sends charts, and installs, queries and uninstalls release instances

    2. Tiller: the server side; receives the charts and config sent by helm and merges them to generate a release

  • Installing helm: https://docs.helm.sh/using_helm/#installing-helm
    1. Example RBAC configuration file:
[root@master helm]# cat tiller-rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
[root@master helm]# helm init --service-account tiller
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
helm repo update
  • list of charts officially available for helm (https://hub.kubeapps.com/)

Common helm commands:

release management:

  • helm install
  • helm delete: remove a release
  • helm upgrade
  • helm list
  • helm rollback

chart management:

  • helm fetch: download a chart from a repository to the local machine
  • helm create
  • helm get
  • helm inspect: show detailed information about a chart
  • helm package: package a chart directory into an archive
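
A hedged sketch of the typical flow with these commands, using Helm v2 syntax to match the Tiller setup above (the stable/redis chart and the release name are only examples):

helm repo update
helm search redis                             # find the chart in the configured repositories
helm inspect stable/redis                     # read its description and default values
helm install --name my-redis stable/redis     # deploy a release
helm list                                     # show deployed releases
helm delete my-redis                          # remove the release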