[Notes] 7 Days of Containers & CKA Administrator Hands-On Training

Part 1

Day 1: Introduction to Container Basics

Install Docker:

apt-get install docker-engine


[root@cce-7day-fudonghai-24106 01CNL]# docker -v
Docker version 18.09.0, build f897bb1

[root@cce-7day-fudonghai-24106 01CNL]# docker images
REPOSITORY                                                  TAG                   IMAGE ID            CREATED             SIZE
100.125.17.64:20202/hwofficial/storage-driver-linux-amd64   1.0.13                9b1a762c647a        3 weeks ago         749MB
100.125.17.64:20202/op_svc_apm/icagent                      5.11.27               797b45c7e959        5 weeks ago         340MB
canal-agent                                                 1.0.RC8.SPC300.B010   4e31d812d31d        2 months ago        505MB
canal-agent                                                 latest                4e31d812d31d        2 months ago        505MB
100.125.17.64:20202/hwofficial/cce-coredns-linux-amd64      1.2.6.1               614e71c360a5        3 months ago        328MB
swr.cn-east-2.myhuaweicloud.com/fudonghai/tank              v1.0                  77c91a2d6c53        5 months ago        112MB
swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank     v1.0                  77c91a2d6c53        5 months ago        112MB
redis                                                       latest                415381a6cb81        8 months ago        94.9MB
busybox                                                     latest                59788edf1f3e        9 months ago        1.15MB
nginx                                                       latest                06144b287844        10 months ago       109MB
euleros                                                     2.2.5                 b0f6bcd0a2a0        20 months ago       289MB
mirrorgooglecontainers/fluentd-elasticsearch                1.20                  c264dff3420b        2 years ago         301MB
k8s.gcr.io/fluentd-elasticsearch                            1.20                  c264dff3420b        2 years ago         301MB
nginx                                                       1.11.1-alpine         5ad9802b809e        3 years ago         69.3MB
cce-pause                                                   2.0                   2b58359142b0        3 years ago         350kB


[root@cce-7day-fudonghai-24106 01CNL]# docker pull tomcat
Using default tag: latest
latest: Pulling from library/tomcat
55cbf04beb70: Pull complete 
1607093a898c: Pull complete 
9a8ea045c926: Pull complete 
1290813abd9d: Pull complete 
8a6b982ad6d7: Pull complete 
abb029e68402: Pull complete 
d068d0a738e5: Pull complete 
42ee47bb0c52: Pull complete 
ae9c861aed25: Pull complete 
60bba9d0dc8d: Pull complete 
15222e409530: Pull complete 
2dcc81b69024: Pull complete 
Digest: sha256:c0f20412acb98efb1af63911d38edca97df76fbf3c0f34de10cc2c56a9f57471
Status: Downloaded newer image for tomcat:latest


[root@cce-7day-fudonghai-24106 01CNL]# docker run -it -d -p 8888:8080 tomcat:latest
ee355280967236ab6eace5f98e5aa53edcb4026dece869f5366a829523beb464


[root@cce-7day-fudonghai-24106 01CNL]# docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                    NAMES
ee3552809672        tomcat:latest        "catalina.sh run"        6 seconds ago       Up 5 seconds        0.0.0.0:8888->8080/tcp   vibrant_burnell

 

Enter the following in a browser:

http://122.112.252.69:8888/

It turns out to be inaccessible. Go to the ECS console (the CCE console has no corresponding setting) --> Access Control menu --> Security Groups menu. There are three security groups, presumably all created by the CCE engine:

cce-7day-fudonghai-cce-node-e77y
Sys-default
cce-7day-fudonghai-cce-control-e77y

Open the cce-7day-fudonghai-cce-node-e77y entry and add access-control rules:

TCP : 8000-9000    IPv4    0.0.0.0/0  --
UDP : 8000-9000    IPv4    0.0.0.0/0  --

Refresh the browser and the Tomcat page appears. Finally, stop the container:

[root@cce-7day-fudonghai-24106 01CNL]# docker stop ee3552809672
ee3552809672

[root@cce-7day-fudonghai-24106 01CNL]# docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS               NAMES

 

Lab section

1. For creating, configuring, and purchasing a CCE cluster, see the homework document.

2. Use the container image service (SWR) to upload a specified public image to your own private image repository.

0) Force-delete the previously used image:

[root@cce-7day-fudonghai-24106 01CNL]# docker rmi -f 77c91a2d6c53
Untagged: swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0
Untagged: swr.cn-east-2.myhuaweicloud.com/fudonghai/tank@sha256:c4ecb266f091fdf5ed37e78837f358d650be5d2d160aff00941569a0ac148aad
Untagged: swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank:v1.0
Untagged: swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank@sha256:c4ecb266f091fdf5ed37e78837f358d650be5d2d160aff00941569a0ac148aad
Deleted: sha256:77c91a2d6c53a8134c6838e5e25973eefaacf62548139043dada9ebe5bce4ef0
Deleted: sha256:17ce620a1c4218b5f6bd02e8b399a2501a9822e99c2efc9b1c54ca849a15f5d5

 

1) Pull the image:

[root@cce-7day-fudonghai-24106 01CNL]#  docker pull swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank:v1.0
v1.0: Pulling from hwstaff_m00402518/tank
802b00ed6f79: Already exists 
e9d0e0ea682b: Already exists 
d8b7092b9221: Already exists 
d9bf1d47fd56: Pull complete 
Digest: sha256:c4ecb266f091fdf5ed37e78837f358d650be5d2d160aff00941569a0ac148aad
Status: Downloaded newer image for swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank:v1.0

Note that the re-pulled image has the same IMAGE ID as before.

 

2) Tag it for your own repository:

docker   tag  swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank:v1.0 swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0

It is actually a single copy of the data with two tags:

swr.cn-east-2.myhuaweicloud.com/fudonghai/tank              v1.0                  77c91a2d6c53        5 months ago        112MB
swr.cn-north-1.myhuaweicloud.com/hwstaff_m00402518/tank     v1.0                  77c91a2d6c53        5 months ago        112MB

 

3) Log in to your own Docker registry. On the CCE console, click Image Repository to open the container image service (SWR), click Overview, copy the login command shown there, and run it; `Login Succeeded` indicates success.

docker login -u cn-east-2@AsFCPS2h0jL1JQQ17AHk -p bfd23e9955d00ba3f9953c81d45d4c089b9f88f43f732f36966c7ecd939e0ecf swr.cn-east-2.myhuaweicloud.com    

It follows this format:

docker login -u <account, obtained from SWR> -p <password, obtained from SWR> swr.cn-east-2.myhuaweicloud.com
(Both the account and the password are generated automatically by Huawei Cloud.)

 

4) Push the image to your own repository; afterwards the My Images page shows a new image named tank. (On this Docker version, pushing without an explicit tag pushes every local tag of the repository, here just v1.0.)

[root@cce-7day-fudonghai-24106 01CNL]# docker push swr.cn-east-2.myhuaweicloud.com/fudonghai/tank

 

Part 2

Day 2: Introduction to Kubernetes Basics

Lesson 1: The CKA Syllabus and Core K8S Concepts

The slides and content of the two are almost identical.

 

In-class lab

[root@cce-7day-fudonghai-24106 01CNL]# docker pull swr.cn-south-1.myhuaweicloud.com/kevin-wangzefeng/cce-kubectl:v1
v1: Pulling from kevin-wangzefeng/cce-kubectl
6c3eb4525275: Pull complete 
5f70bf18a086: Pull complete 
07195e1407cb: Pull complete 
91f80218be79: Pull complete 
c16157d8ae47: Pull complete 
9192f9d33ba2: Pull complete 
8cb6c9ac22d1: Pull complete 
aa78cd0bc75c: Pull complete 
8f7c2c7f8d57: Pull complete 
5358690ca7c4: Pull complete 
bc72688d8ec4: Pull complete 
e5ba68ed6b9e: Pull complete 
9a0122677c09: Pull complete 
b2faa4a30ed2: Pull complete 
2e107f3c2172: Pull complete 
ccd7ca8624d3: Pull complete 
Digest: sha256:934fe455e44ef10e850979b9f150867db391b2c9a95a9f1ea4a23c3af1922ba7
Status: Downloaded newer image for swr.cn-south-1.myhuaweicloud.com/kevin-wangzefeng/cce-kubectl:v1

On the CCE console, configure a stateless Deployment workload, exposing node + port 3000 when creating the Service. Then open the Deployment under Workloads and click its access link.
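The console steps above correspond roughly to a NodePort Service like the following sketch (the name, label, and port numbers are assumptions, not taken from the lab):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cka-kubectl          # assumed Service name
spec:
  type: NodePort
  selector:
    app: cka-kubectl         # must match the workload's pod labels
  ports:
  - port: 3000               # cluster-internal Service port
    targetPort: 3000         # container port
    nodePort: 30003          # node port; the default allowed range is 30000-32767
```

With something like this in place, the app is reachable at http://<node-EIP>:<nodePort>/, subject to the same security-group rules as in Part 1.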

 

Create a stateless Deployment workload from the command line with kubectl:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl run nginx --image nginx --port 80
deployment.apps/nginx created

Check it:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl get deploy
NAME          READY     UP-TO-DATE   AVAILABLE   AGE
cka-kubectl   1/1       1            1           15m
nginx         1/1       1            1           6m8s

Alternatively:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl get deploy/nginx
NAME      READY     UP-TO-DATE   AVAILABLE   AGE
nginx     1/1       1            1           6m54s

Add -owide for extra columns:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl get deploy/nginx -owide
NAME      READY     UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES    SELECTOR
nginx     1/1       1            1           7m59s     nginx        nginx     run=nginx
[root@cce-7day-fudonghai-24106 01CNL]# kubectl get pod -owide
NAME                           READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cka-kubectl-6b4cc7f476-bmt64   1/1       Running   0          18m       172.16.0.24   192.168.0.184   <none>           <none>
nginx-57867cc648-4qv6k         1/1       Running   0          8m45s     172.16.0.25   192.168.0.184   <none>           <none>
[root@cce-7day-fudonghai-24106 01CNL]# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Wed, 24 Jul 2019 16:31:15 +0800
Labels:                 run=nginx
Annotations:            deployment.kubernetes.io/revision=1
Selector:               run=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-57867cc648 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  10m   deployment-controller  Scaled up replica set nginx-57867cc648 to 1

 

Scale out to 2 pods:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl scale deployment nginx --replicas=2
deployment.extensions/nginx scaled

Scale-out succeeded:

[root@cce-7day-fudonghai-24106 01CNL]# kubectl get deploy/nginx -owide
NAME      READY     UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES    SELECTOR
nginx     2/2       2            2           14m       nginx        nginx     run=nginx

 

Tips

Use the run command to generate a YAML file instead of actually creating the object:

[root@cce-7day-fudonghai-24106 ~]# kubectl run --image=nginx my-deploy -o yaml --dry-run > my-deploy.yaml
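The resulting my-deploy.yaml looks roughly like this (a sketch; the apiVersion and default fields vary with the kubectl version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-deploy
  name: my-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-deploy
  template:
    metadata:
      labels:
        run: my-deploy
    spec:
      containers:
      - image: nginx
        name: my-deploy
```

Edit the file as needed, then create the object with kubectl create -f my-deploy.yaml.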

 

Enable auto-completion with:

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

If this errors out, install or upgrade the bash-completion package:

yum install bash-completion

 

Inspect a resource definition:

[root@cce-7day-fudonghai-24106 027day]# kubectl explain pod.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution

 

 

Homework lab

Day 2 homework: log in to the node with a terminal tool (PuTTY, Xshell) and configure kubectl. Commands needed:

wget https://cce-storage.obs.cn-north-1.myhwclouds.com/kubectl.zip
unzip kubectl.zip 
chmod 750 kubectl/kubectl 
mv kubectl/kubectl /usr/local/bin/

mkdir -p $HOME/.kube
mv -f kubeconfig.json $HOME/.kube/config
kubectl config use-context internal

Note that kubeconfig.json differs between accounts and newly created clusters; with the wrong one, kubectl cannot work properly.

Check the cluster status:

[root@cce-7day-fudonghai-24106 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.0.252:5443
CoreDNS is running at https://192.168.0.252:5443/api/v1/namespaces/kube-system/services/coredns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
You have new mail in /var/spool/mail/root
[root@cce-7day-fudonghai-24106 ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.0.184   Ready     <none>    24d       v1.13.7-r0-CCE2.0.24.B001
[root@cce-7day-fudonghai-24106 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                       READY     STATUS    RESTARTS   AGE
kube-system   coredns-646fc859df-29tkg   0/1       Pending   0          24d
kube-system   coredns-646fc859df-zc6w7   1/1       Running   2          24d
kube-system   icagent-mlzw2              1/1       Running   46         24d
kube-system   storage-driver-q9jcz       1/1       Running   2          24d
[root@cce-7day-fudonghai-24106 ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-4-events        Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2-events        Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-3-events        Healthy   {"health": "true"}   

 

Part 3

Day 3: Kubernetes Pod Scheduling Internals

Lesson 2: Scheduling Management Hands-On

 

In-class lab

Node definition. Note the allocatable field: the system resources that can be allocated to pods (capacity minus what k8s and other system components occupy).

[root@cce-7day-fudonghai-24106 ~]# kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
192.168.0.184   Ready     <none>    24d       v1.13.7-r0-CCE2.0.24.B001
[root@cce-7day-fudonghai-24106 ~]# kubectl get node 192.168.0.184 -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    huawei.com/gpu-status: '[]'
status:
  addresses:
  - address: 192.168.0.184
    type: InternalIP
  - address: 192.168.0.184
    type: Hostname
  allocatable:
    attachable-volumes-hc-all-mode-disk: "22"
    attachable-volumes-hc-scsi-mode-disk: "22"
    attachable-volumes-hc-vbd-mode-disk: "22"
    cce/eni: "10"
    cpu: 1930m
    ephemeral-storage: "9387421271"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 2151520Ki
    pods: "110"
  capacity:
    attachable-volumes-hc-all-mode-disk: "22"
    attachable-volumes-hc-scsi-mode-disk: "22"
    attachable-volumes-hc-vbd-mode-disk: "22"
    cce/eni: "10"
    cpu: "2"
    ephemeral-storage: 10186004Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 3880032Ki
    pods: "110"
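As a sanity check, the reserved amount is just capacity minus allocatable. With the values from the output above (a rough back-of-the-envelope check, ignoring eviction thresholds):

```shell
# Values copied from the node output above.
capacity_mem_ki=3880032
allocatable_mem_ki=2151520
echo "reserved memory: $((capacity_mem_ki - allocatable_mem_ki))Ki"  # 1728512Ki, ~1.65Gi held back for system components

capacity_cpu_m=2000     # cpu: "2" cores = 2000m
allocatable_cpu_m=1930
echo "reserved cpu: $((capacity_cpu_m - allocatable_cpu_m))m"        # 70m
```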

 

Pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: day3-pod-fudonghai
  labels:
    app:  day3-pod-app
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: my-container
    ports:
    - containerPort: 80
      protocol: TCP
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"
      limits:
        memory: "200Mi"
        cpu: "200m"
  #nodeName: 192.168.0.184           #scheduling result, filled in by the system after scheduling
  schedulerName: default-scheduler   #the scheduler that performs the scheduling
  restartPolicy: Always
  nodeSelector:               #matches node labels; copied from the system's node, everything matches, pod runs normally
    #disktype: ssd            #does not match, causes Pending
    #node-flavor: s3.large.2  #does not match, causes Pending
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/is-baremetal: "false"
    failure-domain.beta.kubernetes.io/region: cn-east-2
    failure-domain.beta.kubernetes.io/zone: cn-east-2a
    kubernetes.io/availablezone: cn-east-2a
    kubernetes.io/hostname: 192.168.0.184
    os.architecture: amd64
    os.name: CentOS_Linux_7_Core
    os.version: 3.10.0-957.5.1.el7.x86_64
  affinity:
    nodeAffinity:                                     #introduces operators; can exclude nodes that lack a given label
      requiredDuringSchedulingIgnoredDuringExecution: #hard filter
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In        #operator
            values:
            - 192.168.0.184
      preferredDuringSchedulingIgnoredDuringExecution: #soft scoring: nodes lacking the label score lower, reducing their chance of being selected
      - weight: 1
        preference:
          matchExpressions:
          - key: node-flavor
            operator: In
            values:
            - s3.large.2
    #podAffinity:     #selects nodes based on pods already in the cluster
    #podAntiAffinity: #keeps certain pods off the same group of nodes; podAffinity inverted
  tolerations:                      #advanced scheduling policy
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300

 

node-affinity

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
  labels:
    run: node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: has-eip
            operator: In
            values:
            - "yes"
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: container-0

Check the node labels with the command below. Initially the node has no "has-eip" label (the listing shown here, which already contains has-eip=yes, was captured after the label was added in the next step):

[root@cce-7day-fudonghai-24106 027day]# kubectl get nodes --show-labels
NAME            STATUS    ROLES     AGE       VERSION                     LABELS
192.168.0.184   Ready     <none>    25d       v1.13.7-r0-CCE2.0.24.B001   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/is-baremetal=false,failure-domain.beta.kubernetes.io/region=cn-east-2,failure-domain.beta.kubernetes.io/zone=cn-east-2a,has-eip=yes,kubernetes.io/availablezone=cn-east-2a,kubernetes.io/hostname=192.168.0.184,os.architecture=amd64,os.name=CentOS_Linux_7_Core,os.version=3.10.0-957.5.1.el7.x86_64

At this point the pod is Pending:

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME            READY     STATUS    RESTARTS   AGE
node-affinity   0/1       Pending   0          5s

On the CCE console, go to Node Management, find the only node, click Label Management, and manually add a "has-eip" label with value yes (equivalently, from the CLI: `kubectl label node 192.168.0.184 has-eip=yes`). Checking again, the pod is now Running:

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME            READY     STATUS    RESTARTS   AGE
node-affinity   1/1       Running   0          7m55s

 

pod-affinity: affine, within "node" scope, to the pod labeled run: node-affinity (the previous pod):

apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
  labels:
    run: pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: run
            operator: In
            values:
            - "node-affinity"
        topologyKey: kubernetes.io/hostname
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: container-0

This pod can only go Running after the previous pod is up. The typical use case is co-locating with a specific group of pods, e.g. frontend + backend: traffic within the same AZ is free, and the network is fast.

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME            READY     STATUS    RESTARTS   AGE
node-affinity   0/1       Pending   0          9s
pod-affinity    0/1       Pending   0          41s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME            READY     STATUS              RESTARTS   AGE
node-affinity   0/1       ContainerCreating   0          53s
pod-affinity    1/1       Running             0          85s

 

pod-anti-affinity: anti-affine to the previous pod, keeping certain pods off the same group of nodes. Compared with podAffinity: 1) the matching process is identical; 2) the final scheduling result is inverted.

apiVersion: v1
kind: Pod
metadata:
  name: pod-anti-affinity
  labels:
    run: pod-anti-affinity
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: run
            operator: In
            values:
            - "node-affinity"
        topologyKey: kubernetes.io/hostname
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: container-0

Because there is only one node, scheduling fails:

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                READY     STATUS    RESTARTS   AGE
node-affinity       1/1       Running   0          18m
pod-affinity        1/1       Running   0          19m
pod-anti-affinity   0/1       Pending   0          7s

Check the reason for the failure:

[root@cce-7day-fudonghai-24106 027day]# kubectl describe pod pod-anti-affinity
Name:               pod-anti-affinity
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             run=pod-anti-affinity
Annotations:        <none>
Status:             Pending
IP:                 

Conditions:
  Type           Status
  PodScheduled   False 
Volumes:

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  48s (x18 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod anti-affinity rules.

 

pod-tolerations

apiVersion: v1
kind: Pod
metadata:
  name: pod-tolerations
  labels:
    run: pod-tolerations
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: container-0
  tolerations:
  - key: gpu
    operator: Equal
    value: "yes"
    effect: NoSchedule

First taint the node so that scheduling fails. Taints keep pods away from specific nodes: a special label carrying an effect, which repels pods:

[root@cce-7day-fudonghai-24106 027day]# kubectl taint node 192.168.0.184 gpu=no:NoSchedule
node/192.168.0.184 tainted
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
node-affinity       1/1       Running   0          41m       172.16.0.22   192.168.0.184   <none>           <none>
pod-affinity        1/1       Running   0          42m       172.16.0.21   192.168.0.184   <none>           <none>
pod-anti-affinity   0/1       Pending   0          22m       <none>        <none>          <none>           <none>
pod-tolerations     0/1       Pending   0          5s        <none>        <none>          <none>           <none>

Without this taint, scheduling would have succeeded immediately. (Note that the pod's toleration is for gpu=yes while the taint is gpu=no:NoSchedule, so the toleration does not match and the pod stays Pending.)
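For reference, a toleration that would actually match the gpu=no:NoSchedule taint applied above looks like this (a hypothetical fix, not part of the original exercise):

```yaml
tolerations:
- key: gpu
  operator: Equal
  value: "no"          # must equal the taint's value when operator is Equal
  effect: NoSchedule
```

Alternatively, `operator: Exists` with only the key tolerates the gpu taint regardless of its value.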

 

Homework lab

1. From the command line, create a pod using the nginx image and manually schedule it to one node in the cluster.

apiVersion: v1
kind: Pod
metadata:
  name: cce7days-fudonghai
  labels:
    app: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - 192.168.0.184  #the node's private IP address
  containers:
  - image: swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0    #container image address
    imagePullPolicy: IfNotPresent
    name: container-0
    resources: {}
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-secret
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}

開始的時候pod並無運行起來,查看pod狀態發現

[root@cce-7day-fudonghai-24106 027day]# kubectl describe pod cce7days-fudonghai 
Name:               cce7days-fudonghai
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=nginx
Annotations:        <none>
Status:             Pending
IP:                 
Containers:
  container-0:
    Image:        swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9rk4h (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-9rk4h:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9rk4h
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  52s (x2 over 52s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

Manually remove the node's taint, and it starts running:

[root@cce-7day-fudonghai-24106 027day]# kubectl taint node 192.168.0.184 gpu:NoSchedule-
node/192.168.0.184 untainted
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                 READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-fudonghai   1/1       Running   0          62m       172.16.0.24   192.168.0.184   <none>           <none>

 

2. From the command line, create a Deployment with 2 pods that are anti-affine to each other at the node level.

Before starting, buy one more node: pick the cheapest pay-per-use option at 0.42/hour; creation takes a few minutes after purchase. The two nodes are not in the same AZ, but they are in the same cluster:

[root@cce-7day-fudonghai-24106 027day]# kubectl get node -owide
NAME            STATUS    ROLES     AGE       VERSION                     INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
192.168.0.184   Ready     <none>    26d       v1.13.7-r0-CCE2.0.24.B001   192.168.0.184   <none>        CentOS Linux 7 (Core)   3.10.0-957.5.1.el7.x86_64   docker://18.9.0
192.168.0.187   Ready     <none>    93s       v1.13.4-r0-CCE2.0.23.B001   192.168.0.187   <none>        CentOS Linux 7 (Core)   3.10.0-957.5.1.el7.x86_64   docker://18.9.0

The Deployment's YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-app1-fudonghai
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cce7days-app1-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-app1-fudonghai
    spec:
      containers:
       - image: nginx
         name: container-0
         imagePullPolicy: IfNotPresent
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: default-secret
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - cce7days-app1-fudonghai
            topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler

Create and check:

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f anti-affinity-deployment.yaml 
deployment.apps/cce7days-app1-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                       READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app1-fudonghai-54d59459b9-8lwzb   1/1       Running   0          10s       172.16.0.26   192.168.0.184   <none>           <none>
cce7days-app1-fudonghai-54d59459b9-9485l   1/1       Running   0          10s       172.16.0.36   192.168.0.187   <none>           <none>

 

 

3. From the command line, create a Deployment with 2 pods, configured so that its pods are affine, at the node level, to the pods of the first Deployment.

The Deployment's YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-app2-fudonghai
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cce7days-app2-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-app2-fudonghai
    spec:
      containers:
       - image: nginx
         name: container-0
      imagePullSecrets:
        - name: default-secret
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - cce7days-app1-fudonghai
              topologyKey: kubernetes.io/hostname

Create and check:

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f affinity-deployment.yaml 
deployment.apps/cce7days-app2-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                       READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app1-fudonghai-54d59459b9-8lwzb   1/1       Running   0          4m44s     172.16.0.26   192.168.0.184   <none>           <none>
cce7days-app1-fudonghai-54d59459b9-9485l   1/1       Running   0          4m44s     172.16.0.36   192.168.0.187   <none>           <none>
cce7days-app2-fudonghai-8fbd78c48-5gr7m    1/1       Running   0          4s        172.16.0.37   192.168.0.187   <none>           <none>
cce7days-app2-fudonghai-8fbd78c48-645z4    1/1       Running   0          4s        172.16.0.27   192.168.0.184   <none>           <none>

 

 

Part 4

Day 4: Kubernetes Application Lifecycle Internals

Lesson 3: K8S Logging, Monitoring, and Application Management Hands-On

 

In-class lab

 

[root@cce-7day-fudonghai-24106 027day]# kubectl cluster-info
Kubernetes master is running at https://192.168.0.252:5443
CoreDNS is running at https://192.168.0.252:5443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
[root@cce-7day-fudonghai-24106 027day]# kubectl cluster-info dump > a.txt
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -n kube-system
NAME                       READY     STATUS    RESTARTS   AGE
coredns-646fc859df-6m5cf   0/1       Pending   0          20h
coredns-646fc859df-zc6w7   1/1       Running   2          27d
icagent-mlzw2              1/1       Running   52         27d
storage-driver-q9jcz       1/1       Running   2          27d
[root@cce-7day-fudonghai-24106 027day]# kubectl describe pod coredns-646fc859df-6m5cf -n kube-system
Name:               coredns-646fc859df-6m5cf
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=coredns
                    k8s-app=coredns
                    kubernetes.io/evictcritical=
                    pod-template-hash=646fc859df
                    release=cceaddon-coredns
Annotations:        checksum/config=3095a9b4028195e7e0b8b22c550bf183d0b7a8a7eba20808b36081d0b39f8b81
                    scheduler.alpha.kubernetes.io/critical-pod=
                    scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-646fc859df
Containers:
  coredns:
    Image:      100.125.17.64:20202/hwofficial/cce-coredns-linux-amd64:1.2.6.1
    Port:       5353/UDP
    Host Port:  0/UDP
    Args:
      -conf
      /etc/coredns/Corefile
      -rmem
      udp#8388608
      -wmem
      udp#1048576
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:        500m
      memory:     512Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8080/health delay=3s timeout=3s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-vrbw4 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-vrbw4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-vrbw4
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 60s
                 node.kubernetes.io/unreachable:NoExecute for 60s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  4m (x1567 over 20h)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.
[root@cce-7day-fudonghai-24106 027day]# kubectl run redis --image=redis
deployment.apps/redis created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
redis-785f9d6bfb-sp5n8   1/1       Running   0          11s

 

View the redis container logs:

[root@cce-7day-fudonghai-24106 027day]# kubectl logs -f redis-785f9d6bfb-sp5n8 -c redis
1:C 28 Jul 2019 03:21:39.454 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 28 Jul 2019 03:21:39.454 # Redis version=5.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 28 Jul 2019 03:21:39.454 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 28 Jul 2019 03:21:39.463 * Running mode=standalone, port=6379.
1:M 28 Jul 2019 03:21:39.463 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 28 Jul 2019 03:21:39.463 # Server initialized
1:M 28 Jul 2019 03:21:39.463 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 28 Jul 2019 03:21:39.463 * Ready to accept connections

Run a deployment with 2 replicas

[root@cce-7day-fudonghai-24106 027day]# kubectl run nginx --image=nginx --replicas=2
deployment.apps/nginx created
[root@cce-7day-fudonghai-24106 027day]# kubectl get deploy
NAME      READY     UP-TO-DATE   AVAILABLE   AGE
nginx     2/2       2            2           10s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-7cdbd8cdc9-6jpwd   1/1       Running   0          21s
nginx-7cdbd8cdc9-q5x6z   1/1       Running   0          21s

Exec into the pod's container

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it nginx-7cdbd8cdc9-q5x6z /bin/sh
# ls
bin  boot  dev    etc  home  lib    lib64  media  mnt  opt    proc  root  run  sbin  srv  sys  tmp  usr  var
# exit

 

Upgrade the container image

[root@cce-7day-fudonghai-24106 027day]# kubectl set image deploy nginx nginx=nginx:1.9.1
deployment.extensions/nginx image updated
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod 
NAME                     READY     STATUS              RESTARTS   AGE
nginx-7cdbd8cdc9-6jpwd   1/1       Running             0          8h
nginx-7cdbd8cdc9-q5x6z   1/1       Running             0          8h
nginx-8676fdbb6d-ljwsz   0/1       ContainerCreating   0          19s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod 
NAME                     READY     STATUS              RESTARTS   AGE
nginx-7cdbd8cdc9-6jpwd   1/1       Running             0          8h
nginx-7cdbd8cdc9-q5x6z   1/1       Running             0          8h
nginx-8676fdbb6d-ljwsz   0/1       ContainerCreating   0          28s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod 
NAME                     READY     STATUS        RESTARTS   AGE
nginx-7cdbd8cdc9-q5x6z   0/1       Terminating   0          8h
nginx-8676fdbb6d-ljwsz   1/1       Running       0          42s
nginx-8676fdbb6d-zbbx8   1/1       Running       0          10s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod 
NAME                     READY     STATUS    RESTARTS   AGE
nginx-8676fdbb6d-ljwsz   1/1       Running   0          2m51s
nginx-8676fdbb6d-zbbx8   1/1       Running   0          2m19s
[root@cce-7day-fudonghai-24106 027day]# kubectl rollout status deploy nginx
deployment "nginx" successfully rolled out

View the rollout history

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx
deployments "nginx"
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx --revision=2
deployments "nginx" with revision #2
Pod Template:
  Labels:    pod-template-hash=8676fdbb6d
    run=nginx
  Containers:
   nginx:
    Image:    nginx:1.9.1
    Port:    <none>
    Host Port:    <none>
    Environment:    <none>
    Mounts:    <none>
  Volumes:    <none>

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx --revision=1
deployments "nginx" with revision #1
Pod Template:
  Labels:    pod-template-hash=7cdbd8cdc9
    run=nginx
  Containers:
   nginx:
    Image:    nginx
    Port:    <none>
    Host Port:    <none>
    Environment:    <none>
    Mounts:    <none>
  Volumes:    <none>

Set both maxSurge and maxUnavailable to 2

[root@cce-7day-fudonghai-24106 027day]# kubectl edit deploy nginx
deployment.extensions/nginx edited
  strategy:
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 2
    type: RollingUpdate
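For scripted changes, the same strategy update can be applied without an interactive editor; a sketch using kubectl patch (the deployment name nginx comes from the session above):

```shell
# Patch the rolling-update strategy in place instead of opening `kubectl edit`
kubectl patch deploy nginx -p \
  '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":2,"maxUnavailable":2}}}}'
```

With maxSurge=2 and maxUnavailable=2 on a 2-replica deployment, a rollout may replace both pods at once.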

Trigger a new rollout by changing resource limits

[root@cce-7day-fudonghai-24106 027day]# kubectl set resources deploy nginx -c=nginx --limits=cpu=200m,memory=256Mi
deployment.extensions/nginx resource requirements updated
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-6bc5c66cdd-dqwdm   1/1       Running   0          14s
nginx-6bc5c66cdd-xz85d   1/1       Running   0          14s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-6bc5c66cdd-dqwdm   1/1       Running   0          19s
nginx-6bc5c66cdd-xz85d   1/1       Running   0          19s
[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx --revision=3
deployments "nginx" with revision #3
Pod Template:
  Labels:    pod-template-hash=6bc5c66cdd
    run=nginx
  Containers:
   nginx:
    Image:    nginx:1.9.1
    Port:    <none>
    Host Port:    <none>
    Limits:
      cpu:    200m
      memory:    256Mi
    Environment:    <none>
    Mounts:    <none>
  Volumes:    <none>

Roll back to revision 2

[root@cce-7day-fudonghai-24106 027day]# kubectl  rollout undo deploy nginx --to-revision=2
deployment.extensions/nginx

Scale out horizontally

[root@cce-7day-fudonghai-24106 027day]# kubectl scale deploy nginx --replicas=4
deployment.extensions/nginx scaled
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS              RESTARTS   AGE
nginx-8676fdbb6d-2g4np   1/1       Running             0          3m39s
nginx-8676fdbb6d-c9k78   0/1       ContainerCreating   0          7s
nginx-8676fdbb6d-rj4np   1/1       Running             0          3m39s
nginx-8676fdbb6d-vjtln   0/1       ContainerCreating   0          7s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS              RESTARTS   AGE
nginx-8676fdbb6d-2g4np   1/1       Running             0          3m45s
nginx-8676fdbb6d-c9k78   0/1       ContainerCreating   0          13s
nginx-8676fdbb6d-rj4np   1/1       Running             0          3m45s
nginx-8676fdbb6d-vjtln   1/1       Running             0          13s

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy nginx
deployments "nginx"
REVISION  CHANGE-CAUSE
1         <none>
3         <none>
4         <none>

Note that revision 2 no longer appears: rolling back to revision 2 re-created its template as the new latest revision, here revision 4.

 

 

After-class lab section

1. Create one Pod from the redis image via a Deployment, and obtain the redis startup log with kubectl. Check-in: upload screenshots of the commands used and the full YAML of the created Deployment.

YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-app3-fudonghai
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cce7days-app3-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-app3-fudonghai
    spec:
      containers:
       - image: 'redis:latest'
         name: container-0
      imagePullSecrets:
        - name: default-secret
    # This affinity schedules the pod to a node with an EIP, so images can be pulled from the public network
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.0.184

Commands used

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day4-redis-deployment.yaml 
deployment.apps/cce7days-app3-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                      READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app3-fudonghai-b4f5bf6d6-b5xrv   1/1       Running   0          20s       172.16.0.26   192.168.0.184   <none>           <none>
[root@cce-7day-fudonghai-24106 027day]# kubectl logs -f cce7days-app3-fudonghai-b4f5bf6d6-b5xrv -c container-0
1:C 29 Jul 2019 00:51:04.808 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 29 Jul 2019 00:51:04.808 # Redis version=5.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 29 Jul 2019 00:51:04.808 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 29 Jul 2019 00:51:04.811 * Running mode=standalone, port=6379.
1:M 29 Jul 2019 00:51:04.811 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 29 Jul 2019 00:51:04.811 # Server initialized
1:M 29 Jul 2019 00:51:04.811 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 29 Jul 2019 00:51:04.811 * Ready to accept connections
^C

 

 

2. From the command line, create one deployment with 3 replicas using the nginx:latest image, then perform a rolling upgrade to nginx:1.9.1.

YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-app4-fudonghai
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cce7days-app4-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-app4-fudonghai
    spec:
      containers:
       - image: 'nginx:latest'
         name: container-0
      imagePullSecrets:
        - name: default-secret
    # This affinity schedules the pod to a node with an EIP, so images can be pulled from the public network
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.0.184

Commands used

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day4-nginx-deployment.yaml 
deployment.apps/cce7days-app4-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                      READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app4-fudonghai-6c5fb9794-5wl6r   1/1       Running   0          14s       172.16.0.29   192.168.0.184   <none>           <none>
cce7days-app4-fudonghai-6c5fb9794-9mqfm   1/1       Running   0          14s       172.16.0.28   192.168.0.184   <none>           <none>
cce7days-app4-fudonghai-6c5fb9794-zgjsk   1/1       Running   0          14s       172.16.0.27   192.168.0.184   <none>           <none>
[root@cce-7day-fudonghai-24106 027day]# kubectl set image deploy cce7days-app4-fudonghai container-0=nginx:1.9.1
deployment.extensions/cce7days-app4-fudonghai image updated
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: 2019-07-29T01:10:31Z
    generateName: cce7days-app4-fudonghai-7b4bd886f4-
    labels:
      app: cce7days-app4-fudonghai
      pod-template-hash: 7b4bd886f4
    name: cce7days-app4-fudonghai-7b4bd886f4-5j2pn
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: cce7days-app4-fudonghai-7b4bd886f4
      uid: a13fbe25-b19d-11e9-8bbd-fa163eb87d99
    resourceVersion: "6554233"
    selfLink: /api/v1/namespaces/default/pods/cce7days-app4-fudonghai-7b4bd886f4-5j2pn
    uid: a8cc08d2-b19d-11e9-8bbd-fa163eb87d99
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 192.168.0.184
    containers:
    - image: nginx:1.9.1
      imagePullPolicy: Always
      name: container-0
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-9rk4h
        readOnly: true
    dnsConfig:
      options:
      - name: single-request-reopen
        value: ""
      - name: timeout
        value: "2"
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: default-secret
    nodeName: 192.168.0.184
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: default-token-9rk4h
      secret:
        defaultMode: 420
        secretName: default-token-9rk4h
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:31Z
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:36Z
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:36Z
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:31Z
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://41faea96f702417e1b76fd1b6462c40eb8399564c7da1e15844222946d5877fd
      image: nginx:1.9.1
      imageID: docker-pullable://nginx@sha256:2f68b99bc0d6d25d0c56876b924ec20418544ff28e1fb89a4c27679a40da811b
      lastState: {}
      name: container-0
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2019-07-29T01:10:36Z
    hostIP: 192.168.0.184
    phase: Running
    podIP: 172.16.0.19
    qosClass: BestEffort
    startTime: 2019-07-29T01:10:31Z
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: 2019-07-29T01:10:18Z
    generateName: cce7days-app4-fudonghai-7b4bd886f4-
    labels:
      app: cce7days-app4-fudonghai
      pod-template-hash: 7b4bd886f4
    name: cce7days-app4-fudonghai-7b4bd886f4-k62fv
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: cce7days-app4-fudonghai-7b4bd886f4
      uid: a13fbe25-b19d-11e9-8bbd-fa163eb87d99
    resourceVersion: "6554165"
    selfLink: /api/v1/namespaces/default/pods/cce7days-app4-fudonghai-7b4bd886f4-k62fv
    uid: a1411f99-b19d-11e9-8bbd-fa163eb87d99
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 192.168.0.184
    containers:
    - image: nginx:1.9.1
      imagePullPolicy: Always
      name: container-0
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-9rk4h
        readOnly: true
    dnsConfig:
      options:
      - name: single-request-reopen
        value: ""
      - name: timeout
        value: "2"
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: default-secret
    nodeName: 192.168.0.184
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: default-token-9rk4h
      secret:
        defaultMode: 420
        secretName: default-token-9rk4h
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:18Z
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:24Z
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:24Z
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:18Z
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://c7fa6320cfb4ad5bc6e0a31392bfd2ded5b5168647478a09386b63c69ef96b45
      image: nginx:1.9.1
      imageID: docker-pullable://nginx@sha256:2f68b99bc0d6d25d0c56876b924ec20418544ff28e1fb89a4c27679a40da811b
      lastState: {}
      name: container-0
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2019-07-29T01:10:23Z
    hostIP: 192.168.0.184
    phase: Running
    podIP: 172.16.0.30
    qosClass: BestEffort
    startTime: 2019-07-29T01:10:18Z
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: 2019-07-29T01:10:24Z
    generateName: cce7days-app4-fudonghai-7b4bd886f4-
    labels:
      app: cce7days-app4-fudonghai
      pod-template-hash: 7b4bd886f4
    name: cce7days-app4-fudonghai-7b4bd886f4-rqfh4
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: cce7days-app4-fudonghai-7b4bd886f4
      uid: a13fbe25-b19d-11e9-8bbd-fa163eb87d99
    resourceVersion: "6554200"
    selfLink: /api/v1/namespaces/default/pods/cce7days-app4-fudonghai-7b4bd886f4-rqfh4
    uid: a509133c-b19d-11e9-8bbd-fa163eb87d99
  spec:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - 192.168.0.184
    containers:
    - image: nginx:1.9.1
      imagePullPolicy: Always
      name: container-0
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-9rk4h
        readOnly: true
    dnsConfig:
      options:
      - name: single-request-reopen
        value: ""
      - name: timeout
        value: "2"
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: default-secret
    nodeName: 192.168.0.184
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: default-token-9rk4h
      secret:
        defaultMode: 420
        secretName: default-token-9rk4h
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:24Z
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:31Z
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:31Z
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: 2019-07-29T01:10:24Z
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://4bd5450bd4516da0710ddcf4e1be29823c535f88fb3149c266d2240ceb475fc4
      image: nginx:1.9.1
      imageID: docker-pullable://nginx@sha256:2f68b99bc0d6d25d0c56876b924ec20418544ff28e1fb89a4c27679a40da811b
      lastState: {}
      name: container-0
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2019-07-29T01:10:30Z
    hostIP: 192.168.0.184
    phase: Running
    podIP: 172.16.0.31
    qosClass: BestEffort
    startTime: 2019-07-29T01:10:24Z
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deploy cce7days-app4-fudonghai 
deployments "cce7days-app4-fudonghai"
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@cce-7day-fudonghai-24106 027day]# kubectl rollout history deployment cce7days-app4-fudonghai --revision=2
deployments "cce7days-app4-fudonghai" with revision #2
Pod Template:
  Labels:    app=cce7days-app4-fudonghai
    pod-template-hash=7b4bd886f4
  Containers:
   container-0:
    Image:    nginx:1.9.1
    Port:    <none>
    Host Port:    <none>
    Environment:    <none>
    Mounts:    <none>
  Volumes:    <none>

 

 

 

 

Part 5

Day 5: Kubernetes Network Management Principles

Lesson 4: K8S Network Management Lab

 

In-class lab section

(1) Services (svc)

Create a service of type ClusterIP named my-svc-cp

[root@cce-7day-fudonghai-24106 027day]# kubectl create service clusterip my-svc-cp --tcp=80:8080
service/my-svc-cp created
[root@cce-7day-fudonghai-24106 027day]# kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
cka-kubectl   NodePort    10.247.1.217     <none>        3000:30078/TCP   4d22h
kubernetes    ClusterIP   10.247.0.1       <none>        443/TCP          28d
my-svc-cp     ClusterIP   10.247.100.116   <none>        80/TCP           31s
[root@cce-7day-fudonghai-24106 027day]# curl 10.247.100.116:80
curl: (7) Failed connect to 10.247.100.116:80; Connection refused

Kubernetes creates an Endpoints object with the same name; it has no backing IPs yet, so it shows <none>

[root@cce-7day-fudonghai-24106 027day]# kubectl get endpoints
NAME          ENDPOINTS            AGE
cka-kubectl   <none>               4d22h
kubernetes    192.168.0.252:5444   28d
my-svc-cp     <none>               6m2s

Describe this service

[root@cce-7day-fudonghai-24106 027day]# kubectl describe service my-svc-cp 
Name:              my-svc-cp
Namespace:         default
Labels:            app=my-svc-cp
Annotations:       <none>
Selector:          app=my-svc-cp
Type:              ClusterIP
IP:                10.247.100.116
Port:              80-8080  80/TCP
TargetPort:        8080/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>

 

Create a service of type NodePort named my-svc-np

[root@cce-7day-fudonghai-24106 027day]# kubectl  create service nodeport my-svc-np --tcp=1234:80
service/my-svc-np created
[root@cce-7day-fudonghai-24106 027day]# kubectl describe service my-svc-np
Name:                     my-svc-np
Namespace:                default
Labels:                   app=my-svc-np
Annotations:              <none>
Selector:                 app=my-svc-np
Type:                     NodePort
IP:                       10.247.109.121
Port:                     1234-80  1234/TCP
TargetPort:               80/TCP
NodePort:                 1234-80  31263/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

A NodePort service has a cluster IP just like a ClusterIP service; the difference is that it also opens a port on each host (which I understand to be the node's port, reachable from the external network), here 31263.

In other words, NodePort is a superset of ClusterIP.

[root@cce-7day-fudonghai-24106 027day]# curl 10.247.109.121:1234
curl: (7) Failed connect to 10.247.109.121:1234; Connection refused
[root@cce-7day-fudonghai-24106 027day]# curl http://122.112.252.69:31263
curl: (7) Failed connect to 122.112.252.69:31263; Connection refused

 

Create a headless service by setting clusterIP to None

[root@cce-7day-fudonghai-24106 027day]# kubectl create svc clusterip my-svc-headless --clusterip="None"
service/my-svc-headless created
[root@cce-7day-fudonghai-24106 027day]# kubectl describe svc my-svc-headless 
Name:              my-svc-headless
Namespace:         default
Labels:            app=my-svc-headless
Annotations:       <none>
Selector:          app=my-svc-headless
Type:              ClusterIP
IP:                None
Session Affinity:  None
Events:            <none>
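A headless service is mostly useful through DNS: because it has no cluster IP, its name resolves directly to the IPs of its endpoints (here it has none yet, so a lookup would return nothing). A sketch, assuming a pod with nslookup such as the busybox pod used later:

```shell
# With endpoints present, this returns the backing pod IPs rather than a single cluster IP
kubectl exec -it busybox -- nslookup my-svc-headless
```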

 

The three services above all lack backends; now create a deployment named hello-nginx to serve as a backend

[root@cce-7day-fudonghai-24106 027day]# kubectl run hello-nginx --image=nginx
deployment.apps/hello-nginx created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                           READY     STATUS    RESTARTS   AGE
hello-nginx-79c6778c6f-pdqzl   1/1       Running   0          6s

Use the expose command to create a service that links to this deploy; type ClusterIP, port 8090

[root@cce-7day-fudonghai-24106 027day]# kubectl expose deploy hello-nginx --type=ClusterIP --name=my-svc-nginx --port=8090 --target-port=80
service/my-svc-nginx exposed
[root@cce-7day-fudonghai-24106 027day]# kubectl get svc
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
cka-kubectl       NodePort    10.247.1.217     <none>        3000:30078/TCP   4d23h
kubernetes        ClusterIP   10.247.0.1       <none>        443/TCP          28d
my-svc-cp         ClusterIP   10.247.100.116   <none>        80/TCP           80m
my-svc-headless   ClusterIP   None             <none>        <none>           36m
my-svc-nginx      ClusterIP   10.247.205.107   <none>        8090/TCP         7s
my-svc-np         NodePort    10.247.109.121   <none>        1234:31263/TCP   68m

Test it; it works

[root@cce-7day-fudonghai-24106 027day]# curl 10.247.205.107:8090
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Kubernetes creates a same-named Endpoints object whose address is the nginx pod's IP, 172.16.0.20

[root@cce-7day-fudonghai-24106 027day]# kubectl get endpoints 
NAME              ENDPOINTS            AGE
cka-kubectl       <none>               4d23h
kubernetes        192.168.0.252:5444   28d
my-svc-cp         <none>               84m
my-svc-headless   <none>               41m
my-svc-nginx      172.16.0.20:80       4m41s
my-svc-np         <none>               72m

[root@cce-7day-fudonghai-24106 027day]# kubectl describe pod hello-nginx-79c6778c6f-pdqzl 
Name:               hello-nginx-79c6778c6f-pdqzl
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               192.168.0.184/192.168.0.184
Start Time:         Mon, 29 Jul 2019 15:47:31 +0800
Labels:             pod-template-hash=79c6778c6f
                    run=hello-nginx
Annotations:        <none>
Status:             Running
IP:                 172.16.0.20
Controlled By:      ReplicaSet/hello-nginx-79c6778c6f

You can also test against the pod IP directly

[root@cce-7day-fudonghai-24106 027day]# curl 172.16.0.20
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

 

(2) DNS lookups

DNS lookups need the nslookup tool, which the actual exam environment lacks, so find a docker image that includes it and run it

[root@cce-7day-fudonghai-24106 027day]# wget https://kubernetes.io/examples/admin/dns/busybox.yaml
--2019-07-29 16:17:12--  https://kubernetes.io/examples/admin/dns/busybox.yaml
Resolving kubernetes.io (kubernetes.io)... 45.54.44.102
Connecting to kubernetes.io (kubernetes.io)|45.54.44.102|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 234 [application/x-yaml]
Saving to: ‘busybox.yaml’

100%[========================================================================================>] 234         --.-K/s   in 0s      

2019-07-29 16:17:14 (7.26 MB/s) - ‘busybox.yaml’ saved [234/234]

The busybox:1.28 image ships with nslookup

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep   # sleep to block the container; otherwise it would exit immediately
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
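An equivalent one-liner, in case you prefer not to write the YAML (kubectl run behavior varies across versions; --restart=Never makes it a bare Pod as in the manifest above):

```shell
# Run a bare busybox pod that sleeps so it stays up long enough to exec into
kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600
```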

Run this busybox image

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f busybox.yaml 
pod/busybox created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                           READY     STATUS    RESTARTS   AGE
busybox                        1/1       Running   0          22s
hello-nginx-79c6778c6f-pdqzl   1/1       Running   0          46m

Use nslookup to look up the kubernetes service's IP

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it busybox -- nslookup kubernetes.default
Server:    10.247.3.10
Address 1: 10.247.3.10 coredns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.247.0.1 kubernetes.default.svc.cluster.local
10.247.3.10 is the DNS server's IP, in the kube-system namespace
[root@cce-7day-fudonghai-24106 027day]# kubectl get svc -n kube-system
NAME      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
coredns   ClusterIP   10.247.3.10   <none>        53/UDP,53/TCP,8080/TCP   28d
The kubernetes service's IP is 10.247.0.1; its backend pod is the API Server
[root@cce-7day-fudonghai-24106 027day]# kubectl get svc
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
cka-kubectl       NodePort    10.247.1.217     <none>        3000:30078/TCP   5d
kubernetes        ClusterIP   10.247.0.1       <none>        443/TCP          28d

 

Resolve the my-svc-nginx domain name

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it busybox -- nslookup my-svc-nginx
Server:    10.247.3.10
Address 1: 10.247.3.10 coredns.kube-system.svc.cluster.local

Name:      my-svc-nginx
Address 1: 10.247.205.107 my-svc-nginx.default.svc.cluster.local
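The short name works because the pod's /etc/resolv.conf search list appends default.svc.cluster.local; the fully qualified name resolves to the same address:

```shell
kubectl exec -it busybox -- nslookup my-svc-nginx.default.svc.cluster.local
```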

 

Resolving the pod hello-nginx-79c6778c6f-pdqzl by name fails; pods do not get DNS records under their bare pod name (pod A records use the dashed-IP form, such as 172-16-0-20.default.pod.cluster.local)

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it busybox -- nslookup hello-nginx-79c6778c6f-pdqzl
Server:    10.247.3.10
Address 1: 10.247.3.10 coredns.kube-system.svc.cluster.local

nslookup: can't resolve 'hello-nginx-79c6778c6f-pdqzl'
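Pods can still be looked up through DNS using the dashed-IP record form, provided the cluster's DNS is configured to serve the pods domain (the IP 172.16.0.20 comes from the session above):

```shell
# Pod A records take the form <ip-with-dashes>.<namespace>.pod.cluster.local
kubectl exec -it busybox -- nslookup 172-16-0-20.default.pod.cluster.local
```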

Trying the busybox pod's own name succeeds, though this likely resolves from the pod's own /etc/hosts entry rather than a cluster DNS record

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it busybox -- nslookup busybox 
Server:    10.247.3.10
Address 1: 10.247.3.10 coredns.kube-system.svc.cluster.local

Name:      busybox
Address 1: 172.16.0.21 busybox

 

After-class lab section

1. Create one Service and one Pod as its backend. Use kubectl describe to obtain the Service and its corresponding Endpoints information.

Pod YAML

apiVersion: v1
kind: Pod
metadata:
  name: cce7days-app5-pod-fudonghai
  labels:
    app: cce7days-app5-pod-fudonghai
spec:
  # This affinity schedules the pod to a node with an EIP, so images can be pulled from the public network
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - 192.168.0.184
  containers:
  - image: nginx:latest
    imagePullPolicy: IfNotPresent
    name: container-0
  restartPolicy: Always
  schedulerName: default-scheduler

Service YAML

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cce7days-app5-svc-fudonghai
  name: cce7days-app5-svc-fudonghai
spec:
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
  selector:   # select the matching pod
    app: cce7days-app5-pod-fudonghai
  type: NodePort

Create the resources

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day5-pod.yaml 
pod/cce7days-app5-pod-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day5-service.yaml 
service/cce7days-app5-svc-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                          READY     STATUS    RESTARTS   AGE
cce7days-app5-pod-fudonghai   1/1       Running   0          32s

Inspect

[root@cce-7day-fudonghai-24106 027day]# kubectl get svc
NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
cce7days-app5-svc-fudonghai   NodePort    10.247.103.223   <none>        80:31367/TCP   6m19s
kubernetes                    ClusterIP   10.247.0.1       <none>        443/TCP        29d
[root@cce-7day-fudonghai-24106 027day]# kubectl get endpoints
NAME                          ENDPOINTS            AGE
cce7days-app5-svc-fudonghai   172.16.0.22:80       6s
kubernetes                    192.168.0.252:5444   29d
[root@cce-7day-fudonghai-24106 027day]# kubectl describe svc cce7days-app5-svc-fudonghai
Name:                     cce7days-app5-svc-fudonghai
Namespace:                default
Labels:                   app=cce7days-app5-svc-fudonghai
Annotations:              <none>
Selector:                 app=cce7days-app5-pod-fudonghai
Type:                     NodePort
IP:                       10.247.103.223
Port:                     service0  80/TCP
TargetPort:               80/TCP
NodePort:                 service0  31367/TCP
Endpoints:                172.16.0.22:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[root@cce-7day-fudonghai-24106 027day]# kubectl describe endpoints cce7days-app5-svc-fudonghai
Name:         cce7days-app5-svc-fudonghai
Namespace:    default
Labels:       app=cce7days-app5-svc-fudonghai
Annotations:  <none>
Subsets:
  Addresses:          172.16.0.22
  NotReadyAddresses:  <none>
  Ports:
    Name      Port  Protocol
    ----      ----  --------
    service0  80    TCP

Events:  <none>

 Since the Service is of type NodePort, it can be reached from the public network at http://122.112.252.69:31367/
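
The node port shown above (31367) was auto-allocated by Kubernetes from the default 30000–32767 range. If a stable external port is preferred, it can be pinned explicitly in the Service spec; a minimal sketch (the value 31367 is simply the port that was allocated above):

```yaml
spec:
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31367   # pin the node port instead of letting the API server pick one
  type: NodePort
```

Pinning fails with an error if the chosen port is already in use or outside the configured node-port range.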

 

Part 6

Day 6: Kubernetes Storage Management Principles

 Lesson 5: K8s Storage Management Lab

 

In-class lab

 Mount a ConfigMap into a specified directory inside the container

ConfigMap YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm

Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cce7days-configmap-fudonghai
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cce7days-configmap-fudonghai
  template:
    metadata:
      labels:
        app: cce7days-configmap-fudonghai
    spec:
      containers:
       - image: 'nginx:latest'
         name: container-0
         volumeMounts:
         - name: test
           mountPath: /tmp
      volumes:
       - name: test
         configMap:
          name: special-config
          defaultMode: 420
          items:
          - key: special.how
            path: welcome/how
      # The node affinity below schedules the pod onto the node with an EIP so that public images can be pulled
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - 192.168.0.184

If the pod cannot find the referenced ConfigMap, it stays in ContainerCreating; only after the ConfigMap is created does the pod move to Running.

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day6-configmap-deployment.yaml 
deployment.apps/cce7days-configmap-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                                            READY     STATUS              RESTARTS   AGE
cce7days-configmap-fudonghai-79c5584d67-cmd9f   0/1       ContainerCreating   0          2s
[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day6-configmap.yaml 
configmap/special-config created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                                            READY     STATUS              RESTARTS   AGE
cce7days-configmap-fudonghai-79c5584d67-cmd9f   0/1       ContainerCreating   0          33s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                                            READY     STATUS              RESTARTS   AGE
cce7days-configmap-fudonghai-79c5584d67-cmd9f   0/1       ContainerCreating   0          34s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                                            READY     STATUS    RESTARTS   AGE
cce7days-configmap-fudonghai-79c5584d67-cmd9f   1/1       Running   0          41s

Enter the container to check: each key's value becomes the file content and path becomes the file name. path may contain multiple directory levels, e.g. welcome/how, where welcome is a directory and how is the final file.

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it cce7days-configmap-fudonghai-57bf99985d-bjd4w /bin/sh
# cd /tmp
# ls
welcome
# cd welcome
# ls
how
# cat how
very# 
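
Besides the volume mount used above, the same ConfigMap keys can also be injected as environment variables. A minimal sketch of the container section (container and ConfigMap names match the ones above, everything else unchanged):

```yaml
containers:
- image: 'nginx:latest'
  name: container-0
  env:
  - name: SPECIAL_HOW          # env var name inside the container (chosen here for illustration)
    valueFrom:
      configMapKeyRef:
        name: special-config   # the ConfigMap created above
        key: special.how       # its value "very" is injected at pod start
```

Note one difference: a mounted ConfigMap is eventually refreshed in the container when the ConfigMap changes, while environment variables are fixed at pod start.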

 

See the after-class lab for the PVC-based test

Cloud disk (EVS) storage classes

[root@cce-7day-fudonghai-24106 027day]# kubectl get storageclass
NAME              PROVISIONER                     AGE
efs-performance   flexvolume-huawei.com/fuxiefs   29d
efs-standard      flexvolume-huawei.com/fuxiefs   29d
nfs-rw            flexvolume-huawei.com/fuxinfs   29d
obs-standard      flexvolume-huawei.com/fuxiobs   29d
obs-standard-ia   flexvolume-huawei.com/fuxiobs   29d
sas               flexvolume-huawei.com/fuxivol   29d
sata              flexvolume-huawei.com/fuxivol   29d
ssd               flexvolume-huawei.com/fuxivol   29d

 

 

After-class lab

1. Deploy a StatefulSet application that uses a persistent volume, declaring the required storage size (1Gi) and access mode (RWX) through a PVC.

Create the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-evs-auto
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: sata    # cloud disk type
    volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxivol
  labels:
    failure-domain.beta.kubernetes.io/region: cn-east-2
    failure-domain.beta.kubernetes.io/zone: cn-east-2a
spec:
  accessModes:    # access mode
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
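
The volume.beta.kubernetes.io/storage-class annotation used here is the older, beta-era form; in current Kubernetes the storage class is normally requested through the spec.storageClassName field instead. A hedged, equivalent sketch of the same claim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-evs-auto
  namespace: default
spec:
  storageClassName: sata   # replaces the volume.beta.kubernetes.io/storage-class annotation
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```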

Create and inspect

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day6-pvc.yaml 
persistentvolumeclaim/pvc-evs-auto created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pvc
NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-evs-auto   Bound     pvc-927bc7a9-b29c-11e9-8bbd-fa163eb87d99   1Gi        RWX            sata           15s

StatefulSet YAML

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cce7days-app11-fudonghai
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  serviceName: cce7days-app11-fudonghai-headless
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: cce7days-app11-fudonghai
      failure-domain.beta.kubernetes.io/region: cn-east-2
      failure-domain.beta.kubernetes.io/zone: cn-east-2a
  template:
    metadata:
      labels:
        app: cce7days-app11-fudonghai
        failure-domain.beta.kubernetes.io/region: cn-east-2
        failure-domain.beta.kubernetes.io/zone: cn-east-2a
    spec:
      affinity: {}
      containers:
      - image: swr.cn-east-2.myhuaweicloud.com/fudonghai/tank:v1.0
        imagePullPolicy: IfNotPresent
        name: container-0
        resources: {}
        volumeMounts:
        - mountPath: /tmp
          name: pvc-evs-example
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: default-secret  
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: pvc-evs-example
          persistentVolumeClaim:
            claimName: pvc-evs-auto  # name of the PVC created above
  updateStrategy:
    type: RollingUpdate
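
This example binds one pre-created PVC by name. For StatefulSets with more than one replica, the more common pattern is volumeClaimTemplates, which stamps out one PVC per pod automatically; a minimal sketch with field values mirroring the PVC above:

```yaml
  volumeClaimTemplates:        # replaces the volumes: section plus the pre-created PVC
  - metadata:
      name: pvc-evs-example    # each replica gets its own PVC named pvc-evs-example-<pod-name>
      annotations:
        volume.beta.kubernetes.io/storage-class: sata
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
```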

Create and inspect

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day6-statefulset.yaml 
statefulset.apps/cce7days-app11-fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                                            READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
cce7days-app11-fudonghai-0                      1/1       Running   0          43s       172.16.0.26   192.168.0.184   <none>           <none>

Operate inside the container

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -ti cce7days-app11-fudonghai-0 /bin/sh
# df
Filesystem                                                                                       1K-blocks    Used Available Use% Mounted on
/dev/mapper/docker-253:1-788010-e5d376529d38bb6e9fa8e5a2fc33894198c53f52d559787ae67cdf0259fad6de  10475520  152016  10323504   2% /
tmpfs                                                                                                65536       0     65536   0% /dev
tmpfs                                                                                              1940016       0   1940016   0% /sys/fs/cgroup
/dev/sda                                                                                            999320    2564    927944   1% /tmp

# echo "this is a test" > /tmp/test.txt
# cat /tmp/test.txt
this is a test
# exit

Exit the container and delete the pod

[root@cce-7day-fudonghai-24106 027day]# kubectl delete pod cce7days-app11-fudonghai-0
pod "cce7days-app11-fudonghai-0" deleted
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                         READY     STATUS              RESTARTS   AGE
cce7days-app11-fudonghai-0   0/1       ContainerCreating   0          3s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                         READY     STATUS    RESTARTS   AGE
cce7days-app11-fudonghai-0   1/1       Running   0          64s

Enter the pod again to check whether the file created earlier is still there

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -ti cce7days-app11-fudonghai-0 /bin/sh
# cat /tmp/test.txt
this is a test

When the task is done, delete the PVC and StatefulSet promptly

[root@cce-7day-fudonghai-24106 027day]# kubectl delete -f day6-statefulset.yaml 
statefulset.apps "cce7days-app11-fudonghai" deleted
[root@cce-7day-fudonghai-24106 027day]# kubectl delete -f day6-pvc.yaml 
persistentvolumeclaim "pvc-evs-auto" deleted

 

 

Part 7

Day 7: Kubernetes Security Principles

 Lesson 6: K8s Security Management Lab

 

In-class lab

1. NetworkPolicy

NetworkPolicy YAML

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wangbo
  namespace: default
spec:
  podSelector:
    matchLabels:       # the nginx pod joins this NetworkPolicy via the role: db label
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:     # only allow access from pods labeled role: frontend
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80   # nginx listens on port 80
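
Note the NetworkPolicy semantics demonstrated below: as soon as any NetworkPolicy selects a pod, that pod rejects all ingress traffic not explicitly allowed by some policy. A common companion is a namespace-wide default-deny policy; a minimal sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}      # empty selector: applies to every pod in the namespace
  policyTypes:
  - Ingress            # no ingress rules listed, so all ingress is denied
```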

 Create a stateless workload (nginx) via the CCE console, then inspect it

[root@cce-7day-fudonghai-24106 027day]# kubectl get deploy
NAME      READY     UP-TO-DATE   AVAILABLE   AGE
nginx     1/1       1            1           6s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
nginx-656f6bf4f8-zmf52   1/1       Running   0          13s
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -owide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE   READINESS GATES
nginx-656f6bf4f8-zmf52   1/1       Running   0          37s       172.16.0.28   192.168.0.184   <none>           <none>

Create a pod named fudonghai with no special labels; nginx can be accessed normally from inside it

apiVersion: v1
kind: Pod
metadata:
  name: fudonghai
  labels:
    run: normal
spec:
  containers:
  - name: euleros
    image: euleros:2.2.5
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f normal-pod.yaml 
pod/fudonghai created
[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it fudonghai /bin/sh
sh-4.2# curl 172.16.0.28:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Label the nginx pod so that the NetworkPolicy applies to it

[root@cce-7day-fudonghai-24106 027day]# kubectl edit pod nginx-656f6bf4f8-zmf52
pod/nginx-656f6bf4f8-zmf52 edited

Add the following labels:
  labels:
    role: db

Now accessing nginx from the fudonghai pod fails

[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it fudonghai sh
sh-4.2# curl 172.16.0.28:80

^C

 

Edit the labels of pod fudonghai again; once role: frontend is added, access works again

[root@cce-7day-fudonghai-24106 027day]# kubectl edit pod fudonghai
pod/fudonghai edited
[root@cce-7day-fudonghai-24106 027day]# kubectl exec -it fudonghai sh
sh-4.2# curl 172.16.0.28:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
sh-4.2# exit

 

 

課下實驗部分

 1. ServiceAccount authentication: use kubectl to create a read-only user for the CCE cluster that only has permission to query pods in a specified namespace.

 Create namespace cce

[root@cce-7day-fudonghai-24106 027day]# kubectl create namespace cce
namespace/cce created
[root@cce-7day-fudonghai-24106 027day]# kubectl get ns
NAME          STATUS    AGE
cce           Active    4s
default       Active    30d
kube-public   Active    30d
kube-system   Active    30d

Create a ServiceAccount (sa) in the cce namespace and obtain the token from its associated secret

[root@cce-7day-fudonghai-24106 027day]# kubectl create sa cce-service-account -ncce
serviceaccount/cce-service-account created
[root@cce-7day-fudonghai-24106 027day]# kubectl get sa -ncce
NAME                  SECRETS   AGE
cce-service-account   1         14s
default               1         5m1s

Get the name of the secret associated with the sa:

[root@cce-7day-fudonghai-24106 027day]# kubectl get sa cce-service-account -ncce -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2019-07-31T06:58:21Z
  name: cce-service-account
  namespace: cce
  resourceVersion: "7070120"
  selfLink: /api/v1/namespaces/cce/serviceaccounts/cce-service-account
  uid: 950cea18-b360-11e9-8bbd-fa163eb87d99
secrets:
- name: cce-service-account-token-8hgjd

Extract the token from the secret and base64-decode it to get the plaintext token:

[root@cce-7day-fudonghai-24106 027day]# token=`kubectl get secret cce-service-account-token-8hgjd -ncce -oyaml |grep token: | awk '{print $2}' | xargs echo -n | base64 -d`
[root@cce-7day-fudonghai-24106 027day]# echo $token
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjY2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiY2NlLXNlcnZpY2UtYWNjb3VudC10b2tlbi04aGdqZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjY2Utc2VydmljZS1hY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTUwY2VhMTgtYjM2MC0xMWU5LThiYmQtZmExNjNlYjg3ZDk5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNjZTpjY2Utc2VydmljZS1hY2NvdW50In0.H_6utsk_IqTzutOmjUCDvaIMUQ0F4W7PlyH41cO7JC2M6H4-ZS5AnRIwbP4E-5-b5lvof4d6UZv6MsvqgHWUUQwDRe5Ju21eBsED_pz7nltKNWgJhDKx50gxMytdelf0mDfznccSs66Sly_JJ48yF8636Q4XuNwaO1qPfnOvqWRUCDlDnJkza73l8qYUotQMZ9zjLXjOMnAo1DM7RPI2zGlv3c8KCSqqj6UYU2Jg_u8dz9nzOnWjZCkg5dRs5DBtKdkhnlCNnlC7fH8I7OkS-URq_rvxrAqWkPIWD2H3qKzZks6Q4Ydp-zdCnD_f4Va-ICxqjHBPGThKX_Xt1wSwYQ
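
The extraction pipeline above can be exercised locally against a dummy secret manifest, with no cluster needed. The file path, secret contents, and token value here are made up for illustration; only the grep/awk/base64 chain mirrors the real command:

```shell
# Fabricated stand-in for the output of `kubectl get secret ... -oyaml`.
cat > /tmp/dummy-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
data:
  ca.crt: Zm9v
  token: c2VjcmV0LXRva2Vu
EOF

# Same chain as above: pick the token line, take the value column,
# strip the trailing newline, and base64-decode it.
token=$(grep 'token:' /tmp/dummy-secret.yaml | awk '{print $2}' | xargs echo -n | base64 -d)
echo "$token"   # prints: secret-token
```

In the real cluster the decoded value is the JWT shown above, which the API server accepts as a bearer token for the ServiceAccount.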

Add the cce-user user

[root@cce-7day-fudonghai-24106 027day]# kubectl config set-cluster cce-viewer --server=https://192.168.0.252:5443 --certificate-authority=/var/paas/srv/kubernetes/ca.crt 
Cluster "cce-viewer" set.
[root@cce-7day-fudonghai-24106 027day]# kubectl config set-context cce-viewer --cluster=cce-7days-fudonghai
Context "cce-viewer" created.
[root@cce-7day-fudonghai-24106 027day]# kubectl set-credentials cce-user --token=$token
Error: unknown command "set-credentials" for "kubectl"
Run 'kubectl --help' for usage.
unknown command "set-credentials" for "kubectl"
[root@cce-7day-fudonghai-24106 027day]# kubectl config set-credentials cce-user --token=$token
User "cce-user" set.
[root@cce-7day-fudonghai-24106 027day]# kubectl config set-context cce-viewer --user=cce-user
Context "cce-viewer" modified.

The following command shows the newly created context:

[root@cce-7day-fudonghai-24106 027day]# kubectl config get-contexts
CURRENT   NAME         CLUSTER               AUTHINFO   NAMESPACE
          cce-viewer   cce-7days-fudonghai   cce-user   
*         internal     internalCluster       user       

Grant cce-user a read-only Role and bind the corresponding ServiceAccount with a RoleBinding

[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day7-role.yaml 
role.rbac.authorization.k8s.io/pod-reader created
[root@cce-7day-fudonghai-24106 027day]# kubectl create -f day7-rolebinding.yaml 
rolebinding.rbac.authorization.k8s.io/pod-reader-binding created

role.yaml: the Role defines the resources and the verbs permitted on them. Authorization mode: RBAC (Role-Based Access Control)

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: cce
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

rolebinding.yaml: the RoleBinding binds the Role to the ServiceAccount

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: cce
subjects:
- kind: ServiceAccount
  name: cce-service-account # the ServiceAccount created in step 2
  namespace: cce
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Switch the context to cce-viewer and verify the permission setup:

[root@cce-7day-fudonghai-24106 027day]# kubectl config use-context cce-viewer
Switched to context "cce-viewer".

Listing pods in the default namespace should return a 403 Forbidden error

[root@cce-7day-fudonghai-24106 027day]# kubectl get pod

But we get this error instead:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Fix it with the following command:

export KUBERNETES_MASTER=https://192.168.0.252:5443

Running kubectl get pods again produces a new error:
No resources found.
Unable to connect to the server: x509: certificate signed by unknown authority

Finally, skipping TLS verification with the flag below gets past the x509 error, and we at last see the expected permission error

[root@cce-7day-fudonghai-24106 027day]# kubectl get pods --insecure-skip-tls-verify=true
No resources found.
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:cce:cce-service-account" cannot list resource "pods" in API group "" in the namespace "default"

Next, list pods in namespace cce; the permission works as expected

[root@cce-7day-fudonghai-24106 027day]# kubectl get pods -ncce --insecure-skip-tls-verify=true
No resources found.

Use the following command to switch back to the admin context:

[root@cce-7day-fudonghai-24106 027day]# kubectl config use-context internal
Switched to context "internal".
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod
No resources found.
[root@cce-7day-fudonghai-24106 027day]# kubectl get pod -ncce
No resources found.

 View the configuration produced by all the operations above

[root@cce-7day-fudonghai-24106 027day]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/paas/srv/kubernetes/ca.crt
    server: https://192.168.0.252:5443
  name: cce-viewer
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.0.252:5443
  name: internalCluster
contexts:
- context:
    cluster: cce-7days-fudonghai
    user: cce-user
  name: cce-viewer
- context:
    cluster: internalCluster
    user: user
  name: internal
current-context: internal
kind: Config
preferences: {}
users:
- name: cce-user
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjY2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiY2NlLXNlcnZpY2UtYWNjb3VudC10b2tlbi04aGdqZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjY2Utc2VydmljZS1hY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTUwY2VhMTgtYjM2MC0xMWU5LThiYmQtZmExNjNlYjg3ZDk5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNjZTpjY2Utc2VydmljZS1hY2NvdW50In0.H_6utsk_IqTzutOmjUCDvaIMUQ0F4W7PlyH41cO7JC2M6H4-ZS5AnRIwbP4E-5-b5lvof4d6UZv6MsvqgHWUUQwDRe5Ju21eBsED_pz7nltKNWgJhDKx50gxMytdelf0mDfznccSs66Sly_JJ48yF8636Q4XuNwaO1qPfnOvqWRUCDlDnJkza73l8qYUotQMZ9zjLXjOMnAo1DM7RPI2zGlv3c8KCSqqj6UYU2Jg_u8dz9nzOnWjZCkg5dRs5DBtKdkhnlCNnlC7fH8I7OkS-URq_rvxrAqWkPIWD2H3qKzZks6Q4Ydp-zdCnD_f4Va-ICxqjHBPGThKX_Xt1wSwYQ
- name: user
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

 

 

Part 8

 Lesson 7: K8s Cluster Operations and Installation/Configuration Lab

 Lesson 8: K8s Troubleshooting Lab

 

In-class lab

 

After-class lab
