K8s Pod Advanced Topics

Note: this article continues the previous one, "K8s Pod resource management and building a private Harbor image registry":
https://blog.51cto.com/14464303/2471369

1. Resource limits:

Resource requests and limits for Pods and containers:

spec.containers[].resources.limits.cpu #CPU limit

spec.containers[].resources.limits.memory #memory limit

spec.containers[].resources.requests.cpu #base CPU allocated at creation

spec.containers[].resources.requests.memory #base memory allocated at creation

Example (run on master1):

[root@master1 demo]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend        #name of the Pod resource
spec:
  containers:
  - name: db        #name of container 1
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp        #name of container 2
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
#After editing, press Esc to leave insert mode, then type :wq to save and quit

`Create the resource`
[root@master1 demo]# kubectl apply -f pod2.yaml
pod/frontend created

`View detailed resource information`
[root@master1 demo]# kubectl describe pod frontend
Name:               frontend
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               192.168.18.148/192.168.18.148       #scheduled onto node1
......multiple lines omitted here
Events:
  Type    Reason     Age   From                     Message
  ----    ------     ----  ----                     -------
  Normal  Scheduled  89s   default-scheduler        Successfully assigned default/frontend to 192.168.18.148
  Normal  Pulling    88s   kubelet, 192.168.18.148  pulling image "mysql"
  Normal  Pulled     23s   kubelet, 192.168.18.148  Successfully pulled image "mysql"
  Normal  Created    23s   kubelet, 192.168.18.148  Created container
  Normal  Started    22s   kubelet, 192.168.18.148  Started container
  Normal  Pulling    22s   kubelet, 192.168.18.148  pulling image "wordpress"       #currently pulling the wordpress image

[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
frontend                            2/2     Running   0          4m26s
#Both containers are now in the Running state

`Check Pod resource usage on the scheduled node`
[root@master1 demo]# kubectl describe nodes 192.168.18.148
Name:               192.168.18.148
......multiple lines omitted here
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests     Limits
  --------  --------     ------
  cpu       550m (55%)   1100m (110%)       #CPU requests and limits
  memory    228Mi (13%)  556Mi (32%)        #memory requests and limits

`List namespaces`
[root@master1 demo]# kubectl get ns
NAME          STATUS   AGE
default       Active   13d
kube-public   Active   13d
kube-system   Active   13d
#Without specifying -n, only these three default namespaces appear

2. Restart policies:

1: Always: always restart the container after it terminates; this is the default policy

2: OnFailure: restart the container only when it exits abnormally (with a non-zero exit code)

3: Never: never restart the container after it terminates

Note: K8s does not support restarting a Pod in place; it can only be deleted and recreated
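The walkthrough below demonstrates the Always and Never policies; for completeness, a minimal sketch of the OnFailure case looks like this (the name foo-onfailure and the exit code are illustrative, not part of this walkthrough):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo-onfailure       #hypothetical name, for illustration only
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 3      #non-zero exit code, treated as an abnormal exit
  restartPolicy: OnFailure  #restart only on abnormal (non-zero) exit
```

Under OnFailure, a container that exits with status 0 stays Completed, while the non-zero exit above causes the kubelet to restart it (with increasing backoff).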

Example (run on master1):

`The default restart policy is Always`
[root@master1 demo]# kubectl edit deploy
#Type /restartPolicy to search
restartPolicy: Always       #defaults to Always when no restart policy is set

[root@master1 demo]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:       #arguments
    - /bin/sh   #run in a shell
    - -c        #pass a command string
    - sleep 30; exit 3      #sleep 30s after startup, then exit abnormally with a non-zero status code
#After editing, press Esc to leave insert mode, then type :wq to save and quit

[root@master1 demo]# kubectl apply -f pod3.yaml
pod/foo created

[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
foo                                 0/1     ContainerCreating   0          18s
#Note the RESTARTS counter, currently 0
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
foo                                 0/1     Error     0          62s
#The Error status appears because we configured the container to exit abnormally; check again shortly and RESTARTS will change to 1
`This is the restart policy being applied`
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
foo                                 1/1     Running   1          3m13s

`First delete the previously created resources, since they take up capacity`
[root@master1 demo]# kubectl delete -f pod3.yaml
pod "foo" deleted
[root@master1 demo]# kubectl delete -f pod2.yaml
pod "frontend" deleted

`Add the restart policy Never`
[root@master1 demo]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10              #change the sleep time to 10s
  restartPolicy: Never      #add the restart policy
#After modifying, press Esc to leave insert mode, then type :wq to save and quit

[root@master1 demo]# kubectl apply -f pod3.yaml
pod/foo created
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
foo                                 1/1     Running   0          14s
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS      RESTARTS   AGE
foo                                 0/1     Completed   0          65s
#The container ran to completion; because the restart policy is Never, it is not restarted and the Pod stays Completed

3. Health checks: also known as probes

Note: both kinds of probe can be defined at the same time

livenessProbe: if the check fails, the container is killed and handled according to the Pod's restartPolicy

readinessProbe: if the check fails, Kubernetes removes the Pod from the Service's backend endpoints

Probes support three check methods:

httpGet: sends an HTTP request; a status code in the 200-399 range means success

exec: runs a shell command; an exit status of 0 means success

tcpSocket: succeeds if a TCP socket connection can be established
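The example that follows exercises the exec method; as a sketch, the other two methods are declared like this (the nginx and redis images, the path, and the ports are illustrative assumptions, not part of this walkthrough):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http       #hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:              #HTTP check: a 200-399 response means healthy
        path: /index.html
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
  - name: redis             #second container, showing the tcpSocket form
    image: redis
    livenessProbe:
      tcpSocket:            #TCP check: a successful connection means healthy
        port: 6379
      initialDelaySeconds: 5
      periodSeconds: 5
```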

Example using the exec method (run on master1):

[root@master1 demo]# vim pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy     #create an empty file, sleep 30s, then delete it
    livenessProbe:
      exec:     #liveness check via a command
        command:
        - cat       #run cat
        - /tmp/healthy      #on the file created above
      initialDelaySeconds: 5    #start health checks 5s after the container starts
      periodSeconds: 5          #check every 5 seconds

`Before the sleep ends, the check returns 0; after the 30s sleep the file is gone, so the check returns a non-zero value`

`Apply the resource`
[root@master1 demo]# kubectl apply -f pod4.yaml
pod/liveness-exec created

[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS      RESTARTS   AGE
liveness-exec                       1/1     Running     0          24s
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS      RESTARTS   AGE
liveness-exec                       0/1     Completed   0          53s
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS      RESTARTS   AGE
liveness-exec                       1/1     Running     1          67s
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS             RESTARTS   AGE
liveness-exec                       0/1     CrashLoopBackOff   1          109s
[root@master1 demo]# kubectl get pods
NAME                                READY   STATUS      RESTARTS   AGE
liveness-exec                       1/1     Running     2          2m5s
#The status keeps changing because the probe keeps failing and the restart policy keeps being applied; the RESTARTS counter grows accordingly
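The walkthrough above only exercises livenessProbe. A readinessProbe is declared the same way, but a failure removes the Pod from Service endpoints instead of restarting the container. A minimal sketch (the nginx image, path, and port are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-http      #hypothetical name
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:
      httpGet:              #readiness check: 200-399 response means ready
        path: /index.html
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
#while the check fails, the Pod shows READY 0/1 and receives no Service traffic
```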