In the previous article we covered resource labels, label selectors and resource annotations in k8s; for a refresher see http://www.javashuo.com/article/p-yesexyfo-ny.html. Today we will talk about the core pod resource: its lifecycle, liveness/readiness probing, and pod resource limits.
1. Pod lifecycle
The pod lifecycle is the span of time from the moment a pod starts being created until it exits; this whole period, from start to finish, is what we call the pod's lifecycle. Its general flow is roughly as shown in the diagram below.
Tip: the diagram above describes what a pod goes through from creation to exit. At a high level the lifecycle has two phases: the first is the init containers, the second is the entire lifecycle of the main container. The main container's lifecycle is itself split into three stages. The first runs the post start hook, i.e. the things that must happen immediately after the main container starts; the second is the normal running stage, during which we can define liveness and readiness checks for the container; the third runs the pre stop hook, which handles the work that must be done just before the container exits. Note that a pod may define multiple init containers, and they run strictly one after another; only when all init containers have finished does the main container start. For the liveness and readiness checks we can also set a delay before the first probe, because a container may already show a running state while the program inside has not finished initializing; probing immediately could report it as unhealthy and cause an unnecessary restart.
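To make that structure concrete, here is a minimal skeleton (just a sketch; the name, image and commands are placeholders, but the field names are the real pod spec fields) showing where init containers, the two hooks and the two probes live in a manifest:

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-skeleton          # placeholder name
spec:
  initContainers:                   # phase 1: run one after another before the main container
  - name: init-demo
    image: busybox
    command: ["/bin/sh", "-c", "echo init done"]
  containers:
  - name: main
    image: nginx:1.14-alpine
    lifecycle:
      postStart:                    # runs right after the main container starts
        exec:
          command: ["/bin/sh", "-c", "echo started"]
      preStop:                      # runs just before the container is stopped
        exec:
          command: ["/bin/sh", "-c", "echo stopping"]
    livenessProbe:                  # periodic health check; failure restarts the container
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5        # delay before the first probe, as discussed above
    readinessProbe:                 # periodic readiness check; failure only marks the pod unready
      httpGet:
        path: /
        port: 80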
2. Pod creation process
Tip: first the user submits a request to the apiserver through a client tool. When the apiserver receives it, it tries to write the submitted content into etcd; once etcd confirms the write, the apiserver responds to the client that the resource has been created. The apiserver then notifies the scheduler (via its watch mechanism) that a new pod needs to be created and asks it to decide which node is suitable; the scheduler makes its scheduling decision and reports the result back to the apiserver, which saves the scheduling information into etcd. Next the apiserver notifies the kubelet on the node that was selected; that kubelet immediately calls docker and starts the container. Once the container is running, docker reports its status back to the kubelet, the kubelet reports it to the apiserver, and the apiserver persists the container's status in etcd. Finally, after the status in etcd has been updated, the apiserver acknowledges the completed update back to the kubelet on that node.
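If you want to watch this flow for yourself, the scheduling and container-start steps are recorded as events; one simple way to follow them (the pod name is the one from the example below) is:

# in one terminal, stream cluster events while you create the pod in another
kubectl get events --watch

# after creation, the Events section at the bottom lists Scheduled / Pulled / Created / Started
kubectl describe pod nginx-pod-demo6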
3. Defining init containers in a resource manifest
[root@master01 ~]# cat pod-demo6.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-demo6
  namespace: default
  labels:
    app: nginx
    env: testing
  annotations:
    descriptions: "this is test pod "
spec:
  containers:
  - image: nginx:1.14-alpine
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      hostPort: 8080
      name: web
      protocol: TCP
  initContainers:
  - name: init-something
    image: busybox
    command:
    - /bin/sh
    - -c
    - "sleep 60"
[root@master01 ~]#
Tip: to define init containers in a manifest, use the initContainers field under spec; its value is a list of objects, and init containers are defined in much the same way as the main containers. The init container above does just one thing, sleep 60, which means that before the main container starts, the operations in the init container must complete; only then does the main container begin running. If several init containers are defined, the main container starts only after all of them have finished, as the sketch below shows.
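A quick way to see the init phase in action (assuming the pod-demo6.yaml above; Init:0/1 is how kubectl normally renders a pod that is still waiting on its first init container):

kubectl apply -f pod-demo6.yaml
kubectl get pod nginx-pod-demo6 -w
# for roughly the first 60 seconds the STATUS column should show Init:0/1,
# then it switches to Running once the init container's sleep finishes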
4. Using the pod lifecycle's two hook functions
postStart: this hook defines what should happen immediately after the main container starts, for example running a command or creating a file. Note that whatever postStart does runs against the main container, so any command or operation it performs must be something the main container can actually execute.
Example: run an nginx container and, right after it starts, create a file under its html directory to serve as a custom test page
[root@master01 ~]# cat pod-demo7.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-demo7
  namespace: default
  labels:
    app: nginx
    env: testing
  annotations:
    descriptions: "this is test pod "
spec:
  containers:
  - image: nginx:1.14-alpine
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      hostPort: 8080
      name: web
      protocol: TCP
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - "echo 'this is test page' > /usr/share/nginx/html/test.html"
[root@master01 ~]#
Tip: to define what should happen after the main container starts, use the lifecycle field under that container; its postStart field specifies the action, and its value is an object. exec means the action is running a command, and command specifies the command to run. Besides exec, you can also use httpGet to send an HTTP request to a URL inside the current container, or tcpSocket to connect to a host and port; if host is not specified, the request goes to the pod's own IP. A sketch of those two variants follows.
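For reference, this is roughly what the postStart hook would look like with the httpGet and tcpSocket handlers instead of exec (a sketch only; the path and port values are placeholders, not part of pod-demo7.yaml):

lifecycle:
  postStart:
    httpGet:              # send an HTTP GET right after the container starts
      path: /healthz      # placeholder path
      port: 80
      scheme: HTTP

lifecycle:
  postStart:
    tcpSocket:            # open a TCP connection right after the container starts
      port: 80            # host defaults to the pod's own IP when omitted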
Apply the pod-demo7.yaml manifest
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   2          7d19h
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   3          7d19h
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   2          7d19h
ngx-dep-5c8d96d457-w6nss     1/1     Running   2          7d20h
[root@master01 ~]# kubectl apply -f pod-demo7.yaml
pod/nginx-pod-demo7 created
[root@master01 ~]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   2          7d19h   10.244.1.12   node01.k8s.org   <none>           <none>
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   3          7d19h   10.244.3.13   node03.k8s.org   <none>           <none>
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   2          7d19h   10.244.2.8    node02.k8s.org   <none>           <none>
nginx-pod-demo7              1/1     Running   0          6s      10.244.1.13   node01.k8s.org   <none>           <none>
ngx-dep-5c8d96d457-w6nss     1/1     Running   2          7d20h   10.244.2.9    node02.k8s.org   <none>           <none>
[root@master01 ~]#
Verify: access the pod and check whether test.html can be reached
[root@master01 ~]# curl 10.244.1.13/test.html
this is test page
[root@master01 ~]#
Tip: requesting the pod's IP address, we can retrieve the content of the file that was created right after the container started.
preStop: this hook defines what should be done before the container terminates. It is used the same way as postStart, under the container's lifecycle field, and it can likewise use exec to run a command, httpGet to send a request to a URL in the container, or tcpSocket to connect to a socket.
Example: run an echo command before the container terminates
[root@master01 ~]# cat pod-demo8.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-demo8
  namespace: default
  labels:
    app: nginx
    env: testing
  annotations:
    descriptions: "this is test pod "
spec:
  containers:
  - image: nginx:1.14-alpine
    imagePullPolicy: IfNotPresent
    name: nginx
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - "echo 'this is test page' > /usr/share/nginx/html/test.html"
      preStop:
        exec:
          command: ["/bin/sh","-c","echo goodbye.."]
[root@master01 ~]#
5. Pod termination process
Tip: the user sends a delete-pod instruction to the apiserver through a client tool. When the apiserver receives it, it first writes the operation into etcd and sets the pod's grace period; once etcd has saved the data and responded, the apiserver tells the client that the pod has been marked as terminating. The apiserver then notifies the endpoints controller that the pod is terminating, so the pod is removed from every service associated with it (in k8s a service does not select pods directly; it is associated with an endpoints object, and the endpoints object in turn points at the pods). The apiserver also notifies the kubelet on the node where the pod runs; that kubelet sends a TERM signal to the containers in the pod and then runs the operations defined in preStop. If the pod still has not exited when the grace period expires, the apiserver tells that kubelet the grace period has run out, the kubelet sends SIGKILL to force the containers to stop, docker removes the containers and reports back, and finally the apiserver deletes all of the pod's information from etcd.
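The grace period mentioned above defaults to 30 seconds. It can be tuned per pod or overridden at delete time; a minimal sketch (the values are illustrative only):

spec:
  terminationGracePeriodSeconds: 60   # how long to wait after TERM before SIGKILL
  containers:
  - name: nginx
    image: nginx:1.14-alpine

# or override it for a single delete
kubectl delete pod nginx-pod-demo8 --grace-period=10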
6. Pod liveness (health) probes
A pod liveness probe checks whether the pod is healthy; if it is not, the corresponding container is restarted. Liveness probing is a periodic task: whenever the pod is found to be unhealthy, it gets restarted. k8s supports three probe methods. The first executes a command: only if the command exits with code 0 is the pod considered healthy, otherwise it is unhealthy. The second uses httpGet to request a URL inside the pod's container: the pod is healthy only if the response status code indicates success (200 up to, but not including, 400). The third uses tcpSocket to connect to a socket: the pod is healthy only if the socket accepts the connection. Which method to use depends on the service and business logic running inside the pod.
Example: use exec to run a command as the liveness probe
[root@master01 ~]# cat pod-demo9.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  namespace: default
  labels:
    app: nginx
    env: testing
  annotations:
    descriptions: "this is test pod "
spec:
  containers:
  - image: nginx:1.14-alpine
    imagePullPolicy: IfNotPresent
    name: nginx
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - "echo 'this is test page' > /usr/share/nginx/html/test.html"
      preStop:
        exec:
          command: ["/bin/sh","-c","echo goodbay.."]
    livenessProbe:
      exec:
        command: ["/usr/bin/test","-e","/usr/share/nginx/html/test.html"]
[root@master01 ~]#
Tip: to define a liveness probe in a manifest, use the livenessProbe field, whose value is an object. The configuration above checks whether /usr/share/nginx/html/test.html exists: if it does, the pod is considered healthy, otherwise it is not.
Apply the manifest
[root@master01 ~]# kubectl apply -f pod-demo9.yaml
pod/liveness-exec created
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   0          4s
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          4h45m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
[root@master01 ~]#
Tip: the pod is now running normally and its restart count is 0.
Test: exec into the pod and delete the test.html file. Will the pod still be considered healthy? Will its restart count stay at 0?
[root@master01 ~]# kubectl exec liveness-exec -- rm -f /usr/share/nginx/html/test.html
Check the pod's status
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   1          2m45s
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          4h48m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
[root@master01 ~]#
Tip: the pod's restart count has become 1, which means the pod's container was restarted.
Check the pod's details (kubectl describe pod liveness-exec)
Tip: from the pod's events we can see that when the liveness check fails, the container is restarted; after the restart the postStart hook recreates the file, so the next checks pass and the pod is healthy again.
Example: use httpGet to probe whether the pod is healthy
[root@master01 ~]# cat liveness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget
  namespace: default
  labels:
    app: nginx
    env: testing
  annotations:
    descriptions: "this is test pod "
spec:
  containers:
  - image: nginx:1.14-alpine
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - name: http
      containerPort: 80
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - "echo 'this is test page' > /usr/share/nginx/html/test.html"
      preStop:
        exec:
          command: ["/bin/sh","-c","echo goodbay.."]
    livenessProbe:
      httpGet:
        path: /test.html
        port: http
        scheme: HTTP
      failureThreshold: 2
      initialDelaySeconds: 2
      periodSeconds: 3
[root@master01 ~]#
Tip: failureThreshold sets the failure threshold, i.e. after how many consecutive failures the pod is marked unhealthy (default 3); initialDelaySeconds sets how long to wait after the container starts before the first probe; periodSeconds sets the probe interval (default 10 seconds, minimum 1 second). The manifest above sends a request to /test.html inside the pod's container; a successful response means the pod is healthy, otherwise it is not. httpGet must specify a port, and the port may reference the name of a port defined on the container above. The other tuning knobs available on a probe are sketched below.
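Besides the three fields used above, a probe also accepts timeoutSeconds and successThreshold; the full set of tuning parameters looks roughly like this (the values here are illustrative only):

livenessProbe:
  httpGet:
    path: /test.html
    port: http
  initialDelaySeconds: 2    # wait this long after the container starts before the first probe
  periodSeconds: 3          # probe interval (default 10s, minimum 1s)
  timeoutSeconds: 1         # how long a single probe may take before it counts as a failure (default 1s)
  successThreshold: 1       # consecutive successes needed to be healthy again (must be 1 for liveness)
  failureThreshold: 2       # consecutive failures needed to be marked unhealthy (default 3)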
Apply the liveness-httpget.yaml manifest
[root@master01 ~]# kubectl apply -f liveness-httpget.yaml
pod/liveness-httpget created
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   2          29m
liveness-httpget             1/1     Running   0          5s
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          5h15m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
[root@master01 ~]#
Verify: delete test.html inside the pod and see whether the pod gets restarted
[root@master01 ~]# kubectl exec liveness-httpget -- rm -rf /usr/share/nginx/html/test.html
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   2          30m
liveness-httpget             1/1     Running   1          97s
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          5h16m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
[root@master01 ~]#
Tip: the pod has indeed been restarted.
Check the pod's details (kubectl describe pod liveness-httpget)
Tip: the pod's events show the liveness probe failing and the container being restarted.
Example: use tcpSocket to probe the pod's health
[root@master01 ~]# cat liveness-tcpsocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcpsocket
  namespace: default
  labels:
    app: nginx
    env: testing
  annotations:
    descriptions: "this is test pod "
spec:
  containers:
  - image: nginx:1.14-alpine
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: http
      failureThreshold: 2
      initialDelaySeconds: 2
      periodSeconds: 3
[root@master01 ~]#
Tip: when probing with tcpSocket, if the host field is not specified the probe connects to the pod's own IP.
Apply the resource manifest
[root@master01 ~]# kubectl apply -f liveness-tcpsocket.yaml
pod/liveness-tcpsocket created
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   2          42m
liveness-httpget             1/1     Running   1          12m
liveness-tcpsocket           1/1     Running   0          5s
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          5h27m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
[root@master01 ~]#
Test: exec into the pod's container and change nginx's listening port to 81; does the pod get restarted?
[root@master01 ~]# kubectl exec liveness-tcpsocket -it -- /bin/sh
/ # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
/ # grep "listen" /etc/nginx/conf.d/default.conf
    listen 80;
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
/ # sed -i 's@ listen.*@ listen 81;@g' /etc/nginx/conf.d/default.conf
/ # grep "listen" /etc/nginx/conf.d/default.conf
    listen 81;
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
/ # nginx -s reload
2020/12/16 11:49:51 [notice] 35#35: signal process started
/ # command terminated with exit code 137
[root@master01 ~]#
Tip: after we changed the configuration to make nginx listen on port 81, the shell session was kicked out within a few seconds (exit code 137 indicates the container was killed).
Check whether the pod was restarted (kubectl describe pod liveness-tcpsocket)
Tip: the pod's events report that the liveness probe could not connect to 10.244.3.22:80, and the container was restarted.
7. Pod readiness probes
A pod readiness probe checks whether the pod is ready to serve traffic, and it is the main basis on which a service decides whether to route to a backend pod: if the pod is not ready, the service should not send it traffic, otherwise users accessing the service might hit a pod whose application is unavailable. The biggest difference from a liveness check is the consequence: when a liveness check fails, the pod's container is restarted, whereas a readiness check never restarts anything; if the pod is not ready, it is simply not served traffic. Readiness probes support the same three mechanisms (exec, httpGet, tcpSocket) as liveness probes; the sketch below shows how the effect on a service can be observed.
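To see that effect, you could put a service in front of one of the example pods and watch its endpoints; a minimal sketch (the service name readiness-svc is made up for illustration, and the selector reuses the app: nginx label from the examples above):

apiVersion: v1
kind: Service
metadata:
  name: readiness-svc       # hypothetical service name
spec:
  selector:
    app: nginx              # same label as the example pods above
  ports:
  - port: 80
    targetPort: 80

# while the pod is ready its IP is listed here; when the readiness probe fails,
# the address is removed from the endpoints without the pod being restarted
kubectl get endpoints readiness-svc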
Example: use exec to probe pod readiness
[root@master01 ~]# cat readiness-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
  namespace: default
  labels:
    app: nginx
    env: testing
  annotations:
    descriptions: "this is test pod "
spec:
  containers:
  - image: nginx:1.14-alpine
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - name: http
      containerPort: 80
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh","-c","echo 'this is test page' > /usr/share/nginx/html/test.html"]
    readinessProbe:
      exec:
        command: ["/usr/bin/test","-e","/usr/share/nginx/html/test.html"]
      failureThreshold: 2
      initialDelaySeconds: 2
      periodSeconds: 3
[root@master01 ~]#
Tip: the manifest above considers the pod ready if /usr/share/nginx/html/test.html exists, and not ready otherwise.
Apply the manifest
[root@master01 ~]# kubectl apply -f readiness-demo.yaml
pod/readiness-demo created
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   2          65m
liveness-httpget             1/1     Running   1          35m
liveness-tcpsocket           1/1     Running   1          23m
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          5h50m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
readiness-demo               0/1     Running   0          5s
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   2          65m
liveness-httpget             1/1     Running   1          36m
liveness-tcpsocket           1/1     Running   1          23m
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          5h51m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
readiness-demo               1/1     Running   0          25s
[root@master01 ~]#
Tip: after applying the manifest, the pod goes from not ready to ready.
Test: delete test.html in the pod's container; does the pod go from ready back to not ready?
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   2          67m
liveness-httpget             1/1     Running   1          37m
liveness-tcpsocket           1/1     Running   1          25m
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          5h52m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
readiness-demo               1/1     Running   0          2m3s
[root@master01 ~]# kubectl exec readiness-demo -- rm -rf /usr/share/nginx/html/test.html
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   2          67m
liveness-httpget             1/1     Running   1          38m
liveness-tcpsocket           1/1     Running   1          25m
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          5h53m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
readiness-demo               0/1     Running   0          2m36s
[root@master01 ~]#
Tip: the pod is now in the not-ready state.
Check the pod's details (kubectl describe pod readiness-demo)
Tip: the readiness failure also shows up as an event in the pod's details; unlike a liveness probe, a readiness probe does not restart the pod.
Test: recreate the test.html file; does the pod go from not ready back to ready?
[root@master01 ~]# kubectl exec readiness-demo -- touch /usr/share/nginx/html/test.html
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
liveness-exec                1/1     Running   2          72m
liveness-httpget             1/1     Running   1          42m
liveness-tcpsocket           1/1     Running   1          30m
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d
nginx-pod-demo7              1/1     Running   1          5h57m
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d
readiness-demo               1/1     Running   0          7m11s
[root@master01 ~]#
Tip: the pod is back in the ready state.
8. Pod resource limits
Pod resource limits restrict how much CPU and memory the containers in a pod may use. If a container's resource usage is not bounded, it can easily exhaust the memory of its host; once that happens the kernel may OOM-kill (out of memory) container processes, and other containers running on the same docker host may exit one after another. To avoid this kind of situation, it is worth putting resource limits on the pod's containers.
How resources are measured
CPU is a compressible resource: when CPU is insufficient nothing crashes, the pod simply waits. Memory is incompressible: when memory runs out, the program crashes and the container exits. CPU is measured in millicores (m): 1 core = 1000m, so half a core is 500m. Memory is measured in bytes by default; you can attach a unit directly, using the suffixes E, P, T, G, M, K or the binary suffixes Ei, Pi, Ti, Gi, Mi, Ki. A quick example of the equivalences follows.
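A quick worked example of the units (plain K/M/G are powers of ten, Ki/Mi/Gi are powers of two; the values below are illustrative only):

resources:
  requests:
    cpu: "0.5"        # same as 500m (half a core)
    memory: "128Mi"   # 128 * 1024 * 1024 = 134,217,728 bytes
  limits:
    cpu: 2000m        # same as "2" (two full cores)
    memory: "256M"    # decimal unit: 256 * 1000 * 1000 = 256,000,000 bytes (256Mi would be 268,435,456)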
Example: limit pod resources in a manifest
[root@master01 ~]# cat resource.yaml
apiVersion: v1
kind: Pod
metadata:
  name: stress-pod
spec:
  containers:
  - name: stress
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng", "-c 1", "-m 1", "--metrics-brief"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"
        cpu: "400m"
[root@master01 ~]#
Tip: to define resource limits for a pod, use the resources field under the container; its value is an object, where requests sets the lower bound (the amount reserved for the container) and limits sets the upper bound.
Apply the resource manifest
[root@master01 ~]# kubectl apply -f resource.yaml
pod/stress-pod created
[root@master01 ~]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
liveness-exec                1/1     Running   2          147m    10.244.3.21   node03.k8s.org   <none>           <none>
liveness-httpget             1/1     Running   1          118m    10.244.2.14   node02.k8s.org   <none>           <none>
liveness-tcpsocket           1/1     Running   1          105m    10.244.3.22   node03.k8s.org   <none>           <none>
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running   3          8d      10.244.1.16   node01.k8s.org   <none>           <none>
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running   4          8d      10.244.3.17   node03.k8s.org   <none>           <none>
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running   3          8d      10.244.2.11   node02.k8s.org   <none>           <none>
nginx-pod-demo7              1/1     Running   1          7h12m   10.244.1.14   node01.k8s.org   <none>           <none>
ngx-dep-5c8d96d457-w6nss     1/1     Running   3          8d      10.244.2.12   node02.k8s.org   <none>           <none>
readiness-demo               1/1     Running   0          82m     10.244.3.23   node03.k8s.org   <none>           <none>
stress-pod                   1/1     Running   0          13s     10.244.2.16   node02.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: stress-pod was scheduled onto node02.
Test: on node02, use the docker stats command to check the resources consumed by the stress-pod container
Tip: the k8s_stress_stress-pod_default container running on node02 is using CPU and memory within the amounts we defined in the manifest.
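To reproduce that check yourself on node02, something like the following should work (the grep pattern is just a convenient filter; kubelet names its docker containers with a k8s_ prefix):

# one-shot snapshot of the stress container's CPU/memory usage
docker stats --no-stream | grep stress

# or confirm the configured requests/limits from the k8s side
kubectl describe pod stress-pod | grep -A4 -i limits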
Example: when the container in a pod needs more memory than it is allowed, does the pod get OOM-killed?
[root@master01 ~]# cat memleak-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memleak-pod
spec:
  containers:
  - name: simmemleak
    image: saadali/simmemleak
    resources:
      requests:
        memory: "64Mi"
        cpu: "1"
      limits:
        memory: "1Gi"
        cpu: "1"
[root@master01 ~]#
Tip: the manifest above caps the container's memory at 1Gi with a request of 64Mi, and sets both the CPU request and limit to 1 core.
Apply the manifest
[root@master01 ~]# kubectl apply -f memleak-pod.yaml
pod/memleak-pod created
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS              RESTARTS   AGE
liveness-exec                1/1     Running             2          155m
liveness-httpget             1/1     Running             1          126m
liveness-tcpsocket           1/1     Running             1          113m
memleak-pod                  0/1     ContainerCreating   0          2s
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running             3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running             4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running             3          8d
nginx-pod-demo7              1/1     Running             1          7h21m
ngx-dep-5c8d96d457-w6nss     1/1     Running             3          8d
readiness-demo               1/1     Running             0          90m
stress-pod                   1/1     Running             0          8m46s
[root@master01 ~]# kubectl get pod
NAME                         READY   STATUS      RESTARTS   AGE
liveness-exec                1/1     Running     2          156m
liveness-httpget             1/1     Running     1          126m
liveness-tcpsocket           1/1     Running     1          114m
memleak-pod                  0/1     OOMKilled   0          21s
myapp-dep-5bc4d8cc74-cvkbc   1/1     Running     3          8d
myapp-dep-5bc4d8cc74-gmt7w   1/1     Running     4          8d
myapp-dep-5bc4d8cc74-gqhh5   1/1     Running     3          8d
nginx-pod-demo7              1/1     Running     1          7h21m
ngx-dep-5c8d96d457-w6nss     1/1     Running     3          8d
readiness-demo               1/1     Running     0          91m
stress-pod                   1/1     Running     0          9m5s
[root@master01 ~]#
Tip: after applying the manifest the pod ends up in the OOMKilled state, because the program in this image keeps allocating memory until it exceeds the limit.
Check the pod's details (kubectl describe pod memleak-pod)
Tip: the pod's current state is terminated with reason OOMKilled, and its last state is also terminated with reason OOMKilled.
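If you do not want to scan the whole describe output, the same reason can be read straight from the pod's status (a sketch; the jsonpath expression just pulls the container's last termination reason):

kubectl get pod memleak-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# expected to print: OOMKilled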