A running container needs resources allocated to it. How does this tie in with cgroups? The answer is that allocation is expressed by defining resources on the pod. Resources are measured mainly in CPU and memory, and come in two kinds: requests and limits. requests is the amount of resources a container asks for; it is the basis on which the Kubernetes scheduler initially places the pod and represents an allocation that must be satisfied. limits is the upper bound the pod may not exceed; if it tries to, the limit is enforced through cgroups. Resources in a pod can be defined with the following four fields:
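- spec.containers[].resources.requests.cpu
- spec.containers[].resources.requests.memory
- spec.containers[].resources.limits.cpu
- spec.containers[].resources.limits.memory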
1. Let's start by defining resources for a pod. The nginx-demo example below requests 250m of CPU (written in the yaml as 0.25 cores, which is equivalent) with a limit of 500m, and requests 128Mi of memory with a limit of 256Mi. A pod may of course contain multiple containers, each with its own resources; the pod's total resources are the sum over its containers:
[root@node-1 demo]#cat nginx-resource.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    name: nginx-demo
spec:
  containers:
  - name: nginx-demo
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 0.25
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
2. Apply the pod definition (if a pod with the same name already exists, delete it first with kubectl delete pod <pod-name>, or give the new pod a different name):
[root@node-1 demo]# kubectl apply -f nginx-resource.yaml
pod/nginx-demo created
3. Check the pod's resource allocation details:
[root@node-1 demo]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
demo-7b86696648-8bq7h   1/1     Running   0          12d
demo-7b86696648-8qp46   1/1     Running   0          12d
demo-7b86696648-d6hfw   1/1     Running   0          12d
nginx-demo              1/1     Running   0          94s

[root@node-1 demo]# kubectl describe pods nginx-demo
Name:         nginx-demo
Namespace:    default
Priority:     0
Node:         node-3/10.254.100.103
Start Time:   Sat, 28 Sep 2019 12:10:49 +0800
Labels:       name=nginx-demo
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-demo"},"name":"nginx-demo","namespace":"default"},"sp...
Status:       Running
IP:           10.244.2.13
Containers:
  nginx-demo:
    Container ID:   docker://55d28fdc992331c5c58a51154cd072cd6ae37e03e05ae829a97129f85eb5ed79
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 12:10:51 +0800
    Ready:          True
    Restart Count:  0
    Limits:         # resource limits
      cpu:     500m
      memory:  256Mi
    Requests:       # resource requests
      cpu:     250m
      memory:  128Mi
    Environment:  <none>
...omitted...
4. Where do a pod's resources come from? From the node, of course. When a pod is created with requests set, the Kubernetes scheduler kube-scheduler runs two phases: filtering and scoring. It first filters out the nodes that can satisfy the requested resources, then ranks the remaining ones to pick the node best suited to run the pod, and finally runs the pod on that node. For the scheduling algorithm and its details, see the introduction to the Kubernetes scheduling algorithm. Below is the resource allocation on node-3:
[root@node-1 ~]# kubectl describe node node-3
...omitted...
Capacity:            # total resources on the node: 1 CPU, ~2G of memory, up to 110 pods
 cpu:                1
 ephemeral-storage:  51473888Ki
 hugepages-2Mi:      0
 memory:             1882352Ki
 pods:               110
Allocatable:         # resources available for allocation; reserved resources are excluded from Allocatable
 cpu:                1
 ephemeral-storage:  47438335103
 hugepages-2Mi:      0
 memory:             1779952Ki
 pods:               110
System Info:
 Machine ID:                 0ea734564f9a4e2881b866b82d679dfc
 System UUID:                FFCD2939-1BF2-4200-B4FD-8822EBFFF904
 Boot ID:                    293f49fd-8a7c-49e2-8945-7a4addbd88ca
 Kernel Version:             3.10.0-957.21.3.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.3
 Kubelet Version:            v1.15.3
 Kube-Proxy Version:         v1.15.3
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (3 in total)   # pods running on the node; besides nginx-demo there are several others
  Namespace    Name                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                         ------------  ----------  ---------------  -------------  ---
  default      nginx-demo                   250m (25%)    500m (50%)  128Mi (7%)       256Mi (14%)    63m
  kube-system  kube-flannel-ds-amd64-jp594  100m (10%)    100m (10%)  50Mi (2%)        50Mi (2%)      14d
  kube-system  kube-proxy-mh2gq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
Allocated resources:         # cpu and memory already allocated
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                350m (35%)   600m (60%)
  memory             178Mi (10%)  306Mi (17%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:              <none>
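The Allocated resources totals are simply the sums of the per-pod requests and limits listed above, measured against the node's 1 allocatable CPU core and roughly 1.7Gi of allocatable memory:

CPU requests:    250m + 100m + 0  = 350m   ->  350m / 1000m          = 35%
CPU limits:      500m + 100m + 0  = 600m   ->  60%
Memory requests: 128Mi + 50Mi + 0 = 178Mi  ->  178Mi / 1779952Ki     ≈ 10%
Memory limits:   256Mi + 50Mi + 0 = 306Mi  ->  ≈ 17%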
The requests and limits defined on a pod act on the Kubernetes scheduler kube-scheduler, but the CPU and memory values themselves are applied to the container, where isolation is enforced through the container's cgroups. Next we look at how this resource allocation works under the hood.
Taking the nginx-demo pod defined above as an example, let's examine which docker parameters the pod's requests and limits translate into:
1. Find the node the pod runs on; nginx-demo has been scheduled onto node-3:
[root@node-1 ~]# kubectl get pods -o wide nginx-demo
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx-demo   1/1     Running   0          96m   10.244.2.13   node-3   <none>           <none>
2. Get the container ID, either from the Container ID field in kubectl describe pods nginx-demo, or by logging in to node-3 and filtering by name. By default the pod has two containers: one created from the pause image and one from the application image:
[root@node-3 ~]# docker container list |grep nginx
55d28fdc9923   84581e99d807           "nginx -g 'daemon of…"   2 hours ago   Up 2 hours   k8s_nginx-demonginx-demo_default_66958ef7-507a-41cd-a688-7a4976c6a71e_0
2fe0498ea9b5   k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago   Up 2 hours   k8s_POD_nginx-demo_default_66958ef7-507a-41cd-a688-7a4976c6a71e_0
3. Inspect the docker container for details:
[root@node-3 ~]# docker container inspect 55d28fdc9923
[
...partial output omitted...
    {
        "Image": "sha256:84581e99d807a703c9c03bd1a31cd9621815155ac72a7365fd02311264512656",
        "ResolvConfPath": "/var/lib/docker/containers/2fe0498ea9b5dfe1eb63eba09b1598a8dfd60ef046562525da4dcf7903a25250/resolv.conf",
        "HostConfig": {
            "Binds": [
                "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/volumes/kubernetes.io~secret/default-token-5qwmc:/var/run/secrets/kubernetes.io/serviceaccount:ro",
                "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/etc-hosts:/etc/hosts",
                "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/containers/nginx-demo/1cc072ca:/dev/termination-log"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {
                    "max-size": "100m"
                }
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 256,        # CPU weight, derived from requests.cpu
            "Memory": 268435456,     # memory limit in bytes, derived from limits.memory
            "NanoCpus": 0,
            "CgroupParent": "kubepods-burstable-pod66958ef7_507a_41cd_a688_7a4976c6a71e.slice",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 100000,     # CPU period; together with CpuQuota it enforces limits.cpu
            "CpuQuota": 50000,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 268435456,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
        },
    }
]
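These values follow directly from the pod spec: the kubelet converts the Kubernetes units into docker/cgroup settings, and the numbers above check out:

requests.cpu  = 250m   ->  CpuShares = 250 * 1024 / 1000   = 256
limits.cpu    = 500m   ->  CpuQuota  = 500 * 100000 / 1000 = 50000   (with CpuPeriod = 100000, i.e. 50% of one core)
limits.memory = 256Mi  ->  Memory    = 256 * 1024 * 1024   = 268435456 bytes

On the node these settings land in the cgroup named by CgroupParent; assuming cgroup v1, they can also be read back from files such as cpu.cfs_quota_us and memory.limit_in_bytes under /sys/fs/cgroup (the exact path depends on the cgroup driver in use).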
CPU for a pod is controlled mainly through requests.cpu and limits.cpu, where limits is the amount of CPU that cannot be exceeded. We verify this with a stress image: stress is a CPU and memory load-testing tool, and the amount of CPU to burn is set through the args parameter. Pod CPU and memory usage can be monitored with kubectl top, which depends on a monitoring component such as metrics-server or Prometheus; since neither is installed here, we use docker stats instead.
1. Define a pod from the stress image, requesting 0.25 cores with a maximum limit of 0.5 cores:
[root@node-1 demo]# cat cpu-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: default
  annotations:
    kubernetes.io/description: "demo for cpu requests and"
spec:
  containers:
  - name: stress-cpu
    image: vish/stress
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
    args:
    - -cpus
    - "1"
2. Apply the yaml file to create the pod:
[root@node-1 demo]# kubectl apply -f cpu-demo.yaml
pod/cpu-demo created
3. Check the pod's resource allocation details:
[root@node-1 demo]# kubectl describe pods cpu-demo
Name:         cpu-demo
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 14:33:12 +0800
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/description":"demo for cpu requests and"},"name":"cpu-demo","nam...
              kubernetes.io/description: demo for cpu requests and
Status:       Running
IP:           10.244.1.14
Containers:
  stress-cpu:
    Container ID:  docker://14f93767ad37b92beb91e3792678f60c9987bbad3290ae8c29c35a2a80101836
    Image:         progrium/stress
    Image ID:      docker-pullable://progrium/stress@sha256:e34d56d60f5caae79333cee395aae93b74791d50e3841986420d23c2ee4697bf
    Port:          <none>
    Host Port:     <none>
    Args:
      -cpus
      1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 28 Sep 2019 14:34:28 +0800
      Finished:     Sat, 28 Sep 2019 14:34:28 +0800
    Ready:          False
    Restart Count:  3
    Limits:         # cpu limit
      cpu:  500m
    Requests:       # cpu request
      cpu:  250m
4. Log in to the node and check the container's resource usage with docker container stats.
Viewed with top on the node hosting the pod, the container's CPU usage is capped at 50%.
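As a sketch of this check on the node (hypothetical invocation; the container ID is the stress-cpu container ID from the describe output above, and the actual output will vary):

# one-shot snapshot of the container's resource usage; CPU % should hover around 50%
docker container stats --no-stream 14f93767ad37
# or watch overall CPU usage on the node
top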
From the test above we can conclude: although the stress container was told to use 1 full core, limits.cpu caps the usable CPU at 500m, and both inside the container and on the host the pod is strictly limited to 50% (the node has only one CPU; with 2 CPUs the figure would show up as 25%).
1. Use the stress image to verify how requests.memory and limits.memory take effect. limits.memory defines how much memory the container may use; once it exceeds this amount the container is OOM-killed. The test container below may use at most 512Mi of memory, and the stress image's --vm-bytes flag sets the memory load to 256M:
[root@node-1 demo]# cat memory-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-stress-demo
  annotations:
    kubernetes.io/description: "stress demo for memory limits"
spec:
  containers:
  - name: memory-stress-limits
    image: polinux/stress
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 512Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "256M", "--vm-hang", "1"]
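For reference, the stress flags used here: --vm 1 starts one memory worker, --vm-bytes 256M makes that worker allocate 256M, and --vm-hang 1 makes it hold the allocation for a second before freeing and re-allocating, so memory usage stays close to 256M.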
2. Apply the yaml file to create the pod:
[root@node-1 demo]# kubectl apply -f memory-demo.yaml
pod/memory-stress-demo created

[root@node-1 demo]# kubectl get pods memory-stress-demo -o wide
NAME                 READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
memory-stress-demo   1/1     Running   0          41s   10.244.1.19   node-2   <none>           <none>
3. Check the resource allocation:
[root@node-1 demo]# kubectl describe pods memory-stress-demo
Name:         memory-stress-demo
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 15:13:06 +0800
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/description":"stress demo for memory limits"},"name":"memory-str...
              kubernetes.io/description: stress demo for memory limits
Status:       Running
IP:           10.244.1.16
Containers:
  memory-stress-limits:
    Container ID:  docker://c7408329cffab2f10dd860e50df87bd8671e65a0f8abb4dae96d059c0cb6bb2d
    Image:         polinux/stress
    Image ID:      docker-pullable://polinux/stress@sha256:6d1825288ddb6b3cec8d3ac8a488c8ec2449334512ecb938483fc2b25cbbdb9a
    Port:          <none>
    Host Port:     <none>
    Command:
      stress
    Args:
      --vm
      1
      --vm-bytes
      256Mi
      --vm-hang
      1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 28 Sep 2019 15:14:08 +0800
      Finished:     Sat, 28 Sep 2019 15:14:08 +0800
    Ready:          False
    Restart Count:  3
    Limits:         # memory limit
      memory:  512Mi
    Requests:       # memory request
      memory:  128Mi
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)
4. Check the container's memory usage: 256M is allocated out of a maximum of 512Mi, i.e. roughly 50% utilization. Since the limit has not been exceeded, the container runs normally.
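A sketch of this check on the node (assuming the docker CLI on node-2; the container ID is taken from the describe output above, and the output will vary):

# MEM USAGE / LIMIT should show roughly 256MiB out of 512MiB
docker container stats --no-stream c7408329cffa
# the enforced memory limit in bytes: 512Mi = 536870912
docker container inspect -f '{{.HostConfig.Memory}}' c7408329cffa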
5. What happens when the container exceeds the memory limit? We raise --vm-bytes to 520M, above the 512Mi limit; the container tries to run, is OOM-killed once it crosses the limit, and the kubelet keeps restarting it according to the pod's restartPolicy, so the RESTARTS count keeps growing.
[root@node-1 demo]# cat memory-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-stress-demo
  annotations:
    kubernetes.io/description: "stress demo for memory limits"
spec:
  containers:
  - name: memory-stress-limits
    image: polinux/stress
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 512Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "520M", "--vm-hang", "1"]   # the container now tries to use 520M

The container status becomes OOMKilled and RESTARTS keeps increasing as it is restarted again and again:

[root@node-1 demo]# kubectl get pods memory-stress-demo
NAME                 READY   STATUS      RESTARTS   AGE
memory-stress-demo   0/1     OOMKilled   3          60s
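To confirm that the container was killed by the OOM killer rather than failing for some other reason, the last termination reason can be read from the pod status; for the pod above it should print OOMKilled:

kubectl get pod memory-stress-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'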
Quality of Service (QoS) is an important factor in pod scheduling and eviction decisions. Different QoS classes provide different levels of service and correspond to different priorities. There are three QoS classes:
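- BestEffort: no container in the pod defines requests or limits; lowest priority, first to be evicted.
- Burstable: at least one container defines requests, and any requests set are lower than the corresponding limits; medium priority.
- Guaranteed: every container defines both requests and limits for cpu and memory, and the two are equal; highest priority, last to be evicted.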
1. When a pod defines no resources, its QoS class defaults to BestEffort, the lowest priority; when resources are tight and pods need to be evicted, BestEffort pods are evicted first. Below we define a BestEffort pod:
[root@node-1 demo]# cat nginx-qos-besteffort.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-besteffort
  labels:
    name: nginx-qos-besteffort
spec:
  containers:
  - name: nginx-qos-besteffort
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources: {}
2. Create the pod and check its QoS class; qosClass is BestEffort:
[root@node-1 demo]# kubectl apply -f nginx-qos-besteffort.yaml
pod/nginx-qos-besteffort created

Check the QoS class:

[root@node-1 demo]# kubectl get pods nginx-qos-besteffort -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-besteffort"},"name":"nginx-qos-besteffort","namespace":"default"},"spec":{"containers":[{"image":"nginx:1.7.9","imagePullPolicy":"IfNotPresent","name":"nginx-qos-besteffort","ports":[{"containerPort":80,"name":"nginx-port-80","protocol":"TCP"}],"resources":{}}]}}
  creationTimestamp: "2019-09-28T11:12:03Z"
  labels:
    name: nginx-qos-besteffort
  name: nginx-qos-besteffort
  namespace: default
  resourceVersion: "1802411"
  selfLink: /api/v1/namespaces/default/pods/nginx-qos-besteffort
  uid: 56e4a2d5-8645-485d-9362-fe76aad76e74
spec:
  containers:
  - image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    name: nginx-qos-besteffort
    ports:
    - containerPort: 80
      name: nginx-port-80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
...omitted...
status:
  hostIP: 10.254.100.102
  phase: Running
  podIP: 10.244.1.21
  qosClass: BestEffort   # QoS class
  startTime: "2019-09-28T11:12:03Z"
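When only the QoS class is of interest, a jsonpath query avoids scrolling through the full yaml; for this pod it prints BestEffort:

kubectl get pod nginx-qos-besteffort -o jsonpath='{.status.qosClass}'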
3. Delete the test pod:
[root@node-1 demo]# kubectl delete pods nginx-qos-besteffort
pod "nginx-qos-besteffort" deleted
1. The Burstable QoS class is second only to Guaranteed. It requires that at least one container defines requests, and that the requests are smaller than the limits:
[root@node-1 demo]# cat nginx-qos-burstable.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-burstable
  labels:
    name: nginx-qos-burstable
spec:
  containers:
  - name: nginx-qos-burstable
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
2. Apply the yaml file to create the pod and check its QoS class:
[root@node-1 demo]# kubectl apply -f nginx-qos-burstable.yaml
pod/nginx-qos-burstable created

Check the QoS class:

[root@node-1 demo]# kubectl describe pods nginx-qos-burstable
Name:         nginx-qos-burstable
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 19:27:37 +0800
Labels:       name=nginx-qos-burstable
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-burstable"},"name":"nginx-qos-burstable","namespa...
Status:       Running
IP:           10.244.1.22
Containers:
  nginx-qos-burstable:
    Container ID:   docker://d1324b3953ba6e572bfc63244d4040fee047ed70138b5a4bad033899e818562f
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 19:27:39 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  256Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-5qwmc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qwmc
    Optional:    false
QoS Class:       Burstable   # the QoS class is Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  95s   default-scheduler  Successfully assigned default/nginx-qos-burstable to node-2
  Normal  Pulled     94s   kubelet, node-2    Container image "nginx:1.7.9" already present on machine
  Normal  Created    94s   kubelet, node-2    Created container nginx-qos-burstable
  Normal  Started    93s   kubelet, node-2    Started container nginx-qos-burstable
1. For the Guaranteed class, both cpu and memory must define requests and limits, and the requests must equal the limits (if only limits are set, requests default to the same values, which also yields Guaranteed). This class has the highest priority and is protected first during scheduling and eviction. Below we define an nginx-qos-guaranteed container in which requests.cpu equals limits.cpu, and likewise requests.memory equals limits.memory:
[root@node-1 demo]# cat nginx-qos-guaranteed.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-guaranteed
  labels:
    name: nginx-qos-guaranteed
spec:
  containers:
  - name: nginx-qos-guaranteed
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 200m
        memory: 256Mi
2. Apply the yaml file to create the pod and confirm that its QoS class is Guaranteed:
[root@node-1 demo]# kubectl apply -f nginx-qos-guaranteed.yaml
pod/nginx-qos-guaranteed created

[root@node-1 demo]# kubectl describe pods nginx-qos-guaranteed
Name:         nginx-qos-guaranteed
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 19:37:15 +0800
Labels:       name=nginx-qos-guaranteed
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-guaranteed"},"name":"nginx-qos-guaranteed","names...
Status:       Running
IP:           10.244.1.23
Containers:
  nginx-qos-guaranteed:
    Container ID:   docker://cf533e0e331f49db4e9effb0fbb9249834721f8dba369d281c8047542b9f032c
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 19:37:16 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  256Mi
    Requests:
      cpu:     200m
      memory:  256Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-5qwmc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qwmc
    Optional:    false
QoS Class:       Guaranteed   # the QoS class is Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25s   default-scheduler  Successfully assigned default/nginx-qos-guaranteed to node-2
  Normal  Pulled     24s   kubelet, node-2    Container image "nginx:1.7.9" already present on machine
  Normal  Created    24s   kubelet, node-2    Created container nginx-qos-guaranteed
  Normal  Started    24s   kubelet, node-2    Started container nginx-qos-guaranteed
This chapter is the sixth article in the kubernetes tutorial series, covering resource allocation and the QoS classes. Reference material on resources:
Managing compute resources for containers: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
Assigning memory resources to pods: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/
Assigning CPU resources to pods: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/
Quality of Service for pods: https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/
Docker CPU limits: http://www.javashuo.com/article/p-qpypwyfs-cr.html