A Pod can run multiple processes (as containers) that work together. Containers in the same Pod are automatically placed on the same node; they share resources, network environment, and dependencies, and are always scheduled together.
Note that running multiple containers in one Pod is a relatively advanced pattern. Consider it only when your containers need to cooperate closely. For example, one container runs as a web server serving files from a shared volume, while a second "sidecar" container fetches resources from a remote source and keeps those files up to date.
Containers in the same Pod communicate with one another over localhost. When a Pod's containers communicate with the outside world, shared network resources must be allocated (for example, port mappings on the host).
•Infrastructure Container: the base ("pause") container, which holds the Pod's network namespace
•InitContainers: init containers, which run to completion before the application containers start (see the sketch after this list)
•Containers: the application containers, which are started in parallel
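A minimal sketch tying these pieces together — an init container plus the web-server-and-sidecar pattern described above, sharing a volume. All names, images, and commands here are illustrative, not from the original article:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  initContainers:
  - name: init-content            # runs to completion before the app containers start
    image: busybox
    command: ['sh', '-c', 'echo placeholder > /work/index.html']
    volumeMounts:
    - name: shared-data
      mountPath: /work
  containers:
  - name: web                     # serves files from the shared volume
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar                 # periodically refreshes the shared files
    image: busybox
    command: ['sh', '-c', 'while true; do date > /data/index.html; sleep 60; done']
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}

The two app containers can reach each other over localhost, and the web server picks up whatever the sidecar writes, since all three mounts point at the same emptyDir volume.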
The Pod is the most basic deployment and scheduling unit in Kubernetes. It can contain containers and logically represents an instance of some application. For example, for a web site built from a frontend, a backend, and a database, with each component running in its own container, we could create a single Pod containing those three containers.
(1) A client submits a creation request, either through the API Server's RESTful API or with the kubectl command-line tool; both JSON and YAML are supported.
(2) The API Server processes the request and stores the Pod data in etcd.
(3) The scheduler watches for unbound Pods through the API Server and tries to assign a host to each one.
(4) Host filtering (predicates): the scheduler filters out unsuitable hosts with a set of rules. For example, if the Pod specifies the resources it needs, hosts with less available capacity than that are filtered out.
(5) Host scoring (priorities): the hosts that passed filtering are scored. At this stage the scheduler applies cluster-wide optimization strategies, such as spreading replicas of the same Replication Controller across different hosts, or preferring the least-loaded host.
(6) Host selection: the highest-scoring host is chosen, the binding is performed, and the result is stored in etcd.
(7) The kubelet creates the Pod according to the scheduling result: after a successful binding, the scheduler calls the API Server's API to create a boundpod object in etcd, describing all Pods bound to run on a worker node. The kubelet on each worker node periodically syncs boundpod information with etcd; once it finds a boundpod object for its node that has not been acted on, it calls the Docker API to create and start the Pod's containers.
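Step (1) can be exercised either way; a minimal sketch of both (the API server address, token, and pod.yaml are placeholders):

# via kubectl
kubectl create -f pod.yaml

# or directly against the API Server's RESTful API
curl -X POST \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/yaml" \
  --data-binary @pod.yaml \
  https://<apiserver>:6443/api/v1/namespaces/default/pods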
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
The Kubernetes scheduler places a Pod on the node that meets its resource requirements and scores highest.
We can use several kinds of rules, for example:
1. Set CPU and memory requirements.
2. Add labels to nodes and match them strictly through pod.Spec.NodeSelector.
3. Set the Pod's nodeName directly, bypassing the scheduler and sending the Pod straight to the node.
Kubernetes 1.2 added an experimental feature: affinity. It was designed to replace nodeSelector and to support much richer scheduling policies.
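Affinity is not demonstrated in this article, but as a minimal sketch, a hard node-affinity rule (reusing the env_role label applied to nodes later in this article) looks roughly like this:

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule, checked by the predicates phase
        nodeSelectorTerms:
        - matchExpressions:
          - key: env_role
            operator: In
            values:
            - dev
  containers:
  - name: nginx
    image: nginx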
The scheduler's working mechanism is as follows:
1、預備工做 1、緩存全部的node節點,記錄他們的規格:cpu、內存、磁盤空間、gpu顯卡數等; 2、緩存全部運行中的pod,按照pod所在的node進行區分,統計每一個node上的pod request了多少資源。request是pod的QoS配置。 3、list & watch pod資源,當檢查到有新的Pending狀態的pod出現,就將它加入到調度隊列中。 4、調度器的worker組件從隊列中取出pod進行調度。 2、調度過程 1、先將當前全部的node放入隊列; 2、執行predicates算法,對隊列中的node進行篩選。這裏算法檢查了一些pod運行的必要條件,包括port不衝突、cpu和內存資源QoS(若是有的話)必須知足、掛載volume(若是有的話)類型必須匹配、nodeSelector規則必須匹配、硬性的affinity規則(下文會提到)必須匹配、node的狀態(condition)必須正常,taint_toleration硬規則(下文會提到)等等。 3、執行priorities算法,對隊列中剩餘的node進行評分,這裏有許多評分項,各個項目有各自的權重:總體cpu,內存資源的平衡性、node上是否有存在要求的鏡像、同rs的pod是否有調度、node affinity的軟規則、taint_toleration軟規則(下文會提到)等等。 4、最終評分最高的node會被選出。即代碼中suggestedHost, err := sched.schedule(pod)一句(plugin/pkg/scheduler/scheduler.go)的返回值。 5、調度器執行assume方法,該方法在pod調度到node以前,就以「該pod運行在目標node上」 爲場景更新調度器緩存中的node 信息,也即預備工做中的一、2兩點。這麼作是爲了讓pod在真正調度到node上時,調度器也能夠同時作後續其餘pod的調度工做。 6、調度器執行bind方法,該方法建立一個Binding資源,apiserver檢查到建立該資源時,會主動更新pod的nodeName字段。完成調度
Attach a label to each node
[root@k8s-master1 ~]# kubectl label nodes 192.168.0.126 env_role=dev
node/192.168.0.126 labeled
[root@k8s-master1 ~]# kubectl label nodes 192.168.0.125 env_role=test
node/192.168.0.125 labeled
[root@k8s-master1 ~]# kubectl get nodes --show-labels
NAME            STATUS   ROLES    AGE    VERSION   LABELS
192.168.0.125   Ready    <none>   2d4h   v1.13.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env_role=test,kubernetes.io/hostname=192.168.0.125
192.168.0.126   Ready    <none>   2d4h   v1.13.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env_role=dev,kubernetes.io/hostname=192.168.0.126
Add a nodeSelector field to the Pod configuration
[root@k8s-master1 ~]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    env_role: dev
[root@k8s-master1 ~]# kubectl create -f pod2.yaml
pod/nginx created
Check the Pod
[root@k8s-master1 ~]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          115s
[root@k8s-master1 ~]# kubectl describe pod nginx
Name:               nginx
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               192.168.0.126/192.168.0.126
Start Time:         Fri, 21 Dec 2018 14:07:49 +0800
Labels:             <none>
Annotations:        <none>
Status:             Running
IP:                 172.17.92.2
Containers:
  nginx:
    Container ID:   docker://8c7e442cc83b6532b3bda707f389fa371861d173e9395149bbede9e166bf559c
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:1a0043cfb1987774c6981c41e49f758c58ace64c30e1c4ecff5cedff0b5c88da
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 21 Dec 2018 14:07:50 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7vs6s (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-7vs6s:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7vs6s
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  env_role=dev
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                    Message
  ----    ------     ----   ----                    -------
  Normal  Scheduled  2m11s  default-scheduler       Successfully assigned default/nginx to 192.168.0.126
  Normal  Pulled     2m11s  kubelet, 192.168.0.126  Container image "nginx" already present on machine
  Normal  Created    2m10s  kubelet, 192.168.0.126  Created container
  Normal  Started    2m10s  kubelet, 192.168.0.126  Started container
spec.nodeName
This field forcibly constrains a Pod to a specified node. Although we speak of "scheduling" here, a Pod with nodeName set actually skips the Scheduler's logic entirely and is written straight into the node's Pod list; the match is mandatory.
[root@k8s-master1 ~]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeName: 192.168.0.125
  containers:
  - name: nginx
    image: nginx:1.15
[root@k8s-master1 ~]# kubectl create -f pod3.yaml
pod/pod-example created
Check the Pod
[root@k8s-master1 ~]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
nginx         1/1     Running   0          10m
pod-example   1/1     Running   0          22s
[root@k8s-master1 ~]# kubectl describe pod pod-example
Name:               pod-example
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               192.168.0.125/192.168.0.125
Start Time:         Fri, 21 Dec 2018 14:17:47 +0800
Labels:             app=nginx
Annotations:        <none>
Status:             Running
IP:                 172.17.19.2
Containers:
  nginx:
    Container ID:   docker://5e33eacb41e9b4ffd072141af63209229543105feb804d23b72a09be9e414409
    Image:          nginx:1.15
    Image ID:       docker-pullable://nginx@sha256:5d32f60db294b5deb55d078cd4feb410ad88e6fe77500c87d3970eca97f54dba
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 21 Dec 2018 14:17:48 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7vs6s (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-7vs6s:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7vs6s
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age  From                    Message
  ----    ------   ---  ----                    -------
  Normal  Pulled   31s  kubelet, 192.168.0.125  Container image "nginx:1.15" already present on machine
  Normal  Created  31s  kubelet, 192.168.0.125  Created container
  Normal  Started  30s  kubelet, 192.168.0.125  Started container
Reference: https://kubernetes.io/docs/concepts/containers/images/
View Docker's stored login credentials for the private registry
[root@k8s-node01 ~]# cat .docker/config.json |base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjAuMTIyIjogewoJCQkiYXV0aCI6ICJiM0J6T2xCQWMzTjNNSEprIgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wOS4wIChsaW51eCkiCgl9Cn0=
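As an alternative to hand-encoding config.json (done in the next step), kubectl can generate the same kind of Secret directly; a sketch with placeholder credentials:

kubectl create secret docker-registry registry-pull-secret \
  --docker-server=192.168.0.122 \
  --docker-username=<user> \
  --docker-password=<password>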
Create the Secret
[root@k8s-master1 tomcat]# vim registry-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjAuMTIyIjogewoJCQkiYXV0aCI6ICJiM0J6T2xCQWMzTjNNSEprIgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wOS4wIChsaW51eCkiCgl9Cn0=
type: kubernetes.io/dockerconfigjson
[root@k8s-master1 tomcat]# kubectl create -f registry-pull-secret.yaml
secret/registry-pull-secret created
[root@k8s-master1 tomcat]# kubectl get secret
NAME                   TYPE                                  DATA   AGE
default-token-7vs6s    kubernetes.io/service-account-token   3      44h
registry-pull-secret   kubernetes.io/dockerconfigjson        1      23s
When creating the Pod, configure it to pull its image from the private registry
[root@k8s-master1 tomcat]# vim tomcat.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      imagePullSecrets:
      - name: registry-pull-secret        # Secret holding the registry login credentials
      containers:
      - name: tomcat
        image: 192.168.0.122/ceba/tomcat  # image address
        imagePullPolicy: Always           # re-pull the image every time a Pod is created
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: tomcat
•IfNotPresent: the default; the image is pulled only if it is not already present on the host
•Always: the image is re-pulled every time a Pod is created
•Never: the Pod never actively pulls the image
Verify that the pulled images start normally
[root@k8s-master1 tomcat]# kubectl get pod,svc,deploy
NAME                                     READY   STATUS    RESTARTS   AGE
pod/tomcat-deployment-6bb6864d4f-4xj82   1/1     Running   0          65s
pod/tomcat-deployment-6bb6864d4f-drcjc   1/1     Running   0          47s
pod/tomcat-deployment-6bb6864d4f-wmkxx   1/1     Running   0          45s

NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        44h
service/tomcat-service   NodePort    10.0.0.3     <none>        80:49014/TCP   8m20s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/tomcat-deployment   3/3     3            3           8m20s
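To check the Service from outside the cluster, you can curl the NodePort shown above (the node IP and port 49014 are taken from this output; the port will differ on another cluster):

curl -I http://192.168.0.125:49014/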
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
When creating a Pod, you can specify compute resources (currently CPU and memory) — that is, a resource request (Request) and a resource limit (Limit) for each container. The request is the minimum amount of resources the container needs; the limit is the ceiling the container may not exceed. Their relationship is: 0 <= request <= limit <= infinity.
A Pod's resource request is the sum of its containers' requests. When scheduling a Pod, Kubernetes compares the node's total resources (obtained through the cAdvisor interface) with the compute resources already in use on that node to decide whether the node can satisfy the Pod.
Requests guarantee that a Pod has enough resources to run; limits prevent any one Pod from consuming resources without bound and crashing other Pods. This matters especially in public clouds, where malicious software often attacks the platform by hoarding memory.
How it works: Docker controls container resources through Linux cgroups — concretely, the --memory and --cpu-shares startup flags. Kubernetes controls container resources by setting these two parameters.
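For illustration, this is roughly what those flags look like on a bare docker run (the values are arbitrary examples; Kubernetes derives them from the Pod's requests and limits):

docker run -d --memory="128m" --cpu-shares=512 nginx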
Resource requests and limits for Pods and containers:
•spec.containers[].resources.limits.cpu
•spec.containers[].resources.limits.memory
•spec.containers[].resources.requests.cpu
•spec.containers[].resources.requests.memory
Example
The following Pod has two containers. Each container requests 0.25 cpu and 64 MiB of memory, and each has a limit of 0.5 cpu and 128 MiB of memory. Taken together, the Pod requests 0.5 cpu and 128 MiB of memory, and is limited to 1 cpu and 256 MiB of memory.
[root@k8s-master1 ~]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Check the node's resource usage
[root@k8s-master1 ~]# kubectl describe pod frontend | grep -A 3 Events
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  19m   default-scheduler  Successfully assigned default/frontend to 192.168.0.125
[root@k8s-master1 ~]# kubectl describe nodes 192.168.0.125
Name:               192.168.0.125
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=192.168.0.125
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 19 Dec 2018 09:54:14 +0800
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------   -----------------                 ------------------                ------                       -------
  MemoryPressure   False    Thu, 20 Dec 2018 17:26:03 +0800   Thu, 20 Dec 2018 10:29:48 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False    Thu, 20 Dec 2018 17:26:03 +0800   Thu, 20 Dec 2018 10:29:48 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False    Thu, 20 Dec 2018 17:26:03 +0800   Thu, 20 Dec 2018 10:29:48 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True     Thu, 20 Dec 2018 17:26:03 +0800   Thu, 20 Dec 2018 10:29:48 +0800   KubeletReady                 kubelet is posting ready status
  OutOfDisk        Unknown  Wed, 19 Dec 2018 09:54:14 +0800   Wed, 19 Dec 2018 13:23:20 +0800   NodeStatusNeverUpdated       Kubelet never posted node status.
Addresses:
  InternalIP:  192.168.0.125
  Hostname:    192.168.0.125
Capacity:
 cpu:                2
 ephemeral-storage:  80699908Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3861520Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  74373035090
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3759120Ki
 pods:               110
System Info:
 Machine ID:                 def36d2abcbe49839534858f6f1e13c5
 System UUID:                86B24D56-709A-3467-6DDB-966B3B807949
 Boot ID:                    949f73c9-b14b-4e94-9137-78b2ac83d046
 Kernel Version:             3.10.0-957.1.3.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.0
 Kubelet Version:            v1.13.0
 Kube-Proxy Version:         v1.13.0
Non-terminated Pods:         (3 in total)
  Namespace   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------   ----                                 ------------  ----------  ---------------  -------------  ---
  default     frontend                             500m (25%)    1 (50%)     128Mi (3%)       256Mi (6%)     20m
  default     tomcat-deployment-6bb6864d4f-drcjc   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3h25m
  default     tomcat-deployment-6bb6864d4f-wmkxx   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3h25m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                500m (25%)   1 (50%)
  memory             128Mi (3%)   256Mi (6%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:
  Type     Reason                   Age    From                       Message
  ----     ------                   ----   ----                       -------
  Normal   Starting                 9m29s  kube-proxy, 192.168.0.125  Starting kube-proxy.
  Normal   Starting                 8m42s  kubelet, 192.168.0.125     Starting kubelet.
  Normal   NodeHasSufficientMemory  8m42s  kubelet, 192.168.0.125     Node 192.168.0.125 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    8m42s  kubelet, 192.168.0.125     Node 192.168.0.125 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     8m42s  kubelet, 192.168.0.125     Node 192.168.0.125 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  8m42s  kubelet, 192.168.0.125     Updated Node Allocatable limit across pods
  Warning  Rebooted                 8m42s  kubelet, 192.168.0.125     Node 192.168.0.125 has been rebooted, boot id: 949f73c9-b14b-4e94-9137-78b2ac83d046
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
In a real production environment, building an application that is completely free of bugs and behaves correctly at all times is nearly impossible. We therefore need a management system that performs periodic health checks on user applications and repairs them. Crucially, this system must run outside the application itself — if it were part of the application, it would very likely crash together with it. In Kubernetes, health checking of the system and of applications is performed by the kubelet.
1. Process-level health checks
The simplest health check operates at the process level: it verifies that the container process is alive. Its monitoring granularity is a single container running in the Kubernetes cluster. The kubelet periodically queries the Docker daemon for the status of all Docker processes and restarts any container that is not running normally. Process-level health checks are enabled by default.
2. Application-level health checks
In many real scenarios, process-level checks are far from sufficient. Sometimes, from Docker's point of view the container process is still running, yet from the application's point of view the code is deadlocked and the container will never respond to user traffic. To solve this, Kubernetes introduced liveness probes that execute inside the container, letting users implement their own application-level health checks.
livenessProbe: if the check fails, the container is killed and then handled according to the Pod's restartPolicy.
readinessProbe: if the check fails, Kubernetes removes the Pod from the Service's endpoints.
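readinessProbe is configured the same way as livenessProbe, which is demonstrated below. A minimal readiness sketch using an HTTP check (the pod name, path, and port are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:
      httpGet:          # the Pod only receives Service traffic while this returns 2xx/3xx
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5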
[root@k8s-master1 ~]# kubectl explain pod.spec.containers.livenessProbe
KIND:     Pod
VERSION:  v1
RESOURCE: livenessProbe <Object>

exec                  probe by running a command, e.g. ps for a process
failureThreshold      consecutive failures before the probe counts as failed (default 3)
periodSeconds         how often to probe (default 10s)
timeoutSeconds        probe timeout in seconds (default 1s)
initialDelaySeconds   delay before the first probe, since the main process may not have finished starting
tcpSocket             probe that checks a port
httpGet               probe via an HTTP request
[root@k8s-master1 ~]# vim exec-liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
[root@k8s-master1 ~]# kubectl create -f exec-liveness.yaml
pod/liveness-exec created
[root@k8s-master1 ~]# kubectl get pod
NAME            READY   STATUS    RESTARTS   AGE
frontend        2/2     Running   106        16h
liveness-exec   1/1     Running   0          16s
In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds field tells the kubelet to wait 5 seconds before the first probe. To perform a probe, the kubelet executes cat /tmp/healthy in the container. If the command succeeds, it returns 0 and the kubelet considers the container alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.
Once the probe starts failing, the RESTARTS count increases:

Events:
  Type     Reason     Age               From                    Message
  ----     ------     ----              ----                    -------
  Normal   Scheduled  57s               default-scheduler       Successfully assigned default/liveness-exec to 192.168.0.126
  Normal   Pulling    55s               kubelet, 192.168.0.126  pulling image "busybox"
  Normal   Pulled     47s               kubelet, 192.168.0.126  Successfully pulled image "busybox"
  Normal   Created    47s               kubelet, 192.168.0.126  Created container
  Normal   Started    47s               kubelet, 192.168.0.126  Started container
  Warning  Unhealthy  5s (x3 over 15s)  kubelet, 192.168.0.126  Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
[root@k8s-master1 ~]# kubectl get pod
NAME            READY   STATUS    RESTARTS   AGE
frontend        2/2     Running   106        16h
liveness-exec   1/1     Running   1          107s
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/
A Pod's status is stored in a PodStatus object, which contains a phase field.

A Pod's phase is a simple, high-level summary of where the Pod is in its lifecycle. It is not a comprehensive rollup of container or Pod state, nor is it meant to serve as a state machine.

The number and meanings of Pod phases are strictly specified; apart from the values listed here, no other phase values should be assumed.

The possible values of phase are:

•Pending: the Pod has been accepted by the cluster, but one or more of its container images has not yet been created; this includes time spent being scheduled and downloading images
•Running: the Pod has been bound to a node and all containers have been created; at least one container is running, or is in the process of starting or restarting
•Succeeded: all containers in the Pod terminated successfully and will not be restarted
•Failed: all containers in the Pod have terminated, and at least one terminated in failure (non-zero exit code or killed by the system)
•Unknown: the Pod's state could not be obtained, typically because of a communication error with its node
The diagram below shows the Pod lifecycle; you can see how the Pod's status changes over time.
kubectl describe TYPE/NAME
kubectl logs TYPE/NAME [-c CONTAINER]
kubectl exec POD [-c CONTAINER] -- COMMAND [args...]
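For example, against the liveness-exec Pod created earlier in this article:

kubectl describe pod liveness-exec
kubectl logs liveness-exec
kubectl exec liveness-exec -- ls /tmp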