Resource manifests describe containers declaratively in YAML. The API follows a RESTful interface style; the main resource objects are shown in the figure below.
Manifest format, top-level fields:
  apiVersion (group/version), kind, metadata (name, namespace, labels, annotations, ...), spec, status (read-only)

Pod resources:
  spec.containers <[]Object>
  - name  <string>
    image <string>
    imagePullPolicy <string>   # Always, Never, IfNotPresent
  Overriding the image's default entrypoint: command, args
  (see https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/ )

Labels (an important feature): key=value
  key: letters, digits, _, -, .
  value: may be empty; must begin and end with a letter or digit; may contain letters, digits, _, -, . in between
Examples:
  # kubectl get pods -l app          (app is the key)
  # kubectl get pods --show-labels
  # kubectl label pods pod-demo release=canary --overwrite
  # kubectl get pods -l release=canary

Label selectors:
  Equality-based: =, ==, !=
  Set-based: KEY in (VALUE1,VALUE2,...), KEY notin (VALUE1,VALUE2,...), KEY, !KEY

Many resources embed a label selector through these fields:
  matchLabels: key/value pairs given directly
  matchExpressions: selector defined by expressions of the form {key:"KEY", operator:"OPERATOR", values:[VAL1,VAL2,...]}
    Operators: In, NotIn (the values field must be a non-empty list); Exists, DoesNotExist (the values field must be empty)

nodeSelector <map[string]string>: node label selector; influences scheduling.
nodeName <string>
annotations: unlike labels, they cannot be used to select resource objects; they only attach extra "metadata" to an object.
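The two embedded selector forms above can be sketched together as follows (hypothetical keys and values, only for illustration):

```yaml
selector:
  matchLabels:
    app: myapp                 # plain key/value match
  matchExpressions:
  - {key: release, operator: In, values: [canary, beta]}  # release must be canary or beta
  - {key: environment, operator: Exists}                  # the key must exist; values omitted
```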
Pod phases: Pending, Running, Failed, Succeeded, Unknown
Pod creation flow: client -> apiServer -> saved to etcd -> scheduler -> scheduling result to etcd -> the target node runs the pod (and reports status back to the apiServer) -> status saved to etcd
Important behaviors in the pod lifecycle:
1. Init containers
2. Container probes: liveness and readiness (both should be configured in production)
3. Three probe handler types: ExecAction, TCPSocketAction, HTTPGetAction
   # kubectl explain pod.spec.containers.livenessProbe
4. restartPolicy: Always, OnFailure, Never. Defaults to Always.
5. lifecycle hooks
   # kubectl explain pods.spec.containers.lifecycle.postStart
   # kubectl explain pods.spec.containers.lifecycle.preStop
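The exec and httpGet handler types are demonstrated in the examples that follow; the third type, tcpSocket, can be sketched like this (port 6379 is just an assumed example):

```yaml
livenessProbe:
  tcpSocket:
    port: 6379            # probe succeeds if a TCP connection to this port can be opened
  initialDelaySeconds: 5
  periodSeconds: 10
```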
# vim liveness-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/healthy"]
      initialDelaySeconds: 1
      periodSeconds: 3

# kubectl create -f liveness-pod.yaml
# kubectl describe pod liveness-exec-pod
    State:          Running
      Started:      Thu, 09 Aug 2018 01:39:11 -0400
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 09 Aug 2018 01:38:03 -0400
      Finished:     Thu, 09 Aug 2018 01:39:09 -0400
    Ready:          True
    Restart Count:  1
    Liveness:       exec [test -e /tmp/healthy] delay=1s timeout=1s period=3s #success=1 #failure=3
# vim liveness-http.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3

# kubectl create -f liveness-http.yaml
# kubectl exec -it liveness-httpget-pod -- /bin/sh
/ # rm /usr/share/nginx/html/index.html
# kubectl describe pod liveness-httpget-pod
    Restart Count:  1
    Liveness:       http-get http://:http/index.html delay=1s timeout=1s period=3s #success=1 #failure=3
# vim readiness-http.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3

# kubectl create -f readiness-http.yaml
# kubectl exec -it readiness-httpget-pod -- /bin/sh
/ # rm -f /usr/share/nginx/html/index.html
# kubectl get pods -w
readiness-httpget-pod   0/1   Running   0   1m
(READY shows 0/1: the container is running but not ready, so the pod is removed from Service endpoints and receives no traffic)
# vim pod-postStart.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ['/bin/sh','-c','echo Home_Page >> /tmp/index.html']
    command: ['/bin/httpd']
    args: ['-f','-h /tmp']
Controllers are an intermediate layer that manages pods, continuously driving them toward the state we desire.
Scales pods up and down automatically; it replaces the older ReplicationController and manages stateless pods. Google recommends not using ReplicaSets directly. A ReplicaSet has three parts:
1. the number of pod replicas the user desires
2. a label selector used to manage the pod replicas
3. a pod template from which new pods are created
1. vim replicaset.yaml
   apiVersion: apps/v1
   kind: ReplicaSet
   metadata:
     name: myapp
     namespace: default
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: myapp
         release: canary
     template:
       metadata:
         name: myapp-pod
         labels:
           app: myapp
           release: canary
           environment: qa
       spec:
         containers:
         - name: myapp-container
           image: ikubernetes/myapp:v1
           ports:
           - name: http
             containerPort: 80
2. kubectl create -f replicaset.yaml
3. kubectl get pods
   NAME          READY     STATUS    RESTARTS   AGE
   myapp-c6f58   1/1       Running   0          3s
   myapp-lvjk2   1/1       Running   0          3s
4. kubectl delete pod myapp-c6f58
5. kubectl get pods   (a new pod is created to replace it)
   NAME          READY     STATUS    RESTARTS   AGE
   myapp-lvjk2   1/1       Running   0          2m
   myapp-s9hgr   1/1       Running   0          10s
6. kubectl edit rs myapp   (change replicas: 2 to 5)
7. kubectl get pods   (the pods automatically grow to 5)
   NAME          READY     STATUS    RESTARTS   AGE
   myapp-h2j68   1/1       Running   0          5s
   myapp-lvjk2   1/1       Running   0          8m
   myapp-nsv6z   1/1       Running   0          5s
   myapp-s9hgr   1/1       Running   0          6m
   myapp-wnf2b   1/1       Running   0          5s
   # curl 10.244.2.17
   Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
8. kubectl edit rs myapp   (change ikubernetes/myapp:v1 to v2)
   Pods that are already running are not upgraded automatically. Delete one pod:
   # kubectl delete pod myapp-h2j68
   The replacement pod myapp-4qg8c comes up on v2:
   # curl 10.244.2.19
   Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

Deleting just one live pod so that its replacement runs the new version is a canary release. Deleting all old-version pods one by one, letting new-version pods be created automatically, is a gray (rolling) release; watch the system load carefully during this kind of release. Another approach is blue-green deployment, shown in the figure below.
The whole batch is updated at once, in an identical parallel environment: either create RS2 and delete RS1, or run RS2 alongside RS1 and then switch the Service so that all traffic points to RS2.
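The Service-switch step of a blue-green release can be sketched like this (hypothetical names and labels; flipping the selector redirects all traffic in one step):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc           # hypothetical Service name
spec:
  selector:
    app: myapp
    version: blue           # change to "green" (RS2's label) to cut all traffic over at once
  ports:
  - port: 80
    targetPort: 80
```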
A Deployment controls pods by controlling ReplicaSets, and offers more powerful features than a bare ReplicaSet: rolling updates and rollback, with control over the update pace and update logic.
# kubectl explain deploy
KIND:     Deployment
VERSION:  extensions/v1beta1    (the docs lag behind the implementation; the latest is apps/v1)
# kubectl explain deploy.spec.strategy
    rollingUpdate    (controls the update granularity)
# kubectl explain deploy.spec.strategy.rollingUpdate
    maxSurge         (at most how many pods above the desired count during an update)
    maxUnavailable   (at most how many pods may be unavailable)
    The two cannot both be zero: the update must be allowed to go either above or below the desired count.
# kubectl explain deploy.spec
    revisionHistoryLimit    (how many old revisions to keep)
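As a worked example of the two knobs: with replicas: 4 and the default 25%/25% settings, a rolling update may run up to 5 pods (maxSurge rounds up: 4 + 1) and must keep at least 3 available (maxUnavailable rounds down: 4 - 1). An explicit strategy block might look like this sketch:

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # up to 1 extra pod (25% of 4, rounded up) during the update
      maxUnavailable: 25%  # at most 1 pod down (25% of 4, rounded down) at any time
```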
# vim deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80

# kubectl apply -f deploy.yaml
# kubectl get rs
NAME                      DESIRED   CURRENT   READY     AGE
myapp-deploy-69b47bc96d   2         2         2         1m
(69b47bc96d is the hash of the pod template)
# kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-f4bp4   1/1       Running   0          3m
myapp-deploy-69b47bc96d-qllnm   1/1       Running   0          3m
# change replicas: 2 to 3 and re-apply
# kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-f4bp4   1/1       Running   0          4m
myapp-deploy-69b47bc96d-qllnm   1/1       Running   0          4m
myapp-deploy-69b47bc96d-s6t42   1/1       Running   0          17s
# kubectl describe deploy myapp-deploy
RollingUpdateStrategy:  25% max unavailable, 25% max surge   (the default update strategy)
# kubectl get pod -w -l app=myapp   (watch pods continuously)
# change the image to ikubernetes/myapp:v2
# kubectl apply -f deploy.yaml
# kubectl get pod -w -l app=myapp
NAME                            READY     STATUS    RESTARTS   AGE
myapp-deploy-69b47bc96d-f4bp4   1/1       Running   0          6m
myapp-deploy-69b47bc96d-qllnm   1/1       Running   0          6m
myapp-deploy-69b47bc96d-s6t42   1/1       Running   0          2m
myapp-deploy-67f6f6b4dc-tncmc   0/1       Pending   0          1s
myapp-deploy-67f6f6b4dc-tncmc   0/1       Pending   0          1s
myapp-deploy-67f6f6b4dc-tncmc   0/1       ContainerCreating   0   2s
myapp-deploy-67f6f6b4dc-tncmc   1/1       Running   0          4s
# kubectl get rs   (the old ReplicaSet is kept around for rollback)
NAME                      DESIRED   CURRENT   READY     AGE
myapp-deploy-67f6f6b4dc   3         3         3         54s
myapp-deploy-69b47bc96d   0         0         0         8m
# kubectl rollout history deployment myapp-deploy
# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'   (patch replicas to 5)
# kubectl get pod -w -l app=myapp
NAME                            READY     STATUS    RESTARTS   AGE
myapp-deploy-67f6f6b4dc-fc7kj   1/1       Running   0          18s
myapp-deploy-67f6f6b4dc-kssst   1/1       Running   0          5m
myapp-deploy-67f6f6b4dc-tncmc   1/1       Running   0          5m
myapp-deploy-67f6f6b4dc-xdzvc   1/1       Running   0          18s
myapp-deploy-67f6f6b4dc-zjn77   1/1       Running   0          5m
# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
# kubectl describe deployment myapp-deploy
RollingUpdateStrategy:  0 max unavailable, 1 max surge
# You can also update the image with set image, pausing right afterwards to canary the rollout:
kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
# kubectl rollout history deployment myapp-deploy
# kubectl rollout undo deployment myapp-deploy --to-revision=1   (roll back to the first revision)
A DaemonSet ensures that exactly one replica runs on every node in the cluster (or on a chosen subset of nodes, e.g. one monitoring pod on every node that has an SSD).
# kubectl explain ds
# vim filebeat.yaml   (one file holding two documents: a redis Deployment, then the filebeat DaemonSet)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info

# kubectl apply -f filebeat.yaml
# kubectl get pods -l app=filebeat -o wide
(two pods run because there are currently two worker nodes; by default pods are not scheduled on the master, which carries a taint)
filebeat-ds-chxl6   1/1   Running   1   8m   10.244.2.37   node2
filebeat-ds-rmnxq   1/1   Running   0   8m   10.244.1.35   node1
# kubectl logs filebeat-ds-rmnxq
# kubectl expose deployment redis --port=6379
# kubectl describe ds filebeat-ds
# DaemonSets also support in-place rolling updates:
kubectl set image daemonsets filebeat-ds filebeat=ikubernetes/filebeat:5.6.6-alpine
# kubectl explain pod.spec
    hostNetwork   (a DaemonSet pod can share the host's network namespace and serve traffic on the node directly)
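The hostNetwork option mentioned above can be sketched as follows (hypothetical names; prom/node-exporter is only an illustrative image). Because the pod binds directly to the node's network stack, its port is reachable at <nodeIP>:9100 without a Service:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter-ds       # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true        # share the node's network namespace
      containers:
      - name: node-exporter
        image: prom/node-exporter
        ports:
        - containerPort: 9100  # served directly on the node's port 9100
```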
Starts N pod resources, as specified by the user; whether a pod is recreated depends on whether its task has completed successfully.
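A minimal Job sketch (hypothetical name; the pod runs to completion and is not restarted once it succeeds):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job                # hypothetical name
spec:
  completions: 1              # how many successful completions are required
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never    # a Job's pod template must use Never or OnFailure
```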
Runs tasks periodically, on a schedule.
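A minimal CronJob sketch (hypothetical name; the apiVersion was batch/v1beta1 in clusters of this era, batch/v1 in newer ones):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-cron            # hypothetical name
spec:
  schedule: "*/1 * * * *"     # standard cron syntax: every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure
```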
For stateful applications: each pod replica is managed individually, and the operational scripts it needs must be built into the template.
TPR: ThirdPartyResource (introduced in 1.2, deprecated in 1.7)
CRD: CustomResourceDefinition (since 1.8)
Operator: encapsulates operational knowledge for an application (only a few, such as etcd and Prometheus, have Operators)
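A CRD declaration from that era can be sketched like this (hypothetical group and kind; apiextensions.k8s.io/v1beta1 was current then, v1 in newer clusters):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```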