Pods fall into two categories: autonomous pods and pods managed by a controller. The pods created in study notes (3) were all autonomous pods; this post covers pods managed by controllers. Unless otherwise noted, "controller" in this post means a pod controller.
Pod controller
A pod controller is an intermediate layer for managing pods; it ensures that the actual state of the pod resources matches the state the user desires.
There are several kinds of pod controllers:
1) ReplicaSet: creates and manages stateless pod replicas on the user's behalf, keeps the replica count in line with the user's expectation, and supports scaling out and in. It is the next-generation ReplicationController. Kubernetes recommends not using ReplicaSet directly, but using Deployment instead: a Deployment works on top of ReplicaSets, controlling the ReplicaSet rather than the pods directly, and the ReplicaSet in turn controls the pods. Deployment also supports rolling updates and rollbacks and provides declarative configuration.
2) DaemonSet: ensures that every node in the cluster runs exactly one copy of a specific pod, typically for OS-level daemon tasks. It can also be limited to part of the cluster, with one copy of the pod on each selected node.
3) Deployment and DaemonSet both manage stateless, daemon-style applications that run continuously in memory.
4) Job controller: ensures a task is run once to completion (a minimal manifest sketch for Job and CronJob follows this list).
5) CronJob controller: runs a task on a recurring schedule.
6) StatefulSet: manages stateful applications.
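Job and CronJob are not demonstrated later in this post; as a reference, here is a minimal sketch of each. The names pi-job and date-cron, the images, and the schedule are illustrative only, and batch/v1beta1 is an assumption for a CronJob on a cluster of this vintage:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job                  # hypothetical name
spec:
  backoffLimit: 4               # give up after 4 failed retries
  template:
    spec:
      restartPolicy: Never      # Job pods must use Never or OnFailure
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
---
apiVersion: batch/v1beta1       # assumed API version for this cluster's era
kind: CronJob
metadata:
  name: date-cron               # hypothetical name
spec:
  schedule: "*/5 * * * *"       # run every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: date
            image: busybox
            command: ["date"]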
There is no one-to-one correspondence between nodes and pods.
The core items in a controller manifest are: 1) the number of replicas the user desires; 2) the label selector; 3) the pod resource template.
Example 1 - ReplicaSet:
[root@docker79 manifests]# mkdir controller
[root@docker79 manifests]# cd controller/
[root@docker79 controller]# vim rs-demo.yaml
[root@docker79 controller]# cat rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      release: canary
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx
        release: canary
        environment: qa
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@docker79 controller]# kubectl apply -f rs-demo.yaml
replicaset.apps/nginx-rs created
[root@docker79 controller]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE
nginx-rs-hncc9   1/1     Running   0          1m    10.244.1.17   docker78   <none>
nginx-rs-wnxnx   1/1     Running   0          1m    10.244.2.11   docker77   <none>
[root@docker79 controller]# curl -I http://10.244.1.17
HTTP/1.1 200 OK
Server: nginx/1.14.0
Date: Thu, 27 Sep 2018 02:16:14 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Wed, 12 Sep 2018 00:04:31 GMT
Connection: keep-alive
ETag: "5b98580f-264"
Accept-Ranges: bytes

[root@docker79 controller]#
[root@docker79 controller]# kubectl get rs
NAME       DESIRED   CURRENT   READY   AGE
nginx-rs   2         2         2       2m
[root@docker79 controller]# kubectl edit rs nginx-rs
replicaset.extensions/nginx-rs edited
[root@docker79 controller]#
Notes:
將yaml中image的版本改成nginx:1.15-alpine並保存退出,而後container 不會自動升爲1.15-alpine,須要手工刪除舊container時,contrller自動建立的新container就會升級爲1.15-alpine了。這種刪除一個pod,而後k8s自動再建立一個新版本的pod,稱爲canary的滾動升級。node
Continuing from above:
[root@docker79 controller]# kubectl delete pod nginx-rs-hncc9
pod "nginx-rs-hncc9" deleted
[root@docker79 controller]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE
nginx-rs-6tnz5   1/1     Running   0          22s   10.244.1.18   docker78   <none>
nginx-rs-wnxnx   1/1     Running   0          11m   10.244.2.11   docker77   <none>
[root@docker79 controller]# curl -I 10.244.1.18
HTTP/1.1 200 OK
Server: nginx/1.15.4
Date: Thu, 27 Sep 2018 02:27:18 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Sep 2018 17:23:56 GMT
Connection: keep-alive
ETag: "5baa6f2c-264"
Accept-Ranges: bytes

[root@docker79 controller]#
[root@docker79 controller]# kubectl delete -f rs-demo.yaml
replicaset.apps "nginx-rs" deleted
Example 2 - Deployment:
[root@docker79 controller]# vim deploy-demo.yaml
[root@docker79 controller]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      release: canary
  template:
    metadata:
      labels:
        app: nginx
        release: canary
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@docker79 controller]# kubectl apply -f deploy-demo.yaml
deployment.apps/nginx-deploy created
[root@docker79 controller]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE
nginx-deploy-688454bc5d-ft8wm   1/1     Running   0          17s   10.244.1.19   docker78   <none>
nginx-deploy-688454bc5d-pnmpn   1/1     Running   0          17s   10.244.2.12   docker77   <none>
[root@docker79 controller]# kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-688454bc5d   2         2         2       35s
[root@docker79 controller]# kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   2         2         2            2           46s
[root@docker79 controller]#
Notes: the ReplicaSet name follows the pattern deployment name + a hash of the pod template; pod names follow the pattern ReplicaSet name + a hash.
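To verify this, the pod-template-hash label that ties pods to their ReplicaSet can be listed directly (a quick check, not part of the original walkthrough):

kubectl get rs,pods -l app=nginx --show-labels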
[root@docker79 controller]# vim deploy-demo.yaml
[root@docker79 controller]# grep replicas deploy-demo.yaml
  replicas: 3
[root@docker79 controller]# kubectl apply -f deploy-demo.yaml
deployment.apps/nginx-deploy configured
[root@docker79 controller]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-688454bc5d-ft8wm   1/1     Running   0          5m
nginx-deploy-688454bc5d-pnmpn   1/1     Running   0          5m
nginx-deploy-688454bc5d-q24pr   1/1     Running   0          7s
[root@docker79 controller]#
說明:把 deploy-demo.yaml 中的 relicas 改成3 ,而後再次 kubectl apply -f deploy-demo.yaml docker
[root@docker79 controller]# kubectl describe deploy nginx-deploy
......
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
......
Notes:
If StrategyType is RollingUpdate, the RollingUpdateStrategy field can be set; it has two parts, maxSurge and maxUnavailable (the maximum number of pods that may be unavailable, given as a count or a percentage). If the Deployment should run at most one pod above the DESIRED count during an update, i.e. a surge of at most 1, set maxSurge=1 (a count or a percentage may be used).
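For reference, these settings live under the Deployment's spec.strategy; a sketch with illustrative values (not the values used in this walkthrough):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one pod above the desired count during an update
      maxUnavailable: 0     # never drop below the desired count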
[root@docker79 controller]# vim deploy-demo.yaml
[root@docker79 controller]# grep image deploy-demo.yaml
        image: nginx:1.15-alpine
[root@docker79 controller]# kubectl apply -f deploy-demo.yaml
deployment.apps/nginx-deploy configured
[root@docker79 controller]#
[root@docker79 controller]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE
nginx-deploy-7488bbd64f-7vnps   1/1     Running   0          3m    10.244.1.22   docker78   <none>
nginx-deploy-7488bbd64f-nxlc2   1/1     Running   0          4m    10.244.1.21   docker78   <none>
nginx-deploy-7488bbd64f-v5qws   1/1     Running   0          4m    10.244.2.13   docker77   <none>
[root@docker79 controller]# curl -I http://10.244.1.22
HTTP/1.1 200 OK
Server: nginx/1.15.4
Date: Thu, 27 Sep 2018 02:53:28 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Sep 2018 17:23:56 GMT
Connection: keep-alive
ETag: "5baa6f2c-264"
Accept-Ranges: bytes

[root@docker79 controller]#
Notes:
Change the image version in deploy-demo.yaml to 1.15-alpine, run kubectl apply -f deploy-demo.yaml again, and then watch the update in progress with kubectl get pods -l app=nginx -w.
[root@docker79 controller]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY   AGE   CONTAINERS        IMAGES              SELECTOR
nginx-deploy-688454bc5d   0         0         0       15m   nginx-container   nginx:1.14-alpine   app=nginx,pod-template-hash=2440106718,release=canary
nginx-deploy-7488bbd64f   3         3         3       5m    nginx-container   nginx:1.15-alpine   app=nginx,pod-template-hash=3044668209,release=canary
[root@docker79 controller]#
[root@docker79 controller]# kubectl rollout history deployment/nginx-deploy
deployments "nginx-deploy"
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
[root@docker79 controller]# kubectl rollout status deployment/nginx-deploy
deployment "nginx-deploy" successfully rolled out
[root@docker79 controller]# kubectl rollout undo deployment/nginx-deploy
deployment.extensions/nginx-deploy
[root@docker79 controller]# kubectl get rs -o wide
NAME                      DESIRED   CURRENT   READY   AGE   CONTAINERS        IMAGES              SELECTOR
nginx-deploy-688454bc5d   3         3         3       18m   nginx-container   nginx:1.14-alpine   app=nginx,pod-template-hash=2440106718,release=canary
nginx-deploy-7488bbd64f   0         0         0       8m    nginx-container   nginx:1.15-alpine   app=nginx,pod-template-hash=3044668209,release=canary
[root@docker79 controller]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE
nginx-deploy-688454bc5d-7nvvm   1/1     Running   0          23s   10.244.2.14   docker77   <none>
nginx-deploy-688454bc5d-jts2x   1/1     Running   0          22s   10.244.1.24   docker78   <none>
nginx-deploy-688454bc5d-slkn7   1/1     Running   0          24s   10.244.1.23   docker78   <none>
[root@docker79 controller]# curl -I 10.244.1.24
HTTP/1.1 200 OK
Server: nginx/1.14.0
Date: Thu, 27 Sep 2018 02:58:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Wed, 12 Sep 2018 00:04:31 GMT
Connection: keep-alive
ETag: "5b98580f-264"
Accept-Ranges: bytes

[root@docker79 controller]#
Notes:
kubectl get rs -o wide — view the current state of the ReplicaSet behind each revision
kubectl rollout history deployment/deployName — view the rollout history
kubectl rollout history deployment/deployName --revision=3 — view the details of a specific revision
kubectl rollout status deployment/deployName — check the status of a rollout
kubectl rollout undo deployment/deployName — roll back to the previous revision; add --to-revision=1 to roll back to a specific revision (here, the first)
kubectl rollout pause deployment/deployName — pause an in-progress rollout
kubectl rollout resume deployment/deployName — resume a paused rollout
Manually change the image version in the Deployment YAML file and re-apply it to upgrade (a rolling update of the pods is triggered only when the .spec.template section changes).
kubectl set image deployment/deployName containerName=image:version — change the Deployment's image name and version; this also triggers a rolling update (see the sketch after this list).
revisionHistoryLimit under deploy.spec can also be set; it is the maximum number of old revisions kept for rollback during rolling upgrades.
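As referenced in the list above, here is a sketch of a canary-style update driven by kubectl set image plus pause/resume, reusing the nginx-deploy Deployment and nginx-container name from this example (commands only; output omitted):

# Change the image, then immediately pause the rollout so only the first new pod is created
kubectl set image deployment/nginx-deploy nginx-container=nginx:1.15-alpine && \
kubectl rollout pause deployment/nginx-deploy

# After verifying the canary pod, let the remaining pods update and watch the rollout finish
kubectl rollout resume deployment/nginx-deploy
kubectl rollout status deployment/nginx-deploy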
[root@docker79 controller]# kubectl patch deployment nginx-deploy -p '{"spec":{"replicas":4}}'
deployment.extensions/nginx-deploy patched
[root@docker79 controller]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-688454bc5d-7nvvm   1/1     Running   0          7m
nginx-deploy-688454bc5d-bc94s   1/1     Running   0          11s
nginx-deploy-688454bc5d-jts2x   1/1     Running   0          7m
nginx-deploy-688454bc5d-slkn7   1/1     Running   0          8m
[root@docker79 controller]# kubectl delete -f deploy-demo.yaml
deployment.apps "nginx-deploy" deleted
[root@docker79 controller]#
Example 3 - DaemonSet:
[root@docker79 controller]# vim ds-demo.yaml
[root@docker79 controller]# cat ds-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
[root@docker79 controller]# kubectl apply -f ds-demo.yaml
daemonset.apps/filebeat-ds created
[root@docker79 controller]# kubectl get pods -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE
filebeat-ds-mzff4   1/1     Running   0          29s   10.244.1.25   docker78   <none>
filebeat-ds-zzl8q   1/1     Running   0          29s   10.244.2.16   docker77   <none>
[root@docker79 controller]#
Notes: by default, a DaemonSet runs one pod on each non-master node.
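To run the DaemonSet only on part of the cluster, as mentioned earlier, a nodeSelector can be added to the pod template. A sketch, assuming a hypothetical node label logging=true:

kubectl label node docker78 logging=true

Then restrict the pod template in ds-demo.yaml:

spec:
  template:
    spec:
      nodeSelector:
        logging: "true"     # only nodes carrying this label run the DaemonSet pod
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine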
Example 4 - Service:
Because pods have their own lifecycle (they come and go, and their IPs change), a Service layer sits between clients and pods; Service name resolution relies on CoreDNS / kube-dns.
kube-proxy constantly monitors the API server for changes to Service objects; this process is called a watch.
A Service can be implemented in one of three proxy modes: userspace, iptables, or ipvs, as illustrated below:
[Diagram: how the userspace, iptables, and ipvs proxy modes forward traffic]
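Which mode kube-proxy is actually using can be checked from its configuration. A sketch, assuming a kubeadm-built cluster where the configuration lives in the kube-proxy ConfigMap in kube-system (an empty mode means the default, iptables):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"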
[root@docker79 controller]# vim svc-demo.yaml
[root@docker79 controller]# cat svc-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis-container
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  namespace: default
spec:
  selector:
    app: redis
    role: logstor
  clusterIP: 10.97.97.97
  type: ClusterIP
  ports:
  - name: redis-port
    port: 6379
    targetPort: 6379
[root@docker79 controller]# kubectl apply -f svc-demo.yaml
deployment.apps/redis-deploy created
service/redis-svc created
[root@docker79 controller]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
filebeat-ds-mzff4               1/1     Running   0          16m
filebeat-ds-zzl8q               1/1     Running   0          16m
redis-deploy-7587b96c74-ddbv8   1/1     Running   0          10s
[root@docker79 controller]#
[root@docker79 controller]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP    3d
redis-svc    ClusterIP   10.97.97.97   <none>        6379/TCP   38s
[root@docker79 controller]# kubectl describe svc redis-svc
Name:              redis-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"redis-svc","namespace":"default"},"spec":{"clusterIP":"10.97.97.97","ports":[{...
Selector:          app=redis,role=logstor
Type:              ClusterIP
IP:                10.97.97.97
Port:              redis-port  6379/TCP
TargetPort:        6379/TCP
Endpoints:         10.244.1.27:6379
Session Affinity:  None
Events:            <none>
[root@docker79 controller]#
Service type notes:
ClusterIP: usable only inside the cluster. The internal DNS record format is SVC_NAME.NS_NAME.DOMAIN.LTD; the default suffix is svc.cluster.local.
NodePort: allows access from outside the cluster. The request path is client request --> NodeIP:nodePort --> ClusterIP:ServicePort --> PodIP:containerPort.
ExternalName: maps a service outside the cluster into the cluster. It is usually a CNAME record pointing to an external FQDN, which lets pods reach an external service by a cluster-internal name (a minimal sketch follows this list).
No ClusterIP: called a headless Service. The service name resolves directly to the pod IPs, so no cluster IP is needed.
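ExternalName is not demonstrated in this post; here is a minimal sketch (the service name and the external FQDN are illustrative only):

apiVersion: v1
kind: Service
metadata:
  name: external-redis-svc          # hypothetical name
  namespace: default
spec:
  type: ExternalName
  externalName: redis.example.com   # example external FQDN; no selector and no clusterIP needed

Pods inside the cluster can then reach the external service via external-redis-svc.default.svc.cluster.local, which resolves as a CNAME to the external name.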
[root@docker79 controller]# vim svc-demo2.yaml
[root@docker79 controller]# vim deploy-demo.yaml
[root@docker79 controller]# kubectl apply -f deploy-demo.yaml
deployment.apps/nginx-deploy created
[root@docker79 controller]# cat svc-demo2.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    app: nginx
    release: canary
  clusterIP: 10.99.99.99
  type: NodePort
  ports:
  - name: httpport
    port: 80
    targetPort: 80
    nodePort: 30080
[root@docker79 controller]# kubectl apply -f svc-demo2.yaml
service/nginx-svc created
[root@docker79 controller]#
[root@docker79 controller]# kubectl get pod -l release=canary
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-7488bbd64f-7bp5k   1/1     Running   0          1m
nginx-deploy-7488bbd64f-ndqx9   1/1     Running   0          1m
[root@docker79 controller]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        3d
nginx-svc    NodePort    10.99.99.99   <none>        80:30080/TCP   26s
redis-svc    ClusterIP   10.97.97.97   <none>        6379/TCP       12m
[root@docker79 controller]#

The service can now be reached from any host outside the cluster:

yuandeMacBook-Pro:~ yuanjicai$ curl -I http://192.168.20.79:30080
HTTP/1.1 200 OK
Server: nginx/1.15.4
Date: Thu, 27 Sep 2018 04:01:11 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Sep 2018 17:23:56 GMT
Connection: keep-alive
ETag: "5baa6f2c-264"
Accept-Ranges: bytes

[root@docker79 controller]# kubectl delete -f svc-demo2.yaml
service "nginx-svc" deleted
Notes (the sessionAffinity option can be set with the following commands):
kubectl patch svc nginx-svc -p '{"spec":{"sessionAffinity":"ClientIP"}}'
kubectl describe svc nginx-svc
kubectl patch svc nginx-svc -p '{"spec":{"sessionAffinity":"None"}}'
[root@docker79 controller]# vim svc-headless-demo.yaml
[root@docker79 controller]# cat svc-headless-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    app: nginx
    release: canary
  clusterIP: None
  ports:
  - name: httpport
    port: 80
    targetPort: 80
[root@docker79 controller]# kubectl apply -f svc-headless-demo.yaml
service/nginx-svc created
[root@docker79 controller]# kubectl describe svc nginx-svc
Name:              nginx-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx-svc","namespace":"default"},"spec":{"clusterIP":"None","ports":[{"name":...
Selector:          app=nginx,release=canary
Type:              ClusterIP
IP:                None
Port:              httpport  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.28:80,10.244.2.17:80
Session Affinity:  None
Events:            <none>
[root@docker79 controller]#
[root@docker79 controller]# dig -t A nginx-svc.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7_5.1 <<>> -t A nginx-svc.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55045
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx-svc.default.svc.cluster.local. IN A

;; ANSWER SECTION:
nginx-svc.default.svc.cluster.local. 5 IN A 10.244.1.28
nginx-svc.default.svc.cluster.local. 5 IN A 10.244.2.17

;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: 四 9月 27 12:09:25 CST 2018
;; MSG SIZE  rcvd: 166

[root@docker79 controller]#
Notes: a Service with no ClusterIP is called a headless Service. The service name resolves directly to the pod IPs, so no cluster IP is needed.