ReplicaSet is the next-generation Replication Controller. The only difference between a ReplicaSet and a Replication Controller is selector support: a Replication Controller only supports equality-based selectors (env=dev or environment!=qa), while a ReplicaSet also supports the newer set-based selectors (version in (v1.0, v2.0) or env notin (dev, qa)). The official documentation now recommends ReplicaSet over Replication Controller.
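For illustration, a set-based selector on a ReplicaSet could look like the sketch below. The field names (matchExpressions, operator, values) come from the apps/v1 label-selector API; the version and env label keys are just examples, not labels defined elsewhere in this post.

  selector:
    matchExpressions:
    - key: version
      operator: In
      values: ["v1.0", "v2.0"]
    - key: env
      operator: NotIn
      values: ["dev", "qa"]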
Most kubectl commands that support Replication Controllers also support ReplicaSets. One exception is the rolling-update command; if you want rolling-update behaviour, consider using Deployments instead. Moreover, rolling-update is imperative while Deployments are declarative, so we recommend working with Deployments through the rollout command.
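A rough sketch of that declarative workflow, reusing the nginx-deployment example that appears later in this post:

  # change the desired state in the manifest (image tag, replica count, ...), then:
  kubectl apply -f deployment.yaml
  # watch and manage the resulting rollout declaratively:
  kubectl rollout status  deployment/nginx-deployment
  kubectl rollout history deployment/nginx-deployment
  kubectl rollout undo    deployment/nginx-deployment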
Although ReplicaSets can be used on their own, today they are mainly used by Deployments as the mechanism that orchestrates pod creation, deletion, and updates. When you use Deployments you do not have to worry about managing the ReplicaSets they create: Deployments own and manage their ReplicaSets.
A ReplicaSet ensures that a specified number of pod replicas are running at any given time. A Deployment, however, is a higher-level concept that manages ReplicaSets and provides declarative updates to pods, along with many other features. We therefore recommend using Deployments rather than ReplicaSets directly, unless you need custom update orchestration or no updates at all.
In practice this means you may never need to manipulate ReplicaSet objects yourself: use a Deployment instead and define your application in its spec section.
[root@k8s-master mnt]# cat rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: wangyanglinux/myapp:v1
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
[root@k8s-master mnt]#
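Note that with apiVersion: apps/v1 the spec.selector field is required and its matchLabels must match spec.template.metadata.labels (tier: frontend here), otherwise the API server rejects the object. After creation you can check which labels the ReplicaSet selects with, for example:

  kubectl describe rs frontend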
[root@k8s-master mnt]# kubectl create -f rs.yaml
replicaset.apps/frontend created
[root@k8s-master mnt]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
frontend-4xs95   1/1     Running   0          12s
frontend-gd5th   1/1     Running   0          12s
frontend-tn9pn   1/1     Running   0          12s
[root@k8s-master mnt]# kubectl delete pod --all
pod "frontend-4xs95" deleted
pod "frontend-gd5th" deleted
pod "frontend-tn9pn" deleted
[root@k8s-master mnt]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
frontend-gh2w5   1/1     Running   0          3m58s
frontend-rd9pl   1/1     Running   0          3m58s
frontend-xg845   1/1     Running   0          3m58s
[root@k8s-master mnt]# kubectl get pod --show-labels
NAME             READY   STATUS    RESTARTS   AGE     LABELS
frontend-gh2w5   1/1     Running   0          4m38s   tier=frontend
frontend-rd9pl   1/1     Running   0          4m38s   tier=frontend
frontend-xg845   1/1     Running   0          4m38s   tier=frontend
[root@k8s-master mnt]# kubectl get pod --show-labels
NAME             READY   STATUS    RESTARTS   AGE     LABELS
frontend-btqh8   1/1     Running   0          3s      tier=frontend
frontend-gh2w5   1/1     Running   0          7m43s   tier=frontend
frontend-rd9pl   1/1     Running   0          7m43s   tier=frontend1
frontend-xg845   1/1     Running   0          7m43s   tier=frontend
[root@k8s-master mnt]# kubectl delete pod --all
pod "frontend-btqh8" deleted
pod "frontend-gh2w5" deleted
pod "frontend-rd9pl" deleted
pod "frontend-xg845" deleted
[root@k8s-master mnt]# kubectl get pod --show-labels
NAME             READY   STATUS    RESTARTS   AGE   LABELS
frontend-mnmlm   1/1     Running   0          28s   tier=frontend
frontend-x8rcp   1/1     Running   0          27s   tier=frontend
frontend-zqs4n   1/1     Running   0          28s   tier=frontend
[root@k8s-master mnt]# kubectl delete -f rs.yaml
replicaset.apps "frontend" deleted
[root@k8s-master mnt]# kubectl get pod --show-labels
NAME             READY   STATUS        RESTARTS   AGE   LABELS
frontend-mnmlm   0/1     Terminating   0          47s   tier=frontend
frontend-x8rcp   0/1     Terminating   0          46s   tier=frontend
frontend-zqs4n   0/1     Terminating   0          47s   tier=frontend
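The session above includes a relabelling step that is not shown on screen: between the two --show-labels listings the label on frontend-rd9pl changes to tier=frontend1, presumably via something like the following (a hypothetical reconstruction, not part of the captured session):

  kubectl label pod frontend-rd9pl tier=frontend1 --overwrite   # assumed command, not shown above

Because the relabelled pod no longer matches the ReplicaSet's selector (tier=frontend), the controller creates a new pod (frontend-btqh8) to get back to three matching replicas, while the relabelled pod keeps running unmanaged until kubectl delete pod --all removes it.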
A Deployment provides declarative updates for Pods and ReplicaSets (the next-generation Replication Controller).
You only describe the desired target state in the Deployment, and the Deployment controller changes the actual state of the Pods and ReplicaSets to match it. You can define a brand-new Deployment, or create a new Deployment to replace an old one.
A typical use case is walked through below, starting from the following Deployment manifest:
[root@k8s-master mnt]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: wangyanglinux/myapp:v1
        ports:
        - containerPort: 80
[root@k8s-master mnt]#
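The walkthrough creates this Deployment with kubectl apply -f deployment.yaml --record. The --record flag saves the command line in the kubernetes.io/change-cause annotation, which is what later appears in the CHANGE-CAUSE column of kubectl rollout history.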
The session below covers scaling the Deployment up, updating it to a new image, and rolling it back.
[root@k8s-master mnt]# kubectl apply -f deployment.yaml --record
deployment.apps/nginx-deployment created
[root@k8s-master mnt]# kubectl get pod
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-64ddb75745-fkfxl   0/1     ContainerCreating   0          1s
nginx-deployment-64ddb75745-slcwp   0/1     ContainerCreating   0          1s
nginx-deployment-64ddb75745-vwnqw   0/1     ContainerCreating   0          1s
[root@k8s-master mnt]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-64ddb75745-fkfxl   1/1     Running   0          55s
nginx-deployment-64ddb75745-slcwp   1/1     Running   0          55s
nginx-deployment-64ddb75745-vwnqw   1/1     Running   0          55s
[root@k8s-master mnt]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-64ddb75745   3         3         3       65s
[root@k8s-master mnt]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-64ddb75745-fkfxl   1/1     Running   0          2m45s   10.244.2.6   k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-slcwp   1/1     Running   0          2m45s   10.244.1.9   k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-vwnqw   1/1     Running   0          2m45s   10.244.1.8   k8s-node02   <none>           <none>
[root@k8s-master mnt]# curl 10.244.2.6
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master mnt]# kubectl scale deployment nginx-deployment --replicas 10
deployment.apps/nginx-deployment scaled
[root@k8s-master mnt]# kubectl get pod -o wide
NAME                                READY   STATUS              RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-64ddb75745-9c55m   0/1     ContainerCreating   0          3s      <none>       k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-9gs9v   0/1     ContainerCreating   0          3s      <none>       k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-fkfxl   1/1     Running             0          4m10s   10.244.2.6   k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-nbl92   0/1     ContainerCreating   0          3s      <none>       k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-p4f2z   0/1     ContainerCreating   0          3s      <none>       k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-qtmhl   0/1     ContainerCreating   0          3s      <none>       k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-rgcsl   0/1     ContainerCreating   0          3s      <none>       k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-slcwp   1/1     Running             0          4m10s   10.244.1.9   k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-vwnqw   1/1     Running             0          4m10s   10.244.1.8   k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-zzmsn   0/1     ContainerCreating   0          3s      <none>       k8s-node01   <none>           <none>
[root@k8s-master mnt]# kubectl describe deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Fri, 20 Dec 2019 15:10:03 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=deployment.y...
                        kubernetes.io/change-cause: kubectl apply --filename=deployment.yaml --record=true
Selector:               app=nginx
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        wangyanglinux/myapp:v1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-64ddb75745 (10/10 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  11m    deployment-controller  Scaled up replica set nginx-deployment-64ddb75745 to 3
  Normal  ScalingReplicaSet  7m28s  deployment-controller  Scaled up replica set nginx-deployment-64ddb75745 to 10
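The describe output shows the default rolling-update behaviour: during an update at most 25% of the desired pods may be unavailable and at most 25% extra pods may be created. If you wanted to set these knobs explicitly, the relevant fields in deployment.yaml would look roughly like this sketch (the values shown are the defaults):

  spec:
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 25%
        maxSurge: 25%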
[root@k8s-master mnt]# kubectl rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=deployment.yaml --record=true
[root@k8s-master mnt]# kubectl set image deployment/nginx-deployment nginx=wangyanglinux/myapp:v2
deployment.apps/nginx-deployment image updated
[root@k8s-master mnt]# kubectl get pod -o wide
NAME                                READY   STATUS        RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-64ddb75745-rgcsl   0/1     Terminating   0          30m   10.244.1.12   k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-slcwp   0/1     Terminating   0          35m   10.244.1.9    k8s-node02   <none>           <none>
nginx-deployment-6bd7755b5b-2qwdf   1/1     Running       0          23s   10.244.2.12   k8s-node01   <none>           <none>
nginx-deployment-6bd7755b5b-2tpzn   1/1     Running       0          23s   10.244.2.11   k8s-node01   <none>           <none>
nginx-deployment-6bd7755b5b-5d2fn   1/1     Running       0          23s   10.244.1.14   k8s-node02   <none>           <none>
nginx-deployment-6bd7755b5b-87vdh   1/1     Running       0          23s   10.244.2.13   k8s-node01   <none>           <none>
nginx-deployment-6bd7755b5b-fl56p   1/1     Running       0          14s   10.244.1.16   k8s-node02   <none>           <none>
nginx-deployment-6bd7755b5b-gpz9q   1/1     Running       0          12s   10.244.1.17   k8s-node02   <none>           <none>
nginx-deployment-6bd7755b5b-hhfbn   1/1     Running       0          11s   10.244.2.15   k8s-node01   <none>           <none>
nginx-deployment-6bd7755b5b-k5rm7   1/1     Running       0          23s   10.244.1.13   k8s-node02   <none>           <none>
nginx-deployment-6bd7755b5b-skh7w   1/1     Running       0          14s   10.244.1.15   k8s-node02   <none>           <none>
nginx-deployment-6bd7755b5b-t494q   1/1     Running       0          13s   10.244.2.14   k8s-node01   <none>           <none>
[root@k8s-master mnt]# curl 10.244.1.14
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master mnt]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-64ddb75745   0         0         0       35m
nginx-deployment-6bd7755b5b   10        10        10      64s
[root@k8s-master mnt]# kubectl rollout undo deployment/nginx-deployment
deployment.apps/nginx-deployment rolled back
[root@k8s-master mnt]# kubectl get pod
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-64ddb75745-48d5d   1/1     Running             0          11s
nginx-deployment-64ddb75745-4ctzd   1/1     Running             0          13s
nginx-deployment-64ddb75745-8g7f9   1/1     Running             0          13s
nginx-deployment-64ddb75745-bnwlj   1/1     Running             0          9s
nginx-deployment-64ddb75745-gq29d   1/1     Running             0          13s
nginx-deployment-64ddb75745-rp52v   1/1     Running             0          13s
nginx-deployment-64ddb75745-sxch9   1/1     Running             0          10s
nginx-deployment-64ddb75745-tt8tp   0/1     ContainerCreating   0          5s
nginx-deployment-64ddb75745-vtx8w   1/1     Running             0          13s
nginx-deployment-64ddb75745-zr8c7   1/1     Running             0          7s
nginx-deployment-6bd7755b5b-2qwdf   1/1     Terminating         0          3m14s
nginx-deployment-6bd7755b5b-2tpzn   1/1     Terminating         0          3m14s
nginx-deployment-6bd7755b5b-87vdh   1/1     Terminating         0          3m14s
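kubectl set image changes the pod template, so the Deployment creates a new ReplicaSet (nginx-deployment-6bd7755b5b) and progressively shifts pods onto it; kubectl rollout undo then reverts to the previous revision. To return to a specific revision rather than just the previous one, there is also a --to-revision flag, for example:

  kubectl rollout undo deployment/nginx-deployment --to-revision=1

The listings that follow confirm the cluster is back on the v1 ReplicaSet.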
[root@k8s-master mnt]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-64ddb75745   10        10        10      38m
nginx-deployment-6bd7755b5b   0         0         0       3m29s
[root@k8s-master mnt]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-64ddb75745-48d5d   1/1     Running   0          43s   10.244.2.18   k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-4ctzd   1/1     Running   0          45s   10.244.1.18   k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-8g7f9   1/1     Running   0          45s   10.244.1.19   k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-bnwlj   1/1     Running   0          41s   10.244.1.21   k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-gq29d   1/1     Running   0          45s   10.244.2.17   k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-rp52v   1/1     Running   0          45s   10.244.1.20   k8s-node02   <none>           <none>
nginx-deployment-64ddb75745-sxch9   1/1     Running   0          42s   10.244.2.19   k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-tt8tp   1/1     Running   0          37s   10.244.2.20   k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-vtx8w   1/1     Running   0          45s   10.244.2.16   k8s-node01   <none>           <none>
nginx-deployment-64ddb75745-zr8c7   1/1     Running   0          39s   10.244.1.22   k8s-node02   <none>           <none>
[root@k8s-master mnt]# curl 10.244.2.19
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master mnt]# kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
[root@k8s-master mnt]# kubectl rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=deployment.yaml --record=true
3         kubectl apply --filename=deployment.yaml --record=true
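Note how the history now lists revisions 2 and 3 but no revision 1: rolling back reuses the old ReplicaSet's pod template and records it as a new revision, so the revision that was rolled back to disappears from the list. To inspect what a particular revision contained, you can ask for it explicitly, for example:

  kubectl rollout history deployment/nginx-deployment --revision=3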