A pod resource created from a YAML manifest will not be rebuilt after we delete it manually, because it is an autonomous pod, not one managed by a controller. The pods we started earlier with kubectl run were managed by a controller, so after a delete the controller rebuilds an identical new pod; a controller strictly keeps the number of pods it manages at the count the user expects.
Also, deleting controller-managed pods directly is not recommended; instead, change the pod count on the controller to reach the state we want.
The main job of a pod controller is to act as an intermediate layer that manages pods for us and keeps every pod resource in the state we expect. For example, when a container inside a pod fails, the controller tries to restart it; if the restarts keep failing, it re-orchestrates and re-deploys the pod based on its internal policy.
If the number of pods drops below the user's target, new pod resources are created; surplus pods are terminated.
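A quick way to tell an autonomous pod from a controller-managed one is the ownerReferences field in the pod's metadata (a small sketch; the pod name is borrowed from the example later in this article):

kubectl get pod myapp-7ttch -o jsonpath='{.metadata.ownerReferences[0].kind}'
# prints something like "ReplicaSet" for a controller-managed pod;
# an autonomous pod has no ownerReferences entry at all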
"Controller" is a generic term; the real controller resources come in several types:
1: ReplicaSet: creates the specified number of pod replicas for the user and makes sure the replica count always stays at what the user expects. ReplicaSet also supports scaling up and down, and it has replaced ReplicationController.
A ReplicaSet has three core components:
1: the pod replica count defined by the user
2: the label selector
3: the pod template
Powerful as ReplicaSet is, we should not use it directly, and Kubernetes itself recommends that users not use ReplicaSet directly but turn to Deployment instead.
Deployment (also a controller, but it does not replace ReplicaSet to control pods directly; it controls ReplicaSets, and the ReplicaSets in turn control the pods, so Deployment is built on top of ReplicaSet rather than on top of pods. Besides the capabilities that ReplicaSet itself brings, Deployment adds powerful features such as rolling updates and rollback, and it supports declarative configuration, which lets us define resources by the declared logic at creation time and conveniently change the desired target state recorded on the apiserver at any time.)
Deployment is also one of the best controllers available today.
Deployment is meant for managing stateless applications, the cases where we need to care about the group rather than the individual; that is exactly where Deployment is needed most.
How controllers manage pods:
1: The number of pods can be larger than the number of nodes; pods and nodes are not matched one to one. Pods beyond the node count are distributed across different nodes by policy, so one node may end up with 5 and another with 3. For some services, though, running several identical pods on one node is completely unnecessary, for example ELK log collection or monitoring agents: one pod per node is enough to collect the logs produced by all the pods on that node, and running more just burns resources.
Deployment cannot handle this case well. We want the log-collection pod to be unique on every node, and when that pod dies we want it rebuilt precisely on the node where it died; that calls for another controller, DaemonSet.
DaemonSet:
Ensures that every node in the cluster runs exactly one copy of a specific pod. This not only avoids the problem above, it also means that when a new node joins the cluster, one copy of that pod is started on it, so the number of pods managed by this controller depends directly on the size of the cluster. The pod template and the label selector are still required, of course.
Job
A Job is used for tasks that need to run once at a planned point in time and exit when they finish, with no need to stay running in the background, for example a database backup, which should exit as soon as it completes. There are special cases, though: if, say, MySQL runs out of connections or goes down, the pod controlled by the Job still has to complete the assigned task before it can end; if it exits midway it is recreated until the task is done. Job is suited to one-off tasks.
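A minimal sketch of such a one-off Job (the name, image and command are illustrative, not taken from the article's examples):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-backup
spec:
  backoffLimit: 4              # recreate the pod a few times if the task fails midway
  template:
    spec:
      restartPolicy: Never     # a Job pod runs to completion instead of being kept alive
      containers:
      - name: backup
        image: busybox
        command: ["sh", "-c", "echo backing up... && sleep 10"]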
CronJob:
CronJob provides functionality similar to Job, but it is suited to periodic scheduled tasks. With periodic tasks we also have to think about what should happen when the next scheduled time arrives and the previous run has not finished yet.
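A minimal CronJob sketch; the concurrencyPolicy field is one way to answer the overlap question above (Forbid skips a run while the previous one is still going). The schedule, name and image are illustrative:

apiVersion: batch/v1beta1      # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"        # every day at 02:00
  concurrencyPolicy: Forbid    # do not start a new run while the previous one is still running
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: busybox
            command: ["sh", "-c", "echo backup && sleep 30"]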
StatefulSet
StatefulSet is suited to managing stateful applications, where we care much more about the individual. For example, if we build a Redis cluster and one Redis instance in it dies, a newly started pod cannot simply take its place, because the data the old Redis held may have been lost together with it.
StatefulSet manages every pod individually: each pod has its own identity and its own data set, and after a failure the replacement pod needs a lot of initialization before it can join. For stateful applications that carry data, rebuilding after a failure is painful, because rebuilding Redis and configuring its replication is completely different from doing the same for MySQL. That means these steps have to be written as scripts inside the StatefulSet template, which in turn requires a lot of manual validation, because once the controller loads the template everything runs automatically, and if it is done badly the data may simply be lost.
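A minimal StatefulSet skeleton just to show the shape (the headless Service name, image and storage size are assumptions for illustration; the real initialization logic discussed above is not included):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis-headless      # a headless Service gives every pod a stable network identity
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:            # each pod gets its own PersistentVolumeClaim, i.e. its own data set
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi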
Whether on Kubernetes or deployed directly, every stateful application faces this difficulty: after a failure, how do we make sure the data is not lost and a new instance can quickly take over and keep working with the previous data? Even if this has been solved for a directly deployed application, porting it onto Kubernetes is yet another story.
Kubernetes also used to support a special resource type, TPR (ThirdPartyResource), which was replaced by CRD (CustomResourceDefinition) after version 1.8. Its main purpose is custom resources: the management of a target resource can be wrapped into its own management logic, and that logic is then poured into an Operator. The difficulty is high, though, so not many applications are supported in this form so far.
To make things easier to use, Kubernetes later also got a tool called Helm, which works much like yum on CentOS: we only define things such as where the storage volume lives and how much memory to use, and then install directly. Helm already supports many mainstream applications, but those charts often do not quite fit our own environment, which is why Helm is not used by that many people either.
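For a feel of the workflow, something like the following (Helm 3 syntax; the repository, chart and value key are common examples, not taken from this article):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# resource settings such as storage size are overridden with --set or a values file;
# the exact key names depend on the chart
helm install my-redis bitnami/redis --set master.persistence.size=8Gi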
We can look at the available fields with kubectl explain rc (rc is the short name for ReplicationController; ReplicaSet, short name rs, exposes essentially the same spec fields):
[root@www kubeadm]# kubectl explain rc 能夠看到一級字段也咱們看 KIND: ReplicationController VERSION: v1 DESCRIPTION: ReplicationController represents the configuration of a replication controller. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources kind <string> Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds metadata <Object> If the Labels of a ReplicationController are empty, they are defaulted to be the same as the Pod(s) that the replication controller manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata spec <Object> Spec defines the specification of the desired behavior of the replication controller. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status status <Object> Status is the most recently observed status of the replication controller. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status spec: [root@www kubeadm]# kubectl explain rc.spec KIND: ReplicationController VERSION: v1 RESOURCE: spec <Object> DESCRIPTION: Spec defines the specification of the desired behavior of the replication controller. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status ReplicationControllerSpec is the specification of a replication controller. FIELDS: minReadySeconds <integer> Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) replicas <integer> Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller selector <map[string]string> Selector is a label query over pods that should match the Replicas count. If Selector is empty, it is defaulted to the labels present on the Pod template. Label keys and values that must match in order to be controlled by this replication controller, if empty defaulted to labels on Pod template. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template <Object> Template is the object that describes the pod that will be created if insufficient replicas are detected. This takes precedence over a TemplateRef. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template [root@www kubeadm]#
The most important things to define in a ReplicaSet's spec are:
1: the replica count,
2: the label selector,
3: the pod template
Example:
apiVersion: apps/v1
kind: ReplicaSet                 # the resource type used is ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2                    # create two pod resources
  selector:                      # which label selector to use
    matchLabels:                 # matchLabels takes one or more labels; multiple labels are combined with AND logic
      app: myapp                 # several labels may be listed
      release: public-survey     # declaring two labels means a pod must carry both to be selected
                                 # (label values may not contain spaces, so "public-survey" is used here)
  template:                      # the pod template
    metadata:                    # the template has two fields, metadata and spec, used exactly as in a manifest of kind Pod
      name: myapp-pod
      labels:                    # the labels here must include every label in matchLabels above; extra labels are fine,
                                 # missing ones are not, otherwise the controller keeps creating pods that never satisfy
                                 # the selector and the cluster can end up flooded with pods
        app: myapp
        release: public-survey
        time: current
    spec:
      containers:
      - name: myapp-test
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@www TestYaml]# cat pp.yaml apiVersion: apps/v1 kind: ReplicaSet metadata: name: myapp namespace: default spec: replicas: 2 selector: matchLabels: app: myapp template: metadata: name: myapp-pod labels: app: myapp spec: containers: - name: myapp-containers image: ikubernetes/myapp:v1 [root@www TestYaml]# kubectl get pods NAME READY STATUS RESTARTS AGE myapp-7ttch 1/1 Running 0 3m31s myapp-8w2f2 1/1 Running 0 3m31s 咱們看到咱們在yaml文件裏面定義的名字控制器會自動的生成在後面跟上隨機串 [root@www TestYaml]# kubectl get rs NAME DESIRED CURRENT READY AGE myapp 2 2 2 3m35s [root@www TestYaml]# kubectl describe pods myapp-7ttch Name: myapp-7ttch Namespace: default Priority: 0 PriorityClassName: <none> Node: www.kubernetes.node1.com/192.168.181.140 Start Time: Sun, 07 Jul 2019 16:07:42 +0800 Labels: app=myapp Annotations: <none> Status: Running IP: 10.244.1.27 Controlled By: ReplicaSet/myapp Containers: myapp-containers: Container ID: docker://17288f7aed7f62a983c35cabfd061a22f94c8e315da475fcfe4b276d49b22e33 Image: ikubernetes/myapp:v1 Image ID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513 Port: <none> Host Port: <none> State: Running Started: Sun, 07 Jul 2019 16:07:45 +0800 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5ddf (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-h5ddf: Type: Secret (a volume populated by a Secret) SecretName: default-token-h5ddf Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 16m default-scheduler Successfully assigned default/myapp-7ttch to www.kubernetes.node1.com Normal Pulled 16m kubelet, www.kubernetes.node1.com Container image "ikubernetes/myapp:v1" already present on machine Normal Created 16m kubelet, www.kubernetes.node1.com Created container myapp-containers Normal Started 16m kubelet, www.kubernetes.node1.com Started container myapp-containers [root@www TestYaml]# kubectl delete pods myapp-7ttch 當咱們刪除7ttch這個pods的時候,發現控制器立馬幫忙建立了一個n8lt4後綴的pods pod "myapp-7ttch" deleted [root@www ~]# kubectl get pods -w NAME READY STATUS RESTARTS AGE myapp-7ttch 1/1 Running 0 18m myapp-8w2f2 1/1 Running 0 18m myapp-7ttch 1/1 Terminating 0 18m myapp-n8lt4 0/1 Pending 0 0s myapp-n8lt4 0/1 Pending 0 0s myapp-n8lt4 0/1 ContainerCreating 0 0s myapp-7ttch 0/1 Terminating 0 18m myapp-n8lt4 1/1 Running 0 2s myapp-7ttch 0/1 Terminating 0 18m myapp-7ttch 0/1 Terminating 0 18m 若是咱們建立一個新的pod,把標籤設置成myapp同樣,這個控制器或怎麼去控制副本的數量 [root@www ~]# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS myapp-8w2f2 1/1 Running 0 26m app=myapp myapp-n8lt4 1/1 Running 0 7m53s app=myapp [root@www ~]# [root@www TestYaml]# kubectl create -f pod-test.yaml pod/myapp created [root@www TestYaml]# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS myapp 0/1 ContainerCreating 0 2s <none> myapp-8w2f2 1/1 Running 1 41m app=myapp myapp-n8lt4 1/1 Running 0 22m app=myapp,time=july mypod-g7rgq 1/1 Running 0 10m app=mypod,time=july mypod-z86bg 1/1 Running 0 10m app=mypod,time=july [root@www TestYaml]# kubectl label pods myapp app=myapp 給新建的pod打上myapp的標籤 pod/myapp labeled [root@www TestYaml]# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS myapp 0/1 Terminating 1 53s app=myapp myapp-8w2f2 
1/1 Running 1 42m app=myapp myapp-n8lt4 1/1 Running 0 23m app=myapp,time=july mypod-g7rgq 1/1 Running 0 11m app=mypod,time=july mypod-z86bg 1/1 Running 0 11m app=mypod,time=july [root@www TestYaml]# kubectl get pods --show-labels NAME READY STATUS RESTARTS AGE LABELS myapp-8w2f2 1/1 Running 1 42m app=myapp 能夠發現只要標籤和控制器定義的pod標籤一致了可能就會被誤殺掉 myapp-n8lt4 1/1 Running 0 23m app=myapp,time=july mypod-g7rgq 1/1 Running 0 11m app=mypod,time=july mypod-z86bg 1/1 Running 0 11m app=mypod,time=july
One of ReplicaSet's traits is that it cares only about the group, never the individual, and it controls pod resources strictly by the pod count and labels defined inside it. So when defining a ReplicaSet controller, make the selector conditions specific enough to avoid the situation above, where an unrelated pod with matching labels gets adopted and an existing pod gets killed off by mistake.
When using a ReplicaSet to create a group of pods, keep in mind that once a pod dies, the pod the controller starts in its place will certainly get a different address. That is why a Service is put in front: give the Service labels consistent with the ReplicaSet's and let it relate to the backend pods through the label selector, so a change of address does not interrupt access.
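A minimal sketch of such a Service, assuming the pods carry the app: myapp label used in the example above:

apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp          # same label as the ReplicaSet's pods, so rebuilt pods are picked up automatically
  ports:
  - name: http
    port: 80
    targetPort: 80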
Manually scaling a ReplicaSet up and down is also very simple.
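Besides editing the template as shown below, the same thing can be done with a single command (a sketch, using the myapp ReplicaSet from the example):

kubectl scale rs myapp --replicas=5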
[root@www TestYaml]# kubectl edit rs myapp     (kubectl edit opens the myapp template; changing the replicas value is enough)
.....
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
........
replicaset.extensions/myapp edited
[root@www TestYaml]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-6d4nd   1/1     Running   0          10s
myapp-8w2f2   1/1     Running   1          73m
myapp-c85dt   1/1     Running   0          10s
myapp-n8lt4   1/1     Running   0          54m
myapp-prdmq   1/1     Running   0          10s
mypod-g7rgq   1/1     Running   0          42m
mypod-z86bg   1/1     Running   0          42m
[root@www TestYaml]# curl 10.244.2.8 Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a> [root@www TestYaml]# kubectl edit rs myapp ....... spec: containers: - image: ikubernetes/myapp:v2 升級爲v2版本 imagePullPolicy: IfNotPresent ....... replicaset.extensions/myapp edited NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR myapp 3 3 3 79m myapp-containers ikubernetes/myapp:v2 app=myapp 能夠看到鏡像版本已是v2版本 [root@www TestYaml]# curl 10.244.2.8 Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a> 可是咱們訪問結果仍是v1的版本,這個是由於pods一直處於運行中,並無被重建,只有重建的pod資源纔會是v2版本 [root@www TestYaml]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES myapp-6d4nd 1/1 Running 0 10m 10.244.1.30 www.kubernetes.node1.com <none> <none> myapp-8w2f2 1/1 Running 1 83m 10.244.2.8 www.kubernetes.node2.com <none> <none> myapp-n8lt4 1/1 Running 0 64m 10.244.1.28 www.kubernetes.node1.com <none> <none> mypod-g7rgq 1/1 Running 0 52m 10.244.1.29 www.kubernetes.node1.com <none> <none> mypod-z86bg 1/1 Running 0 52m 10.244.2.9 www.kubernetes.node2.com <none> <none> [root@www TestYaml]# curl 10.244.1.30 咱們訪問myapp-6d4nd版本仍是v1 Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a> [root@www TestYaml]# kubectl delete pods myapp-6d4nd 刪除這個pods資源讓其重構 pod "myapp-6d4nd" deleted [root@www TestYaml]# kubectl get pods -o wide 重構以後的pods是myapp-bsdlk NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES myapp-8w2f2 1/1 Running 1 83m 10.244.2.8 www.kubernetes.node2.com <none> <none> myapp-bsdlk 1/1 Running 0 17s 10.244.2.16 www.kubernetes.node2.com <none> <none> myapp-n8lt4 1/1 Running 0 65m 10.244.1.28 www.kubernetes.node1.com <none> <none> mypod-g7rgq 1/1 Running 0 52m 10.244.1.29 www.kubernetes.node1.com <none> <none> mypod-z86bg 1/1 Running 0 52m 10.244.2.9 www.kubernetes.node2.com <none> <none> [root@www TestYaml]# curl 10.244.2.16 訪問對應的地址,發現如今已是v2版本 Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a> [root@www TestYaml]# curl 10.244.2.8 尚未被重構的pods仍是屬於v1版本 Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@www TestYaml]# kubectl delete rs myapp mypod replicaset.extensions "myapp" deleted replicaset.extensions "mypod" deleted
The benefit of this approach is that the version update has a smooth transition. I keep a buffer period, and once the users hitting the v2 version report no problems, I quickly update the remaining v1 pods and then roll out v2 via a script or similar. This is what a canary release is.
As shown in the figure (canary release):
If the pods are important ones, a canary may not be a good way to update. We can use a blue-green release instead: create a new set of pod resources with an identical template and a similar label selector. In that case the access address has to be considered, so the Service needs to be able to relate to both the old and the new pod resources.
We can also use a Deployment that relates to several Services on the backend, each Service relating to its own pods. For example, with 3 pod replicas, we shut one pod down and at the same time create a new v2 pod that belongs to a new Service resource; part of the user requests are then directed to the v2 version behind the new Service, after which we stop another v1 pod and create another v2 pod, and so on until all pod resources have been updated.
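One common way to sketch the blue-green idea with plain manifests is two sets of pods plus one Service whose selector is switched at cut-over time (the version label and names here are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: myapp-bluegreen
spec:
  selector:
    app: myapp
    version: blue        # change to "green" once the new version has been verified
  ports:
  - port: 80
    targetPort: 80

Switching traffic is then a single patch of the selector, for example:

kubectl patch service myapp-bluegreen -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'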
By default a Deployment keeps at most 10 old ReplicaSet revisions under its control (so that it can roll back); this number can of course be adjusted manually.
Deployment also provides declarative update configuration. Instead of creating pods with create, we use apply, a declarative update; resources created this way no longer need edit to change the pod template, because with patch we can modify the resource directly from the pure command line.
When updating a Deployment we can also control the update pace and update logic.
Suppose the ReplicaSet on the servers currently controls 5 pods and those 5 only just satisfy the user traffic. The delete-one-then-rebuild-one method above is then not advisable, because deletion and creation take time, easily long enough for the extra load to overwhelm the remaining pods and bring them down.
At that point we need another approach: we can specify that during a rolling update a few extra pods may be started temporarily. This is fully under our control: how many pods above the defined replica count are allowed at most, and how many below it at least. If we allow at most 1 extra, the update starts one new pod first, then deletes an old one, then starts another new one, then deletes another old one.
If there are many pods and updating one at a time is too slow, several can be started at once, for example create 5 new ones and delete 5 old ones; that is how we control the update granularity.
The "at most how many fewer than the defined replica count" form works the other way round: delete an old pod first, then create a new one; subtract first, add later.
What if we allow at most one extra and at least one fewer? With a base of 5, the minimum is 4 and the maximum is 6, so the update can add 1 and delete 2, then add 2 and delete 2, and so on.
With a base of 5 where none may be missing and up to 5 extra are allowed, the update simply adds 5 and then deletes 5; that is effectively a blue-green deployment.
The default among these update methods is the rolling update.
All of these update methods must take readiness and liveness status into account, so that the old pod is not deleted while the newly added one is not yet ready.
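Expressed in a Deployment manifest, the "at most one extra, none unavailable" pace described above looks roughly like this (a sketch showing only the relevant spec fragment):

spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one pod above the desired count during the update
      maxUnavailable: 0    # never drop below the desired count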
We have gone through many update approaches that rely on Deployment; these are the main fields a Deployment uses for them:
[root@www TestYaml]# kubectl explain deploy(Deployment的簡寫) KIND: Deployment VERSION: extensions/v1beta1 DESCRIPTION: DEPRECATED - This group version of Deployment is deprecated by apps/v1beta2/Deployment. See the release notes for more information. Deployment enables declarative updates for Pods and ReplicaSets. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources kind <string> Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds metadata <Object> Standard object metadata. spec <Object> Specification of the desired behavior of the Deployment. status <Object> Most recently observed status of the Deployment. 能夠看到所包含的以及字段名稱和ReplicaSet同樣,並且注意這個VERSION: extensions/v1beta1羣組是特殊的,因爲k8s提供的文檔是落後於實際的版本信息的,咱們能夠看到如今已經挪動到另一個羣組了 apps/v1beta2/Deployment屬於apps羣組了 [root@www TestYaml]# kubectl explain deploy.spec spec字段的內容和ReplicaSet區別又不大。 KIND: Deployment VERSION: extensions/v1beta1 RESOURCE: spec <Object> DESCRIPTION: Specification of the desired behavior of the Deployment. DeploymentSpec is the specification of the desired behavior of the Deployment. FIELDS: minReadySeconds <integer> Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused <boolean> Indicates that the deployment is paused and will not be processed by the deployment controller. progressDeadlineSeconds <integer> The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. This is set to the max value of int32 (i.e. 2147483647) by default, which means "no deadline". replicas <integer> Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. revisionHistoryLimit <integer> The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. This is set to the max value of int32 (i.e. 2147483647) by default, which means "retaining all old RelicaSets". rollbackTo <Object> DEPRECATED. The config this deployment is rolling back to. Will be cleared after rollback is done. selector <Object> Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. strategy <Object> The deployment strategy to use to replace existing pods with new ones. template <Object> -required- Template describes the pods that will be created. 除了部分字段和ReplicaSet同樣以外,還多了幾個重要的字段,strategy(定義更新策略) strategy支持的更新策略: [root@www TestYaml]# kubectl explain deploy.spec.strategy KIND: Deployment VERSION: extensions/v1beta1 RESOURCE: strategy <Object> DESCRIPTION: The deployment strategy to use to replace existing pods with new ones. 
DeploymentStrategy describes how to replace existing pods with new ones. FIELDS: rollingUpdate <Object> Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. type <string> Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. 1:Recreate(重建式更新,刪1建1的策略,此類型rollingUpdate對其是無效的) 2:RollingUpdate(滾動更新,若是type的更新類型是RollingUpdate,那麼還可使用上面的rollingUpdate來定義) rollingUpdate(主要功能就是來定義更新粒度的) [root@www TestYaml]# kubectl explain deploy.spec.strategy.rollingUpdate KIND: Deployment VERSION: extensions/v1beta1 RESOURCE: rollingUpdate <Object> DESCRIPTION: Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. Spec to control the desired behavior of rolling update. FIELDS: maxSurge (對應的更新過程當中,最多能超出以前定義的目標副本數有幾個) <string> The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. By default, a value of 1 is used. Example: when this is set to 30%, the new RC can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new RC can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. maxSurge有兩種取值方式,一種是 Value can be an absolute number (ex: 5)直接指定數量,還有一種是a percentage of desired pods (ex: 10%).指定百分比 maxUnavailable (定義最多有幾個不可用) <string> The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. By default, a fixed value of 1 is used. Example: when this is set to 30%, the old RC can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old RC can be scaled down further, followed by scaling up the new RC, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. 若果這兩個字段都設置爲0,那等於怎麼更新都更新不了,因此這兩個字段只能有一個爲0,另一個爲指定數字 revisionHistoryLimit(表明咱們滾動更新以後,最多能保留幾個歷史版本,方便咱們回滾) [root@www TestYaml]# kubectl explain deploy.spec.revisionHistoryLimit KIND: Deployment VERSION: extensions/v1beta1 FIELD: revisionHistoryLimit <integer> DESCRIPTION: The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. This is set to the max value of int32 (i.e. 2147483647) by default, which means "retaining all old RelicaSets". 默認是10個 paused(暫停,當咱們滾動更新以後,若是不想當即啓動,就能夠經過paused來控制暫停一下子,默認都是不暫停的) [root@www TestYaml]# kubectl explain deploy.spec.paused KIND: Deployment VERSION: extensions/v1beta1 FIELD: paused <boolean> DESCRIPTION: Indicates that the deployment is paused and will not be processed by the deployment controller. template(Deployment會控制ReplicaSet自動來建立pods) [root@www TestYaml]# kubectl explain deploy.spec.template KIND: Deployment VERSION: extensions/v1beta1 RESOURCE: template <Object> DESCRIPTION: Template describes the pods that will be created. PodTemplateSpec describes the data a pod should have when created from a template FIELDS: metadata <Object> Standard object's metadata. 
More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata spec <Object> Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
[root@www TestYaml]# cat deploy.test.yaml apiVersion: apps/v1 kind: Deployment metadata: name: mydeploy namespace: default spec: replicas: 2 selector: matchLabels: app: mydeploy release: Internal-measurement template: metadata: labels: app: mydeploy release: Internal-measurement spec: containers: - name: myapp-containers image: ikubernetes/myapp:v1 [root@www TestYaml]# kubectl apply -f deploy.test.yaml 這個時候不是使用create來建立了而是使用apply聲明的方式來建立pods資源 deployment.apps/mydeploy created [root@www TestYaml]# kubectl get deploy NAME READY UP-TO-DATE AVAILABLE AGE mydeploy 2/2 2 2 2m [root@www TestYaml]# kubectl get pods NAME READY STATUS RESTARTS AGE mydeploy-74b7786d9b-kq88g 1/1 Running 0 2m4s mydeploy-74b7786d9b-mp2mb 1/1 Running 0 2m4s [root@www TestYaml]# kubectl get rs 能夠看到咱們建立deployment的時候自動幫忙建立了rs pod資源,並且能夠看到命名方式就知道deployment,rs和pods之間的關係了 NAME DESIRED CURRENT READY AGE mydeploy-74b7786d9b 2 2 2 2m40s [root@www TestYaml]# deployment的名字是mydeploy,rs的名字是mydeploy-74b7786d9b(注意這個隨機數值串,它是模板的hash值),pods的名字是mydeploy-74b7786d9b-kq88g 因而可知rs和pods資源是由deployment控制自動去建立的
deployment擴縮容不一樣於rs的擴縮容,咱們直接經過修yaml模板,而後經過apply聲明就能夠達到擴縮容的機制。 [root@www TestYaml]# cat deploy.test.yaml apiVersion: apps/v1 kind: Deployment metadata: name: mydeploy namespace: default spec: replicas: 3 直接加到三個 selector: matchLabels: app: mydeploy release: Internal-measurement template: metadata: labels: app: mydeploy release: Internal-measurement spec: containers: - name: myapp-containers image: ikubernetes/myapp:v1 [root@www TestYaml]# kubectl get pods NAME READY STATUS RESTARTS AGE mydeploy-74b7786d9b-4bcln 1/1 Running 0 7s 能夠看到直接加了一個新的pods資源 mydeploy-74b7786d9b-kq88g 1/1 Running 0 13m mydeploy-74b7786d9b-mp2mb 1/1 Running 0 13m [root@www TestYaml]# kubectl get deploy NAME READY UP-TO-DATE AVAILABLE AGE mydeploy 3/3 3 3 14m [root@www TestYaml]# kubectl get rs NAME DESIRED CURRENT READY AGE mydeploy-74b7786d9b 3 3 3 14m deployment和rs的狀態數量也隨之更新 咱們改變模板以後,使用apply聲明資源變化狀況,這個變化直接回存儲到etcd或者apiservice裏面,而後通知下游節點作出相應的改變 [root@www TestYaml]# kubectl describe deploy mydeploy Name: mydeploy Namespace: default CreationTimestamp: Sun, 07 Jul 2019 21:31:01 +0800 Labels: <none> Annotations: deployment.kubernetes.io/revision: 1 咱們每一次的變化都會存在Annotations裏面,並且是自動維護的 kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"mydeploy","namespace":"default"},"spec":{"replicas":3,"se... Selector: app=mydeploy,release=Internal-measurement Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable StrategyType: RollingUpdate 默認的更新策略就是滾動更新 MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge 這裏的最大和最小都是25% Pod Template: Labels: app=mydeploy release=Internal-measurement Containers: myapp-containers: Image: ikubernetes/myapp:v1 Port: <none> Host Port: <none> Environment: <none> Mounts: <none> Volumes: <none> Conditions: Type Status Reason ---- ------ ------ Progressing True NewReplicaSetAvailable Available True MinimumReplicasAvailable OldReplicaSets: <none> NewReplicaSet: mydeploy-74b7786d9b (3/3 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 17m deployment-controller Scaled up replica set mydeploy-74b7786d9b to 2 Normal ScalingReplicaSet 3m42s deployment-controller Scaled up replica set mydeploy-74b7786d9b to 3 對於deployment的更新也很簡單,若是是單純的更新鏡像資源能夠直接使用set image參數來更新,也能夠直接修改配置文件的形式來更新 [root@www TestYaml]# cat deploy.test.yaml ....... 
spec: containers: - name: myapp-containers image: ikubernetes/myapp:v2 升級到v2版本 [root@www TestYaml]# kubectl apply -f deploy.test.yaml deployment.apps/mydeploy configured [root@www ~]# kubectl get pods -w NAME READY STATUS RESTARTS AGE mydeploy-74b7786d9b-8jjvv 1/1 Running 0 82s mydeploy-74b7786d9b-mp84r 1/1 Running 0 84s mydeploy-74b7786d9b-qdzc5 1/1 Running 0 86s mydeploy-6fbdd45d4c-kbcmh 0/1 Pending 0 0s 能夠看到更新邏輯是先多一個 mydeploy-6fbdd45d4c-kbcmh 0/1 Pending 0 0s mydeploy-6fbdd45d4c-kbcmh 0/1 ContainerCreating 0 0s 而後終止一個,一次的輪詢直到所有完成 mydeploy-6fbdd45d4c-kbcmh 1/1 Running 0 1s mydeploy-74b7786d9b-8jjvv 1/1 Terminating 0 99s mydeploy-6fbdd45d4c-qqgb8 0/1 Pending 0 0s mydeploy-6fbdd45d4c-qqgb8 0/1 Pending 0 0s mydeploy-6fbdd45d4c-qqgb8 0/1 ContainerCreating 0 0s mydeploy-74b7786d9b-8jjvv 0/1 Terminating 0 100s mydeploy-6fbdd45d4c-qqgb8 1/1 Running 0 1s mydeploy-74b7786d9b-mp84r 1/1 Terminating 0 102s mydeploy-6fbdd45d4c-ng99s 0/1 Pending 0 0s mydeploy-6fbdd45d4c-ng99s 0/1 Pending 0 0s mydeploy-6fbdd45d4c-ng99s 0/1 ContainerCreating 0 0s mydeploy-74b7786d9b-mp84r 0/1 Terminating 0 103s mydeploy-6fbdd45d4c-ng99s 1/1 Running 0 2s mydeploy-74b7786d9b-qdzc5 1/1 Terminating 0 106s mydeploy-74b7786d9b-qdzc5 0/1 Terminating 0 107s mydeploy-74b7786d9b-qdzc5 0/1 Terminating 0 113s mydeploy-74b7786d9b-qdzc5 0/1 Terminating 0 113s mydeploy-74b7786d9b-8jjvv 0/1 Terminating 0 109s mydeploy-74b7786d9b-8jjvv 0/1 Terminating 0 109s mydeploy-74b7786d9b-mp84r 0/1 Terminating 0 113s mydeploy-74b7786d9b-mp84r 0/1 Terminating 0 113s 全成自動完成自動更新,只須要指定版本號。 [root@www TestYaml]# kubectl get rs -o wide NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR mydeploy-6fbdd45d4c 3 3 3 25m myapp-containers ikubernetes/myapp:v2 app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement mydeploy-74b7786d9b 0 0 0 33m myapp-containers ikubernetes/myapp:v1 app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement 能夠看到咱們又要兩個版本的鏡像,而後使用v2版本的有三個,使用v1的是沒有的,還能夠看到兩個模板的標籤信息基本是一致的,保留老版本隨時等待回滾。 [root@www TestYaml]# kubectl rollout history deployment mydeploy 咱們還用過命令rollout history來查看滾動更新的次數和痕跡 deployment.extensions/mydeploy REVISION CHANGE-CAUSE 3 <none> 4 <none> [root@www TestYaml]# kubectl rollout undo deployment mydeploy 回滾直接使用rollout undo來進行回滾,它會根據保留的老版本模板來進行回滾,回滾的邏輯和升級的也同樣,加1停1。 deployment.extensions/mydeploy rolled back [root@www TestYaml]# kubectl get rs -o wide NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR mydeploy-6fbdd45d4c 0 0 0 34m myapp-containers ikubernetes/myapp:v2 app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement mydeploy-74b7786d9b 3 3 3 41m myapp-containers ikubernetes/myapp:v1 app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement [root@www TestYaml]# 能夠看到v1的版本又回來了
[root@www TestYaml]# kubectl patch --help Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch. JSON and YAML formats are accepted. Examples: # Partially update a node using a strategic merge patch. Specify the patch as JSON. kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Partially update a node using a strategic merge patch. Specify the patch as YAML. kubectl patch node k8s-node-1 -p $'spec:\n unschedulable: true' # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch. kubectl patch -f node.json -p '{"spec":{"unschedulable":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key. kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' # Update a container's image using a json patch with positional arrays. kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' Options: --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. --dry-run=false: If true, only print the object that would be sent, without sending it. -f, --filename=[]: Filename, directory, or URL to files identifying the resource to update -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R. --local=false: If true, patch will operate on the content of the file, not the server-side resource. -o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file. -p, --patch='': The patch to be applied to the resource JSON file. --record=false: Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists. -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory. --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview]. --type='strategic': The type of patch being provided; one of [json merge strategic] Usage: kubectl patch (-f FILENAME | TYPE NAME) -p PATCH [options] Use "kubectl options" for a list of global command-line options (applies to all commands). 
patch不只能擴充資源還能完成其它的操做 [root@www TestYaml]# kubectl patch deployment mydeploy -p '{"spec":{"replicas":5}}' -p選項能夠用來指定一級菜單下二級三級菜單指的變更,可是注意的是外面使用單引號,裏面一級字段的詞就須要用雙引號 deployment.extensions/mydeploy patched [root@www ~]# kubectl get pods -w NAME READY STATUS RESTARTS AGE mydeploy-74b7786d9b-qnqg2 1/1 Running 0 8m41s mydeploy-74b7786d9b-tz6xk 1/1 Running 0 8m43s mydeploy-74b7786d9b-vt659 1/1 Running 0 8m45s mydeploy-74b7786d9b-hlwbp 0/1 Pending 0 0s mydeploy-74b7786d9b-hlwbp 0/1 Pending 0 0s mydeploy-74b7786d9b-zpcxb 0/1 Pending 0 0s mydeploy-74b7786d9b-zpcxb 0/1 Pending 0 0s mydeploy-74b7786d9b-hlwbp 0/1 ContainerCreating 0 0s mydeploy-74b7786d9b-zpcxb 0/1 ContainerCreating 0 0s mydeploy-74b7786d9b-hlwbp 1/1 Running 0 2s mydeploy-74b7786d9b-zpcxb 1/1 Running 0 2s 能夠看到更新的過程,由於咱們回滾過版本,可是deploy版本定義的是v2的版本,如今應該是v1有3個,v2有兩個 [root@www TestYaml]# kubectl get rs -o wide NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR mydeploy-6fbdd45d4c 0 0 0 45m myapp-containers ikubernetes/myapp:v2 app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement mydeploy-74b7786d9b 5 5 5 52m myapp-containers ikubernetes/myapp:v1 app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement 可是實際不是這樣的,當你只指定某一個字段進行打補丁的時候,是不會改變其它字段的值的,除非將image的版本也給到v2版本 patch的好處在於若是隻想對某些字段的值進行變動,不想去調整yaml模板的值,就可使用patch,可是patch絕對不適合完成不少字段的調整,由於會使得命令行結構變的複雜 [root@www TestYaml]# kubectl patch deployment mydeploy -p '{"spec":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}' 例如咱們去改最少0個,最多1個,就會使得很複雜,若是是不少指,這個結構變的就會複雜,若是是修改多個指,直接apply更加方便 deployment.extensions/mydeploy patched (no change) [root@www TestYaml]# kubectl set image deployment mydeploy myapp-containers=ikubernetes/myapp:v2 && kubectl rollout pause deployment mydeploy 咱們使用直接set image來直接更新鏡像的版本,並且更新1個以後就直接暫停 deployment.extensions/mydeploy image updated deployment.extensions/mydeploy paused 能夠看到更新一個以後就直接paused了 [root@www ~]# kubectl get pods -w NAME READY STATUS RESTARTS AGE mydeploy-74b7786d9b-hlwbp 1/1 Running 0 30m mydeploy-74b7786d9b-qnqg2 1/1 Running 0 40m mydeploy-74b7786d9b-tz6xk 1/1 Running 0 40m mydeploy-74b7786d9b-vt659 1/1 Running 0 40m mydeploy-74b7786d9b-zpcxb 1/1 Running 0 30m mydeploy-6fbdd45d4c-phcp4 0/1 Pending 0 0s mydeploy-6fbdd45d4c-phcp4 0/1 Pending 0 0s mydeploy-74b7786d9b-hlwbp 1/1 Terminating 0 33m mydeploy-6fbdd45d4c-wllm7 0/1 Pending 0 0s mydeploy-6fbdd45d4c-wllm7 0/1 Pending 0 0s mydeploy-6fbdd45d4c-wllm7 0/1 ContainerCreating 0 0s mydeploy-6fbdd45d4c-dc84z 0/1 Pending 0 0s mydeploy-6fbdd45d4c-dc84z 0/1 Pending 0 0s mydeploy-6fbdd45d4c-phcp4 0/1 ContainerCreating 0 0s mydeploy-6fbdd45d4c-dc84z 0/1 ContainerCreating 0 0s mydeploy-74b7786d9b-hlwbp 0/1 Terminating 0 33m mydeploy-6fbdd45d4c-wllm7 1/1 Running 0 2s mydeploy-6fbdd45d4c-phcp4 1/1 Running 0 3s mydeploy-6fbdd45d4c-dc84z 1/1 Running 0 3s mydeploy-74b7786d9b-hlwbp 0/1 Terminating 0 33m mydeploy-74b7786d9b-hlwbp 0/1 Terminating 0 33m [root@www TestYaml]# kubectl rollout status deployment mydeploy 也可使用其餘命令來監控更新的過程 Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated... 由於咱們前面執行暫停了,結果更新幾個以後就暫停下來了,若是咱們已經更新幾個小時了,沒有用戶反饋有問題,想繼續把剩下的更新掉,就可使用resume命令來繼續更新 [root@www ~]# kubectl rollout resume deployment mydeploy 直接繼續更新 deployment.extensions/mydeploy resumed [root@www TestYaml]# kubectl rollout status deployment mydeploy Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated... Waiting for deployment spec update to be observed... Waiting for deployment spec update to be observed... 
Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated... Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated... Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination... Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination... Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination... Waiting for deployment "mydeploy" rollout to finish: 4 of 5 updated replicas are available... deployment "mydeploy" successfully rolled out 能夠看到所有更新完畢,這個就是金絲雀更新。
[root@www TestYaml]# kubectl rollout undo --help Rollback to a previous rollout. Examples: # Rollback to the previous deployment kubectl rollout undo deployment/abc # Rollback to daemonset revision 3 kubectl rollout undo daemonset/abc --to-revision=3 能指定回滾到那個版本 # Rollback to the previous deployment with dry-run kubectl rollout undo --dry-run=true deployment/abc 不指定默認是上一個版本 Options: --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. --dry-run=false: If true, only print the object that would be sent, without sending it. -f, --filename=[]: Filename, directory, or URL to files identifying the resource to get from a server. -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R. -o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file. -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory. --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview]. --to-revision=0: The revision to rollback to. Default to 0 (last revision). Usage: kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags] [options] Use "kubectl options" for a list of global command-line options (applies to all commands). [root@www TestYaml]# kubectl rollout undo deployment mydeploy --to-revision=1 咱們能夠經過命令快速進行版本的回滾操做
The main purpose of DaemonSet is to run one specified pod on every node in the cluster, with exactly one replica of that pod per node, or to run the specified pod only on nodes matched by a selector (for example, some machines are physical and some are virtual and run different programs, so a selector is used to pick where the pod runs).
Host directories can also be attached into the pod to implement particular functions.
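For example, a log-collection DaemonSet typically mounts the node's log directory into its pod; a sketch of just the relevant pod-template fragment (the paths and names are illustrative):

    spec:
      containers:
      - name: log-agent
        image: ikubernetes/filebeat:5.6.5-alpine
        volumeMounts:
        - name: varlog
          mountPath: /var/log        # the agent reads the node's logs from here
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log             # directory on the node attached into the pod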
[root@www TestYaml]# kubectl explain ds.spec (Daemonset簡寫ds,也是包含5個一級字段) KIND: DaemonSet VERSION: extensions/v1beta1 RESOURCE: spec <Object> DESCRIPTION: The desired behavior of this daemon set. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status DaemonSetSpec is the specification of a daemon set. FIELDS: minReadySeconds <integer> The minimum number of seconds for which a newly created DaemonSet pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready). revisionHistoryLimit(保存歷史版本數) <integer> The number of old history to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector <Object> A label query over pods that are managed by the daemon set. Must match in order to be controlled. If empty, defaulted to labels on Pod template. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template <Object> -required- An object that describes the pod that will be created. The DaemonSet will create exactly one copy of this pod on every node that matches the template's node selector (or on every node if no node selector is specified). More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template templateGeneration <integer> DEPRECATED. A sequence number representing a specific generation of the template. Populated by the system. It can be set only during the creation. updateStrategy (更新策略) <Object> An update strategy to replace existing DaemonSet pods with new pods.
[root@www TestYaml]# cat ds.test.yaml apiVersion: apps/v1 kind: DaemonSet metadata: name: myds namespace: default spec: selector: matchLabels: app: myds release: Only template: metadata: labels: app: myds release: Only spec: containers: - name: mydaemonset image: ikubernetes/filebeat:5.6.5-alpine env: 由於filebeat監控日誌須要指定服務名稱和日誌級別,這個不能在啓動以後傳,咱們須要提早定義 - name: REDIS_HOST value: redis.default.svc.cluster.local 這個值是redis名稱+名稱空間default+域 - name: REDIS_LOG value: info 日誌級別咱們定義爲info級別 [root@www TestYaml]# kubectl get ds NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE myds 2 2 1 2 1 <none> 4m28s [root@www TestYaml]# kubectl get pods NAME READY STATUS RESTARTS AGE myds-9kt2j 0/1 ImagePullBackOff 0 2m18s myds-jt8kd 1/1 Running 0 2m14s [root@www TestYaml]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES myds-9kt2j 0/1 ImagePullBackOff 0 2m24s 10.244.1.43 www.kubernetes.node1.com <none> <none> myds-jt8kd 1/1 Running 0 2m20s 10.244.2.30 www.kubernetes.node2.com <none> <none> 能夠看到整個節點上至跑了兩個pods,不會多也不會少,不管咱們怎麼定義,一個節點只能運行一個由DaemonSet控制的pods資源
[root@www TestYaml]# cat ds.test.yaml apiVersion: apps/v1 kind: Deployment metadata: name: redis namespace: default spec: replicas: 1 selector: matchLabels: app: redis role: loginfo template: metadata: labels: app: redis role: loginfo spec: containers: - name: redis image: redis:4.0-alpine ports: - name: redis containerPort: 6379 --- 能夠將兩個資源定義的yaml寫在一個文件當中,可是須要注意的是這樣寫最好是有關聯的兩個資源對象,若是沒有關聯仍是建議分開寫。 apiVersion: apps/v1 kind: DaemonSet metadata: name: myds namespace: default spec: selector: matchLabels: app: myds release: Only template: metadata: labels: app: myds release: Only spec: containers: - name: mydaemonset image: ikubernetes/filebeat:5.6.5-alpine env: - name: REDIS_HOST value: redis.default.svc.cluster.local - name: REDIS_LOG value: info 經過定義清單文件,咱們就能經過filebeat來收集redis日誌。
[root@www TestYaml]# kubectl explain ds.spec.updateStrategy KIND: DaemonSet VERSION: extensions/v1beta1 RESOURCE: updateStrategy <Object> DESCRIPTION: An update strategy to replace existing DaemonSet pods with new pods. FIELDS: rollingUpdate <Object> Rolling update config params. Present only if type = "RollingUpdate". type <string> 默認更新的方式也是有兩種,一種是滾動更新,還有一種是在刪除時候更新 Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is OnDelete. rollingUpdate滾動更新 [root@www TestYaml]# kubectl explain ds.spec.updateStrategy.rollingUpdate KIND: DaemonSet VERSION: extensions/v1beta1 RESOURCE: rollingUpdate <Object> DESCRIPTION: Rolling update config params. Present only if type = "RollingUpdate". Spec to control the desired behavior of daemon set rolling update. FIELDS: maxUnavailable ds控制器的更新策略只能支持先刪在更新,由於一個節點支持一個pods資源,此處的數量是和節點數量相關的,一次更新幾個節點的pods資源 <string> The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those DaemonSet pods and then brings up new DaemonSet pods in their place. Once the new pods are available, it then proceeds onto other DaemonSet pods, thus ensuring that at least 70% of original number of DaemonSet pods are available at all times during the update. [root@www TestYaml]# kubectl set image --help Update existing container image(s) of resources. Possible resources include (case insensitive): pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), replicaset (rs) set image目前支持更新的控制器類別 [root@www TestYaml]# kubectl set image daemonsets myds mydaemonset=ikubernetes/filebeat:5.6.6-alpine daemonset.extensions/myds image updated [root@www TestYaml]# kubectl get ds NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE myds 2 2 1 0 1 <none> 19m [root@www TestYaml]# kubectl get pods NAME READY STATUS RESTARTS AGE myds-lmw5d 0/1 ContainerCreating 0 7s myds-mhw89 1/1 Running 0 19m redis-fdc8c666b-spqlc 1/1 Running 0 19m [root@www TestYaml]# kubectl get pods -w NAME READY STATUS RESTARTS AGE myds-lmw5d 0/1 ContainerCreating 0 15s 能夠看到更新的時候先停一個,而後去pull鏡像來更新 myds-mhw89 1/1 Running 0 19m redis-fdc8c666b-spqlc 1/1 Running 0 19m ....... myds-546lq 1/1 Running 0 46s 能夠看到一件更新完畢
A container can share the host's network namespace; in that case the ports the container listens on are effectively listened on by the host itself.
[root@www TestYaml]# kubectl explain pod.spec.hostNetwork
KIND: Pod
VERSION: v1
FIELD: hostNetwork <boolean>
DESCRIPTION:
Host networking requested for this pod. Use the host's network namespace.
If this option is set, the ports that will be used must be specified.
Default to false.
As you can see, a pod can use the host's network namespace directly, so when creating a DaemonSet controller we can share the host's network namespace directly and then access the pod via the node IP, without exposing a port through a Service.
Other things that can be shared the same way include the hostPID and hostIPC fields.
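A sketch of what this looks like in a DaemonSet's pod template (only the relevant fragment; the container name and port are illustrative):

    spec:
      hostNetwork: true              # the pod uses the node's network namespace
      containers:
      - name: node-agent
        image: ikubernetes/filebeat:5.6.5-alpine
        ports:
        - containerPort: 9000        # with hostNetwork this port is bound directly on the node's IP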