Before introducing DaemonSet, consider a question. Most of us have worked with a monitoring system such as Zabbix, which needs an agent installed on every monitored machine. Installing that agent usually raises several recurring operational scenarios: deploying it to every machine, making sure newly added machines get it automatically, and cleaning it up when a machine is decommissioned.
Kubernetes frequently needs to deploy applications onto every node in just this way. How does it solve the problem? The answer is DaemonSet. A DaemonSet (often abbreviated DS) runs a daemon Pod on all nodes, or on a subset of nodes — for example, the kube-flannel network plugin and kube-proxy installed with the cluster. A DaemonSet has the following characteristics: it runs exactly one copy of the Pod on each eligible node; when a node joins the cluster, a Pod is automatically created on it, and when a node is removed, its Pod is garbage-collected; deleting the DaemonSet cleans up all the Pods it created.
A DaemonSet fits any scenario where every node must run one daemon process. Common examples include a log-collection agent (such as fluentd) on every node, a node monitoring agent (such as node-exporter), and cluster-level components such as the kube-proxy service proxy and network plugins like flannel.
When Kubernetes is installed, two DaemonSets already exist in the kube-system namespace by default: kube-flannel-ds-amd64 and kube-proxy, which implement the flannel overlay network and Service proxying respectively. They can be listed with the following command:
[root@node-1 ~]# kubectl get ds -n kube-system
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
kube-flannel-ds-amd64   3         3         3       3            3           beta.kubernetes.io/arch=amd64   46d
kube-proxy              3         3         3       3            3           beta.kubernetes.io/os=linux     46d
A DaemonSet is defined much like a Deployment: it needs apiVersion, kind, metadata, and spec. The spec does not take a replicas count; spec.template defines the Pod template the DS creates containers from. The following example runs a fluentd-elasticsearch daemon on every node, using fluentd to collect logs and ship them to Elasticsearch.
[root@node-1 happycloudlab]# cat fluentd-es-daemonset.yaml
apiVersion: apps/v1              #API version
kind: DaemonSet                  #resource type is DaemonSet
metadata:                        #metadata
  name: fluentd-elasticsearch
  namespace: kube-system         #namespace to run in
  labels:
    k8s-app: fluentd-logging
spec:                            #DS spec
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:                #container info
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:               #resource requests and limits
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:            #mount the directories the agent collects logs from
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:                   #host directories mounted into the Pod via hostPath
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
Notes on defining a DaemonSet: spec.selector must match the labels in spec.template.metadata.labels; no replicas field is set; and the restartPolicy must be Always (or left unset, which defaults to Always). The toleration for node-role.kubernetes.io/master additionally allows the Pods to be scheduled onto master nodes.
Create the DaemonSet:

[root@node-1 happycloudlab]# kubectl apply -f fluentd-es-daemonset.yaml
daemonset.apps/fluentd-elasticsearch created
[root@node-1 happycloudlab]# kubectl get daemonsets -n kube-system fluentd-elasticsearch
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   3         3         3       3            3           <none>          16s
[root@node-1 happycloudlab]# kubectl get pods -n kube-system -o wide | grep fluentd
fluentd-elasticsearch-blpqb   1/1   Running   0   3m7s   10.244.2.79   node-3   <none>   <none>
fluentd-elasticsearch-ksdlt   1/1   Running   0   3m7s   10.244.0.11   node-1   <none>   <none>
fluentd-elasticsearch-shtkh   1/1   Running   0   3m7s   10.244.1.64   node-2   <none>   <none>
Inspect the full object as stored by the API server (on this v1.15 cluster it is returned through the legacy extensions/v1beta1 endpoint):

[root@node-1 happycloudlab]# kubectl get daemonsets -n kube-system fluentd-elasticsearch -o yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"k8s-app":"fluentd-logging"},"name":"fluentd-elasticsearch","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"name":"fluentd-elasticsearch"}},"template":{"metadata":{"labels":{"name":"fluentd-elasticsearch"}},"spec":{"containers":[{"image":"quay.io/fluentd_elasticsearch/fluentd:v2.5.2","name":"fluentd-elasticsearch","resources":{"limits":{"memory":"200Mi"},"requests":{"cpu":"100m","memory":"200Mi"}},"volumeMounts":[{"mountPath":"/var/log","name":"varlog"},{"mountPath":"/var/lib/docker/containers","name":"varlibdockercontainers","readOnly":true}]}],"terminationGracePeriodSeconds":30,"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}],"volumes":[{"hostPath":{"path":"/var/log"},"name":"varlog"},{"hostPath":{"path":"/var/lib/docker/containers"},"name":"varlibdockercontainers"}]}}}}
  creationTimestamp: "2019-10-30T15:19:20Z"
  generation: 1
  labels:
    k8s-app: fluentd-logging
  name: fluentd-elasticsearch
  namespace: kube-system
  resourceVersion: "6046222"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/daemonsets/fluentd-elasticsearch
  uid: c2c02c48-9f93-48f3-9d6c-32bfa671db0e
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        imagePullPolicy: IfNotPresent
        name: fluentd-elasticsearch
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always            #restart policy must be Always so Pods recover automatically on failure
      schedulerName: default-scheduler #default scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
  templateGeneration: 1
  updateStrategy:                      #rolling update strategy
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 1
  updatedNumberScheduled: 3
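RollingUpdate is the default update strategy above, replacing at most maxUnavailable Pods at a time. A DaemonSet also supports the OnDelete strategy, where new Pods are created only after you manually delete the old ones — useful for agents that should be replaced node by node under operator control. A minimal sketch of the two spec fragments:

```yaml
# Variant 1: tune the default rolling update
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2   # allow two nodes' Pods to be replaced at a time
---
# Variant 2: update only when old Pods are deleted manually
spec:
  updateStrategy:
    type: OnDelete
```

With OnDelete, running `kubectl delete pod <name>` on a node causes the controller to recreate that Pod from the new template.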
Update the container image to trigger a rolling update:

[root@node-1 ~]# kubectl set image daemonsets fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:latest -n kube-system
daemonset.extensions/fluentd-elasticsearch image updated
Watch the rollout progress:

[root@node-1 ~]# kubectl rollout status daemonset -n kube-system fluentd-elasticsearch
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 1 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 1 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 1 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 of 3 updated pods are available...
daemon set "fluentd-elasticsearch" successfully rolled out
The rollout history records each revision:

[root@node-1 ~]# kubectl rollout history daemonset -n kube-system fluentd-elasticsearch
daemonset.extensions/fluentd-elasticsearch
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
If needed, roll back to a previous revision:

[root@node-1 ~]# kubectl rollout undo daemonset -n kube-system fluentd-elasticsearch --to-revision=1
daemonset.extensions/fluentd-elasticsearch rolled back
Deleting the DaemonSet terminates all the Pods it manages:

[root@node-1 ~]# kubectl delete daemonsets -n kube-system fluentd-elasticsearch
daemonset.extensions "fluentd-elasticsearch" deleted
[root@node-1 ~]# kubectl get pods -n kube-system |grep fluentd
fluentd-elasticsearch-d6f6f   0/1   Terminating   0   110m
By default, the Kubernetes scheduler runs one Pod replica of a DaemonSet on every node. The Pods can be restricted to a subset of nodes in three ways: with nodeSelector in the Pod template, with node affinity (spec.template.spec.affinity.nodeAffinity), or with taints and tolerations.
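The simplest of the three is nodeSelector: label the target nodes and reference the label in the Pod template. A minimal sketch, reusing the app=web label that node-2 carries in this cluster:

```yaml
# Fragment of a DaemonSet spec: restrict Pods to labeled nodes
spec:
  template:
    spec:
      nodeSelector:
        app: web        # only nodes labeled app=web run this DaemonSet's Pods
```

A node can be labeled with `kubectl label nodes node-2 app=web`; the DaemonSet controller then creates Pods only on matching nodes.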
Node affinity offers finer-grained control than nodeSelector. The following example uses node affinity to schedule the DaemonSet's Pods onto a subset of nodes, landing on node-2.
First check the node labels; note that node-2 carries the app=web label:

[root@node-1 happycloudlab]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE   VERSION   LABELS
node-1   Ready    master   47d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=
node-2   Ready    <none>   47d   v1.15.3   app=web,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2,kubernetes.io/os=linux
node-3   Ready    <none>   47d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-3,kubernetes.io/os=linux
[root@node-1 happycloudlab]# cat fluentd-es-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:   #preferred: satisfied when possible
          - weight: 1
            preference:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web
          requiredDuringSchedulingIgnoredDuringExecution:    #required: must be satisfied
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node-2
                - node-3
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
Delete the old DaemonSet and apply the updated manifest; the DaemonSet now schedules Pods only onto the selected nodes:

[root@node-1 happycloudlab]# kubectl delete ds -n kube-system fluentd-elasticsearch
daemonset.extensions "fluentd-elasticsearch" deleted
[root@node-1 happylau]# kubectl get daemonsets -n kube-system fluentd-elasticsearch
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   1         1         1       1            1           <none>          112s
[root@node-1 happycloudlab]# kubectl get pods -n kube-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
fluentd-elasticsearch-9kngs   1/1     Running   0          2m39s   10.244.1.82   node-2   <none>           <none>

The Pod landed on node-2, the node carrying the app=web label that the affinity rule prefers.
This article introduced the DaemonSet controller in Kubernetes. A DS controller ensures that every eligible node runs one copy of a particular daemon Pod, and nodeSelector or node affinity can be used to schedule those Pods onto specific nodes.
Reference: DaemonSet: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/