Take a k8s cluster installed with kubeadm as a common example: by default, kubelet is not configured with kube-reserved or system-reserved resource reservations, so the pod workloads on a worker node can, in theory, consume all of the CPU and memory of that server. Suppose a pod managed by a deployment controller has a bug and cannot release memory properly at runtime. The kubelet process on that worker node will eventually be unable to obtain enough memory to sync its heartbeat with kube-apiserver, and the node will be marked NotReady. The deployment controller then creates a replacement pod replica on another worker node, the same process repeats and crushes the second node, and in the end the whole k8s cluster faces the risk of an "avalanche" of cascading failures.
Node capacity: the total resources of the node
kube-reserved: resources reserved for Kubernetes components (kubelet, container runtime, node problem detector, etc.)
system-reserved: resources reserved for operating-system daemons (sshd, udev, etc.)
eviction-threshold: the threshold at which kubelet starts evicting pods
allocatable: resources available to pods = Node capacity - kube-reserved - system-reserved - eviction-threshold
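To make the formula concrete, here is the arithmetic for the node shown in step 4 below, using the reservations configured in step 1; the only value not set explicitly in this post is kubelet's default hard eviction threshold for memory (memory.available<100Mi):

allocatable cpu    = 40 - 1 (kube-reserved) - 2 (system-reserved) = 37
allocatable memory = 131595532Ki - 2097152Ki (2Gi) - 4194304Ki (4Gi) - 102400Ki (100Mi)
                   = 125201676Ki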
Taking an Ubuntu 16.04 + k8s v1.14 environment as an example, the configuration steps are as follows.
1. Edit /var/lib/kubelet/config.yaml
enforceNodeAllocatable:
  - pods
  - kube-reserved
  - system-reserved
systemReserved:
  cpu: "2"
  memory: "4Gi"
kubeReserved:
  cpu: "1"
  memory: "2Gi"
systemReservedCgroup: /system.slice
kubeReservedCgroup: /system.slice/kubelet.service
Parameter explanations:
enforce-node-allocatable=pods,kube-reserved,system-reserved # defaults to pods only; kube-reserved and system-reserved must be added so that the reservations for Kubernetes components and the system are actually enforced.
kube-reserved-cgroup=/system.slice/kubelet.service # the cgroup that the Kubernetes components run in
system-reserved-cgroup=/system.slice # the cgroup that the system components run in
kube-reserved=cpu=1,memory=2Gi # amount of resources reserved for Kubernetes components
system-reserved=cpu=2,memory=4Gi # amount of resources reserved for system components; determine the actual values through testing, based on the host specification and monitoring of the system's idle resource usage.
Note: depending on your needs, ephemeral-storage can also be reserved, e.g. kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=10Gi. An equivalent flag-style configuration is sketched below.
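For reference, the same settings can also be passed to kubelet as command-line flags. On a kubeadm-installed Ubuntu node this is commonly done through KUBELET_EXTRA_ARGS; the file path /etc/default/kubelet is an assumption that may differ between distributions and versions, and the config.yaml approach above remains the cleaner option:

# /etc/default/kubelet (sketch of the flag-style equivalent)
KUBELET_EXTRA_ARGS="--enforce-node-allocatable=pods,kube-reserved,system-reserved \
  --kube-reserved=cpu=1,memory=2Gi \
  --system-reserved=cpu=2,memory=4Gi \
  --kube-reserved-cgroup=/system.slice/kubelet.service \
  --system-reserved-cgroup=/system.slice"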
2. Edit /lib/systemd/system/kubelet.service
Because the cpuset and hugetlb cgroup subsystems do not have a system.slice hierarchy initialized by default, the corresponding directories must be created before the kubelet process starts.
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/

[Service]
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
3. Reload systemd and restart the kubelet process
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet
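As a quick sanity check (a minimal sketch, assuming the unit file above and cgroup v1, which is what Ubuntu 16.04 uses), confirm that the ExecStartPre commands created the two cgroup directories and that kubelet came back up:

ls -d /sys/fs/cgroup/cpuset/system.slice/kubelet.service \
      /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
systemctl is-active kubelet   # should print "active"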
4. Check the worker node's allocatable resources
kubectl describe node [Your-NodeName]
Capacity:
  cpu:                40
  ephemeral-storage:  197608716Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             131595532Ki
  pods:               110
Allocatable:
  cpu:                37
  ephemeral-storage:  182116192365
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             125201676Ki
  pods:               110
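Besides kubectl describe, the result can be cross-checked from two other angles. A minimal sketch, assuming cgroup v1 and the reservation values used in this post (the expected limits follow from system-reserved=4Gi and kube-reserved=2Gi):

# Allocatable as recorded in the node object
kubectl get node [Your-NodeName] -o jsonpath='{.status.allocatable}'

# Memory limits that kubelet enforces on the reserved cgroups
cat /sys/fs/cgroup/memory/system.slice/memory.limit_in_bytes                  # expect roughly 4Gi
cat /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.limit_in_bytes  # expect roughly 2Gi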
Reference:
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/