During testing we frequently kill and restart the containers in a Pod. In the process, the Pod often ends up in the CrashLoopBackOff state, and the time it takes for the container to be restarted keeps growing.
To keep containers from being restarted too frequently, Kubernetes throttles container restarts with a back-off mechanism. The official documentation describes it as follows:

Failed containers that are restarted by Kubelet, are restarted with an exponential back-off delay, the delay is in multiples of sync-frequency 0, 1x, 2x, 4x, 8x … capped at 5 minutes and is reset after 10 minutes of successful execution.
Here is the conclusion up front: the kubelet delays restarts of a failing container with an exponential back-off that starts at 10 seconds, doubles after every failed restart (10s, 20s, 40s, ...), and is capped at 5 minutes.
Reading the source, the file kubernetes/pkg/kubelet/kubelet.go defines two constants:
```go
MaxContainerBackOff = 300 * time.Second
backOffPeriod       = time.Second * 10
```
These two values are used to build a Backoff object. It is a field on the kubelet itself, so the same back-off bookkeeping applies to every pod on the node:
```go
klet.backOff = flowcontrol.NewBackOff(backOffPeriod, MaxContainerBackOff)
```
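To get a feel for how this object behaves, here is a small standalone sketch (not kubelet code). It assumes the Backoff implementation that today lives in k8s.io/client-go/util/flowcontrol and a made-up key; every Next call stands in for one more failed restart of the same container:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Same parameters the kubelet uses: 10s initial delay, capped at 5 minutes.
	backOff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
	key := "default_mypod_nginx" // hypothetical per-container key

	for i := 1; i <= 7; i++ {
		// Record one more failed restart, then print the next delay.
		backOff.Next(key, backOff.Clock.Now())
		fmt.Printf("failure %d: next restart delayed by %v\n", i, backOff.Get(key))
	}
	// Expected progression (roughly): 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
}
```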
The Backoff struct is defined as follows:
```go
type Backoff struct {
	sync.Mutex
	Clock           clock.Clock
	defaultDuration time.Duration
	maxDuration     time.Duration
	perItemBackoff  map[string]*backoffEntry
}
```
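The per-item entry stored in perItemBackoff only needs to remember the current delay for a key and when it was last bumped. Paraphrased from the same flowcontrol package, it looks roughly like this:

```go
// Paraphrased from the flowcontrol package: one entry per key, holding the
// current back-off delay and the time it was last updated.
type backoffEntry struct {
	backoff    time.Duration
	lastUpdate time.Time
}
```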
This object is then used when the kubelet syncs a pod: it is passed to the container runtime's SyncPod callback.
```go
// Call the container runtime's SyncPod callback
result := kl.containerRuntime.SyncPod(pod, apiPodStatus, podStatus, pullSecrets, kl.backOff)
```
SyncPod performs the following steps:
```go
// SyncPod syncs the running pod into the desired pod by executing following steps:
//
// 1. Compute sandbox and container changes.
// 2. Kill pod sandbox if necessary.
// 3. Kill any containers that should not be running.
// 4. Create sandbox if necessary.
// 5. Create init containers.
// 6. Create normal containers.
func (m *kubeGenericRuntimeManager) SyncPod(pod *v1.Pod, _ v1.PodStatus, podStatus *kubecontainer.PodStatus, pullSecrets []v1.Secret, backOff *flowcontrol.Backoff) (result kubecontainer.PodSyncResult) {
```
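The back-off check happens in step 6, "Create normal containers": before (re)starting each app container, SyncPod asks doBackOff whether the container is still inside its back-off window and, if so, skips the start for this sync round. A paraphrased sketch of that loop follows; it is not the exact source, and containersToStart is a stand-in name for the list of containers SyncPod has computed should be started:

```go
// Paraphrased sketch of SyncPod step 6, not the exact kubelet source.
for _, container := range containersToStart { // stand-in for the computed start list
	startResult := kubecontainer.NewSyncResult(kubecontainer.StartContainer, container.Name)
	result.AddSyncResult(startResult)

	// Skip the container if it is still in its back-off window; this is what
	// surfaces to the user as the CrashLoopBackOff state.
	if isInBackOff, msg, err := m.doBackOff(pod, container, podStatus, backOff); isInBackOff {
		startResult.Fail(err, msg)
		continue
	}

	// ... otherwise start the container through the CRI runtime ...
}
```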
The key function that makes the back-off decision, doBackOff, is defined in the same file:
```go
// If a container is still in backoff, the function will return a brief backoff error and
// a detailed error message.
func (m *kubeGenericRuntimeManager) doBackOff(pod *v1.Pod, container *v1.Container, podStatus *kubecontainer.PodStatus, backOff *flowcontrol.Backoff) (bool, string, error) {
	var cStatus *kubecontainer.ContainerStatus
	for _, c := range podStatus.ContainerStatuses {
		if c.Name == container.Name && c.State == kubecontainer.ContainerStateExited {
			cStatus = c
			break
		}
	}
	if cStatus == nil {
		return false, "", nil
	}

	glog.Infof("checking backoff for container %q in pod %q", container.Name, format.Pod(pod))
	// Use the finished time of the latest exited container as the start point to calculate whether to do back-off.
	ts := cStatus.FinishedAt
	// backOff requires a unique key to identify the container.
	key := getStableKey(pod, container)
	if backOff.IsInBackOffSince(key, ts) {
		if ref, err := kubecontainer.GenerateContainerRef(pod, container); err == nil {
			m.recorder.Eventf(ref, v1.EventTypeWarning, events.BackOffStartContainer, "Back-off restarting failed container")
		}
		err := fmt.Errorf("Back-off %s restarting failed container=%s pod=%s", backOff.Get(key), container.Name, format.Pod(pod))
		glog.Infof("%s", err.Error())
		return true, err.Error(), kubecontainer.ErrCrashLoopBackOff
	}

	backOff.Next(key, ts)
	return false, "", nil
}
```
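The "unique key" mentioned in the comment comes from getStableKey in the same package's helpers. Paraphrased, it combines the pod and container identity with the hash of the container spec, so a changed container spec gets a fresh back-off entry:

```go
// Paraphrased from pkg/kubelet/kuberuntime/helpers.go: the back-off key is
// built from the pod/container identity plus the container spec hash.
func getStableKey(pod *v1.Pod, container *v1.Container) string {
	hash := strconv.FormatUint(kubecontainer.HashContainer(container), 16)
	return fmt.Sprintf("%s_%s_%s_%s_%s", pod.Name, pod.Namespace, string(pod.UID), container.Name, hash)
}
```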
The backOff.Next function used there is defined as follows:
```go
// move backoff to the next mark, capping at maxDuration
func (p *Backoff) Next(id string, eventTime time.Time) {
	p.Lock()
	defer p.Unlock()
	entry, ok := p.perItemBackoff[id]
	if !ok || hasExpired(eventTime, entry.lastUpdate, p.maxDuration) {
		entry = p.initEntryUnsafe(id)
	} else {
		delay := entry.backoff * 2 // exponential
		entry.backoff = time.Duration(integer.Int64Min(int64(delay), int64(p.maxDuration)))
	}
	entry.lastUpdate = p.Clock.Now()
}
```
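The two helpers Next relies on explain the remaining behaviour: a new entry starts at defaultDuration (the 10-second backOffPeriod), and an entry counts as expired once more than 2 * maxDuration has passed since its last update, at which point the back-off restarts from the beginning. With maxDuration set to 5 minutes, that is where the "reset after 10 minutes of successful execution" in the documentation comes from. Paraphrased from the flowcontrol package (details may differ slightly by version):

```go
// A fresh entry starts at defaultDuration; the caller must already hold the lock.
func (p *Backoff) initEntryUnsafe(id string) *backoffEntry {
	entry := &backoffEntry{backoff: p.defaultDuration}
	p.perItemBackoff[id] = entry
	return entry
}

// After 2*maxDuration without a new failure the entry is treated as expired,
// so the back-off factor restarts from the beginning on the next failure.
func hasExpired(eventTime time.Time, lastUpdate time.Time, maxDuration time.Duration) bool {
	return eventTime.Sub(lastUpdate) > maxDuration*2
}
```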