Before upgrading, you need to understand the relationships between the component versions:
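As a rule of thumb (and the reason this walkthrough goes 1.13 → 1.14 → 1.15 rather than jumping ahead), kubeadm only supports upgrading one minor version at a time. A minimal shell sketch of that check, with illustrative version strings:

```shell
#!/bin/sh
# Sketch: kubeadm upgrades are supported one minor version at a time
# (e.g. 1.13.x -> 1.14.x, but not 1.13.x -> 1.15.x directly).
minor() { echo "$1" | cut -d. -f2; }

valid_upgrade_path() {
  from_minor=$(minor "$1")
  to_minor=$(minor "$2")
  # allow same-minor patch upgrades or a jump of exactly one minor
  [ $((to_minor - from_minor)) -ge 0 ] && [ $((to_minor - from_minor)) -le 1 ]
}

valid_upgrade_path "1.13.1" "1.14.0" && echo "1.13.1 -> 1.14.0: ok"
valid_upgrade_path "1.13.1" "1.15.0" || echo "1.13.1 -> 1.15.0: not supported, go via 1.14 first"
```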
Query the command-line help:
$ kubeadm upgrade -h
Upgrade your cluster smoothly to a newer version with this command.

Usage:
  kubeadm upgrade [flags]
  kubeadm upgrade [command]

Available Commands:
  apply       Upgrade your Kubernetes cluster to the specified version.
  diff        Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
  node        Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.
  plan        Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter.
Breaking the command down: apply performs the upgrade, diff previews changes to the static Pod manifests, node handles per-node upgrades, and plan checks which versions you can upgrade to.
The node subcommand in turn supports the following subcommands and options:
$ kubeadm upgrade node -h
Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.

Usage:
  kubeadm upgrade node [flags]
  kubeadm upgrade node [command]

Available Commands:
  config                      Downloads the kubelet configuration from the cluster ConfigMap kubelet-config-1.X, where X is the minor version of the kubelet.
  experimental-control-plane  Upgrades the control plane instance deployed on this node. IMPORTANT. This command should be executed after executing `kubeadm upgrade apply` on another control plane instance

Flags:
  -h, --help   help for node

Global Flags:
      --log-file string   If non-empty, use this log file
      --rootfs string     [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers      If true, avoid header prefixes in the log messages
  -v, --v Level           number for the log level verbosity
Breaking it down: config downloads the kubelet configuration from the cluster ConfigMap, and experimental-control-plane upgrades the control plane instance deployed on this node.
Environment notes:
The cluster in this environment was created with kubeadm at version 1.13.1, so this walkthrough upgrades it to 1.14.0.
First, operate on the first control plane node, i.e. the primary control plane:
1. Confirm the cluster version before the upgrade:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
2. Find the versions available to upgrade to:
apt update
apt-cache policy kubeadm
# find the latest 1.14 version in the list
# it should look like 1.14.x-00, where x is the latest patch
     1.14.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
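If you want to script this lookup instead of eyeballing the list, the newest 1.14 patch release can be pulled out of the `apt-cache policy` output. A sketch, with sample output hard-coded via a here-doc (on a real machine you would pipe `apt-cache policy kubeadm` in):

```shell
#!/bin/sh
# Extract the newest 1.14.x-00 package version from apt-cache policy output.
# The here-doc stands in for real apt-cache output in this sketch.
latest_patch() {
  grep -o '1\.14\.[0-9]*-00' | sort -V | tail -n 1
}

latest=$(latest_patch <<'EOF'
     1.14.3-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.14.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
EOF
)
echo "$latest"
```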
3. First, upgrade kubeadm to 1.14.0
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 && \
apt-mark hold kubeadm
When upgrading kubeadm to 1.14 as above, Ubuntu may automatically upgrade kubelet to the then-latest version, 1.16.0. If that happens, upgrade kubelet explicitly as well:
apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00
If this does happen and kubeadm and kubelet end up at mismatched versions, the cluster upgrade later on will fail. In that case, remove kubeadm and kubelet:
apt-get remove kubelet kubeadm
Then reinstall the expected versions:
apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00
Confirm kubeadm has been upgraded to the expected version:
root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~#
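To guard against the kubelet drift described above, it helps to assert that kubeadm and kubelet share the same minor version before proceeding. A sketch with hard-coded version strings (on a real node you would read them with `dpkg-query -W -f='${Version}' kubeadm`, and likewise for kubelet):

```shell
#!/bin/sh
# Sketch: detect the kubeadm/kubelet version-skew scenario described above.
# The version strings are passed in explicitly for illustration.
same_minor() {
  # Compare the major.minor prefix of two Debian package versions.
  a=$(echo "$1" | cut -d. -f1,2)
  b=$(echo "$2" | cut -d. -f1,2)
  [ "$a" = "$b" ]
}

if same_minor "1.14.0-00" "1.16.0-00"; then
  echo "versions aligned"
else
  echo "version skew detected: re-pin kubelet before continuing"
fi
```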
4. Run the upgrade plan command to check whether the cluster can be upgraded, and to fetch the versions it can be upgraded to.
kubeadm upgrade plan
The output is as follows:
root@k8s-master:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0

Awesome, you're up-to-date! Enjoy!
This tells you the cluster can be upgraded.
5. Upgrade the control plane components, including etcd.
root@k8s-master:~# kubeadm upgrade apply v1.14.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.0"
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0
# enter y to confirm; the upgrade then begins
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
...
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.0"...
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests696355120"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
...
Static pod: kube-apiserver-k8s-master hash: bb799a8d323c1577bf9e10ede7914b30
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-controller-manager-k8s-master hash: 54146492ed90bfa147f56609eee8005a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
Static pod: kube-scheduler-k8s-master hash: 58272442e226c838b193bbba4c44091e
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.3.1.20]
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@k8s-master:~#
The last two lines show that the cluster upgrade succeeded.
kubeadm upgrade apply 執行了以下操做:
到v1.16版本爲止,kubeadm upgrade apply必須在主控制平面節點上執行。
6. After it completes, verify the cluster version:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Note that while kubectl is still at 1.13.1, the server-side control plane has been upgraded to 1.14.0.
The master components are running normally:
root@k8s-master:~# kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
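If you automate the upgrade, the same health check can be scripted so the process aborts unless every component reports Healthy. A sketch over hard-coded sample rows (in practice you would pipe in `kubectl get componentstatuses --no-headers`):

```shell
#!/bin/sh
# Sketch: count components whose STATUS column is not "Healthy".
# The sample rows below are hard-coded for illustration.
unhealthy=$(printf '%s\n' \
  'controller-manager   Healthy   ok' \
  'scheduler            Healthy   ok' \
  'etcd-0               Healthy   {"health":"true"}' \
  | awk '$2 != "Healthy"' | wc -l)

if [ "$unhealthy" -eq 0 ]; then
  echo "all components healthy"
else
  echo "some components unhealthy, aborting" >&2
fi
```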
At this point, the master components on the first control plane node are upgraded. A control plane node usually also runs kubelet and kubectl, so those two need to be upgraded as well.
7. Upgrade the CNI plugin.
This step is optional: check whether your CNI plugin has an upgrade available.
8. Upgrade kubelet and kubectl on this control plane node
如今能夠升級kubelet了,在升級過程當中,不影響業務Pod的運行。
8.1. Upgrade kubelet and kubectl
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.0-00 kubectl=1.14.0-00 && \
apt-mark hold kubelet kubectl
8.2. Restart kubelet:
sudo systemctl restart kubelet
9. Check the kubectl version; it now matches expectations.
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~#
The first control plane node is now fully upgraded.
10. Upgrade the other control plane nodes.
On the other control plane nodes, the procedure is the same as on the first, except that you use:
sudo kubeadm upgrade node experimental-control-plane
instead of:
sudo kubeadm upgrade apply
Running sudo kubeadm upgrade plan again is unnecessary.
kubeadm upgrade node experimental-control-plane執行以下操做:
Now upgrade the components on the worker nodes: kubeadm, kubelet, and kube-proxy.
To avoid disrupting access to the cluster, do this one node at a time.
1. Mark the node as under maintenance.
The worker node is still at the original 1.13:
root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.13.1
Before upgrading a node, mark it unschedulable and evict all of its Pods:
kubectl drain $NODE --ignore-daemonsets
2. Upgrade kubeadm and kubelet
Install kubeadm and kubelet on each worker node in the same way, since kubeadm is used to upgrade the kubelet configuration.
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00 && \
apt-mark hold kubeadm kubelet
3. Upgrade the kubelet configuration file
$ kubeadm upgrade node config --kubelet-version v1.14.0
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
root@k8s-master:~#
4. Restart kubelet
$ sudo systemctl restart kubelet
5. Finally, mark the node schedulable again to bring it back into service
kubectl uncordon $NODE
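The per-node sequence (drain → upgrade packages → refresh the kubelet config → restart kubelet → uncordon) can be sketched as a loop over nodes. This is a dry-run sketch that only prints the commands it would run; the node names are hypothetical:

```shell
#!/bin/sh
# Dry-run sketch of a rolling worker-node upgrade: each node is drained,
# upgraded, and uncordoned before the next one is touched. Replace `echo`
# with real execution (and the node list with your own) on a live cluster.
VERSION="1.14.0-00"
NODES="k8s-node01 k8s-node02"   # hypothetical node names

for NODE in $NODES; do
  echo "kubectl drain $NODE --ignore-daemonsets"
  echo "apt-get install -y kubeadm=$VERSION kubelet=$VERSION"
  echo "kubeadm upgrade node config --kubelet-version v${VERSION%-00}"
  echo "systemctl restart kubelet"
  echo "kubectl uncordon $NODE"
done
```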
The node upgrade is now complete; kubelet and kube-proxy report the expected version, v1.14.0:
root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.14.0
The STATUS column should show Ready for every node, with the version numbers updated.
At this point, the entire upgrade has been completed successfully.
If kubeadm upgrade fails and cannot roll back (for example, because it was interrupted unexpectedly), you can simply run kubeadm upgrade again. The command is idempotent, and it ensures the actual state eventually matches the state you declare.
To recover from a bad state without changing the version the cluster is running, you can run:
kubeadm upgrade apply --force
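Because the upgrade is idempotent, "just run it again" generalizes to a bounded retry loop. A sketch with a stand-in `flaky_step` function (hypothetical: it fails twice, then succeeds) in place of the real `kubeadm upgrade apply`:

```shell
#!/bin/sh
# Sketch: retry an idempotent step a bounded number of times.
# `flaky_step` is a stand-in that fails twice, then succeeds.
attempts=0
flaky_step() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # succeed on the third call
}

retry() {
  max=$1; shift
  i=0
  while [ "$i" -lt "$max" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    echo "attempt $i failed, retrying..."
  done
  return 1
}

retry 5 flaky_step && echo "step eventually succeeded after $attempts attempts"
```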
For more details, see the official upgrade documentation.
從1.14.0升級到1.15.0的升級流程也大體相同,只是升級命令稍有區別。
升級流程 與 從1.13升級至 1.14.0 相同。
1. Look up the available versions and install kubeadm at the target version, v1.15.0
apt-cache policy kubeadm
apt-mark unhold kubeadm kubelet
apt-get install -y kubeadm=1.15.0-00
kubeadm is now at the expected version:
root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
2. Run the upgrade plan
Starting with v1.15, expiring certificates are renewed automatically: kubeadm renews all the certificates it manages on that node during a control plane upgrade. If you do not want automatic renewal, pass --certificate-renewal=false.
Run the upgrade plan:
kubeadm upgrade plan
You will see output like the following:
root@k8s-master:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
I1005 20:45:04.474363   38108 version.go:248] remote version is much newer: v1.16.1; falling back to: stable-1.15
[upgrade/versions] Latest stable version: v1.15.4
[upgrade/versions] Latest version in the v1.14 series: v1.14.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.14.7
            1 x v1.15.0   v1.14.7

Upgrade to the latest version in the v1.14 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.14.7
Controller Manager   v1.14.0   v1.14.7
Scheduler            v1.14.0   v1.14.7
Kube Proxy           v1.14.0   v1.14.7
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.14.7

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.15.4
            1 x v1.15.0   v1.15.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.15.4
Controller Manager   v1.14.0   v1.15.4
Scheduler            v1.14.0   v1.15.4
Kube Proxy           v1.14.0   v1.15.4
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.15.4

Note: Before you can perform this upgrade, you have to update kubeadm to v1.15.4.

_____________________________________________________________________
3. Upgrade the control plane
Following the plan's guidance, upgrade the control plane:
kubeadm upgrade apply v1.15.0
Since the installed kubeadm is v1.15.0, the cluster can only be upgraded to v1.15.0.
The output is as follows:
root@k8s-master:~# kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.0"
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
...
## pulling the images
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
...
## images for all components pulled
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
...
## all certificates renewed automatically, as shown here
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests353124264"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
...
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
4. Verify the successful upgrade.
The upgrade succeeded; now query the core component versions again:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Check the node versions:
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.14.0
k8s-node01   Ready    node     295d   v1.14.0
5. Upgrade kubelet and kubectl on this control plane node
The core control plane components are now at v1.15.0, so next upgrade kubelet and kubectl on this node. This does not affect running workload Pods.
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.0-00 kubectl=1.15.0-00 && \
apt-mark hold kubelet kubectl
6. Restart kubelet:
sudo systemctl restart kubelet
7. Verify that the kubelet and kubectl versions match expectations.
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Check the node versions:
root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.15.0
k8s-node01   Ready    node     295d   v1.14.0
Upgrading the three components on the other control plane nodes uses slightly different commands:
1. Upgrade the other control plane components, but with the following command:
$ sudo kubeadm upgrade node
2. Then upgrade kubelet and kubectl.
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl
3. Restart kubelet
$ sudo systemctl restart kubelet
Upgrading the worker nodes is the same as before, so only a summary follows.
Run this on every worker node.
1. Upgrade kubeadm:
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.15.x-00 && \
apt-mark hold kubeadm
Check the kubeadm version:
root@k8s-node01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
2. Put the node into maintenance:
kubectl cordon $NODE
3. Update the kubelet configuration file
$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
4. Upgrade kubelet and kubectl.
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl
5. Restart kubelet
sudo systemctl restart kubelet
At this point kube-proxy is also upgraded and restarted automatically.
6. Take the node out of maintenance
kubectl uncordon $NODE
The node upgrade is complete.
root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   295d   v1.15.0
k8s-node01   NotReady   node     295d   v1.15.0
In this upgrade flow, both the additional control plane nodes and the worker nodes are upgraded with kubeadm upgrade node.
kubeadm upgrade node behaves differently depending on where it runs: on an additional control plane node it also upgrades the control plane instance, while on a worker node it skips that phase (as the "Skipping phase. Not a control plane node" log line above shows) and only updates the kubelet configuration.
Upgrading from 1.15.x to 1.16.x uses exactly the same commands as upgrading from 1.14.x to 1.15.x, so it is not repeated here.