The Kubernetes environment used in this article: http://www.javashuo.com/article/p-vjmoqwfx-gm.html
This article describes how to upgrade a Kubernetes cluster created with kubeadm from version 1.13.x to version 1.14.x.
You can only upgrade from one MINOR version to the next MINOR version, or between PATCH versions of the same MINOR version. In other words, MINOR versions cannot be skipped during an upgrade. For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.
The upgrade workflow is as follows: upgrade the first control-plane node, then upgrade the other control-plane nodes, and finally upgrade the worker nodes.
Current Kubernetes version information:
[root@node-01 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   99d   v1.13.3
node-02   Ready    master   99d   v1.13.3
node-03   Ready    master   99d   v1.13.3
node-04   Ready    <none>   99d   v1.13.3
node-05   Ready    <none>   99d   v1.13.3
node-06   Ready    <none>   99d   v1.13.3
On the first control-plane node (node-01), upgrade kubeadm:
yum install kubeadm-1.14.1 -y
Check that kubeadm is now at the expected version:
kubeadm version
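If you are unsure which 1.14 patch versions your yum repository provides, you can list them first; a minimal check, assuming the standard Kubernetes yum repository is configured on the node:
# list the kubeadm package versions available in the configured repo (repo name and layout may differ in your environment)
yum list --showduplicates kubeadm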
Run the upgrade plan:
kubeadm upgrade plan
This command checks whether your cluster can be upgraded and fetches the versions you can upgrade to. You should see output similar to this:
[root@node-01 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.14.1
I0505 13:55:58.449783 12871 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0505 13:55:58.449867 12871 version.go:97] falling back to the local client version: v1.14.1
[upgrade/versions] Latest stable version: v1.14.1
I0505 13:56:08.645796 12871 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.13.txt": Get https://dl.k8s.io/release/stable-1.13.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0505 13:56:08.645861 12871 version.go:97] falling back to the local client version: v1.14.1
[upgrade/versions] Latest version in the v1.13 series: v1.14.1

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     6 x v1.13.3   v1.14.1

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.3   v1.14.1
Controller Manager   v1.13.3   v1.14.1
Scheduler            v1.13.3   v1.14.1
Kube Proxy           v1.13.3   v1.14.1
CoreDNS              1.2.6     1.3.1
Etcd                 3.2.24    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.14.1

_____________________________________________________________________
Then apply the upgrade:
kubeadm upgrade apply v1.14.1
You should see output similar to this:
[root@node-01 ~]# kubeadm upgrade apply v1.14.1
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.1"
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.14.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.1"...
Static pod: kube-apiserver-node-01 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-controller-manager-node-01 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-scheduler-node-01 hash: a8d928943d47ec793a700ef95c4b6b4a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
Static pod: etcd-node-01 hash: 17ddbcfb2ddf1d447ceec2b52c9faa96
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests940835611"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node-01 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-apiserver-node-01 hash: ff2267bcddb83b815efb49ff766ad897
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node-01 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-controller-manager-node-01 hash: ff8be061048a4660a1fbbf72db229d0d
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node-01 hash: a8d928943d47ec793a700ef95c4b6b4a
Static pod: kube-scheduler-node-01 hash: 959a5cdf1468825401daa8d35329351e
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Your Container Network Interface (CNI) provider may have its own upgrade instructions. Check the addons page for your CNI provider and see whether any additional upgrade steps are required.
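For example, you can list the DaemonSets in kube-system together with the images they run to see which CNI components are deployed and which versions they currently use (the exact DaemonSet names depend on your CNI provider):
# show DaemonSets and their container images; CNI plugins such as flannel/canal appear here
kubectl -n kube-system get daemonsets -o wide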
Upgrade kubelet and kubectl on node-01:
yum install kubectl-1.14.1 kubelet-1.14.1 -y
Restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet
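Optionally, confirm that node-01 now reports the new kubelet version before moving on:
# the node should show v1.14.1 in the VERSION column
kubectl get nodes node-01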
Next, upgrade the other control-plane nodes (node-02 and node-03). On each of them, first upgrade kubeadm:
yum install kubeadm-1.14.1 -y
Then upgrade the control-plane instance on that node:
kubeadm upgrade node experimental-control-plane
You should see output similar to this:
[root@node-02 ~]# kubeadm upgrade node experimental-control-plane
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.14.1"...
Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
Static pod: etcd-node-02 hash: 4710a34897e7838519a1bf8fe4dccf07
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests483113569"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-apiserver-node-02 hash: fe1005f40c3f390280358921c3073223
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-controller-manager-node-02 hash: ff8be061048a4660a1fbbf72db229d0d
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
Static pod: kube-scheduler-node-02 hash: 959a5cdf1468825401daa8d35329351e
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!
Upgrade kubelet and kubectl on this node as well:
yum install kubectl-1.14.1 kubelet-1.14.1 -y
systemctl daemon-reload
systemctl restart kubelet
Now upgrade the worker nodes. On each worker node, first upgrade kubeadm:
yum install -y kubeadm-1.14.1
Prepare the node for maintenance by marking it unschedulable and evicting its pods (run this step on a master node):
kubectl drain $NODE --ignore-daemonsets
[root@node-01 ~]# kubectl drain node-04 --ignore-daemonsets
node/node-04 already cordoned
WARNING: ignoring DaemonSet-managed Pods: cattle-system/cattle-node-agent-h555m, default/glusterfs-vhdqv, kube-system/canal-mbwvf, kube-system/kube-flannel-ds-amd64-zdfn8, kube-system/kube-proxy-5d64l
evicting pod "coredns-55696d4b79-kfcrh"
evicting pod "cattle-cluster-agent-66bd75c65f-k7p6n"
pod/cattle-cluster-agent-66bd75c65f-k7p6n evicted
pod/coredns-55696d4b79-kfcrh evicted
node/node-04 evicted
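If the drain fails because some pods use emptyDir volumes, you may need to add the --delete-local-data flag; note that this deletes those pods' local data, so use it with care. A hedged example for the node above:
# also evict pods that use emptyDir volumes (their local data is deleted)
kubectl drain node-04 --ignore-daemonsets --delete-local-data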
Back on the worker node, upgrade the kubelet configuration:
[root@node-04 ~]# kubeadm upgrade node config --kubelet-version v1.14.1
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
Upgrade kubelet and kubectl on the worker node:
yum install kubectl-1.14.1 kubelet-1.14.1 -y
systemctl daemon-reload
systemctl restart kubelet
Bring the node back online by marking it schedulable again (run on a master node):
kubectl uncordon $NODE
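For example, for the node drained above:
# node-04 can receive new pods again after this
kubectl uncordon node-04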
Repeat the steps above to upgrade each of the remaining worker nodes.
After the kubelet has been upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
[root@node-01 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   99d   v1.14.1
node-02   Ready    master   99d   v1.14.1
node-03   Ready    master   99d   v1.14.1
node-04   Ready    <none>   99d   v1.14.1
node-05   Ready    <none>   99d   v1.14.1
node-06   Ready    <none>   99d   v1.14.1
All nodes should show Ready in the STATUS column, and the VERSION column should show the new version number.
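If you also want to confirm the control-plane component versions, one way is to check the image that a static pod is running; a minimal sketch, using the kube-apiserver pod on node-01 from this cluster:
# the Image line should reference the v1.14.1 tag
kubectl -n kube-system describe pod kube-apiserver-node-01 | grep Image: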
kubeadm upgrade apply does the following:
- Checks that your cluster is in an upgradeable state: the API server is reachable, all nodes are Ready, and the control plane is healthy.
- Enforces the version skew policies.
- Makes sure the control-plane images are available or can be pulled to the machine.
- Upgrades the control-plane components, rolling back if any of them fails to come up.
- Applies the new DNS (CoreDNS) and kube-proxy manifests and makes sure that all necessary RBAC rules are created.
- Creates new certificate and key files for the API server and backs up the old ones if they are about to expire within 180 days.
On the other control-plane nodes, kubeadm upgrade node experimental-control-plane does the following:
- Fetches the kubeadm ClusterConfiguration from the cluster.
- Optionally backs up the kube-apiserver certificate.
- Upgrades the static Pod manifests for the control-plane components.
That wraps up this post. If you have any questions, feel free to leave a comment below. Thanks for reading, and please like and follow!