Summary of problems encountered when adding a node to a k8s cluster

    Environment: a k8s cluster deployed with kubeadm

Problems:

    1. Check the kubeadm, kubelet, and kubectl versions installed on the existing system

[root@k8s-3 ~]# yum list kubeadm kubelet kubectl 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * epel: ftp.riken.jp
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
Installed Packages
kubeadm.x86_64                                                                1.17.4-0                                                                @kubernetes
kubectl.x86_64                                                                1.17.4-0                                                                @kubernetes
kubelet.x86_64                                                                1.17.4-0                                                                @kubernetes
Available Packages
kubeadm.x86_64                                                                1.18.0-0                                                                kubernetes 
kubectl.x86_64                                                                1.18.0-0                                                                kubernetes 
kubelet.x86_64                                                                1.18.0-0                                                                kubernetes

    Install the same versions on the node being added:

yum install -y kubeadm-1.17.4-0 kubectl-1.17.4-0 kubelet-1.17.4-0
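
Optionally, the versions can be locked so that a later yum update does not pull in 1.18.0 and create a version mismatch with the cluster. A minimal sketch, assuming the yum-versionlock plugin is acceptable on this node:

# Lock kubeadm/kubelet/kubectl at the versions just installed (1.17.4-0 here)
yum install -y yum-plugin-versionlock
yum versionlock add kubeadm kubelet kubectl

# Verify what actually got installed
rpm -q kubeadm kubelet kubectl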

    2. Error reported when running the node join

I no longer have a screenshot of the error, so only a rough description: the join output suggested checking the kubelet status. When I tried to start the kubelet service by hand, it failed again as follows:

[root@k8s3-1 kubernetes]# journalctl  -xeu kubelet 
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit kubelet.service has finished starting up.
-- 
-- The start-up result is done.
Apr 05 23:45:52 k8s3-1 kubelet[5469]: F0405 23:45:52.985015    5469 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed 
Apr 05 23:45:52 k8s3-1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Apr 05 23:45:52 k8s3-1 systemd[1]: Unit kubelet.service entered failed state.
Apr 05 23:45:52 k8s3-1 systemd[1]: kubelet.service failed.
Apr 05 23:46:04 k8s3-1 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Apr 05 23:46:04 k8s3-1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

Searching confirmed that /var/lib/kubelet/config.yaml really did not exist. I thought about copying the file from another node and starting the service again, but it still failed. After calming down and thinking it over, I remembered that the kubelet service is started automatically by the kubeadm join operation; all I had to do was enable docker and kubelet to start on boot. Checking my setup, kubelet was indeed not enabled. After enabling it and re-running the join command, the kubelet service started successfully.
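
For reference, the whole fix boils down to enabling the services and letting kubeadm start kubelet itself. A minimal sketch; the API server address is the master's IP from this cluster, while the port, token, and hash are placeholders to be replaced with the real values printed by kubeadm token create --print-join-command on the master:

# Enable docker and kubelet to start on boot; do NOT start kubelet by hand,
# kubeadm join writes /var/lib/kubelet/config.yaml and starts kubelet itself
systemctl enable docker
systemctl enable kubelet

# Placeholders: use the join command generated on the master
kubeadm join 192.168.191.30:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>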

The new node now shows up in the node list on the master (it stays NotReady until the network plugin is running; see problem 3):

[root@k8s-3 ~]# kubectl get nodes -n kube-system  -o wide
NAME     STATUS     ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-3    Ready      master   15d   v1.17.4   192.168.191.30   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8
k8s-4    Ready      node     13d   v1.17.4   192.168.191.31   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8
k8s3-1   NotReady   <none>   10h   v1.17.4   192.168.191.22   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8
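
If the node stays NotReady, describing it from the master shows the failing condition and recent events (in this case, the network plugin not being ready). This check is not part of the original notes, just a quick way to confirm:

# Show the node's conditions and events to see why it is NotReady
kubectl describe node k8s3-1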

    3. Network pod startup problem on the newly joined node

    The flannel pod on this node stayed stuck in the Init state while kube-proxy was running normally; both flannel and kube-proxy are DaemonSets.

[root@k8s-3 ~]# kubectl get pods  -n kube-system  -o wide
NAME                            READY   STATUS     RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
coredns-7f9c544f75-hdfjm        1/1     Running    6          15d     10.244.0.15      k8s-3    <none>           <none>
coredns-7f9c544f75-w62rh        1/1     Running    6          15d     10.244.0.14      k8s-3    <none>           <none>
etcd-k8s-3                      1/1     Running    9          12d     192.168.191.30   k8s-3    <none>           <none>
kube-apiserver-k8s-3            1/1     Running    9          15d     192.168.191.30   k8s-3    <none>           <none>
kube-controller-manager-k8s-3   1/1     Running    76         15d     192.168.191.30   k8s-3    <none>           <none>
kube-flannel-ds-amd64-dqv9t     1/1     Running    0          15m     192.168.191.30   k8s-3    <none>           <none>
kube-flannel-ds-amd64-mw6bq     0/1     Init:0/1   0          15m     192.168.191.22   k8s3-1   <none>           <none>
kube-flannel-ds-amd64-rsl68     1/1     Running    0          15m     192.168.191.31   k8s-4    <none>           <none>
kube-proxy-54kv7                1/1     Running    1          10h     192.168.191.22   k8s3-1   <none>           <none>
kube-proxy-7jwmj                1/1     Running    17         15d     192.168.191.30   k8s-3    <none>           <none>
kube-proxy-psrgh                1/1     Running    4          6d22h   192.168.191.31   k8s-4    <none>           <none>
kube-scheduler-k8s-3            1/1     Running    74         15d     192.168.191.30   k8s-3    <none>           <none>
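
Describing the stuck pod (not captured in the original output) shows which init container is waiting and which image it needs, which is where a missing image or an ImagePullBackOff would surface. A sketch using the pod name from the listing above:

# Show init container state, required image, and recent events for the stuck pod
kubectl -n kube-system describe pod kube-flannel-ds-amd64-mw6bq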


The kubelet log reported that the flannel CNI configuration did not exist. I considered copying one over by hand, but that felt wrong; something else had to be the real problem. Then I remembered that the pod also needs its image to start, checked docker images, and the flannel image was indeed missing, so I imported it manually.

Apr  6 11:32:58 k8s3-1 kubelet: W0406 11:32:58.346394    2867 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Apr  6 11:33:01 k8s3-1 kubelet: E0406 11:33:01.498881    2867 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr  6 11:33:03 k8s3-1 kubelet: W0406 11:33:03.347829    2867 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Apr  6 11:33:06 k8s3-1 kubelet: E0406 11:33:06.530602    2867 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr  6 11:33:08 k8s3-1 kubelet: W0406 11:33:08.348352    2867 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Apr  6 11:33:11 k8s3-1 kubelet: E0406 11:33:11.572273    2867 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr  6 11:33:13 k8s3-1 kubelet: W0406 11:33:13.350727    2867 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Apr  6 11:33:16 k8s3-1 kubelet: E0406 11:33:16.599437    2867 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Import the flannel image manually:

[root@k8s3-1 ~]# docker images
REPOSITORY                                                       TAG                 IMAGE ID            CREATED             SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.17.4             6dec7cfde1e5        3 weeks ago         116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        2 years ago         742kB
[root@k8s3-1 ~]# docker load --input flannel.tar 
256a7af3acb1: Loading layer [==================================================>]  5.844MB/5.844MB
d572e5d9d39b: Loading layer [==================================================>]  10.37MB/10.37MB
57c10be5852f: Loading layer [==================================================>]  2.249MB/2.249MB
7412f8eefb77: Loading layer [==================================================>]  35.26MB/35.26MB
05116c9ff7bf: Loading layer [==================================================>]   5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64
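
The flannel.tar archive loaded above can be produced on any node that already has the image. A sketch, assuming root SSH access from k8s-4 (which already runs flannel) to the new node:

# On k8s-4: export the flannel image to a tar archive
docker save quay.io/coreos/flannel:v0.12.0-amd64 -o flannel.tar

# Copy it to the new node and load it there
scp flannel.tar root@192.168.191.22:/root/
ssh root@192.168.191.22 'docker load --input /root/flannel.tar'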



Check the pods and nodes from the master again:

[root@k8s-3 ~]# kubectl get pods  -n kube-system  -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
coredns-7f9c544f75-hdfjm        1/1     Running   6          15d     10.244.0.15      k8s-3    <none>           <none>
coredns-7f9c544f75-w62rh        1/1     Running   6          15d     10.244.0.14      k8s-3    <none>           <none>
etcd-k8s-3                      1/1     Running   9          12d     192.168.191.30   k8s-3    <none>           <none>
kube-apiserver-k8s-3            1/1     Running   9          15d     192.168.191.30   k8s-3    <none>           <none>
kube-controller-manager-k8s-3   1/1     Running   77         15d     192.168.191.30   k8s-3    <none>           <none>
kube-flannel-ds-amd64-dqv9t     1/1     Running   0          44m     192.168.191.30   k8s-3    <none>           <none>
kube-flannel-ds-amd64-mw6bq     1/1     Running   0          44m     192.168.191.22   k8s3-1   <none>           <none>
kube-flannel-ds-amd64-rsl68     1/1     Running   0          44m     192.168.191.31   k8s-4    <none>           <none>
kube-proxy-54kv7                1/1     Running   3          11h     192.168.191.22   k8s3-1   <none>           <none>
kube-proxy-7jwmj                1/1     Running   17         15d     192.168.191.30   k8s-3    <none>           <none>
kube-proxy-psrgh                1/1     Running   4          6d23h   192.168.191.31   k8s-4    <none>           <none>
kube-scheduler-k8s-3            1/1     Running   75         15d     192.168.191.30   k8s-3    <none>           <none>
[root@k8s-3 ~]# kubectl get nodes -n kube-system  -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-3    Ready    master   15d   v1.17.4   192.168.191.30   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8
k8s-4    Ready    node     13d   v1.17.4   192.168.191.31   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8
k8s3-1   Ready    node     11h   v1.17.4   192.168.191.22   <none>        CentOS Linux 7 (Core)   3.10.0-1062.18.1.el7.x86_64   docker://19.3.8


As for why docker never pulled the flannel image by itself, I did not determine the cause.
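
My guess is that the node simply could not reach quay.io, but that is only an assumption. One way to check by hand on the new node:

# If this fails or hangs, the node cannot reach quay.io and the image has to be
# pre-loaded (as above) or pulled from a mirror registry instead
docker pull quay.io/coreos/flannel:v0.12.0-amd64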


Set a domestic (Aliyun) mirror for the kubelet pause image:
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
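
The extra argument only takes effect after kubelet is restarted:

systemctl restart kubelet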