Kubernetes 2: K8s Cluster Deployment

1. Introduction

  1.1 Architecture reference

    See the previous post in this series: Kubernetes 1 - A Brief Introduction to K8s

  1.2 Example architecture

    192.168.216.51 master, etcd

    192.168.216.53 node1

    192.168.216.54 node2

  1.3 Topology

  1.4 Software versions

[root@master ~]# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core) 
[root@master ~]# uname -a
Linux master 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@master ~]# 
[root@master ~]# docker version 
Client:
 Version:         1.13.1
 API version:     1.26

    The kubernetes version is shown in the dependency list printed during installation below.

 

2. Deploy the software

  2.1 Set hostnames

    1) Set the hostname on each host; run the matching command on each machine

     hostnamectl set-hostname master
     #hostnamectl set-hostname etcd   # not used for now; etcd shares a node with master
     hostnamectl set-hostname node1
     hostnamectl set-hostname node2

    2) Update the hosts file

      Edit the hosts file on all three hosts:

[root@node2 yum.repos.d]# cat >>/etc/hosts<<eof
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
> 192.168.216.51 master
> 192.168.216.52 etcd   # the etcd VM had problems during this lab, so etcd temporarily shares a node with master
> 192.168.216.53 node1
> 192.168.216.54 node2
> eof
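As a quick sanity check, the same three cluster entries can be rebuilt in a scratch file and each node name verified with grep. A minimal sketch; it uses a temporary file, not the real /etc/hosts:

```shell
# rebuild the three cluster entries in a scratch file (not the real /etc/hosts)
cat > /tmp/hosts.k8s <<'EOF'
192.168.216.51 master
192.168.216.53 node1
192.168.216.54 node2
EOF

# confirm every node name has exactly one entry (prints 1 for each)
for h in master node1 node2; do
  grep -cw "$h" /tmp/hosts.k8s
done
```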

 

  2.2 Time synchronization

    Enable chronyd on all three nodes to keep their clocks in sync:

    systemctl start chronyd
    systemctl enable chronyd

 

  2.3 Install packages

    On master/etcd: kubernetes, flannel, etcd

 

yum install  kubernetes etcd flannel ntp -y
Installed:
  etcd.x86_64 0:3.3.11-2.el7.centos   flannel.x86_64 0:0.7.1-4.el7   kubernetes.x86_64 0:1.5.2-0.7.git269f928.el7  

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-5.el7_7.2                 docker.x86_64 2:1.13.1-103.git7f2769b.el7.centos        
  docker-client.x86_64 2:1.13.1-103.git7f2769b.el7.centos  docker-common.x86_64 2:1.13.1-103.git7f2769b.el7.centos 
  kubernetes-client.x86_64 0:1.5.2-0.7.git269f928.el7      kubernetes-master.x86_64 0:1.5.2-0.7.git269f928.el7     
  kubernetes-node.x86_64 0:1.5.2-0.7.git269f928.el7        libnetfilter_cthelper.x86_64 0:1.0.0-10.el7_7.1         
  libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7_7.1          libnetfilter_queue.x86_64 0:1.0.2-2.el7_2               
  socat.x86_64 0:1.7.3.2-2.el7                            

Updated:
  ntp.x86_64 0:4.2.6p5-29.el7.centos                                                                                

Dependency Updated:
  ntpdate.x86_64 0:4.2.6p5-29.el7.centos                                                                            

Complete!
[root@master backup1]# 

    On node1/node2:

[root@node4 ~]# yum install kubernetes flannel ntp -y 
[root@node3 ~]# yum install kubernetes flannel ntp -y 

 

3. Configuration

  3.1 Configure etcd

    1) Change lines 6, 10, and 23 to the contents shown on lines 7, 11, and 24 below (the uncommented values):

[root@etcd ~]# vim /etc/etcd/etcd.conf 

 6 #ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
  7 ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.216.51:2379"
 10 #ETCD_NAME="default"
 11 ETCD_NAME="etcd"   
 23 #ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
 24 ETCD_ADVERTISE_CLIENT_URLS="http://192.168.216.51:2379"
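The same three edits can be applied non-interactively with sed instead of vim. A sketch against a scratch copy of the file; the real path is /etc/etcd/etcd.conf:

```shell
# scratch copy holding the three default lines (real file: /etc/etcd/etcd.conf)
cat > /tmp/etcd.conf <<'EOF'
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
EOF

# rewrite each key to the values used in this deployment
sed -i \
  -e 's|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.216.51:2379"|' \
  -e 's|^ETCD_NAME=.*|ETCD_NAME="etcd"|' \
  -e 's|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS="http://192.168.216.51:2379"|' \
  /tmp/etcd.conf

cat /tmp/etcd.conf
```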

    2) What the settings mean

    

ETCD_NAME="etcd"
#---etcd node name; with only one etcd machine you can leave the default "default"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#---data storage directory
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.216.51:2379"
#---client-facing listen addresses, usually on port 2379; 0.0.0.0 listens on all interfaces
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.216.51:2379"
#---the URLs advertised to clients

 

    3) Start the service

systemctl start etcd
systemctl status etcd
systemctl enable etcd

    4) Verify it is listening on port 2379

netstat -antup |grep 2379

    5) View the member list

[root@master ~]# etcdctl member list
8e9e05c52164694d: name=etcd peerURLs=http://localhost:2380 clientURLs=http://192.168.216.51:2379 isLeader=true
[root@master ~]#

  

  3.2 Configure the master server

    1) Edit the kubernetes config file

      Change line 22 as shown:

[root@master ~]# vim /etc/kubernetes/config   

22 KUBE_MASTER="--master=http://192.168.216.51:8080"

    2) What the settings mean

  1 ###
  2 # kubernetes system config
  3 #
  4 # The following values are used to configure various aspects of all
  5 # kubernetes services, including
  6 
  7 #   kube-apiserver.service
  8 #   kube-controller-manager.service
  9 #   kube-scheduler.service
 10 #   kubelet.service
 11 #   kube-proxy.service
 12 # logging to stderr means we get it in the systemd journal
 13 KUBE_LOGTOSTDERR="--logtostderr=true"
 14 #---whether error logs go to a file or to stderr
 15 # journal message level, 0 is debug
 16 KUBE_LOG_LEVEL="--v=0"
 17 #---log level
 18 # Should this cluster be allowed to run privileged docker containers
 19 KUBE_ALLOW_PRIV="--allow-privileged=false"
 20 #---whether privileged containers may run; false means not allowed
 21 # How the controller-manager, scheduler, and proxy find the apiserver
 22 KUBE_MASTER="--master=http://192.168.216.51:8080"
 23 #---the apiserver URL these components connect to

    3) Edit the apiserver config file

      Change the settings shown uncommented below:

[root@master ~]# vim /etc/kubernetes/apiserver 
  1 ###
  2 # kubernetes system config
  3 #
  4 # The following values are used to configure the kube-apiserver
  5 #
  6 
  7 # The address on the local server to listen to.
  8 #KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
  9 KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
 10 #---listen address; set to 0.0.0.0 to listen on all interfaces
 11 # The port on the local server to listen on.
 12 # KUBE_API_PORT="--port=8080"
 13 
 14 # Port minions listen on
 15 # KUBELET_PORT="--kubelet-port=10250"
 16 
 17 # Comma separated list of nodes in the etcd cluster
 18 #KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
 19 KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.216.51:2379"
 20 #---address of the etcd service configured earlier
 21 # Address range to use for services
 22 KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
 23 #---IP range kubernetes may allocate from; every pod and service kubernetes starts gets an address out of this pool
 24 # default admission control policies
 25 #KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,
    ServiceAccount,ResourceQuota"
 26 KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
 27 #---no restrictions: all nodes and all requests are allowed to reach the apiserver
 28 # Add your own!
 29 KUBE_API_ARGS=""

    4) The kube-controller-manager config file

      Keep the defaults here; no changes are needed for now.

[root@master ~]# cat /etc/kubernetes/controller-manager 
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
[root@master ~]# 

    5) Configure kube-scheduler

[root@master ~]# vim /etc/kubernetes/scheduler 
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
#---listen on all interfaces

  3.3 Set up the etcd network

etcdctl mkdir /k8s/network        
#---create /k8s/network to store the flannel network information
etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'
#---set /k8s/network/config to the string '{"Network": "10.255.0.0/16"}'; flannel uses this range to assign a virtual IP segment to docker on each minion
 

[root@master ~]# etcdctl get /k8s/network/config

#---check the stored network config
{"Network": "10.255.0.0/16"}
[root@master ~]#
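The value stored under /k8s/network/config is a plain JSON string; the CIDR that flannel carves per-host subnets from can be pulled out with sed. A sketch on the literal string, no etcd needed:

```shell
# the JSON string stored in etcd (from the set command above)
cfg='{"Network": "10.255.0.0/16"}'

# extract the Network CIDR as a quick shell check might
net=$(printf '%s' "$cfg" | sed -n 's/.*"Network": *"\([^"]*\)".*/\1/p')
echo "$net"
```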

  3.4 Configure flanneld on the master

 

[root@master ~]# vim /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.216.51:2379"

#---etcd URL, pointing at the server running etcd

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network"

#---the etcd key prefix holding the network config

# Any additional options that you want to pass
FLANNEL_OPTIONS="--iface=ens33"

#---bind flannel to this network interface

[root@master ~]# systemctl restart flanneld 

 [root@master ~]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-10-29 17:21:54 CST; 42min ago
Main PID: 12715 (flanneld)
CGroup: /system.slice/flanneld.service
└─12715 /usr/bin/flanneld -etcd-endpoints=http://192.168.216.51:2379 -etcd-prefix=/k8s/network --iface...

 
 

Oct 29 17:21:53 master systemd[1]: Starting Flanneld overlay address etcd agent...
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.022949 12715 main.go:132] Installing signal handlers
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.023985 12715 manager.go:149] Using interface w...6.51
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.024047 12715 manager.go:166] Defaulting extern....51)
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.048791 12715 local_manager.go:134] Found lease...sing
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.068556 12715 manager.go:250] Lease acquired: 1...0/24
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.069202 12715 network.go:98] Watching for new s...ases
Oct 29 17:21:54 master systemd[1]: Started Flanneld overlay address etcd agent.
Oct 29 17:38:56 master flanneld-start[12715]: I1029 17:38:56.822596 12715 network.go:191] Subnet added: 10....0/24
Oct 29 17:56:05 master flanneld-start[12715]: I1029 17:56:05.501411 12715 network.go:191] Subnet added: 10....0/24
Hint: Some lines were ellipsized, use -l to show in full.
[root@master ~]#

 

    Check the subnet information

 
 

[root@master ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.216.51 netmask 255.255.255.0 broadcast 192.168.216.255
inet6 fe80::3409:e73d:1ef:2e1 prefixlen 64 scopeid 0x20<link>
inet6 fe80::9416:80e8:f210:1e24 prefixlen 64 scopeid 0x20<link>
inet6 fe80::39cb:d8d1:a78b:9be1 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:1c:8b:39 txqueuelen 1000 (Ethernet)
RX packets 124978 bytes 149317395 (142.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 47636 bytes 5511781 (5.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

 
 

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 10.255.16.0 netmask 255.255.0.0 destination 10.255.16.0
inet6 fe80::1837:1885:18c6:5e52 prefixlen 64 scopeid 0x20<link>
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 144 (144.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

 
 

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 178940 bytes 55467759 (52.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 178940 bytes 55467759 (52.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

 
 

virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:23:a5:7c txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

 
 

[root@master ~]#

[root@master ~]# cat /run/flannel/subnet.env

#---subnet info; a helper script later converts subnet.env into a docker environment file, /run/flannel/docker
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.16.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
[root@master ~]#

 
[root@etcd ~]# cat /run/flannel/docker 
DOCKER_OPT_BIP="--bip=10.255.93.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.255.93.1/24 --ip-masq=true --mtu=1472"
[root@etcd ~]# 
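The conversion from subnet.env to the docker options file is mechanical: source the variables and assemble docker flags from them. A sketch mimicking what flannel's mk-docker-opts.sh helper produces, using the values seen above (scratch paths, not the real /run/flannel files):

```shell
# simulate /run/flannel/subnet.env with the values seen on the master
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.16.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF

# source the variables and assemble docker's network options from them
. /tmp/subnet.env
echo "DOCKER_OPT_BIP=\"--bip=${FLANNEL_SUBNET}\""
echo "DOCKER_OPT_MTU=\"--mtu=${FLANNEL_MTU}\""
```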

  3.5 Configure node1

    1) Configure the flanneld service

[root@node1 ~]# vim /etc/sysconfig/flanneld 

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.216.51:2379" #---the etcd server URL
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network" #---the etcd key prefix holding the network config
# Any additional options that you want to pass
#FLANNEL_OPTIONS="--iface=ens33"

    2) Configure the master address (used by kube-proxy and the kubelet)

[root@node1 ~]# vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.216.51:8080"
#---the master URL

    3) Configure kube-proxy

      kube-proxy implements services: it handles traffic from pods to services inside the cluster.

      Keep the defaults here.

[root@node1 ~]# grep -v '^#' /etc/kubernetes/proxy


KUBE_PROXY_ARGS=""
[root@node1 ~]# 

    4) Configure the kubelet on node1

      The kubelet manages pods: the containers in each pod, their images, volumes, and so on.

[root@node1 ~]# vim /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
#---listen on all addresses
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.216.51:8080"
#---the api-server URL
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

 

    5) Start the related services

[root@node1 ~]# systemctl restart flanneld kube-proxy kubelet docker
[root@node1 ~]# systemctl enable flanneld kube-proxy kubelet docker
[root@node1 ~]# systemctl status flanneld kube-proxy kubelet docker 
    Note: the kubelet may fail to start here; see the troubleshooting section at the end.

    6) Check ifconfig and the listening ports

  

[root@node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.255.41.1 netmask 255.255.255.0 broadcast 0.0.0.0
ether 02:42:22:ac:66:2f txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.216.53 netmask 255.255.255.0 broadcast 192.168.216.255
inet6 fe80::3409:e73d:1ef:2e1 prefixlen 64 scopeid 0x20<link>
inet6 fe80::9416:80e8:f210:1e24 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:79:23:62 txqueuelen 1000 (Ethernet)
RX packets 2490 bytes 802004 (783.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1853 bytes 397450 (388.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 10.255.41.0 netmask 255.255.0.0 destination 10.255.41.0
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 76 bytes 6004 (5.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 76 bytes 6004 (5.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:23:a5:7c txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

 

 

[root@node1 ~]# netstat -antup |grep proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      918/kube-proxy      
tcp        0      0 192.168.216.53:58700    192.168.216.51:8080     ESTABLISHED 918/kube-proxy      
tcp        0      0 192.168.216.53:58698    192.168.216.51:8080     ESTABLISHED 918/kube-proxy      
[root@node1 ~]# 

    7) Verify

      On the master node, check with kubectl:

[root@master ~]# kubectl get node
NAME      STATUS    AGE
node1     Ready     17h
[root@master ~]# 

      Seeing node1 with STATUS Ready means it joined successfully.

 

  3.6 Configure node2 (almost identical to node1)

    1) Copy the config files from node1 to node2

scp /etc/sysconfig/flanneld 192.168.216.54:/etc/sysconfig/
scp /etc/kubernetes/config 192.168.216.54:/etc/kubernetes/
scp /etc/kubernetes/proxy 192.168.216.54:/etc/kubernetes/
scp /etc/kubernetes/kubelet 192.168.216.54:/etc/kubernetes/
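The four scp commands follow one pattern, so they can be generated in a loop. A sketch: it only prints the commands; drop the echo to actually run them against 192.168.216.54:

```shell
# print the scp command for each config file; remove "echo" to execute
for f in /etc/sysconfig/flanneld /etc/kubernetes/config \
         /etc/kubernetes/proxy /etc/kubernetes/kubelet; do
  echo scp "$f" "192.168.216.54:$(dirname "$f")/"
done
```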

    2) Only one file needs changing

      Replace node1 with node2:

[root@node2 ~]# vim /etc/kubernetes/kubelet 

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node2"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.216.51:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
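The single substitution can also be done with sed instead of opening vim. A sketch against a scratch copy; the real file is /etc/kubernetes/kubelet on node2:

```shell
# scratch copy of the one line that differs (real file: /etc/kubernetes/kubelet)
cat > /tmp/kubelet <<'EOF'
KUBELET_HOSTNAME="--hostname-override=node1"
EOF

# swap the hostname override from node1 to node2 in place
sed -i 's/node1/node2/' /tmp/kubelet
cat /tmp/kubelet
```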

    3) Start the services and enable them at boot

systemctl restart flanneld kube-proxy kubelet docker
systemctl enable flanneld kube-proxy kubelet docker

  3.7 Verify on the master that the nodes joined

    A STATUS of Ready proves the node successfully joined the cluster.

[root@master ~]# kubectl get node
NAME      STATUS    AGE
node1     Ready     17h
node2     Ready     1m
[root@master ~]# 

 

4. Troubleshooting

    For the case where the kubelet fails to start

  4.1 Inspect the startup details

[root@node1 ~]# systemctl status -l kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2019-10-25 15:39:18 CST; 2s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 71003 ExecStart=/usr/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_POD_INFRA_CONTAINER $KUBELET_ARGS (code=exited, status=204/MEMORY)
 Main PID: 71003 (code=exited, status=204/MEMORY)

Oct 25 15:39:17 node1 systemd[1]: kubelet.service: main process exited, code=exited, status=204/MEMORY
Oct 25 15:39:17 node1 systemd[1]: Unit kubelet.service entered failed state.
Oct 25 15:39:17 node1 systemd[1]: kubelet.service failed.
Oct 25 15:39:18 node1 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Oct 25 15:39:18 node1 systemd[1]: start request repeated too quickly for kubelet.service
Oct 25 15:39:18 node1 systemd[1]: Failed to start Kubernetes Kubelet Server.
Oct 25 15:39:18 node1 systemd[1]: Unit kubelet.service entered failed state.
Oct 25 15:39:18 node1 systemd[1]: kubelet.service failed.
[root@node1 ~]# journalctl -f -u kubelet
-- Logs begin at Thu 2019-10-24 19:20:13 CST. --
Oct 25 15:39:17 node1 systemd[1]: Started Kubernetes Kubelet Server.
Oct 25 15:39:17 node1 systemd[1]: Starting Kubernetes Kubelet Server...
Oct 25 15:39:17 node1 systemd[1]: kubelet.service: main process exited, code=exited, status=204/MEMORY
Oct 25 15:39:17 node1 systemd[1]: Unit kubelet.service entered failed state.
Oct 25 15:39:17 node1 systemd[1]: kubelet.service failed.
Oct 25 15:39:18 node1 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Oct 25 15:39:18 node1 systemd[1]: start request repeated too quickly for kubelet.service
Oct 25 15:39:18 node1 systemd[1]: Failed to start Kubernetes Kubelet Server.
Oct 25 15:39:18 node1 systemd[1]: Unit kubelet.service entered failed state.
Oct 25 15:39:18 node1 systemd[1]: kubelet.service failed.        

  4.2 Check the pod logs

[root@master ~]# kubectl logs nginx-2187705812-0vkvm 
Error from server (BadRequest): container "nginx" in pod "nginx-2187705812-0vkvm" is waiting to start: ContainerCreating
[root@master ~]# kubectl describe pod
Name:           nginx-2187705812-0vkvm
Namespace:      default
Node:           node1/192.168.216.53
Start Time:     Mon, 04 Nov 2019 16:26:33 +0800
Labels:         pod-template-hash=2187705812
                run=nginx
Status:         Pending
IP:
Controllers:    ReplicaSet/nginx-2187705812
Containers:
  nginx:
    Container ID:
    Image:                      docker.io/nginx
    Image ID:
    Port:                       9000/TCP
    State:                      Waiting
      Reason:                   ContainerCreating
    Ready:                      False
    Restart Count:              0
    Volume Mounts:              <none>
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True 
  Ready         False 
  PodScheduled  True 
No volumes.
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason             Message
  ---------     --------        -----   ----                    -------------   --------        ------             -------
  3m            3m              1       {default-scheduler }                    Normal          Scheduled          Successfully assigned nginx-2187705812-0vkvm to node1
  <invalid>     <invalid>       20      {kubelet node1}                         Warning         MissingClusterDNS  kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  <invalid>     <invalid>       20      {kubelet node1}                         Warning         FailedSync         Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: Error response from daemon: {\"message\":\"oci runtime error: container_linux.go:235: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"Cannot set property TasksAccounting, or unknown property.\\\\\\\"\\\"\\n\"}"

  4.3 Swap memory

        The fix is to disable swap:

           swapoff -a
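swapoff -a only lasts until reboot; to keep swap off permanently, the swap line in /etc/fstab is usually commented out as well. A sketch against a scratch copy (a typical CentOS 7 layout is assumed, not the real /etc/fstab):

```shell
# scratch copy of a typical CentOS 7 fstab (not the real /etc/fstab)
cat > /tmp/fstab <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF

# comment out any swap mount so it stays disabled after reboot
sed -i '/ swap / s/^/#/' /tmp/fstab
grep swap /tmp/fstab
```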

  4.4 The kubelet defaults to the cgroupfs cgroup driver while the docker we installed uses systemd; make the drivers match

    1) Method 1 (this did not work here: docker failed to start after the change)

# edit daemon.json
vi /etc/docker/daemon.json
# add the following property
"exec-opts": [
    "native.cgroupdriver=systemd"
]

    2) Method 2: edit docker.service

# edit docker.service
vi /lib/systemd/system/docker.service
# find this line:
--exec-opt native.cgroupdriver=systemd \
# and change it to:
--exec-opt native.cgroupdriver=cgroupfs \
# verify the change took effect
docker info

 

  4.5 VM problems: redoing the setup fixed it

    If none of the above works, it may be a VM problem like mine: after restoring the VM image and redoing the whole setup, everything worked.

 

 

5. Configuration workflow summary

 

 

 

Please credit the original source when reposting: http://www.javashuo.com/article/p-gjuxhdwe-hq.html
