Kubernetes Chapter 3: kubeadm


Reference: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/

kubeadm is a tool designed to provide kubeadm init and kubeadm join as best-practice "fast paths" for creating Kubernetes clusters.

kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design it cares only about bootstrapping, not about provisioning machines. Likewise, installing various nice-to-have addons, such as the Kubernetes Dashboard, monitoring solutions, and cloud-specific addons, is out of scope.

Instead, we expect higher-level and more tailored tooling to be built on top of kubeadm; ideally, using kubeadm as the basis of all deployments will make it easier to create conformant clusters.

 

Main features

kubeadm init to bootstrap a Kubernetes control-plane node
kubeadm join to bootstrap a Kubernetes worker node and join it to the cluster
kubeadm upgrade to upgrade a Kubernetes cluster to a newer version
kubeadm config to configure the cluster for kubeadm upgrade if it was initialized with kubeadm v1.7.x or lower
kubeadm token to manage the tokens used by kubeadm join
kubeadm reset to revert any changes made to this host by kubeadm init or kubeadm join
kubeadm version to print the kubeadm version
kubeadm alpha to preview a set of features made available for gathering feedback from the community
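Taken together, a typical lifecycle built from these subcommands looks roughly like the sketch below; the placeholder address, token, and hash are illustrative, not values from this document.

# on the master (control-plane) node:
kubeadm init

# on each worker node, using the token and CA hash printed by kubeadm init:
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# later, on the master:
kubeadm token list        # inspect bootstrap tokens
kubeadm upgrade plan      # see which versions the cluster can be upgraded to
kubeadm reset             # revert the changes made by kubeadm init / kubeadm join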

 

Test deployment:

Deploy on three hosts using kubeadm

master: 10.2.61.21

Install: docker-ce kubelet kubeadm kubectl

node: 10.2.61.22

node: 10.2.61.23

 

1. Configure the master

[root@localhost yum.repos.d]# yum install docker-ce kubelet kubeadm kubectl -y  

Enable the br_netfilter module

What br_netfilter does:
The bridge-netfilter code enables the following functionality: the {ip,ip6,arp}tables can filter bridged IPv4/IPv6/ARP packets, even when encapsulated in an 802.1Q VLAN or PPPoE header, which enables stateful transparent firewalling. All of the filtering, logging, and NAT features of those three tools can therefore be used on bridged frames. Combined with ebtables, the bridge-nf code turns Linux into a very powerful transparent firewall and makes it possible to build a transparent masquerading machine (i.e. all local hosts think they are directly connected to the Internet).
Whether {ip,ip6,arp}tables see bridged traffic is controlled by proc entries under /proc/sys/net/bridge/: bridge-nf-call-arptables, bridge-nf-call-iptables, bridge-nf-call-ip6tables. In the same directory, bridge-nf-filter-vlan-tagged and bridge-nf-filter-pppoe-tagged control whether those firewall tools also see bridged 802.1Q VLAN and PPPoE encapsulated packets. These proc entries are ordinary files: writing "1" to a file (echo 1 > file) enables the feature, writing "0" disables it.
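As a concrete example of the proc entries described above (they only exist once the br_netfilter module is loaded), they can be read and toggled like ordinary files:

# check whether iptables currently sees bridged traffic (1 = enabled, 0 = disabled)
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# enable it for the current boot
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
# the persistent equivalent is the net.bridge.bridge-nf-call-iptables line in /etc/sysctl.d/k8s.conf below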
[root@localhost sysctl.d]# lsmod |grep br_netfilter
[root@localhost sysctl.d]# modprobe br_netfilter      # load the module
[root@localhost sysctl.d]# lsmod |grep br_netfilter
br_netfilter           22256  0 
bridge                146976  1 br_netfilter
[root@localhost sysctl.d]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@localhost sysctl.d]# cat k8s.conf          # the sysctl settings
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@localhost sysctl.d]# 
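modprobe only loads the module for the current boot. A minimal sketch for making it persistent, assuming a systemd-based CentOS host; the file name br_netfilter.conf is just a choice made here for illustration:

# load br_netfilter automatically at boot via systemd-modules-load
cat > /etc/modules-load.d/br_netfilter.conf <<'EOF'
br_netfilter
EOF
# re-apply the sysctl settings once the module is present
sysctl -p /etc/sysctl.d/k8s.conf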

 

Edit the docker.service unit file

[root@localhost /]# cat /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
Environment="HTTPS_PROXY=http://www.ik8s.io:10080 NO_PROXY=10.0.0.0/8"
  //因爲許多鏡像都須要docker 去k8s 官網下載,可是因爲某些緣由沒法訪問,所以使用別人的代理工具進行下載
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock ExecReload=/bin/kill -s HUP $MAINPID
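After editing the unit file, the change only takes effect once systemd reloads it and Docker restarts. A quick check, as a sketch:

systemctl daemon-reload
systemctl restart docker
# confirm the proxy variables were picked up by the unit
systemctl show docker --property=Environment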

  

2. kubeadm init [flags]: initialize a master node

   Workflow of the init command

kubeadm init bootstraps a Kubernetes master node by performing the following steps.

1. Runs a series of pre-flight checks to validate the system state before making any changes. Some checks only trigger warnings; others are treated as errors and abort kubeadm
	unless the problem is fixed or the user passes --ignore-preflight-errors=<list-of-errors>.

2. Generates a self-signed CA certificate (or uses an existing one if provided) to establish an identity for every component in the cluster.
	If the user has supplied their own CA certificate and/or key via the certificate directory configured with --cert-dir (default /etc/kubernetes/pki),
	this step is skipped, as described in the "Using custom certificates" documentation. If --apiserver-cert-extra-sans is given,
	the API server certificate gets additional SAN entries, lower-cased where necessary.

3. Writes kubeconfig files into /etc/kubernetes/ for the kubelet, controller-manager, and scheduler to use when connecting to the API server,
	each with its own identity, plus a standalone kubeconfig file named admin.conf for administrative use.

4. If kubeadm is invoked with --feature-gates=DynamicKubeletConfig, it writes the kubelet's initial configuration to /var/lib/kubelet/config/init/kubelet.
	See "Set Kubelet parameters via a config file" and "Reconfigure a Node's Kubelet in a Live Cluster" for more on dynamic kubelet configuration.
	The feature is off by default and gated behind a feature flag, but it may well become the default in a future release.

5. Generates static Pod manifests for the API server, controller manager, and scheduler. If no external etcd service is provided, an additional static Pod manifest is generated for etcd.

6. The static Pod manifests are written to /etc/kubernetes/manifests; the kubelet watches this directory and creates the Pods at startup.

	Once the control-plane Pods are up and running, the kubeadm init workflow continues. (The artifacts produced by these steps can be inspected as sketched below.)
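A quick way to look at the artifacts produced by the steps above on the master once kubeadm init has finished:

ls /etc/kubernetes/pki          # CA, API server, etcd, and front-proxy certificates and keys (step 2)
ls /etc/kubernetes/*.conf       # admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf (step 3)
ls /etc/kubernetes/manifests    # static Pod manifests for the apiserver, controller-manager, scheduler, and etcd (steps 5-6)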

 

[root@localhost ~]# cat /etc/docker/daemon.json 
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@localhost ~]#
// "exec-opts": selects systemd as the cgroup driver
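The daemon.json change also requires a Docker restart; afterwards the active cgroup driver can be checked so it matches what the kubelet expects. A minimal sketch:

systemctl restart docker
docker info | grep -i "cgroup driver"     # should report: Cgroup Driver: systemd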

  Run the initialization

// Before initializing, I pulled all of the required images locally via Docker Hub plus GitHub automated builds, then re-tagged them.
// Use [kubeadm config images list] to see which images and versions are required.
// The --ignore-preflight-errors=Swap flag does not really get around the problem: the kubelet still fails to start with swap on, so swap has to be turned off with swapoff -a.
// If a previous kubeadm init attempt failed, run kubeadm reset before retrying to clean up the data it generated; otherwise kubeadm complains that things already exist.
[root@localhost ~]# kubeadm init --kubernetes-version=v1.15.0 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.2.61.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.2.61.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.2.61.21]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.502758 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1awp15.lkz231yb9nbhbdx4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
    // to start using the Kubernetes cluster, these commands must be run first

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.2.61.21:6443 --token 1awp15.lkz231yb9nbhbdx4 \
    --discovery-token-ca-cert-hash sha256:fe9433078dc9ea4eba963ab00b7dd388a24c2367152dff5ac07ac89ef8856849
[root@localhost ~]#

  

 Since we are running as root, there is no need to change the owner/group:

  mkdir -p $HOME/.kube
  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

 

Kubectl  

kubectl is a command-line interface for running commands against Kubernetes clusters. kubectl looks for a file named config in the $HOME/.kube directory. You can point it at a different kubeconfig file by setting the KUBECONFIG environment variable or by passing the --kubeconfig flag.
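For example, both ways of pointing kubectl at a specific kubeconfig (using admin.conf as the example file):

# default: kubectl reads $HOME/.kube/config
kubectl get nodes
# via the environment variable
KUBECONFIG=/etc/kubernetes/admin.conf kubectl get nodes
# via the flag
kubectl --kubeconfig=/etc/kubernetes/admin.conf cluster-info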

 

  

[root@localhost /]# kubectl get pods --all-namespaces
// check the status of all pods in all namespaces
NAMESPACE     NAME                                            READY   STATUS                  RESTARTS   AGE
kube-system   coredns-5c98db65d4-bzvgt                        0/1     Pending                 0          21h
kube-system   coredns-5c98db65d4-c2zw8                        0/1     Pending                 0          21h
kube-system   etcd-localhost.localdomain                      1/1     Running                 0          21h
kube-system   kube-apiserver-localhost.localdomain            1/1     Running                 0          21h
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running                 0          21h
kube-system   kube-flannel-ds-amd64-wkmx6                     0/1     Init:ImagePullBackOff   0          17m
kube-system   kube-proxy-lb4jf                                1/1     Running                 0          21h
kube-system   kube-scheduler-localhost.localdomain            1/1     Running                 0          21h
[root@localhost /]#
// the flannel network pod never comes up, so the node stays NotReady

  

——————————————————————————————————————————————————————————————————————————————————-

// The configuration above has a problem; https://stackoverflow.com/questions/52098214/kube-flannel-in-crashloopbackoff-status suggests that --pod-network-cidr must be specified.

Reset with kubeadm reset, delete the .kube directory under $HOME, and remove whatever other leftover files kubeadm complains about (see the cleanup sketch below).
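A minimal cleanup sketch before re-running kubeadm init; the CNI path is an assumption based on the flannel volumes shown later:

kubeadm reset -f            # undo the changes made by the previous kubeadm init
rm -rf $HOME/.kube          # remove the old admin kubeconfig copy
rm -rf /etc/cni/net.d       # remove leftover CNI configuration if kubeadm complains about it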

 kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

[root@localhost lib]# kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12  --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.2.61.21]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.2.61.21 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.2.61.21 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.002842 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wxgx55.vjdl3ampsahtlkl3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.2.61.21:6443 --token wxgx55.vjdl3ampsahtlkl3 \
    --discovery-token-ca-cert-hash sha256:caf8238dcdcbc374eb304612a08f13d296186cbe01e3941d3c919d97a7820809 
[root@localhost lib]# 

  

Deploy the flannel network automatically

https://github.com/coreos/flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@localhost /]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS                  RESTARTS   AGE
kube-system   coredns-5c98db65d4-pvkq7                        0/1     Pending                 0          21m
kube-system   coredns-5c98db65d4-wwfqp                        0/1     Pending                 0          21m
kube-system   etcd-localhost.localdomain                      1/1     Running                 0          20m
kube-system   kube-apiserver-localhost.localdomain            1/1     Running                 0          20m
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running                 0          20m
kube-system   kube-flannel-ds-amd64-rns7x                     0/1     Init:ImagePullBackOff   0          11m
kube-system   kube-proxy-qg9lx                                1/1     Running                 0          21m
kube-system   kube-scheduler-localhost.localdomain            1/1     Running                 0          20m
[root@localhost /]# 

  

  

 Troubleshoot with kubectl describe pod -n kube-system kube-flannel-ds-amd64-rns7x

[root@localhost /]# kubectl describe pod -n kube-system kube-flannel-ds-amd64-rns7x
Name:           kube-flannel-ds-amd64-rns7x
Namespace:      kube-system
Priority:       0
Node:           localhost.localdomain/10.2.61.21
Start Time:     Wed, 10 Jul 2019 15:48:59 +0800
Labels:         app=flannel
                controller-revision-hash=7f489b5c67
                pod-template-generation=1
                tier=node
Annotations:    <none>
Status:         Pending
IP:             10.2.61.21
Controlled By:  DaemonSet/kube-flannel-ds-amd64
Init Containers:
  install-cni:
    Container ID:  
    Image:         quay.io/coreos/flannel:v0.11.0-amd64
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-2jfdz (ro)
Containers:
  kube-flannel:
    Container ID:  
    Image:         quay.io/coreos/flannel:v0.11.0-amd64
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-amd64-rns7x (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-2jfdz (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:  
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  flannel-token-2jfdz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  flannel-token-2jfdz
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age               From                            Message
  ----     ------     ----              ----                            -------
  Normal   Scheduled  14m               default-scheduler               Successfully assigned kube-system/kube-flannel-ds-amd64-rns7x to localhost.localdomain
  Warning  Failed     3m14s             kubelet, localhost.localdomain  Failed to pull image "quay.io/coreos/flannel:v0.11.0-amd64": rpc error: code = Unknown desc = context canceled
  Warning  Failed     3m14s             kubelet, localhost.localdomain  Error: ErrImagePull
  Normal   BackOff    3m13s             kubelet, localhost.localdomain  Back-off pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
  Warning  Failed     3m13s             kubelet, localhost.localdomain  Error: ImagePullBackOff
  Normal   Pulling    3m (x2 over 14m)  kubelet, localhost.localdomain  Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
  // The events show that the quay.io/coreos/flannel:v0.11.0-amd64 image still cannot be pulled.
  // Pull it again via Docker Hub plus GitHub (see the sketch below),
  // make sure docker image ls shows quay.io/coreos/flannel:v0.11.0-amd64, and then re-run
  // kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
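A sketch of the workaround described in the comments above; the mirror repository name is a placeholder, and the only requirement is that the image ends up tagged exactly as quay.io/coreos/flannel:v0.11.0-amd64:

docker pull <your-reachable-mirror>/flannel:v0.11.0-amd64
docker tag  <your-reachable-mirror>/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker image ls | grep flannel    # the quay.io/coreos tag must be present before re-applying the manifest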
 
[root@localhost /]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 unchanged
daemonset.extensions/kube-flannel-ds-arm64 unchanged
daemonset.extensions/kube-flannel-ds-arm unchanged
daemonset.extensions/kube-flannel-ds-ppc64le unchanged
daemonset.extensions/kube-flannel-ds-s390x unchanged
[root@localhost /]# 
[root@localhost /]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-pvkq7                        1/1     Running   0          66m
kube-system   coredns-5c98db65d4-wwfqp                        1/1     Running   0          66m
kube-system   etcd-localhost.localdomain                      1/1     Running   0          66m
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          66m
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          65m
kube-system   kube-flannel-ds-amd64-rns7x                     1/1     Running   0          56m
kube-system   kube-proxy-qg9lx                                1/1     Running   0          66m
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          65m

[root@localhost /]# kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
localhost.localdomain   Ready    master   67m   v1.15.0
[root@localhost /]# 
[root@localhost /]# 

  

  

 Node configuration

 

Notes: turn swap off with swapoff -a

Enable br_netfilter with modprobe br_netfilter

Enable the bridge sysctls and pre-download the docker images

# yum install docker-ce kubelet kubeadm kubectl

 

[root@localhost sysctl.d]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

 

 

 

# I transferred the images offline with docker image save | load
#[root@localhost ~]# docker image save -o /root/kube.tar k8s.gcr.io/kube-apiserver:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/coredns:1.3.1 quay.io/coreos/flannel:v0.11.0-amd64
 
[root@localhost ~]# docker image load -i kube.tar 

[root@localhost sysctl.d]# docker image ls
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.15.0         d235b23c3570   3 weeks ago     82.4MB
k8s.gcr.io/kube-apiserver            v1.15.0         201c7a840312   3 weeks ago     207MB
k8s.gcr.io/kube-scheduler            v1.15.0         2d3813851e87   3 weeks ago     81.1MB
k8s.gcr.io/kube-controller-manager   v1.15.0         8328bb49b652   3 weeks ago     159MB
quay.io/coreos/flannel               v0.11.0-amd64   ff281650a721   5 months ago    52.6MB
k8s.gcr.io/coredns                   1.3.1           eb516548c180   5 months ago    40.3MB
k8s.gcr.io/etcd                      3.3.10          2c4adeb21b4f   7 months ago    258MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   18 months ago   742kB
[root@localhost sysctl.d]#

 

# Join the node to the cluster using a token generated on the master node.
# The first join attempt failed because the original bootstrap token (wxgx55.vjdl3ampsahtlkl3) had expired.
# kubeadm token list shows existing tokens, kubeadm token create generates a new one; tokens are valid for 24 hours.

  [root@localhost ~]# kubeadm token list
  TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
  1thhr1.6t0yv35khnamf6zz   23h         2019-07-12T16:43:46+08:00   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
  wxgx55.vjdl3ampsahtlkl3   <invalid>   2019-07-11T15:38:46+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
  [root@localhost ~]#
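A convenient alternative to generating a token and copying the CA hash by hand is kubeadm token create --print-join-command, which prints a ready-to-use join command (output shown with placeholders):

[root@localhost ~]# kubeadm token create --print-join-command
kubeadm join 10.2.61.21:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-hash>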

 
[root@localhost sysctl.d]# kubeadm join 10.2.61.21:6443 --token 1thhr1.6t0yv35khnamf6zz     --discovery-token-ca-cert-hash sha256:caf8238dcdcbc374eb304612a08f13d296186cbe01e3941d3c919d97a7820809
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@localhost sysctl.d]# 

 

To run kubectl on a node, copy /root/.kube/config from the master to the same location on the node, as sketched below.
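A minimal sketch of that copy, assuming root SSH access from the node to the master at 10.2.61.21:

mkdir -p /root/.kube
scp root@10.2.61.21:/root/.kube/config /root/.kube/config
kubectl get nodes        # the node can now talk to the API server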

# The node was added successfully
[root@localhost ~]# kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
kube.node2              Ready    <none>   26m   v1.15.0
localhost.localdomain   Ready    master   25h   v1.15.0
[root@localhost ~]# 

  

Because the hostnames were never set in the configuration above, everything was rebuilt, this time with proper hostnames (a sketch of setting them is shown below).
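A minimal sketch of what was done differently on the rebuild; the hostname-to-IP mapping is assumed from the addresses listed at the top of this test:

# run on each machine before kubeadm init / kubeadm join
hostnamectl set-hostname kube.master    # on 10.2.61.21
hostnamectl set-hostname kube.node1     # on 10.2.61.22
hostnamectl set-hostname kube.node2     # on 10.2.61.23
# make the names resolvable on every node
cat >> /etc/hosts <<'EOF'
10.2.61.21 kube.master
10.2.61.22 kube.node1
10.2.61.23 kube.node2
EOF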

 

kubeadm join 10.2.61.21:6443 --token 8gak7a.ncl9l1kvjzyqgar2 \
    --discovery-token-ca-cert-hash sha256:f30655d78e55a6efe0702d19af1f247e78d5a63586a913a614084b9af048f5d0

 

 

    

 

[root@kube sysctl.d]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
kube.master   Ready    master   22h   v1.15.0
kube.node1    Ready    <none>   57m   v1.15.0
kube.node2    Ready    <none>   22h   v1.15.0
[root@kube sysctl.d]# 