Kubernetes: Installing a Highly Available Kubernetes Cluster with kubeadm

1. Architecture Information

OS version: CentOS 7.6
Kernel: 3.10.0-957.el7.x86_64
Kubernetes: v1.14.1
Docker-ce: 18.09.5
Recommended hardware: 4 CPU cores, 8 GB RAM
Keepalived provides a highly available IP for the apiserver
Haproxy load-balances the apiserver

2. Node Information

The test environment consists of 6 virtual machines. The cluster is deployed with kubeadm, the network component is flannel, the masters are set up for HA, and RBAC is enabled. The masters do not run workload pods. Certificates and other files are distributed from node-01, which has passwordless SSH key access to every node in the cluster. The basic environment is as follows:

hostname   ip             components                                                           memory  cpu
node-01    172.19.8.111   kube-apiserver, kube-controller-manager, etcd, haproxy, keepalived   8G      4c
node-02    172.19.8.112   kube-apiserver, kube-controller-manager, etcd, haproxy, keepalived   8G      4c
node-03    172.19.8.113   kube-apiserver, kube-controller-manager, etcd                        8G      4c
node-04    172.19.8.114   node                                                                 8G      4c
node-05    172.19.8.115   node                                                                 8G      4c
node-06    172.19.8.116   node                                                                 8G      4c
VIP        172.19.8.250

3. Pre-Deployment Preparation

3.1 Disable the firewall and SELinux

[root@node-01 ~]# sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
[root@node-01 ~]# setenforce 0
[root@node-01 ~]# systemctl disable firewalld
[root@node-01 ~]# systemctl stop firewalld

 

3.2 Disable swap

[root@node-01 ~]# swapoff -a
Note: also edit /etc/fstab and comment out the swap entry so that swap stays off after a reboot.
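For example, the swap entry can be commented out in place (a sketch; adjust the pattern if your fstab is laid out differently):

[root@node-01 ~]# sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab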

3.3 Add host entries

[root@node-01 ~]# cat >>/etc/hosts<<EOF
172.19.8.111 node-01
172.19.8.112 node-02
172.19.8.113 node-03
172.19.8.114 node-04
172.19.8.115 node-05
172.19.8.116 node-06
EOF

3.4 Set up SSH so that node-01 can log in to the other servers without a password

[root@node-01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:uckCmzy46SfU6Lq9jRbugn0U8vQsr5H+PtfGBsvrfCA root@node-01
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|                 |
|    . o .        |
|     *.+ S       |
|    +o==E.oo     |
|   .=.oBo.o+*    |
|   o.**oooo+ *   |
|   oBO=++o++=    |
+----[SHA256]-----+

Distribute node-01's public key for passwordless login to the other servers:

[root@node-01 ~]# for n in `seq -w 01 06`;do ssh-copy-id node-$n;done

3.5 Configure kernel parameters. The server must be rebooted afterwards, otherwise the initialization step later on will fail.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system

Error handling: the errors below appear because the bridge network does not exist yet; the bridge entries only show up after Docker has been installed and started.

[root@node-01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

3.6 If kube-proxy will use IPVS mode, the IPVS kernel modules must be loaded

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

3.7 Add yum repositories

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

The Google repository cannot be reached from mainland China, so the Aliyun mirror can be used instead:

$ cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo

All of the preparation steps above must be performed on every node.

4. Deploy keepalived and haproxy

4.1 Install keepalived and haproxy on node-01 and node-02

$ yum install -y keepalived haproxy

4.2 Configure keepalived

node-01 configuration:

[root@node-01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        995958026@qq.com
    }
    notification_email_from keepalived@ptmind.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node-01
}
vrrp_script check_apiserver {
    script "/workspace/crontab/check_apiserver"
    interval 5
    weight -20
    fall 3
    rise 1
}
vrrp_instance VIP_250 {
    state MASTER
    interface eth0
    virtual_router_id 250
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 890iop
    }
    track_script {
        check_apiserver
    }
    virtual_ipaddress {
        172.19.8.250
    }
}

Health-check script:

$ cat /workspace/crontab/check_apiserver
#!/bin/bash
curl 127.0.0.1:8080 &>/dev/null
if [ $? -eq 0 ];then
    exit 0
else
    #systemctl stop keepalived
    exit 1
fi
$ chmod 755 /workspace/crontab/check_apiserver

node-02 configuration:

[root@node-02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        435002493@qq.com
    }
    notification_email_from keepalived@ptmind.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node-02
}
vrrp_instance VI_250 {
    state BACKUP
    interface eth0
    virtual_router_id 250
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 890iop
    }
    virtual_ipaddress {
        172.19.8.250
    }
}

4.3 Configure haproxy

The haproxy configuration is identical on node-01 and node-02. Haproxy listens on port 8443 rather than 6443 here, because it is deployed on the same servers as the kube-apiserver and both cannot bind to 6443.
[root@node-01 ~]# cat /etc/haproxy/haproxy.cfg
global
        chroot  /var/lib/haproxy
        daemon
        group haproxy
        user haproxy
        # log warning
        pidfile /var/lib/haproxy.pid
        maxconn 20000
        spread-checks 3
        nbproc 8

defaults
        log global
        mode tcp
        retries 3
        option redispatch

listen https-apiserver
        bind 0.0.0.0:8443
        mode tcp
        balance roundrobin
        timeout server 900s
        timeout connect 15s
        server apiserver01 172.19.8.111:6443 check port 6443 inter 5000 fall 5
        server apiserver02 172.19.8.112:6443 check port 6443 inter 5000 fall 5
        server apiserver03 172.19.8.113:6443 check port 6443 inter 5000 fall 5

4.4 Start the services

systemctl enable keepalived && systemctl start keepalived 
systemctl enable haproxy && systemctl start haproxy 
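Before moving on, you can verify that the VIP is bound on node-01 and that haproxy is listening on 8443 (a sketch; the interface name eth0 comes from the keepalived configuration above):

[root@node-01 ~]# ip addr show eth0 | grep 172.19.8.250
[root@node-01 ~]# ss -lnt | grep 8443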

5. Install Docker

kubeadm has requirements on the Docker version, so a version that matches kubeadm must be installed. This article uses docker-ce.

yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

yum install docker-ce
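To match the Docker version listed in the architecture section (18.09.5) instead of whatever is newest, the version can be pinned (a sketch; package names follow the docker-ce yum repository conventions):

yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.09.5 docker-ce-cli-18.09.5 containerd.io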
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl enable docker

6. Install kubeadm and kubectl

yum -y install kubeadm-1.14.1 kubectl-1.14.1 --disableexcludes=kubernetes

Set kubelet to start on boot (the kubelet package is pulled in as a dependency of kubeadm):

systemctl enable kubelet 

7. Configuration

7.1 Modify the initialization configuration

Print the default configuration with kubeadm config print init-defaults > kubeadm-init.yaml, then modify it for your own environment.

[root@node-01 ~]# kubeadm config print init-defaults > kubeadm-init.yaml
[root@node-01 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.19.8.111
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: k8s-test
controlPlaneEndpoint: "172.19.8.250:8443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.245.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

The YAML document above configures kube-proxy to run in IPVS mode; the default is iptables mode. If you want to keep iptables, this extra document can be omitted.

Notes on kube-proxy

In Kubernetes, a group of pods providing the same service can be abstracted as a Service, which offers a single entry point to clients; every Service has a virtual IP address (ClusterIP) and a port that clients connect to.
kube-proxy runs on every node and implements the Service functionality: it lets client pods inside the cluster reach a Service, and lets hosts outside the cluster reach a Service through NodePort and similar mechanisms.
kube-proxy defaults to iptables mode, implementing Service load balancing with iptables rules on each node. As the number of Services grows, performance drops significantly because iptables relies on linear rule matching and full-table updates.
IPVS is the core component of LVS and is a layer-4 load balancer. IPVS has the following characteristics:
like iptables it is based on Netfilter, but it uses hash tables;
supports TCP, UDP and SCTP, over both IPv4 and IPv6;
supports multiple load-balancing algorithms: rr, wrr, lc, wlc, sh, dh, lblc, ...;
supports session persistence.
LVS consists of two parts:

ipvs (IP Virtual Server): code that runs in kernel space and performs the actual scheduling; it is the core of the load balancing.
ipvsadm: a user-space tool that writes rules for the ipvs kernel framework, defining which addresses are cluster services and which are the real backend servers. Cluster services can be created with the ipvsadm command; see the sketch below.
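Once the cluster is running in IPVS mode, the virtual servers that kube-proxy programs can be inspected with ipvsadm (a sketch; the ipvsadm package is not installed by default):

yum install -y ipvsadm
ipvsadm -Ln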

7.2 Pre-pull the images

[root@node-01 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.14.0
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.14.0
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.14.0
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.14.0
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1

7.2.1 In mainland China the pull may fail because k8s.gcr.io is blocked; in that case pull the images manually from a domestic mirror and re-tag them

Get the list of required images:

[root@node-01 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Pull the images from the Aliyun registry and re-tag them as their k8s.gcr.io equivalents:

#!/bin/bash
images=(
kube-apiserver:v1.14.1
kube-controller-manager:v1.14.1
kube-scheduler:v1.14.1
kube-proxy:v1.14.1
pause:3.1
etcd:3.3.10
coredns:1.3.1
kubernetes-dashboard-amd64:v1.10.1
heapster-influxdb-amd64:v1.3.3
heapster-amd64:v1.4.2
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done

Every node needs to pull these images.
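One way to do this from node-01, reusing the SSH keys set up in section 3.4 (a sketch; pull-images.sh is a hypothetical file containing the script above):

[root@node-01 ~]# for n in `seq -w 01 06`;do scp pull-images.sh node-$n:/tmp/ && ssh node-$n "bash /tmp/pull-images.sh";done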

7.3 Initialize the cluster

Error: the kernel parameters were modified earlier but have not taken effect yet; a reboot is needed.

[root@node-01 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Fix:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

Or reboot the server.
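If you would rather not reboot, loading the br_netfilter module and re-applying the sysctl settings has the same effect (a sketch):

modprobe br_netfilter
sysctl --system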

Re-run the initialization:

[root@node-01 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-01 localhost] and IPs [172.19.8.111 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-01 localhost] and IPs [172.19.8.111 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.245.0.1 172.19.8.111 172.19.8.250]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.502727 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node node-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.19.8.250:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:89accff8b4514d49be4b88906c50fdab4ba8a211788da7252b880c925af77671 \
    --experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.19.8.250:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:89accff8b4514d49be4b88906c50fdab4ba8a211788da7252b880c925af77671

You may also hit this error:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.


Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Analysis: this situation is hard to diagnose; there is no clear error message and the system logs rarely reveal the root cause. Some common causes:
 1. Image pulls failed. Pulling from Google fails in mainland China; switch to Aliyun by setting imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers in kubeadm-init.yaml.
 2. Check whether the control-plane containers started correctly.
 3. The configured VIP cannot be reached, so the apiserver cannot be contacted; check the firewall configuration. This is what caused my initialization to fail.
 4. If initialization fails, clear the initialization state with kubeadm reset, stop docker, and restart the firewall (see the cleanup sketch below). If etcd is external, the state of the previous cluster will still be visible and the etcd data must be deleted, e.g. etcdctl del "" --prefix.
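A typical cleanup before retrying the initialization might look like this (a sketch; the etcdctl step applies only when an external etcd cluster is used):

kubeadm reset -f
# external etcd only: wipe the previous cluster state
# etcdctl del "" --prefix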
 
kubeadm init performs the following steps:
[init]: initialize using the specified version.
[preflight]: run pre-flight checks and pull the required Docker images.
[kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; kubelet cannot start without it, which is why kubelet fails to start before initialization.
[certificates]: generate the certificates used by Kubernetes and store them in /etc/kubernetes/pki.
[kubeconfig]: generate the kubeconfig files in /etc/kubernetes; the components need them to communicate with each other.
[control-plane]: install the master components from the YAML manifests in /etc/kubernetes/manifests.
[etcd]: install etcd from /etc/kubernetes/manifests/etcd.yaml.
[wait-control-plane]: wait for the master components deployed as static pods to start.
[apiclient]: check the health of the master components.
[uploadconfig]: upload the configuration to the cluster.
[kubelet]: configure kubelet with a ConfigMap.
[patchnode]: record CNI information on the Node through annotations.
[mark-control-plane]: label the current node with the master role and taint it as unschedulable, so that by default pods are not scheduled on master nodes.
[bootstrap-token]: generate the bootstrap token; record it, it is needed later when adding nodes with kubeadm join.
[addons]: install the CoreDNS and kube-proxy add-ons.

7.4 Prepare the kubeconfig file for kubectl

By default kubectl looks for a config file in the .kube directory under the home directory of the user running it. Here the admin.conf generated during the [kubeconfig] step of initialization is copied to .kube/config.
This file records the address of the API server, so subsequent kubectl commands can connect to the API server directly.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

7.5 Check the component status

[root@node-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@node-01 ~]# kubectl get node
NAME      STATUS     ROLES    AGE   VERSION
node-01   NotReady   master   37m   v1.14.1

There is only one node so far, with the master role, and it is NotReady because no network plugin has been deployed yet.

7.6 Add the other master nodes

Copy the certificate files from node-01 to the other master nodes:
USER=root
CONTROL_PLANE_IPS="node-02 node-03"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done

Run the join command on the other masters. Note the --experimental-control-plane flag; use the exact command printed by kubeadm init.
[root@node-02 ~]# kubeadm join 172.19.8.250:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:30d13676940237d9c4f0c5c05e67cbeb58cc031f97e3515df27174e6cb777f60 \
    --experimental-control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-02 localhost] and IPs [172.19.8.112 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-02 localhost] and IPs [172.19.8.112 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.245.0.1 172.19.8.112 172.19.8.250]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node node-02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Note: tokens have a limited lifetime. If the old token has expired, create a new join command with kubeadm token create --print-join-command.
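If you also need the --discovery-token-ca-cert-hash value again, it can be recomputed from the cluster CA certificate (a sketch using openssl, following the approach in the kubeadm documentation):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'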

Prepare the kubeconfig file for kubectl on the new master as well, in the same way as in section 7.4:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@node-02 ~]# kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
node-01   NotReady   master   90m   v1.14.1
node-02   NotReady   master   36s   v1.14.1

7.7 Deploy the worker nodes

Run the join command on node-04, node-05 and node-06. Note that the --experimental-control-plane flag is NOT used here; use the exact command printed by kubeadm init.

kubeadm join 172.19.8.250:8443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:89accff8b4514d49be4b88906c50fdab4ba8a211788da7252b880c925af77671

7.8 Deploy the flannel network plugin

The master nodes are NotReady because no network plugin has been installed yet, so communication between the nodes and the masters is not fully functional. The most popular Kubernetes network plugins are Flannel, Calico, Canal and Weave; flannel is used here.

[root@node-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This runs the flannel DaemonSet on every node.
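To confirm that a flannel pod has been scheduled on every node, you can list the DaemonSet pods (a sketch; app=flannel is the label used by the upstream kube-flannel.yml manifest):

[root@node-01 ~]# kubectl -n kube-system get pods -l app=flannel -o wide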

7.9 Check the node status (it takes a few seconds for the status to change)

[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES    AGE    VERSION
node-01   Ready    master   163m   v1.14.1
node-02   Ready    master   74m    v1.14.1
node-03   Ready    master   68m    v1.14.1
node-04   Ready    <none>   66m    v1.14.1
node-05   Ready    <none>   40m    v1.14.1
node-06   Ready    <none>   62m    v1.14.1

Check the pods:

[root@node-01 ~]# kubectl get pod -n kube-system
NAME                              READY   STATUS              RESTARTS   AGE
coredns-fb8b8dccf-5hwwz           0/1     ContainerCreating   0          165m
coredns-fb8b8dccf-r6z4q           0/1     ContainerCreating   0          165m
etcd-node-01                      1/1     Running             0          163m
etcd-node-02                      1/1     Running             0          75m
etcd-node-03                      1/1     Running             0          70m
kube-apiserver-node-01            1/1     Running             0          163m
kube-apiserver-node-02            1/1     Running             0          75m
kube-apiserver-node-03            1/1     Running             0          70m
kube-controller-manager-node-01   1/1     Running             1          163m
kube-controller-manager-node-02   1/1     Running             0          75m
kube-controller-manager-node-03   1/1     Running             0          70m
kube-flannel-ds-amd64-2p8cd       0/1     CrashLoopBackOff    3          110s
kube-flannel-ds-amd64-9rjm9       0/1     CrashLoopBackOff    3          110s
kube-flannel-ds-amd64-bvhdn       0/1     Error               4          110s
kube-flannel-ds-amd64-l7bzb       0/1     CrashLoopBackOff    3          110s
kube-flannel-ds-amd64-qb5h6       0/1     CrashLoopBackOff    3          110s
kube-flannel-ds-amd64-w2jvq       0/1     Error               4          110s
kube-proxy-57vgk                  1/1     Running             0          63m
kube-proxy-gkz7g                  1/1     Running             0          70m
kube-proxy-h2kcg                  1/1     Running             0          67m
kube-proxy-lc5bj                  1/1     Running             0          41m
kube-proxy-rmxjs                  1/1     Running             0          165m
kube-proxy-wlfrx                  1/1     Running             0          75m
kube-scheduler-node-01            1/1     Running             1          164m
kube-scheduler-node-02            1/1     Running             0          75m
kube-scheduler-node-03            1/1     Running             0          70m

Note the errors above: the kube-flannel-ds pods keep failing because networking.podSubnet was not configured in kubeadm-init.yaml. To reconfigure it, run kubeadm reset on all nodes, run kubeadm init again, and redistribute the certificates.
Check again after the fix:
[root@node-01 ~]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-6qsvj           1/1     Running   0          23m
coredns-fb8b8dccf-tvm9c           1/1     Running   0          23m
etcd-node-01                      1/1     Running   0          22m
etcd-node-02                      1/1     Running   0          10m
etcd-node-03                      1/1     Running   0          10m
kube-apiserver-node-01            1/1     Running   0          22m
kube-apiserver-node-02            1/1     Running   0          10m
kube-apiserver-node-03            1/1     Running   0          8m55s
kube-controller-manager-node-01   1/1     Running   1          22m
kube-controller-manager-node-02   1/1     Running   0          10m
kube-controller-manager-node-03   1/1     Running   0          9m5s
kube-flannel-ds-amd64-49f8b       1/1     Running   0          6m41s
kube-flannel-ds-amd64-8vhc8       1/1     Running   0          6m41s
kube-flannel-ds-amd64-fhh85       1/1     Running   0          6m41s
kube-flannel-ds-amd64-hg27k       1/1     Running   0          6m41s
kube-flannel-ds-amd64-m6wxf       1/1     Running   0          6m41s
kube-flannel-ds-amd64-qqpnp       1/1     Running   0          6m41s
kube-proxy-6jhqr                  1/1     Running   0          23m
kube-proxy-frsd8                  1/1     Running   0          7m9s
kube-proxy-fstbk                  1/1     Running   0          7m20s
kube-proxy-pk9qf                  1/1     Running   0          10m
kube-proxy-pshmk                  1/1     Running   0          10m
kube-proxy-tpbcm                  1/1     Running   0          7m2s
kube-scheduler-node-01            1/1     Running   1          22m
kube-scheduler-node-02            1/1     Running   0          10m
kube-scheduler-node-03            1/1     Running   0          9m

This completes the kubeadm-based deployment of Kubernetes.

A brief introduction to the Calico network plugin

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

calico.yaml needs to be modified here: the file specifies "192.168.0.0/16" as the pod network, and the settings in kubeadm-init.yaml and calico.yaml must match. Since kubeadm-init.yaml in this article sets podSubnet: "10.244.0.0/16", calico.yaml has to be changed accordingly; see the sketch below.
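For example, the pod CIDR can be replaced in one step (a sketch; in the v3.3 manifest the value is exposed through the CALICO_IPV4POOL_CIDR environment variable):

sed -i 's#192.168.0.0/16#10.244.0.0/16#g' calico.yaml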

Then run:

kubectl apply -f calico.yaml
After the network plugin has been installed, you can check whether it is working correctly by watching the status of the coredns pods:
kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY     STATUS              RESTARTS   AGE
kube-system   calico-node-lxz4c                      0/2       ContainerCreating   0          4m
kube-system   coredns-78fcdf6894-7xwn7               0/1       Pending             0          5m
kube-system   coredns-78fcdf6894-c2pq8               0/1       Pending             0          5m
kube-system   etcd-iz948lz3o7sz                      1/1       Running             0          5m
kube-system   kube-apiserver-iz948lz3o7sz            1/1       Running             0          5m
kube-system   kube-controller-manager-iz948lz3o7sz   1/1       Running             0          5m
kube-system   kube-proxy-wcj2r                       1/1       Running             0          5m
kube-system   kube-scheduler-iz948lz3o7sz            1/1       Running             0          4m
 
# Note: coredns takes a while to start; it shows Pending at first.

Wait until the coredns pods reach the Running state.
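Instead of polling by hand, kubectl can block until the coredns pods become ready (a sketch; k8s-app=kube-dns is the label CoreDNS carries in kubeadm clusters):

kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=300s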