Highly available Kubernetes masters with kubeadm (three masters, two workers)

1. Installation Requirements

Before you begin, the machines used to deploy the Kubernetes cluster must meet the following requirements:

  • Five machines running CentOS 7.5+ (minimal install)
  • Hardware: 2 GB RAM, 2+ vCPUs, 30 GB+ disk
  • Full network connectivity between all machines in the cluster, plus outbound Internet access

2. Installation Steps

Role IP
k8s-lb 192.168.50.100
master1 192.168.50.128
master2 192.168.50.129
master3 192.168.50.130
node1 192.168.50.131
node2 192.168.50.132

2.1 Pre-installation preparation

(1) Configure hostnames

On the master1 node:

~]# hostnamectl set-hostname master1

On the master2 node:

~]# hostnamectl set-hostname master2

On the master3 node:

~]# hostnamectl set-hostname master3

On the node1 worker node:

~]# hostnamectl set-hostname node1

On the node2 worker node:

~]# hostnamectl set-hostname node2

Run the bash command to load the newly set hostname.

(2) Add hosts entries

Every node needs the following hosts resolution records added:

~]# cat >>/etc/hosts <<EOF
192.168.50.100 k8s-lb
192.168.50.128 master1
192.168.50.129 master2
192.168.50.130 master3
192.168.50.131 node1
192.168.50.132 node2
EOF

(3) Configure passwordless SSH

Generate a key pair on the master1 node and distribute the public key to all other hosts.

[root@master1 ~]# ssh-keygen -t rsa -b 1200
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:OoMw1dARsWhbJKAQL2hUxwnM4tLQJeLynAQHzqNQs5s root@localhost.localdomain
The key's randomart image is:
+---[RSA 1200]----+
|*=X=*o*+         |
|OO.*.O..         |
|BO= + +          |
|**o* o           |
|o E .   S        |
|   o . .         |
|    . +          |
|       o         |
|                 |
+----[SHA256]-----+
Distribute the public key:
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@master1
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@master2
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@master3
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@node1
[root@master1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@node2
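The five ssh-copy-id invocations above can be collapsed into a single loop; a small sketch (the node list matches the table in section 2 — adjust it for your environment; each host still prompts for the root password once):

```shell
# Sketch: push master1's public key to every node in one loop.
# NODES is taken from the hostname table above; change it to match your hosts.
NODES="master1 master2 master3 node1 node2"

push_key() {
    for host in $NODES; do
        ssh-copy-id -i ~/.ssh/id_rsa.pub "root@${host}"
    done
}
```

Run `push_key` once on master1 and enter the password for each host as prompted.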

(4) Upgrade the kernel

Upgrade by downloading and installing the kernel image RPM packages.

CentOS 7: http://elrepo.org/linux/kernel/el7/x86_64/RPMS/

Write a shell script to upgrade the kernel (it assumes the kernel-lt RPM packages have already been downloaded into the current directory):

#!/bin/bash
# ----------------------------
# upgrade kernel by bomingit@126.com
# ----------------------------

yum localinstall -y kernel-lt*
if [ $? -eq 0 ];then
    grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
    grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
fi
echo "please reboot your system quick!!!"

Note: be sure to reboot the machine.

Verify the kernel version:
[root@master1 ~]# uname -r
4.4.229-1.el7.elrepo.x86_64

(5) Disable the firewall and SELinux

~]# systemctl disable --now firewalld
~]# setenforce 0
~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

The setenforce 0 command only disables SELinux for the current boot; the sed command makes the change permanent (it takes effect after a reboot).

(6) Disable the swap partition

~]# swapoff -a
~]# sed -i.bak 's/^.*centos-swap/#&/g' /etc/fstab

The first command disables swap temporarily; the second makes it permanent by commenting out the swap mount line in /etc/fstab.
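A quick way to confirm swap really is off after those two commands (a small sketch; it reads /proc/meminfo directly):

```shell
# Print the total swap size in kB; after `swapoff -a` this should be 0.
swap_kb() {
    awk '/^SwapTotal:/ {print $2}' /proc/meminfo
}

swap_kb    # prints the current swap total in kB
```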

(7) Tune kernel parameters

~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

Apply the settings immediately:

~]# sysctl --system
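To spot-check that a value from k8s.conf actually took effect, read it back; a small sketch that reads straight from /proc/sys (dots in the key become slashes):

```shell
# Read a sysctl value directly from /proc/sys,
# e.g. `sysctl_get net.ipv4.ip_forward` should print 1 after `sysctl --system`.
sysctl_get() {
    cat "/proc/sys/$(echo "$1" | tr . /)"
}

sysctl_get net.ipv4.ip_forward
```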

(8) Configure yum repositories

All nodes use Aliyun's official base and epel repositories.

~]# mv /etc/yum.repos.d/* /tmp
~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
~]# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

(9) Time zone and time synchronization

~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
~]# yum install dnf ntpdate -y
~]# ntpdate ntp.aliyun.com

(10) Wrap it all in a shell script

Combine steps (5) through (9) above into a single shell script to automate them:

#!/bin/sh
#****************************************************************#
# ScriptName: init.sh
# Author: boming
# Create Date: 2020-06-23 22:19
#***************************************************************#

#disable the firewall and SELinux
systemctl disable --now firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
#disable the swap partition
swapoff -a
sed -i.bak 's/^.*centos-swap/#&/g' /etc/fstab
#tune kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
#apply immediately
sysctl --system
#configure Aliyun base and epel repositories
mv /etc/yum.repos.d/* /tmp
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
#install the dnf tool

yum install dnf -y
dnf makecache
#install the ntpdate tool
dnf install ntpdate -y
#set the time zone and sync time from Aliyun
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ntpdate ntp.aliyun.com

Simply run this script on each of the remaining nodes.
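Copying and running the script on each remaining node can itself be scripted from master1 (a sketch; it assumes the passwordless SSH from step (3) is in place and that the script is saved as init.sh in the current directory):

```shell
# Sketch: push init.sh to every other node and run it there over SSH.
# Assumes passwordless SSH to root on each host and ./init.sh present locally.
run_init_everywhere() {
    for host in master2 master3 node1 node2; do
        scp init.sh "root@${host}:/tmp/init.sh" \
            && ssh "root@${host}" "bash /tmp/init.sh"
    done
}
```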

2.2 Install Docker

(1) Add the Docker yum repository

Method: open mirrors.aliyun.com in a browser, find docker-ce, and you will see the mirror repository source.


~]# curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
~]# cat /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
...
...

(2) Install the docker-ce packages

List all installable versions:

~]# dnf list docker-ce --showduplicates
docker-ce.x86_64       3:18.09.6-3.el7               docker-ce-stable
docker-ce.x86_64       3:18.09.7-3.el7               docker-ce-stable
docker-ce.x86_64       3:18.09.8-3.el7               docker-ce-stable
docker-ce.x86_64       3:18.09.9-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.0-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.1-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.2-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.3-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.4-3.el7               docker-ce-stable
docker-ce.x86_64       3:19.03.5-3.el7               docker-ce-stable
.....

Here we install the latest version of Docker; the Docker service must be installed on every node.

~]# dnf install -y  docker-ce docker-ce-cli

(3) Start Docker and enable it at boot

~]# systemctl enable --now docker

Check the version number to confirm Docker installed successfully:

~]# docker --version
Docker version 19.03.12, build 48a66213fea

The command above only shows the Docker client version. I recommend docker version instead, which reports the client and server versions side by side:

~]# docker version
Client:
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:51:21 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       039a7df
  Built:            Wed Sep  4 16:22:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false

(4) Switch the Docker registry mirror

The default registry is Docker's official one, which is extremely slow from mainland China, so switch to a personal Aliyun mirror.

~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://f1bhsuge.mirror.aliyuncs.com"]
}
EOF

Since the registry configuration was reloaded, Docker needs a restart:

~]# systemctl restart docker
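After restarting, you can confirm the mirror is in the configuration either with `docker info` (look for the Registry Mirrors section) or by reading daemon.json directly; a sketch of the latter:

```shell
# Print the first registry-mirror URL found in a daemon.json file.
mirror_of() {
    grep -o '"https://[^"]*"' "$1" | head -n 1 | tr -d '"'
}

# Usage on a node: mirror_of /etc/docker/daemon.json
```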

2.3 Install Kubernetes

(1) Add the Kubernetes yum repository

Method: open mirrors.aliyun.com in a browser, find kubernetes, and you will see the mirror repository source.


~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

It is best to regenerate the cache:

~]# dnf clean all
~]# dnf makecache

(2) Install the kubeadm, kubelet and kubectl packages

All nodes need these components installed.

[root@master1 ~]# dnf list kubeadm --showduplicates
kubeadm.x86_64                       1.17.7-0                     kubernetes
kubeadm.x86_64                       1.17.7-1                     kubernetes
kubeadm.x86_64                       1.17.8-0                     kubernetes
kubeadm.x86_64                       1.17.9-0                     kubernetes
kubeadm.x86_64                       1.18.0-0                     kubernetes
kubeadm.x86_64                       1.18.1-0                     kubernetes
kubeadm.x86_64                       1.18.2-0                     kubernetes
kubeadm.x86_64                       1.18.3-0                     kubernetes
kubeadm.x86_64                       1.18.4-0                     kubernetes
kubeadm.x86_64                       1.18.4-1                     kubernetes
kubeadm.x86_64                       1.18.5-0                     kubernetes
kubeadm.x86_64                       1.18.6-0                     kubernetes

Kubernetes versions change very quickly, so list the available versions and pick a suitable one. Here we install version 1.18.6.

[root@master1 ~]# dnf install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6

(3) Enable kubelet at boot

Enable kubelet at boot now, but do not start the service yet.

[root@master1 ~]# systemctl enable kubelet

2.4 HAProxy + Keepalived for a highly available VIP

For high availability we use the officially recommended HAProxy + Keepalived combination. HAProxy and Keepalived run as daemons on all master nodes.

(1) Install keepalived and haproxy

Note: they only need to be installed on the three master nodes.

[root@master1 ~]# dnf install -y keepalived haproxy

(2) Configure the HAProxy service

The HAProxy configuration is identical on every master node; the configuration file is /etc/haproxy/haproxy.cfg. Configure master1 first, then distribute the file to master2 and master3.

global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master1    192.168.50.128:6443  check inter 2000 fall 2 rise 2 weight 100
  server master2    192.168.50.129:6443  check inter 2000 fall 2 rise 2 weight 100
  server master3    192.168.50.130:6443  check inter 2000 fall 2 rise 2 weight 100

Note: adjust the IP addresses of the three master nodes here to match your own environment.
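Since the file is identical on all three masters, distributing it from master1 is a one-line loop (a sketch; it assumes the passwordless SSH set up earlier):

```shell
# Sketch: copy master1's haproxy.cfg to the other two masters.
sync_haproxy_cfg() {
    for host in master2 master3; do
        scp /etc/haproxy/haproxy.cfg "root@${host}:/etc/haproxy/haproxy.cfg"
    done
}
```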

(3) Configure the Keepalived service

Keepalived's track_script mechanism runs a script that detects whether the Kubernetes master node is down, and switches nodes accordingly to achieve high availability.

The keepalived configuration for the master1 node is shown below; the configuration file is located at /etc/keepalived/keepalived.conf.

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_kubernetes {
    script "/etc/keepalived/check_kubernetes.sh"
    interval 2
    weight -5
    fall 3  
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.50.128
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.50.100
    }
#    track_script {
#       chk_kubernetes
#    }
}

A few points to note (remember to change the first two):

  • mcast_src_ip: the multicast source address, i.e. the IP address of the current host.
  • priority: Keepalived elects the MASTER node based on this value. Here master1 serves Kubernetes while the other two nodes are standbys, so set master1 to 100, master2 to 99 and master3 to 98.
  • state: set the state field to MASTER on master1 and change it to BACKUP on the other two nodes.
  • The cluster health check above is commented out; enable it once the cluster is fully up.

(4) Configure the health check script

I place the health check script under /etc/keepalived; check_kubernetes.sh looks like this:

#!/bin/bash
#****************************************************************#
# ScriptName: check_kubernetes.sh
# Author: boming
# Create Date: 2020-06-23 22:19
#***************************************************************#

function check_kubernetes() {
    for ((i=0;i<5;i++));do
        apiserver_pid_id=$(pgrep kube-apiserver)
        if [[ ! -z $apiserver_pid_id ]];then
            return
        else
            sleep 2
        fi
        apiserver_pid_id=0
    done
}

# 1:running  0:stopped
check_kubernetes
if [[ $apiserver_pid_id -eq 0 ]];then
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Configure the keepalived service on master2 and master3 following the notes above.
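The three per-node fields (state, priority, mcast_src_ip) can be rewritten mechanically from master1's file; a sed sketch (the field names match the configuration above):

```shell
# Print a copy of a keepalived config with the three per-node fields replaced.
# Usage: adapt_keepalived <conf-file> <state> <priority> <src_ip>
adapt_keepalived() {
    sed -e "s/^\([[:space:]]*state \).*/\1$2/" \
        -e "s/^\([[:space:]]*priority \).*/\1$3/" \
        -e "s/^\([[:space:]]*mcast_src_ip \).*/\1$4/" "$1"
}
```

For example: `adapt_keepalived /etc/keepalived/keepalived.conf BACKUP 99 192.168.50.129 | ssh root@master2 'cat > /etc/keepalived/keepalived.conf'`.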

(5) Start the Keepalived and HAProxy services

~]# systemctl enable --now keepalived haproxy

To be safe, check the service status:

~]# systemctl status keepalived haproxy
~]# ping 192.168.50.100                    #check that the VIP responds
PING 192.168.50.100 (192.168.50.100) 56(84) bytes of data.
64 bytes from 192.168.50.100: icmp_seq=1 ttl=64 time=0.778 ms
64 bytes from 192.168.50.100: icmp_seq=2 ttl=64 time=0.339 ms
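Before moving on, it is also worth verifying that the VIP actually fails over: stop keepalived on the current MASTER and confirm the VIP still answers from another node. A rough sketch of the drill (run the steps manually, not unattended):

```shell
# Sketch of a manual failover drill for the VIP 192.168.50.100.
failover_drill() {
    systemctl stop keepalived        # run on master1 (the current MASTER)
    ping -c 3 192.168.50.100         # run from master2/master3: should still answer
    systemctl start keepalived       # restore master1 afterwards
}
```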

2.5 Deploy the Master Nodes

(1) Generate the initialization file

Run the following on the master1 node:

[root@master1 ~]# kubeadm config print init-defaults > kubeadm-init.yaml

This file, kubeadm-init.yaml, is the one we initialize with; roughly the following parameters need changing:

[root@master1 ~]# cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.100                      #the VIP address
  bindPort:  6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:                                              #add the following two lines
  certSANs:
  - "192.168.50.100"                                    #the VIP address
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   #Aliyun image registry
controlPlaneEndpoint: "192.168.50.100:8443"             #the VIP address and port
kind: ClusterConfiguration
kubernetesVersion: v1.18.3                              #the Kubernetes version
networking:
  dnsDomain: cluster.local  
  serviceSubnet: 10.96.0.0/12                           #the default is fine, or use a custom CIDR
  podSubnet: 10.244.0.0/16                              #add the pod network CIDR
scheduler: {}

Note: the advertiseAddress field is not the address of the current host's NIC, but the VIP address of the highly available cluster.

Note: controlPlaneEndpoint is the VIP address, and the port is the HAProxy service's 8443 port, i.e. this block we configured in HAProxy:

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp

If you customized a different port than 8443 in that block, remember to change the port in controlPlaneEndpoint accordingly.

(2) Pull the images in advance

If you run kubeadm init directly, the images are pulled automatically during initialization, which is slow. I recommend splitting the work and pulling the images beforehand.

[root@master1 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.6.5

If you see two warning lines at the start (not shown here), don't worry: they are only warnings and won't prevent completing the exercise.

Pull the images on the other master nodes in advance

The other two master nodes should also pull the images before initializing, to shorten initialization time:

[root@master1 ~]# scp kubeadm-init.yaml root@master2:~
[root@master1 ~]# scp kubeadm-init.yaml root@master3:~

On the master2 node:

# note: run the following command on master2
[root@master2 ~]# kubeadm config images pull --config kubeadm-init.yaml

On the master3 node:

# note: run the following command on master3
[root@master3 ~]# kubeadm config images pull --config kubeadm-init.yaml

(3) Initialize Kubernetes on the master1 node

Run the following command:

[root@master1 ~]# kubeadm init --config kubeadm-init.yaml --upload-certs
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.128 192.168.50.100]
...                                         # output omitted
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648 \
    --control-plane --certificate-key 4931f39d3f53351cb6966a9dcc53cb5cbd2364c6d5b83e50e258c81fbec69539 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648

The whole process takes only about 30 seconds, and initialization is this fast precisely because we pulled the images in advance. No error messages, plus output like the last ten or so lines above, means the master1 node initialized successfully.

Before using the cluster there is some housekeeping to do; run on master1:

[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then configure an environment variable:

[root@master1 ~]# cat >> ~/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master1 ~]# source ~/.bashrc

With that, the master1 node is fully initialized.

An important point: among the last lines of output are two commands beginning with kubeadm join 192.168.50.100:8443. These are the authenticated commands for other master nodes and for worker nodes, respectively, to join the Kubernetes cluster. The key is computed by the system with the SHA-256 algorithm, and only holders of this key may join the current cluster.

How the two commands differ

The two join commands differ slightly:

The first one ends with --control-plane --certificate-key xxxx; this is the command for control-plane nodes to join the cluster. "Control plane node" is the official Kubernetes term; here it simply refers to our other master nodes.

kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648 \
    --control-plane --certificate-key 4931f39d3f53351cb6966a9dcc53cb5cbd2364c6d5b83e50e258c81fbec69539

The last one is the command for worker nodes to join the cluster, for example:

kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648

So look carefully at which type of node each command joins to the cluster.
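Also note that the token in these commands (abcdef.0123456789abcdef) has a 24-hour TTL, as set in kubeadm-init.yaml. If it expires before every node has joined, a fresh worker join command can be printed on master1 with kubeadm's own tooling (wrapped in a function here only so it can be called when needed):

```shell
# Generate a new bootstrap token and print the matching worker join command.
new_join_command() {
    kubeadm token create --print-join-command
}

# Usage on master1: new_join_command
```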

Checking the nodes

If you list the cluster's nodes at this point, you will find only the master1 node itself:

[root@master1 ~]# kubectl get node
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   9m58s   v1.18.4

Next, let's add the other two master nodes to the Kubernetes cluster.

2.6 Join the other master nodes to the cluster

(1) Join master2 to the cluster

Since it is another master node joining the cluster, we naturally use the control-plane command:

[root@master2 ~]#  kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
     --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648 \
     --control-plane --certificate-key 4931f39d3f53351cb6966a9dcc53cb5cbd2364c6d5b83e50e258c81fbec69539

[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The 
......                                  #output omitted
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

No errors, so the join succeeded; now do the housekeeping:

[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add the environment variable:

[root@master2 ~]# cat >> ~/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master2 ~]# source ~/.bashrc

(2) Join master3 to the cluster

[root@master3 ~]#  kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
     --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648 \
     --control-plane --certificate-key 4931f39d3f53351cb6966a9dcc53cb5cbd2364c6d5b83e50e258c81fbec69539

Do the same housekeeping:

[root@master3 ~]# mkdir -p $HOME/.kube
[root@master3 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master3 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master3 ~]# cat >> ~/.bashrc <<EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@master3 ~]# source ~/.bashrc

At this point, all the master nodes have joined the cluster.

Check the cluster's master nodes:
[root@master1 ~]# kubectl get node
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   25m     v1.18.4
master2   NotReady   master   12m     v1.18.4
master3   NotReady   master   3m30s   v1.18.4

You can run kubectl get node on any master node to list the cluster's nodes.

2.7 Join the worker nodes to the cluster

As noted above, after master1 finished initializing, the second kubeadm join xxx command (the last lines of the output) is the one for worker nodes to join the cluster:

~]# kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648

Note: a worker node joins the cluster with this one command alone; as long as there is no error it succeeded. There is no need for the environment-variable housekeeping the masters required.

(1) Join node1 to the cluster

[root@node1 ~]# kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
....
....
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The line This node has joined the cluster (fourth from the bottom) indicates that node1 joined the cluster successfully.

(2) Join node2 to the cluster

[root@node2 ~]# kubeadm join 192.168.50.100:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4c738bc8e2684c5d52d80687d48925613b66ab660403649145eb668d71d85648

(3) Check the cluster node information

Now run the following on any master node to view the cluster's node information:

[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   20h     v1.18.4
master2   NotReady   master   20h     v1.18.4
master3   NotReady   master   20h     v1.18.4
node1     NotReady   <none>   5m15s   v1.18.4
node2     NotReady   <none>   5m11s   v1.18.4

All five nodes of the cluster now exist, but they are not usable yet: the second column shows all five in the NotReady state, because we have not yet installed a network plugin.

Network plugins include calico, flannel and others; here we choose flannel.

2.8 Install the network plugin

(1) The default method

Most tutorials online use this command:

~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

In practice many users cannot succeed because of network restrictions in mainland China, so do the following instead.

(2) Switch the flannel image source

On master1, add the following entry to the local hosts file so the domain resolves:

199.232.28.133  raw.githubusercontent.com

Then download the flannel manifest:

[root@master1 ~]# curl -o kube-flannel.yml   https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Edit the image source: change every quay.io in the yaml file to quay-mirror.qiniu.com:

[root@master1 ~]# sed -i 's/quay.io/quay-mirror.qiniu.com/g' kube-flannel.yml

Save and exit, then run this command on the master node:

[root@master1 ~]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Now the flannel images can be pulled successfully. Alternatively, you can use the kube-flannel.yml file I provide.

Check whether flannel is healthy

To check whether these flannel pods are running normally, use:

[root@master1 ~]# kubectl get pods -n kube-system | grep flannel
NAME                              READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-dp972       1/1     Running   0          66s
kube-flannel-ds-amd64-lkspx       1/1     Running   0          66s
kube-flannel-ds-amd64-rmsdk       1/1     Running   0          66s
kube-flannel-ds-amd64-wp668       1/1     Running   0          66s
kube-flannel-ds-amd64-zkrwh       1/1     Running   0          66s

If the third field, STATUS, is not Running, flannel is unhealthy and the problem needs investigating.

Check whether the nodes are Ready

Wait a moment, then check whether the nodes are usable:

[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   21h   v1.18.4
master2   Ready    master   21h   v1.18.4
master3   Ready    master   21h   v1.18.4
node1     Ready    <none>   62m   v1.18.4
node2     Ready    <none>   62m   v1.18.4

The nodes are now in the Ready state, which means the cluster nodes are usable.
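Beyond node status, it is worth checking that no system pod is stuck. A sketch that counts pods whose STATUS is neither Running nor Completed; it reads kubectl output on stdin, so the column position below is an assumption about the default `kubectl get pods -A --no-headers` format (NAMESPACE NAME READY STATUS RESTARTS AGE):

```shell
# Count unhealthy pods in `kubectl get pods -A --no-headers` output
# (STATUS is assumed to be the 4th column); 0 means all pods are fine.
unhealthy_pods() {
    awk '$4 != "Running" && $4 != "Completed"' | wc -l
}

# Usage on a master: kubectl get pods -A --no-headers | unhealthy_pods
```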

3. Testing the Kubernetes Cluster

3.1 Cluster smoke test

(1) Create an nginx pod

Now create an nginx pod in the Kubernetes cluster to verify that it runs normally.

Perform the following steps on a master node:

[root@master1 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master1 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

Now look at the pod and the service:

[root@master1 ~]# kubectl get pod,svc -o wide


In the printed result, the first half is pod information and the second half is service information. The service/nginx line shows that the port the service exposes to the cluster is 30249; remember that port.

The pod details also show that the pod is currently on the node2 node, whose IP address is 192.168.50.132.
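Rather than eyeballing the output, the NodePort can be extracted directly with kubectl's jsonpath output format (a sketch; nginx is the service created above):

```shell
# Print the NodePort assigned to the nginx service created above.
nginx_nodeport() {
    kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
}

# Usage on a master: nginx_nodeport
```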

(2) Access nginx to verify the cluster

Now let's access it. Open a browser (Firefox recommended) and visit http://192.168.50.132:30249.


3.2 Install the dashboard

(1) Create the dashboard

First download the dashboard manifest. Since we already added the hosts entry earlier, the download works.

[root@master1 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

By default the Dashboard is only reachable from inside the cluster, so change the Service to the NodePort type to expose it externally.

Around lines 32-44 of the file, modify it as follows:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort                        #add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001                   #add this line; port 30001 can be customized
  selector:
    k8s-app: kubernetes-dashboard

Apply this yaml file:

[root@master1 ~]# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
...
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Check whether the dashboard is running normally:
[root@master1 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-mlnl4   1/1     Running   0          2m31s
kubernetes-dashboard-9774cc786-ccvcf         1/1     Running   0          2m31s

The main things to check are the STATUS column, which should be Running, and the RESTARTS column, which should be 0 (or at least not steadily growing). Everything looks fine, so we can continue.

Find the node this dashboard pod is running on:


From the above, kubernetes-dashboard-9774cc786-ccvcf is running on the node2 node, and the exposed port is 30001, so the access address is https://192.168.50.132:30001.

Open it with Firefox; you will be asked for a token, which can be read here:

[root@master1 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')


Enter the token value above to get into the dashboard interface.


Although we can log in now, we don't yet have enough permissions to view cluster information, because we have not bound a cluster role. Try the steps above first, then do the step below.

(2) Bind the cluster-admin role

[root@master1 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@master1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@master1 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the dashboard again with the token that is printed.


Troubleshooting

(1) Other master nodes cannot join the cluster

[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: error syncing endpoints with etcd: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher

Check whether the cluster's high availability configuration is correct, e.g. whether the master/backup roles and priorities are all set properly in the keepalived configuration.
