Deploying a highly available Kubernetes v1.16.4 cluster with LVS + Keepalived

1. Deployment environment

1.1 Host list

Hostname          CentOS version  IP             docker version  flannel version  Keepalived version  Spec  Role
lvs-keepalived01  7.6.1810        172.27.34.28   /               /                v1.3.5              4C4G  lvs-keepalived
lvs-keepalived02  7.6.1810        172.27.34.29   /               /                v1.3.5              4C4G  lvs-keepalived
master01          7.6.1810        172.27.34.35   18.09.9         v0.11.0         /                   4C4G  control plane
master02          7.6.1810        172.27.34.36   18.09.9         v0.11.0         /                   4C4G  control plane
master03          7.6.1810        172.27.34.37   18.09.9         v0.11.0         /                   4C4G  control plane
work01            7.6.1810        172.27.34.161  18.09.9         /                /                   4C4G  worker node
work02            7.6.1810        172.27.34.162  18.09.9         /                /                   4C4G  worker node
work03            7.6.1810        172.27.34.163  18.09.9         /                /                   4C4G  worker node
VIP               7.6.1810        172.27.34.222  /               /                v1.3.5              4C4G  floats between the two lvs-keepalived hosts
client            7.6.1810        172.27.34.85   /               /                /                   4C4G  client

There are 9 servers in total: 2 in the lvs-keepalived cluster, 3 control plane nodes, 3 worker nodes and 1 client.

1.2 k8s versions

Hostname  kubelet version  kubeadm version  kubectl version  Notes
master01  v1.16.4          v1.16.4          v1.16.4          kubectl optional
master02  v1.16.4          v1.16.4          v1.16.4          kubectl optional
master03  v1.16.4          v1.16.4          v1.16.4          kubectl optional
work01    v1.16.4          v1.16.4          v1.16.4          kubectl optional
work02    v1.16.4          v1.16.4          v1.16.4          kubectl optional
work03    v1.16.4          v1.16.4          v1.16.4          kubectl optional
client    /                /                v1.16.4          client

2. High availability architecture

1. Architecture diagram

This article uses kubeadm to build a highly available k8s cluster. The high availability of a k8s cluster is really the high availability of its core components; for the apiserver a load-balanced cluster mode is used. The architecture is as follows:

2. Cluster-mode HA architecture

Core component      HA mode         HA implementation
apiserver           cluster         lvs + keepalived
controller-manager  active/standby  leader election
scheduler           active/standby  leader election
etcd                cluster         kubeadm
  • apiserver is made highly available by lvs-keepalived; the VIP distributes requests across the apiserver of every control plane node.
  • controller-manager elects a leader inside k8s (controlled by the --leader-elect flag, true by default); only one controller-manager instance is active in the cluster at any moment.
  • scheduler elects a leader inside k8s (controlled by the --leader-elect flag, true by default); only one scheduler instance is active in the cluster at any moment.
  • etcd is made highly available by the cluster that kubeadm creates automatically; deploy an odd number of nodes, and a 3-node cluster tolerates the loss of one machine. The current leaders can be checked as shown below.
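Once the cluster is running, the node that currently holds the controller-manager and scheduler leases can be checked with the commands below (the same check is used in the HA test later in this article); run them from any host with kubectl configured:

[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity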

3. CentOS 7.6 installation

All servers in this article run CentOS 7.6; for the installation see: CentOS 7.6 OS installation and tuning notes.

The firewall and selinux were disabled and the Aliyun yum mirror configured during the CentOS installation.

4. Preparation for the k8s cluster installation

Run this part on all control plane and worker nodes; master01 is used as the example throughout.

1. Configure the hostname

1.1 Set the hostname

[root@centos7 ~]# hostnamectl set-hostname master01
[root@centos7 ~]# more /etc/hostname             
master01

Log out and back in to see the new hostname master01; set the corresponding hostname on each server.

1.2 Update the hosts file

[root@master01 ~]# cat >> /etc/hosts << EOF
172.27.34.35    master01
172.27.34.36   master02
172.27.34.37    master03
172.27.34.161   work01 
172.27.34.162   work02
172.27.34.163   work03
EOF


2. Verify the MAC address and product_uuid

[root@master01 ~]# cat /sys/class/net/ens160/address
[root@master01 ~]# cat /sys/class/dmi/id/product_uuid

Make sure the MAC address and product_uuid are unique on every node.

3. Disable swap

3.1 Disable temporarily

[root@master01 ~]# swapoff -a

3.2 Disable permanently

To keep swap disabled after a reboot, also comment out the swap entry in /etc/fstab:

[root@master01 ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
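To confirm swap is now off, a quick check (swap usage should show 0 and swapon should print nothing):

[root@master01 ~]# free -m | grep -i swap
[root@master01 ~]# swapon --show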


4. Kernel parameter changes

The k8s network in this article uses flannel, which requires the kernel parameter bridge-nf-call-iptables=1; setting it requires the br_netfilter module.

4.1 Load the br_netfilter module

Check for the br_netfilter module:

[root@master01 ~]# lsmod |grep br_netfilter

If the module is not present, load it with the commands below; otherwise skip this step.

Load br_netfilter temporarily:

[root@master01 ~]# modprobe br_netfilter

This does not survive a reboot.

Load br_netfilter permanently:

[root@master01 ~]# cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
[root@master01 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules


4.2 Set the kernel parameters temporarily

[root@master01 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1

4.3 Set the kernel parameters permanently

[root@master01 ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1


5. Configure the Kubernetes yum repository

5.1 Add the Kubernetes repo

[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • [] the string in brackets is the repository id; it must be unique and identifies the repo
  • name repository name, free-form
  • baseurl repository URL
  • enabled whether the repo is enabled; 1 (the default) means enabled
  • gpgcheck whether to verify the signatures of packages from this repo; 1 means verify
  • repo_gpgcheck whether to verify the repo metadata (the package list); 1 means verify
  • gpgkey=URL location of the public key used for signature checks; required when gpgcheck is 1, not needed when gpgcheck is 0
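To confirm the repo is picked up, the enabled repositories can be listed (output varies by system):

[root@master01 ~]# yum repolist enabled | grep -i kubernetes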

5.2 Refresh the yum cache

[root@master01 ~]# yum clean all
[root@master01 ~]# yum -y makecache

6. Passwordless SSH login

Configure passwordless login from master01 to master02 and master03; this step is performed on master01 only.

6.1 Generate the key pair

[root@master01 ~]# ssh-keygen -t rsa


6.2 Copy the public key to master02 and master03

[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.36
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.37


6.3 Test passwordless login

[root@master01 ~]# ssh 172.27.34.36
[root@master01 ~]# ssh master03

master01 can now log in to master02 and master03 directly without entering a password.

7. Reboot the servers

Reboot all control plane and worker nodes.

5. Docker installation

Run this part on all control plane and worker nodes.

1. Install dependencies

[root@master01 ~]# yum install -y yum-utils   device-mapper-persistent-data   lvm2


2. Configure the Docker repository

[root@master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo


3. Install Docker CE

3.1 List the available docker versions

[root@master01 ~]# yum list docker-ce --showduplicates | sort -r

3.2 Install docker

[root@master01 ~]# yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y

The docker version installed is pinned to 18.09.9.

4. Start Docker

[root@master01 ~]# systemctl start docker
[root@master01 ~]# systemctl enable docker

5. Command completion

5.1 Install bash-completion

[root@master01 ~]# yum -y install bash-completion

5.2 Load bash-completion

[root@master01 ~]# source /etc/profile.d/bash_completion.sh

6. Registry mirror

Docker Hub is hosted outside China, so pulling images directly can be slow; a registry mirror can be configured to speed things up. Common options include Docker's official China registry mirror, the Aliyun accelerator and the DaoCloud accelerator; the Aliyun accelerator is used here as the example.

6.1 Log in to the Aliyun container registry console

Log in at https://cr.console.aliyun.com (register an Aliyun account first if you do not have one).


6.2 Configure the registry mirror

Create the daemon.json file:

[root@master01 ~]# mkdir -p /etc/docker
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF

Restart the docker service:

[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker

The registry mirror is now configured.
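Optionally, confirm that docker picked up the mirror; the configured URL should be listed under "Registry Mirrors" in the output of docker info:

[root@master01 ~]# docker info | grep -A 1 "Registry Mirrors"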

7. Verify

[root@master01 ~]# docker --version
[root@master01 ~]# docker run hello-world

Verify that docker is installed correctly by checking its version and running the hello-world container.

8. Change the Cgroup Driver

8.1 Edit daemon.json

Edit daemon.json and add '"exec-opts": ["native.cgroupdriver=systemd"]':

[root@master01 ~]# more /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

8.2 Reload docker

[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker

The cgroup driver is changed to silence this warning: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
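To confirm the change took effect, check the driver docker reports (it should now show systemd):

[root@master01 ~]# docker info | grep -i "cgroup driver"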

6. Kubernetes installation

Run this part on all control plane and worker nodes.

1. Check the available versions

[root@master01 ~]# yum list kubelet --showduplicates | sort -r

This article installs kubelet 1.16.4, which supports docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09.

2. Install kubelet, kubeadm and kubectl

2.1 Install the three packages

[root@master01 ~]# yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4

2.2 Package descriptions

  • kubelet runs on every node in the cluster and starts Pods, containers and other objects
  • kubeadm initializes and bootstraps the cluster
  • kubectl is the command line used to talk to the cluster; it deploys and manages applications, inspects resources, and creates, deletes and updates components

2.3 Enable kubelet

Start kubelet and enable it at boot:

[root@master01 ~]# systemctl enable kubelet && systemctl start kubelet

2.4 kubectl command completion

[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile

3. Pull the images

3.1 Image pull script

Almost all Kubernetes components and their images are hosted on Google's own servers and may be unreachable directly; the workaround here is to pull the images from an Aliyun registry and retag them to the default names. The images are pulled by running the image.sh script.

[root@master01 ~]# more image.sh 
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done

url is the Aliyun registry address and version is the Kubernetes version being installed.
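To preview which images the script will pull, the image list can be printed first (this is the same command the script uses internally):

[root@master01 ~]# kubeadm config images list --kubernetes-version=v1.16.4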

3.2 Pull the images

Run image.sh to pull the images for the specified version:

[root@master01 ~]# ./image.sh
[root@master01 ~]# docker images

7. Initialize the master

This part is performed on master01.

1. kubeadm-config.yaml

[root@master01 ~]# more kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
  certSANs:    #list the hostnames, IPs and the VIP of every kube-apiserver node
  - master01
  - master02
  - master03
  - work01
  - work02
  - work03
  - 172.27.34.35
  - 172.27.34.36
  - 172.27.34.37
  - 172.27.34.161
  - 172.27.34.162
  - 172.27.34.163
  - 172.27.34.222
controlPlaneEndpoint: "172.27.34.222:6443"
networking:
  podSubnet: "10.244.0.0/16"

kubeadm-config.yaml is the configuration file used for initialization.

2. Bring up the VIP on master01

Bring up the virtual IP 172.27.34.222 on master01:

[root@master01 ~]# ifconfig ens160:2 172.27.34.222 netmask 255.255.255.0 up

The virtual IP is brought up only so that master01 can be initialized; it is removed once initialization is complete.
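A quick check that the VIP is up on the interface used above:

[root@master01 ~]# ip addr show ens160 | grep 172.27.34.222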

3. Initialize the master

[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml

Record the kubeadm join commands from the output; they are needed later to join the worker nodes and the other control plane nodes to the cluster.

You can now join any number of control-plane nodes by copying certificate authorities 
and service account keys on each node and then running the following as root:

  kubeadm join 172.27.34.222:6443 --token lw90fv.j1lease5jhzj9ih2 \
    --discovery-token-ca-cert-hash sha256:79575e7a39eac086e121364f79e58a33f9c9de2a4e9162ad81d0abd1958b24f4 \
    --control-plane       

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.27.34.222:6443 --token lw90fv.j1lease5jhzj9ih2 \
    --discovery-token-ca-cert-hash sha256:79575e7a39eac086e121364f79e58a33f9c9de2a4e9162ad81d0abd1958b24f4

If initialization fails:

If the initialization fails, run kubeadm reset and then initialize again.

[root@master01 ~]# kubeadm reset
[root@master01 ~]# rm -rf $HOME/.kube/config
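Note that kubeadm reset does not flush iptables or IPVS rules; if a clean slate is needed before re-initializing, they can be cleared manually (a sketch, run only if you understand the impact on the host):

[root@master01 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
[root@master01 ~]# ipvsadm -C    # only if kube-proxy ran in IPVS mode and ipvsadm is installed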

4. Load the kubeconfig environment variable

[root@master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile

All operations in this article are performed as root; for a non-root user run the following instead:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

5. Install the flannel network

Create the flannel network on master01:

[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

The download may fail for network reasons; kube-flannel.yml can also be downloaded from the link at the end of this article and then applied.

8. Join the other control plane nodes to the cluster

1. Distribute the certificates

1.1 Distribute the certificates from master01

Run the cert-main-master.sh script on master01 to copy the certificates to master02 and master03.

[root@master01 ~]# ll|grep cert-main-master.sh 
-rwxr--r--  1 root root     638 1月  16 10:25 cert-main-master.sh
[root@master01 ~]# more cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="172.27.34.36 172.27.34.37"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done

1.2 Move the certificates into place on master02

Run the cert-other-master.sh script on master02 to move the certificates into the expected directories.

[root@master02 ~]# more cert-other-master.sh 
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master02 ~]# ./cert-other-master.sh

1.3 Move the certificates into place on master03

Run cert-other-master.sh on master03 as well.

[root@master03 ~]# pwd
/root
[root@master03 ~]# ll|grep cert-other-master.sh 
-rwxr--r--  1 root root  484 1月  16 10:30 cert-other-master.sh
[root@master03 ~]# ./cert-other-master.sh

2. Join master02 to the cluster

[root@master02 ~]# kubeadm join 172.27.34.222:6443 --token lw90fv.j1lease5jhzj9ih2     --discovery-token-ca-cert-hash sha256:79575e7a39eac086e121364f79e58a33f9c9de2a4e9162ad81d0abd1958b24f4     --control-plane

Run the control-plane join command that was produced when the master was initialized.

3. Join master03 to the cluster

[root@master03 ~]# kubeadm join 172.27.34.222:6443 --token 0p7rzn.fdanprq4y8na36jh     --discovery-token-ca-cert-hash sha256:fc7a828208d554329645044633159e9dc46b0597daf66769988fee8f3fc0636b     --control-plane


4. Load the kubeconfig environment variable

Load the environment variable on master02 and master03:

[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile 
[root@master03 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master03 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master03 ~]# source .bash_profile

This step makes it possible to run kubectl commands on master02 and master03 as well.

5. Check the cluster nodes

[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system

master01 and master03 failed to pull the flannel image; the pods recovered after pulling the image manually on both nodes.

[root@master01 ~]# docker pull  registry.cn-hangzhou.aliyuncs.com/loong576/flannel:v0.11.0-amd64
[root@master03 ~]# docker pull  registry.cn-hangzhou.aliyuncs.com/loong576/flannel:v0.11.0-amd64

9. Join the worker nodes to the cluster

1. Join work01 to the cluster

[root@work01 ~]# kubeadm join 172.27.34.222:6443 --token lw90fv.j1lease5jhzj9ih2     --discovery-token-ca-cert-hash sha256:79575e7a39eac086e121364f79e58a33f9c9de2a4e9162ad81d0abd1958b24f4

Run the worker join command that was produced when the master was initialized.

2. Join work02 to the cluster

[root@work02 ~]# kubeadm join 172.27.34.222:6443 --token lw90fv.j1lease5jhzj9ih2     --discovery-token-ca-cert-hash sha256:79575e7a39eac086e121364f79e58a33f9c9de2a4e9162ad81d0abd1958b24f4

3. Join work03 to the cluster

[root@work03 ~]# kubeadm join 172.27.34.222:6443 --token lw90fv.j1lease5jhzj9ih2     --discovery-token-ca-cert-hash sha256:79575e7a39eac086e121364f79e58a33f9c9de2a4e9162ad81d0abd1958b24f4

4. Check all cluster nodes

[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system

10. IPVS installation

Run this part on both lvs-keepalived01 and lvs-keepalived02.

1. Install ipvsadm

LVS itself needs no installation; what gets installed are its management tools. The first is ipvsadm, the second is keepalived: ipvsadm manages LVS from the command line, while keepalived manages it through a configuration file.

[root@lvs-keepalived01 ~]# yum -y install ipvsadm

2. Load the ip_vs module

Running ipvsadm once loads the ip_vs kernel module:

[root@lvs-keepalived01 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@lvs-keepalived01 ~]# lsmod | grep ip_vs
ip_vs                 145497  0 
nf_conntrack          133095  1 ip_vs
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
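If the ip_vs modules should also load automatically after a reboot, a module file can be added in the same way as br_netfilter in section 4.1 (a sketch; it assumes the /etc/rc.sysinit hook from that section is also present on the lvs hosts, and the module list may need adjusting):

[root@lvs-keepalived01 ~]# cat > /etc/sysconfig/modules/ip_vs.modules << 'EOF'
modprobe ip_vs
modprobe ip_vs_wrr
EOF
[root@lvs-keepalived01 ~]# chmod 755 /etc/sysconfig/modules/ip_vs.modules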


For more hands-on LVS practice, see: LVS+Keepalived+Nginx load balancing setup and testing.

11. Keepalived installation

Run this part on both lvs-keepalived01 and lvs-keepalived02.

1. Install keepalived

[root@lvs-keepalived01 ~]# yum -y install keepalived

2. Configure keepalived

lvs-keepalived01 configuration:

[root@lvs-keepalived01 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id lvs-keepalived01   #machine identifier, usually the hostname (not required to be); used in mail notifications when a failure occurs
}
vrrp_instance VI_1 {            #vrrp instance definition
    state MASTER                #LVS role, MASTER or BACKUP, must be upper case
    interface ens160            #interface serving external traffic
    virtual_router_id 100       #virtual router id, a number that must be identical within one vrrp instance
    priority 100                #priority, higher number wins; within one vrrp_instance the master's priority must be higher than the backup's
    advert_int 1                #interval in seconds between sync checks of the master and backup load balancers
    authentication {            #authentication type and password
        auth_type PASS          #PASS or AH
        auth_pass 1111          #password, must be identical for MASTER and BACKUP in the same vrrp_instance
    }
    virtual_ipaddress {         #virtual IP addresses, one per line, more than one allowed
        172.27.34.222
    }
}
virtual_server 172.27.34.222 6443 {  #virtual server, specify the virtual IP and service port
    delay_loop 6                     #health check interval
    lb_algo wrr                      #load balancing scheduling algorithm
    lb_kind DR                       #load balancing forwarding mode
    #persistence_timeout 50          #session persistence time, very useful for dynamic pages
    protocol TCP                     #forwarding protocol, TCP or UDP
    real_server 172.27.34.35 6443 {  #real server 1, specify its real IP address and port
    weight 10                        #weight, higher number gets more traffic
    TCP_CHECK {                      #real server health check settings, values in seconds
       connect_timeout 10            #connection timeout of 10 seconds
       retry 3                       #number of retries
       delay_before_retry 3          #delay between retries
       connect_port 6443             #port to check, must match the port above
       }
    }
    real_server 172.27.34.36 6443 {  #real server 2, specify its real IP address and port
    weight 10                        #weight, higher number gets more traffic
    TCP_CHECK {                      #real server health check settings, values in seconds
       connect_timeout 10            #connection timeout of 10 seconds
       retry 3                       #number of retries
       delay_before_retry 3          #delay between retries
       connect_port 6443             #port to check, must match the port above
       }
    }
    real_server 172.27.34.37 6443 {  #real server 3, specify its real IP address and port
    weight 10                        #weight, higher number gets more traffic
    TCP_CHECK {                      #real server health check settings, values in seconds
       connect_timeout 10            #connection timeout of 10 seconds
       retry 3                       #number of retries
       delay_before_retry 3          #delay between retries
       connect_port 6443             #port to check, must match the port above
       }
    }
}

lvs-keepalived02 configuration:

[root@lvs-keepalived02 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id lvs-keepalived02   #machine identifier, usually the hostname (not required to be); used in mail notifications when a failure occurs
}
vrrp_instance VI_1 {            #vrrp instance definition
    state BACKUP                #LVS role, MASTER or BACKUP, must be upper case
    interface ens160            #interface serving external traffic
    virtual_router_id 100       #virtual router id, a number that must be identical within one vrrp instance
    priority 90                 #priority, higher number wins; within one vrrp_instance the master's priority must be higher than the backup's
    advert_int 1                #interval in seconds between sync checks of the master and backup load balancers
    authentication {            #authentication type and password
        auth_type PASS          #PASS or AH
        auth_pass 1111          #password, must be identical for MASTER and BACKUP in the same vrrp_instance
    }
    virtual_ipaddress {         #virtual IP addresses, one per line, more than one allowed
        172.27.34.222
    }
}
virtual_server 172.27.34.222 6443 {  #virtual server, specify the virtual IP and service port
    delay_loop 6                     #health check interval
    lb_algo wrr                      #load balancing scheduling algorithm
    lb_kind DR                       #load balancing forwarding mode
    #persistence_timeout 50          #session persistence time, very useful for dynamic pages
    protocol TCP                     #forwarding protocol, TCP or UDP
    real_server 172.27.34.35 6443 {  #real server 1, specify its real IP address and port
    weight 10                        #weight, higher number gets more traffic
    TCP_CHECK {                      #real server health check settings, values in seconds
       connect_timeout 10            #connection timeout of 10 seconds
       retry 3                       #number of retries
       delay_before_retry 3          #delay between retries
       connect_port 6443             #port to check, must match the port above
       }
    }
    real_server 172.27.34.36 6443 {  #real server 2, specify its real IP address and port
    weight 10                        #weight, higher number gets more traffic
    TCP_CHECK {                      #real server health check settings, values in seconds
       connect_timeout 10            #connection timeout of 10 seconds
       retry 3                       #number of retries
       delay_before_retry 3          #delay between retries
       connect_port 6443             #port to check, must match the port above
       }
    }
    real_server 172.27.34.37 6443 {  #real server 3, specify its real IP address and port
    weight 10                        #weight, higher number gets more traffic
    TCP_CHECK {                      #real server health check settings, values in seconds
       connect_timeout 10            #connection timeout of 10 seconds
       retry 3                       #number of retries
       delay_before_retry 3          #delay between retries
       connect_port 6443             #port to check, must match the port above
       }
    }
}

3. Remove the VIP from master01

[root@master01 ~]# ifconfig ens160:2 172.27.34.222 netmask 255.255.255.0 down

The IP 172.27.34.222 that was used for initialization is removed from master01.

4. Start keepalived

Start keepalived on both lvs-keepalived01 and lvs-keepalived02 and enable it at boot:

[root@lvs-keepalived01 ~]# service keepalived start
Redirecting to /bin/systemctl start keepalived.service
[root@lvs-keepalived01 ~]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

5. Check the VIP

[root@lvs-keepalived01 ~]# ip a

The VIP is currently on lvs-keepalived01.

12. Control plane node configuration

Run this part on all control plane nodes.

1. Create realserver.sh

Bind the VIP on the loopback interface of each control plane server, add a host route for it, and suppress ARP responses to it; the three control plane nodes are configured identically, as follows:

[root@master01 ~]# cd /etc/rc.d/init.d/
[root@master01 init.d]# more realserver.sh 
#!/bin/bash
    SNS_VIP=172.27.34.222
    case "$1" in
    start)
        ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
        /sbin/route add -host $SNS_VIP dev lo:0
        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
        sysctl -p >/dev/null 2>&1
        echo "RealServer Start OK"
        ;;
    stop)
        ifconfig lo:0 down
        route del $SNS_VIP >/dev/null 2>&1
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
        echo "RealServer Stoped"
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
    esac
    exit 0

This script binds the VIP on the control plane nodes and suppresses ARP responses for it. The point is to stop the nodes from answering ARP broadcasts for the VIP (every control plane node has the VIP bound; without this setting they would all answer and traffic would go haywire).
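After the script runs, the loopback binding and the ARP sysctls it sets can be verified with:

[root@master01 ~]# ip addr show lo | grep 172.27.34.222
[root@master01 ~]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce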

2. Run the realserver.sh script

Run realserver.sh on every control plane node:

[root@master01 init.d]# chmod u+x realserver.sh 
[root@master01 init.d]# /etc/rc.d/init.d/realserver.sh start
RealServer Start OK

Grant execute permission to realserver.sh and run it.

3. Run realserver.sh at boot

[root@master01 init.d]# sed -i '$a /etc/rc.d/init.d/realserver.sh start' /etc/rc.d/rc.local
[root@master01 init.d]# chmod u+x /etc/rc.d/rc.local

13. Client configuration

1. Configure the Kubernetes yum repository

1.1 Add the Kubernetes repo

[root@client ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

1.2 Refresh the yum cache

[root@client ~]# yum clean all
[root@client ~]# yum -y makecache

2. Install kubectl

[root@client ~]# yum install -y kubectl-1.16.4

Install the same version as the cluster.

3. Command completion

3.1 Install bash-completion

[root@client ~]# yum -y install bash-completion

3.2 Load bash-completion

[root@client ~]# source /etc/profile.d/bash_completion.sh

3.3 Copy admin.conf

[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp 172.27.34.35:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@client ~]# source .bash_profile

3.4 Load kubectl completion

[root@client ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@client ~]# source .bash_profile

4. Test kubectl

[root@client ~]# kubectl get nodes 
[root@client ~]# kubectl get cs
[root@client ~]# kubectl cluster-info 
[root@client ~]# kubectl get po -o wide -n kube-system

14. Dashboard deployment

This entire section is performed on the client node.

1. Download the yaml

[root@client ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

If the connection times out, retry a few times; recommended.yaml can also be downloaded from the link at the end of this article.

2. Edit the yaml

2.1 Change the image registry

[root@client ~]# sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml

The default image registry is not reachable from this network, so the images are switched to an Aliyun mirror.

2.2 External access

[root@client ~]# sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml

This adds a NodePort so the Dashboard can be reached externally at https://NodeIp:NodePort; the port here is 30001.
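To confirm the sed edit landed in the right place, the Service section of the yaml can be inspected:

[root@client ~]# grep -B 2 -A 3 "nodePort: 30001" recommended.yaml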

2.3 Add an administrator account

[root@client ~]# cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF

This creates a super-administrator account used to log in to the Dashboard.

3. Deploy and access

3.1 Deploy the Dashboard

[root@client ~]# kubectl apply -f recommended.yaml

3.2 Check the status

[root@client ~]# kubectl get all -n kubernetes-dashboard

3.3 Retrieve the token

[root@client ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin

The token is:

eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1SOU1pNGswQnJCVUtCaks2TlBnMGxUdGRSdTlPS0s0MjNjUkdlNzFRVXMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbXRuZ3giLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNWVjOTdkNzItZTgwZi00MDE2LTk2NTEtZDhkMTYwOGJkODViIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.WJPzxkAGYjtq556d3HuXNh6g0sDYm2h6U_FsPDvvfhquYSccPGJ1UzX-lKxhPYyCegc603D7yFCc9zQOzpONttkue3rGdOz8KePOAHCUX7Xp_yTcJg15BPxQDDny6Lebu0fFXh_fpbU2_35nG28lRjiwKG3mV3O5uHdX5nk500RBmLkw3F054ww66hgFBfTH2HVDi1jOlAKWC0xatdxuqp2JkMqiBCZ_8Zwhi66EQYAMT1xu8Sn5-ur_6QsgaNNYhCeNxqHUiEFIZdLNu8QAnsKJJuhxxXd2KhIF6dwMvvOPG1djKCKSyNRn-SGILDucu1_6FoBG1DiNcIr90cPAtA

3.4 Access the Dashboard

Use Firefox to visit https://<control plane ip>:30001, i.e. https://172.27.34.35:30001/ (or .36/.37).

Accept the security risk warning and log in with the token.

The home page is shown after logging in.


Switch to the kubernetes-dashboard namespace to view its resources.

The Dashboard provides cluster management, workloads, service discovery and load balancing, storage, config maps, log viewing and more.

To enrich the Dashboard's statistics and charts, the heapster component can be installed; for hands-on heapster practice see: k8s practice (11): cluster monitoring with heapster + influxdb + grafana.

15. k8s cluster high availability tests

1. Check where each component runs

Use ipvsadm to see which nodes serve the apiserver, and the leader-elect leases to see where scheduler and controller-manager run:

1.1 apiserver nodes

Run ipvsadm on lvs-keepalived01 to see which servers apiserver traffic is forwarded to:

[root@lvs-keepalived01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.27.34.222:6443 wrr
  -> 172.27.34.35:6443            Route   10     2          0         
  -> 172.27.34.36:6443            Route   10     2          0         
  -> 172.27.34.37:6443            Route   10     2          0

1.2 controller-manager and scheduler nodes

On the client node, check which nodes controller-manager and scheduler are running on:

[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_0a2bcea9-d17e-405b-8b28-5059ca434144","leaseDurationSeconds":15,"acquireTime":"2020-01-19T03:07:51Z","renewTime":"2020-01-19T04:40:20Z","leaderTransitions":2}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_c284cee8-57cf-46e7-a578-6c0a10aedb37","leaseDurationSeconds":15,"acquireTime":"2020-01-19T03:07:51Z","renewTime":"2020-01-19T04:40:30Z","leaderTransitions":2}'

Component           Node(s)
apiserver           master01, master02, master03
controller-manager  master01
scheduler           master01

2. Shut down master01

2.1 Power off master01

Shut down master01 to simulate an outage:

[root@master01 ~]# init 0

2.2 Check the apiserver nodes

Check the apiserver connections on lvs-keepalived01:

[root@lvs-keepalived01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.27.34.222:6443 wrr
  -> 172.27.34.36:6443            Route   10     4          0         
  -> 172.27.34.37:6443            Route   10     2          0

master01's apiserver has been removed from the pool, i.e. requests to 172.27.34.222:6443 are no longer scheduled to master01.

2.3 Check the controller-manager and scheduler nodes

Run the controller-manager and scheduler checks again on the client node:

[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master03_9481b109-f236-432a-a2cb-8d0c27417396","leaseDurationSeconds":15,"acquireTime":"2020-01-19T04:42:22Z","renewTime":"2020-01-19T04:45:45Z","leaderTransitions":3}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master03_6d84981b-3ab9-4a00-a86a-47bd2f5c7729","leaseDurationSeconds":15,"acquireTime":"2020-01-19T04:42:23Z","renewTime":"2020-01-19T04:45:48Z","leaderTransitions":3}'
[root@client ~]#

controller-manager and scheduler have both failed over to master03.

Component           Node(s)
apiserver           master02, master03
controller-manager  master03
scheduler           master03

2.4 Cluster functional tests

All functional tests are performed on the client node.

2.4.1 Query

[root@client ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   22h   v1.16.4
master02   Ready      master   22h   v1.16.4
master03   Ready      master   22h   v1.16.4
work01     Ready      <none>   22h   v1.16.4
work02     Ready      <none>   22h   v1.16.4
work03     Ready      <none>   22h   v1.16.4

master01 shows as NotReady.

2.4.2 Create a pod

[root@client ~]# more nginx-master.yaml 
apiVersion: apps/v1             #this manifest uses the apps/v1 Kubernetes API
kind: Deployment                #resource type: Deployment
metadata:                       #resource metadata
  name: nginx-master            #Deployment name
spec:                           #Deployment spec
  selector:
    matchLabels:
      app: nginx 
  replicas: 3                   #3 replicas
  template:                     #Pod template
    metadata:                   #Pod metadata
      labels:                   #labels
        app: nginx              #label with key app and value nginx
    spec:                       #Pod spec
      containers:               
      - name: nginx             #container name
        image: nginx:latest     #image used to create the container
[root@client ~]# kubectl apply -f nginx-master.yaml 
deployment.apps/nginx-master created
[root@client ~]# kubectl get po -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-master-75b7bfdb6b-9d66p   1/1     Running   0          20s   10.244.3.6   work01   <none>           <none>
nginx-master-75b7bfdb6b-h4bql   1/1     Running   0          20s   10.244.5.5   work03   <none>           <none>
nginx-master-75b7bfdb6b-zmc68   1/1     Running   0          20s   10.244.4.5   work02   <none>           <none>

Creating an nginx pod is used to test that the cluster can still serve requests normally.

2.5 Conclusion

In a 3-node control plane, the cluster remains fully functional when one control plane node is down.

3. Shut down master02

With master01 still powered off, also shut down master02 and test whether the cluster can still serve requests.

3.1 Power off master02

[root@master02 ~]# init 0

3.2 Check the apiserver nodes

[root@lvs-keepalived01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.27.34.222:6443 wrr
  -> 172.27.34.37:6443            Route   10     6          20

All access to the cluster is now forwarded to master03.

3.3 Cluster functional test

[root@client ~]# kubectl get nodes
The connection to the server 172.27.34.222:6443 was refused - did you specify the right host or port?

3.4 Conclusion

In a 3-node control plane, when two control plane nodes are down at the same time the etcd cluster loses quorum (a 3-member etcd cluster needs at least 2 healthy members), so the whole k8s cluster can no longer serve requests.

16. lvs-keepalived cluster high availability test

1. Pre-test checks

1.1 k8s cluster check

[root@client ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
master01   Ready    master   161m   v1.16.4
master02   Ready    master   144m   v1.16.4
master03   Ready    master   142m   v1.16.4
work01     Ready    <none>   137m   v1.16.4
work02     Ready    <none>   135m   v1.16.4
work03     Ready    <none>   134m   v1.16.4

All nodes in the cluster are running normally.

1.2 Check the VIP

[root@lvs-keepalived01 ~]# ip a|grep 222
    inet 172.27.34.222/32 scope global ens160

The VIP is running on lvs-keepalived01.

1.3 Connection status

lvs-keepalived01:

[root@lvs-keepalived01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.27.34.222:6443 wrr
  -> 172.27.34.35:6443            Route   10     6          0         
  -> 172.27.34.36:6443            Route   10     0          0         
  -> 172.27.34.37:6443            Route   10     38         0

lvs-keepalived02:

[root@lvs-keepalived02 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.27.34.222:6443 wrr
  -> 172.27.34.35:6443            Route   10     0          0         
  -> 172.27.34.36:6443            Route   10     0          0         
  -> 172.27.34.37:6443            Route   10     0          0

2. Shut down lvs-keepalived01

Shut down lvs-keepalived01 to simulate an outage:

[root@lvs-keepalived01 ~]# init 0

2.1 k8s cluster check

[root@client ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
master01   Ready    master   166m   v1.16.4
master02   Ready    master   148m   v1.16.4
master03   Ready    master   146m   v1.16.4
work01     Ready    <none>   141m   v1.16.4
work02     Ready    <none>   139m   v1.16.4
work03     Ready    <none>   138m   v1.16.4

All nodes in the cluster are running normally.

2.2 Check the VIP

[root@lvs-keepalived02 ~]# ip a|grep 222
    inet 172.27.34.222/32 scope global ens160

The VIP has failed over to lvs-keepalived02.

2.3 Connection status

lvs-keepalived02:

[root@lvs-keepalived02 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.27.34.222:6443 wrr
  -> 172.27.34.35:6443            Route   10     1          0         
  -> 172.27.34.36:6443            Route   10     4          0         
  -> 172.27.34.37:6443            Route   10     1          0

2.4 Cluster functional test

[root@client ~]# kubectl delete -f nginx-master.yaml 
deployment.apps "nginx-master" deleted
[root@client ~]# kubectl get po -o wide
NAME                            READY   STATUS        RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-master-75b7bfdb6b-9d66p   0/1     Terminating   0          20m   10.244.3.6   work01   <none>           <none>
nginx-master-75b7bfdb6b-h4bql   0/1     Terminating   0          20m   10.244.5.5   work03   <none>           <none>
nginx-master-75b7bfdb6b-zmc68   0/1     Terminating   0          20m   10.244.4.5   work02   <none>           <none>
[root@client ~]# kubectl get po -o wide
No resources found in default namespace.

The previously created nginx pods are deleted successfully.

2.5 Conclusion

When one node of the lvs-keepalived cluster is down, the k8s cluster is unaffected and continues to serve requests normally.

All scripts and configuration files used in this article have been uploaded to GitHub: lvs-keepalived-install-k8s-HA-cluster

For a single-master deployment, see: k8s practice (1): deploying a k8s v1.14.2 cluster on CentOS 7.6

For the active/standby highly available deployment, see: k8s practice (15): deploying a highly available k8s v1.16.4 cluster on CentOS 7.6 (active/standby mode)
