Host list:
There are 7 servers in total: 3 control plane nodes, 3 worker nodes, and 1 client.
k8s version: 1.16.4
This article builds a highly available k8s cluster with kubeadm. The high availability of a k8s cluster is really the high availability of its core components; an active/standby model is used here, with the following architecture:
Notes on the active/standby HA architecture:
- apiserver: made highly available through keepalived; when a node fails, keepalived moves the VIP to another node;
- controller-manager: a leader is elected inside k8s (controlled by the --leader-elect option, true by default), so only one controller-manager instance is active in the cluster at any moment;
- scheduler: a leader is elected inside k8s (controlled by the --leader-elect option, true by default), so only one scheduler instance is active in the cluster at any moment;
- etcd: made highly available by letting kubeadm create the cluster automatically; deploy an odd number of members, and a 3-member cluster tolerates at most one machine failure (a health-check sketch follows this list).
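Once the cluster is up (the sections below build it), the following commands give a quick view of each HA layer. This is only an optional sketch: it assumes the default kubeadm layout, where etcd runs as a static pod named etcd-<hostname> whose image ships a shell, and it reuses the VIP and hostnames from this article.

# apiserver: the node currently holding the keepalived VIP answers on 6443
[root@master01 ~]# ip a | grep 172.27.34.130

# controller-manager / scheduler: the elected leader is recorded in an endpoints annotation
[root@master01 ~]# kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
[root@master01 ~]# kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity

# etcd: ask one member for the health of its endpoint (certificate paths are the kubeadm defaults)
[root@master01 ~]# kubectl -n kube-system exec etcd-master01 -- sh -c \
    "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt \
     --key=/etc/kubernetes/pki/etcd/server.key endpoint health"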
control plane和work節點都執行本部分操做。docker
For CentOS 7.6 installation details, see the separate post on CentOS 7.6 OS installation and optimization.
The firewall and SELinux were already disabled and the Aliyun yum source was configured when CentOS was installed.
[root@centos7 ~]# hostnamectl set-hostname master01
[root@centos7 ~]# more /etc/hostname
master01
Log out and log back in to see the newly set hostname master01.
[root@master01 ~]# cat >> /etc/hosts << EOF
172.27.34.3 master01
172.27.34.4 master02
172.27.34.5 master03
172.27.34.93 work01
172.27.34.94 work02
172.27.34.95 work03
EOF
[root@master01 ~]# cat /sys/class/net/ens160/address
[root@master01 ~]# cat /sys/class/dmi/id/product_uuid
Make sure the MAC address and product_uuid are unique on every node.
[root@master01 ~]# swapoff -a
To keep swap disabled after a reboot, also edit the /etc/fstab configuration file and comment out the swap entry:
[root@master01 ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
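A quick sanity check that swap is really off now and will stay off after a reboot (optional):

[root@master01 ~]# free -m | grep -i swap      # the Swap line should show 0 total
[root@master01 ~]# grep swap /etc/fstab        # the swap entry should now be commented out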
This article uses flannel for the k8s network, which requires the kernel parameter bridge-nf-call-iptables=1; setting that parameter requires the br_netfilter module.
Check for the br_netfilter module:
[root@master01 ~]# lsmod |grep br_netfilter
If the module is not present, run the commands below to load it; otherwise skip this step.
Load br_netfilter temporarily:
[root@master01 ~]# modprobe br_netfilter
This does not survive a reboot.
Load br_netfilter permanently:
[root@master01 ~]# cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
    [ -x $file ] && $file
done
EOF
[root@master01 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
[root@master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- [ ] the value inside the square brackets is the repository id; it must be unique and identifies the repository
- name: repository name, free-form
- baseurl: repository URL
- enabled: whether the repository is enabled; 1 (the default) means enabled
- gpgcheck: whether to verify the signature of packages downloaded from this repository; 1 means verify
- repo_gpgcheck: whether to verify the repository metadata (the package list); 1 means verify
- gpgkey=URL: location of the public key used for signature verification; required when gpgcheck is 1, not needed when gpgcheck is 0
[root@master01 ~]# yum clean all
[root@master01 ~]# yum -y makecache
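After rebuilding the cache you can confirm that the new repository is actually enabled (an optional check; the repo id comes from the file created above):

[root@master01 ~]# yum repolist enabled | grep -i kubernetes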
Configure passwordless SSH login from master01 to master02 and master03; this step is performed only on master01.
[root@master01 ~]# ssh-keygen -t rsa
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.4
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.34.5
[root@master01 ~]# ssh 172.27.34.4
[root@master01 ~]# ssh master03
master01 can now log in to master02 and master03 directly without entering a password.
Both the control plane and worker nodes perform the operations in this part.
[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@master01 ~]# yum list docker-ce --showduplicates | sort -r
[root@master01 ~]# yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y
The docker version installed is pinned to 18.09.9.
[root@master01 ~]# systemctl start docker
[root@master01 ~]# systemctl enable docker
[root@master01 ~]# yum -y install bash-completion
[root@master01 ~]# source /etc/profile.d/bash_completion.sh
Because the Docker Hub servers are overseas, pulling images can be slow, so a registry mirror (accelerator) can be configured. The main options are Docker's official China registry mirror, the Aliyun accelerator, and the DaoCloud accelerator; this article uses the Aliyun accelerator as the example.
The login address is https://cr.console.aliyun.com ; if you do not have an account yet, register an Aliyun account first.
Configure the daemon.json file
[root@master01 ~]# mkdir -p /etc/docker
[root@master01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF
Restart the service
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
The accelerator is now configured.
[root@master01 ~]# docker --version
[root@master01 ~]# docker run hello-world
Verify that docker is installed correctly by checking the docker version and running the hello-world container.
Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]
[root@master01 ~]# more /etc/docker/daemon.json
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
The cgroup driver is changed to eliminate the following warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
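To confirm the change took effect, check the cgroup driver docker now reports (an optional check):

[root@master01 ~]# docker info 2>/dev/null | grep -i "cgroup driver"    # should report: Cgroup Driver: systemd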
All control plane nodes perform the operations in this part.
[root@master01 ~]# yum -y install keepalived
keepalived configuration on master01:
[root@master01 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.130
    }
}
keepalived configuration on master02:
[root@master02 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master02
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.130
    }
}
keepalived configuration on master03:
[root@master03 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master03
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.34.130
    }
}
Start the keepalived service on all control plane nodes and enable it at boot
[root@master01 ~]# service keepalived start
[root@master01 ~]# systemctl enable keepalived
[root@master01 ~]# ip a
The VIP is on master01.
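As an optional check before continuing, confirm that the VIP really answers and that keepalived is healthy on the current MASTER; because master01 has the highest priority, stopping keepalived on master01 should move the VIP to master02, and it returns once keepalived is restarted on master01.

# from any other host, the VIP should answer
[root@master02 ~]# ping -c 2 172.27.34.130
# on the current MASTER, keepalived is running and holds the VIP
[root@master01 ~]# systemctl status keepalived | grep Active
[root@master01 ~]# ip a show ens160 | grep 172.27.34.130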
Both the control plane and worker nodes perform the operations in this part.
[root@master01 ~]# yum list kubelet --showduplicates | sort -r
The kubelet version installed in this article is 1.16.4; this version supports docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09.
[root@master01 ~]# yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4
- kubelet: runs on every node in the cluster and is responsible for starting Pods and containers
- kubeadm: the command-line tool used to initialize and bootstrap the cluster
- kubectl: the command line used to talk to the cluster; with kubectl you can deploy and manage applications, inspect resources, and create, delete and update components (a quick version check follows this list)
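To confirm that all three tools were installed at the expected 1.16.4 version (an optional check):

[root@master01 ~]# kubelet --version
[root@master01 ~]# kubeadm version -o short
[root@master01 ~]# kubectl version --client --short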
Start kubelet and enable it at boot
[root@master01 ~]# systemctl enable kubelet && systemctl start kubelet
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile [root@master01 ~]# source .bash_profile
Almost all of the Kubernetes components and Docker images are hosted on Google's own servers, which may not be directly reachable; the workaround here is to pull the images from an Aliyun mirror repository and then re-tag them with the default image names. This article pulls the images by running the image.sh script.
[root@master01 ~]# more image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
    docker pull $url/$imagename
    docker tag $url/$imagename k8s.gcr.io/$imagename
    docker rmi -f $url/$imagename
done
url is the Aliyun image repository address and version is the Kubernetes version to install.
Run the image.sh script to download the images for the specified version
[root@master01 ~]# ./image.sh
[root@master01 ~]# docker images
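You can cross-check the local images against the list kubeadm expects for this release (an optional check; every image printed by the first command should appear in the second):

[root@master01 ~]# kubeadm config images list --kubernetes-version=v1.16.4
[root@master01 ~]# docker images | grep k8s.gcr.io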
This part is performed on the master01 node.
[root@master01 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
  certSANs:    # list the hostname, IP and VIP of every kube-apiserver node
  - master01
  - master02
  - master03
  - node01
  - node02
  - node03
  - 172.27.34.3
  - 172.27.34.4
  - 172.27.34.5
  - 172.27.34.93
  - 172.27.34.94
  - 172.27.34.95
  - 172.27.34.130
controlPlaneEndpoint: "172.27.34.130:6443"
networking:
  podSubnet: "10.244.0.0/16"
kubeadm-config.yaml is the configuration file used for initialization.
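Before the real initialization you can optionally run kubeadm in dry-run mode; it executes the preflight checks and prints the generated manifests without changing the node:

[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml --dry-run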
[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml
Record the kubeadm join output; these commands are needed later to join the worker nodes and the other control plane nodes to the cluster.
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
If initialization fails:
If the initialization fails, run kubeadm reset and then initialize again.
[root@master01 ~]# kubeadm reset
[root@master01 ~]# rm -rf $HOME/.kube/config
[root@master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile [root@master01 ~]# source .bash_profile
本文全部操做都在root用戶下執行,若爲非root用戶,則執行以下操做:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Create the flannel network on master01
[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Due to network issues the apply may fail; you can also download the kube-flannel.yml file directly from the link at the end of this article and then apply it.
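To confirm flannel is up (with this manifest the DaemonSet is created in the kube-system namespace; once all nodes have joined there should be one Running flannel pod per node):

[root@master01 ~]# kubectl -n kube-system get ds | grep flannel
[root@master01 ~]# kubectl -n kube-system get po -o wide | grep flannel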
Distribute certificates from master01:
Run the cert-main-master.sh script on master01 to distribute the certificates to master02 and master03
[root@master01 ~]# ll|grep cert-main-master.sh
-rwxr--r-- 1 root root 638 1月  2 15:23 cert-main-master.sh
[root@master01 ~]# more cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="172.27.34.4 172.27.34.5"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
Move the certificates to the proper directories on master02:
Run the cert-other-master.sh script on master02 to move the certificates to the proper directories
[root@master02 ~]# pwd
/root
[root@master02 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 1月  2 15:29 cert-other-master.sh
[root@master02 ~]# more cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master02 ~]# ./cert-other-master.sh
Move the certificates to the proper directories on master03:
Run the cert-other-master.sh script on master03 as well
[root@master03 ~]# pwd
/root
[root@master03 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 1月  2 15:31 cert-other-master.sh
[root@master03 ~]# ./cert-other-master.sh
On master02 and master03, run the control-plane join command that was generated when the master was initialized:
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane
Load the environment variables on master02 and master03
[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile
[root@master03 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master03 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master03 ~]# source .bash_profile
該步操做是爲了在master02和master03上也能執行kubectl命令。
[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system
All control plane nodes are in the Ready state and all system components are healthy.
kubeadm join 172.27.34.130:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
On each worker node, run the worker join command that was generated when the master was initialized.
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   44m     v1.16.4
master02   Ready    master   33m     v1.16.4
master03   Ready    master   23m     v1.16.4
work01     Ready    <none>   11m     v1.16.4
work02     Ready    <none>   7m50s   v1.16.4
work03     Ready    <none>   3m4s    v1.16.4
[root@client ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@client ~]# yum clean all
[root@client ~]# yum -y makecache
[root@client ~]# yum install -y kubectl-1.16.4
The installed version is kept consistent with the cluster version.
[root@client ~]# yum -y install bash-completion
[root@client ~]# source /etc/profile.d/bash_completion.sh
[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp 172.27.34.3:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@client ~]# source .bash_profile
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile [root@master01 ~]# source .bash_profile
[root@client ~]# kubectl get nodes
[root@client ~]# kubectl get cs
[root@client ~]# kubectl get po -o wide -n kube-system
Everything in this section is done on the client.
[root@client ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
If the connection times out, retry a few times. recommended.yaml has been uploaded and can also be downloaded from the link at the end of this article.
[root@client ~]# sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml
Because the default image repository is not reachable over the network, the images are switched to the Aliyun mirror.
[root@client ~]# sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
Configure a NodePort so the Dashboard can be reached externally at https://NodeIp:NodePort ; here the port is 30001.
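To double-check the sed edit, print the Service ports section of the manifest; it should now contain roughly the following (indentation as inserted by the sed command above):

[root@client ~]# grep -A2 'targetPort: 8443' recommended.yaml
      targetPort: 8443
      nodePort: 30001
  type: NodePort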
[root@client ~]# cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
Create a super-admin account for logging in to the Dashboard
[root@client ~]# kubectl apply -f recommended.yaml
[root@client ~]# kubectl get all -n kubernetes-dashboard
[root@client ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin
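If you only want the raw token (for example to paste into the login page), a one-liner like the following also works on 1.16, where a token secret is created automatically for every ServiceAccount; it reads the secret name from the ServiceAccount and decodes the token field:

[root@client ~]# kubectl -n kubernetes-dashboard get secret \
    $(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 -d && echo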
The token is:
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikd0NHZ5X3RHZW5pNDR6WEdldmlQUWlFM3IxbGM3aEIwWW1IRUdZU1ZKdWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNms1ZjYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjk1NDE0ODEtMTUyZS00YWUxLTg2OGUtN2JmMWU5NTg3MzNjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.LAe7N8Q6XR3d0W8w-r3ylOKOQHyMg5UDfGOdUkko_tqzUKUtxWQHRBQkowGYg9wDn-nU9E-rkdV9coPnsnEGjRSekWLIDkSVBPcjvEd0CVRxLcRxP6AaysRescHz689rfoujyVhB4JUfw1RFp085g7yiLbaoLP6kWZjpxtUhFu-MKh1NOp7w4rT66oFKFR-_5UbU3FoetAFBmHuZ935i5afs8WbNzIkM6u9YDIztMY3RYLm9Zs4KxgpAmqUmBSlXFZNW2qg6hxBqDijW_1bc0V7qJNt_GXzPs2Jm1trZR6UU1C2NAJVmYBu9dcHYtTCgxxkWKwR0Qd2bApEUIJ5Wug
Use the Firefox browser to visit: https://VIP:30001
Accept the security risk
Log in with the token
The Dashboard provides cluster management, workloads, service discovery and load balancing, storage, configuration (ConfigMaps/Secrets), and log viewing features.
Everything in this section is done on the client.
Find the node currently hosting the apiserver via the VIP, and find the nodes running the active scheduler and controller-manager via the leader-elect annotations:
[root@master01 ~]# ip a|grep 130
    inet 172.27.34.130/32 scope global ens160
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_6caf8003-052f-451d-8dce-4516825213ad","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:23Z","renewTime":"2020-01-03T07:57:55Z","leaderTransitions":2}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_720d65f9-e425-4058-95d7-e5478ac951f7","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:20Z","renewTime":"2020-01-03T07:58:03Z","leaderTransitions":2}'
Shut down master01 to simulate a control plane failure:
[root@master01 ~]# init 0
The VIP has floated to master02
[root@master02 ~]# ip a|grep 130
    inet 172.27.34.130/32 scope global ens160
The controller-manager and scheduler leaders have also migrated
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master02_b3353e8f-a02f-4322-bf17-2f596cd25ba5","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:42Z","renewTime":"2020-01-03T08:06:36Z","leaderTransitions":3}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master03_e0a2ec66-c415-44ae-871c-18c73258dc8f","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:56Z","renewTime":"2020-01-03T08:06:45Z","leaderTransitions":3}'
Check the nodes:
[root@client ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   22h   v1.16.4
master02   Ready      master   22h   v1.16.4
master03   Ready      master   22h   v1.16.4
work01     Ready      <none>   22h   v1.16.4
work02     Ready      <none>   22h   v1.16.4
work03     Ready      <none>   22h   v1.16.4
master01 is in the NotReady state.
Create a new pod:
[root@client ~]# more nginx-master.yaml
apiVersion: apps/v1          # this manifest uses the apps/v1 Kubernetes API
kind: Deployment             # the resource type to create is a Deployment
metadata:                    # metadata of this resource
  name: nginx-master         # name of the Deployment
spec:                        # specification of the Deployment
  selector:
    matchLabels:
      app: nginx
  replicas: 3                # 3 replicas
  template:                  # Pod template
    metadata:                # Pod metadata
      labels:                # labels
        app: nginx           # label key/value: app=nginx
    spec:                    # Pod specification
      containers:
      - name: nginx          # container name
        image: nginx:latest  # image used to create the container
[root@client ~]# kubectl apply -f nginx-master.yaml
deployment.apps/nginx-master created
[root@client ~]# kubectl get po -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-master-75b7bfdb6b-lnsfh   1/1     Running   0          4m44s   10.244.5.6   work03   <none>           <none>
nginx-master-75b7bfdb6b-vxfg7   1/1     Running   0          4m44s   10.244.3.3   work01   <none>           <none>
nginx-master-75b7bfdb6b-wt9kc   1/1     Running   0          4m44s   10.244.4.5   work02   <none>           <none>
When one control plane node is down, the VIP fails over and all cluster functions remain unaffected.
With master01 still down, also shut down master02 to test whether the cluster can still serve requests.
[root@master02 ~]# init 0
[root@master03 ~]# ip a|grep 130
    inet 172.27.34.130/32 scope global ens160
The VIP drifts to the only remaining control plane node: master03
[root@client ~]# kubectl get nodes
Error from server: etcdserver: request timed out
[root@client ~]# kubectl get nodes
The connection to the server 172.27.34.130:6443 was refused - did you specify the right host or port?
With two of the three etcd members down, etcd loses quorum (a 3-member cluster tolerates only one failure), so the etcd cluster collapses and the whole k8s cluster can no longer serve requests.
All scripts and configuration files used in this article have been uploaded: Centos7.6-install-k8s-v1.16.4-HA-cluster