Deploying a Kubernetes 1.13 Cluster from Binaries on CentOS

1. Overview

Kubernetes 1.13 has been released; it is the fourth and final release of 2018. Kubernetes 1.13 has one of the shortest release cycles to date (only ten weeks after the previous version). It focuses on the stability and extensibility of Kubernetes, and three major features related to storage and cluster lifecycle have reached general availability.

The core features of Kubernetes 1.13 include: simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS server.

Simplified cluster management with kubeadm

Most people who work with Kubernetes regularly have used kubeadm at some point. It is an important tool for managing the cluster lifecycle, covering everything from creation to configuration to upgrades. With the 1.13 release, kubeadm has graduated to GA and is officially generally available. kubeadm handles bootstrapping production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure and simple join flow for new nodes and supporting easy upgrades.

The most notable part of this GA release is the advanced features that have graduated, especially pluggability and configurability. kubeadm aims to provide a toolbox for administrators and higher-level automation systems, and this release is an important step in that direction.

Container Storage Interface (CSI)

The Container Storage Interface was first introduced as an alpha feature in 1.9, moved to beta in 1.10, and is now generally available. With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage providers can write code that interoperates with Kubernetes without touching any Kubernetes core code. The CSI specification itself has also reached version 1.0.

With CSI now stable, plugin authors can develop storage plugins at their own pace; see the CSI documentation for details.

CoreDNS becomes the default DNS server for Kubernetes

In 1.11, the development team announced that CoreDNS had reached general availability for DNS-based service discovery. In the latest 1.13 release, CoreDNS officially replaces kube-dns as the default DNS server in Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backward-compatible and extensible integration with Kubernetes. Because CoreDNS is a single executable running as a single process, it has fewer moving parts than the previous DNS server, and it supports flexible use cases through custom DNS entries. In addition, since CoreDNS is written in Go, it benefits from strong memory safety.

CoreDNS is now the recommended DNS solution for Kubernetes 1.13 and later. Kubernetes has switched its common test infrastructure to use CoreDNS by default, and the team recommends that users switch as soon as possible. KubeDNS will still be supported for at least one more release, but now is the time to start planning the migration. Many OSS installer tools, including kubeadm since 1.11, have already made the switch.

1. Installation environment preparation

Deployment node overview

IP address      Hostname  CPU  Memory  Disk
192.168.4.100 master 1C 1G 40G
192.168.4.21 node 1C 1G 40G
192.168.4.56 node1 1C 1G 40G

Kubernetes (k8s) package download

Link: https://pan.baidu.com/s/1wO6T7byhaJYBuu2JlhZvkQ
Extraction code: pm9u

Deployment network description

2. Architecture diagrams

Kubernetes architecture diagram

Flannel network architecture diagram

  • After a packet leaves the source container, it is forwarded by the host's docker0 virtual bridge to the flannel0 virtual interface. This is a point-to-point virtual device, and the flanneld service listens on its other end.
  • Flannel maintains a routing table between nodes in etcd; its contents are covered in the configuration section below.
  • The flanneld service on the source host wraps the original payload in UDP and, based on its routing table, delivers it to the flanneld service on the destination node. There the packet is unpacked, handed to the destination node's flannel0 interface, forwarded to the destination host's docker0 bridge, and finally routed by docker0 to the target container, just like local container-to-container traffic (a quick way to inspect this path on a running node is sketched below).
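
To see this data path on a running node, the interfaces and routes that Flannel creates can be inspected directly. A minimal sketch, assuming the deployment below (which uses the VXLAN backend, so the device is flannel.1 rather than flannel0):

# Show the VXLAN device created by flanneld and its MTU
ip -d link show flannel.1
# Show the routes Flannel installs for remote Pod subnets
ip route | grep flannel
# Show the subnet assigned to the local docker0 bridge
ip addr show docker0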

3. Kubernetes workflow


Description of each cluster component:

Master node:
The master node consists mainly of four components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.

APIServer: the API server exposes the Kubernetes RESTful API and is the unified entry point for management commands. Every create, delete, update, or query of a resource goes through the API server before being persisted to etcd. As shown in the diagram, kubectl (the client tool provided by Kubernetes, which internally calls the Kubernetes API) talks directly to the API server.

scheduler: kube-scheduler is responsible for scheduling Pods onto suitable Nodes. Treated as a black box, its input is a Pod and a list of candidate Nodes, and its output is a binding of the Pod to one Node. Kubernetes ships with built-in scheduling algorithms and also exposes an interface, so users can implement scheduling algorithms of their own.

controller manager: if the API server does the front-office work, the controller manager handles the back office. Every resource has a corresponding controller, and the controller manager is responsible for running these controllers. For example, when we create a Pod through the API server, the API server's job ends once the Pod object has been created; the controllers then take over to drive the actual state toward the desired state.

etcd: etcd is a highly available key-value store. Kubernetes uses it to store the state of every resource, on top of which the RESTful API is implemented.

Node (worker) nodes:
Each node runs the following main components: kubelet and kube-proxy (in addition to the Docker container runtime).

kube-proxy: this component implements service discovery and reverse proxying in Kubernetes. kube-proxy supports TCP and UDP connection forwarding and, by default, distributes client traffic across the backend pods of a service using a round-robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to monitor changes to Service and Endpoint objects in the cluster and maintains a mapping from services to endpoints, so that changes to backend pod IPs do not affect clients. kube-proxy also supports session affinity.

kubelet: kubelet is the master's agent on each Node and the most important component on the node. It maintains and manages all containers on that node, except those not created through Kubernetes. In essence, it is responsible for keeping the actual state of Pods in line with the desired state.

2. Kubernetes Installation and Configuration

1. Initialize the environment

1.1 Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled

1.2 Disable swap

swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0
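
As a non-interactive alternative, the swap entry can be commented out with sed; a sketch (verify the pattern matches only the swap line in your /etc/fstab before running it):

sed -ri '/\sswap\s/s/^/#/' /etc/fstab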

1.3 Set kernel parameters required by Docker

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf

1.4 Install Docker

yum install -y yum-utils    # provides yum-config-manager
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker

1.5 Create installation directories

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

1.6 Install and configure CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

1.7 Create certificates

Create the etcd CA signing configuration (ca-config.json)

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

Create the etcd CA certificate signing request (ca-csr.json)

cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen"
        }
    ]
}
EOF

Create the etcd server certificate signing request

cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.4.100",
    "192.168.4.21",
    "192.168.4.56"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen"
        }
    ]
}
EOF

Generate the etcd CA certificate, private key, and server certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
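
To confirm that the generated server certificate contains the expected hosts (SANs) and validity period, it can be inspected with the cfssl-certinfo tool installed earlier; a quick check (sketch):

cfssl-certinfo -cert server.pem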

Create the Kubernetes CA certificate

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the kube-apiserver certificate

cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.4.100",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Create the kube-proxy certificate

cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
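
At this point the working directory should contain the CA, kube-apiserver, and kube-proxy key pairs. A quick sanity check before copying them into place (a sketch; the file names follow the -bare prefixes used above):

ls ca*.pem server*.pem kube-proxy*.pem
# expected: ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem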

 

1.8 SSH key authentication

# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FQjjiRDp8IKGT+UDM+GbQLBzF3DqDJ+pKnMIcHGyO/o root@qas-k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
|o.==o o. ..      |
|ooB+o+ o.  .     |
|B++@o o   .      |
|=X**o    .       |
|o=O. .  S        |
|..+              |
|oo .             |
|* .              |
|o+E              |
+----[SHA256]-----+

# Copy the SSH key to the target hosts to enable passwordless SSH login
# ssh-copy-id 192.168.4.21
# ssh-copy-id 192.168.4.56

 

2. Deploy etcd

Extract the installation files and create the etcd configuration

tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
vim /k8s/etcd/cfg/etcd   
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.100:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create the etcd systemd unit file

vim /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/k8s/etcd/ssl/server.pem \
--peer-key-file=/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the certificate files

cp ca*pem server*pem  /k8s/etcd/ssl

Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Copy the unit file and configuration to node 1 and node 2

cd /k8s/ 
scp -r etcd 192.168.4.21:/k8s/
scp -r etcd 192.168.4.56:/k8s/
scp /usr/lib/systemd/system/etcd.service  192.168.4.21:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service  192.168.4.56:/usr/lib/systemd/system/etcd.service 

#-- Node 1
vim /k8s/etcd/cfg/etcd 
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.21:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.21:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.21:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.21:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#-- Node 2
vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.56:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.56:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.56:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.56:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Verify that the cluster is running properly

[root@master ~]# cd /k8s/etcd/bin/
[root@master bin]# ./etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.100:2379,\
> https://192.168.4.21:2379,\
> https://192.168.4.56:2379" cluster-health

member 2345cdd5020eb294 is healthy: got healthy result from https://192.168.4.100:2379
member 91d74712f79e544f is healthy: got healthy result from https://192.168.4.21:2379
member b313b7e8d0a528cc is healthy: got healthy result from https://192.168.4.56:2379
cluster is healthy


Note:
When starting the etcd cluster, start at least two nodes at the same time; a single started node cannot bring the cluster up on its own (the service will hang in the activating state).

 

3. Deploy the Flannel network

Write the cluster Pod network configuration into etcd

cd /k8s/etcd/ssl/

/k8s/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.4.100:2379,\
https://192.168.4.21:2379,https://192.168.4.56:2379" \
set /coreos.com/network/config  '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
  • The current flanneld release (v0.10.0) does not support the etcd v3 API, so the configuration key and network data are written with the etcd v2 API;
  • The Pod network written here (${CLUSTER_CIDR}) must be a /16 block and must match the --cluster-cidr parameter of kube-controller-manager (a read-back check is sketched below);
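
To verify that the key was written correctly, it can be read back through the same etcd v2 API; a sketch, run from /k8s/etcd/ssl/:

/k8s/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379" \
get /coreos.com/network/config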

Extract and install

tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/

Configure Flannel

vim /k8s/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the flanneld systemd unit file

vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env; when Docker starts later, it uses the environment variables in this file to configure the docker0 bridge (an illustrative example of the file follows this list);
  • flanneld communicates with other nodes over the interface that carries the system default route; on nodes with multiple interfaces (e.g., an internal and a public network), the -iface parameter (e.g., -iface=eth0) can be used to specify the interface;
  • flanneld must run with root privileges;
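
After flanneld has started and mk-docker-opts.sh has run, /run/flannel/subnet.env should contain the bridge options that docker.service consumes through $DOCKER_NETWORK_OPTIONS. The exact contents depend on the flannel version and the subnet assigned to the node; an illustrative example (the values are assumptions, matching the docker0 address shown later in this document):

cat /run/flannel/subnet.env
DOCKER_NETWORK_OPTIONS=" --bip=172.18.58.1/24 --ip-masq=false --mtu=1450"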

Configure Docker to use the Flannel-assigned subnet

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target


Copy the flanneld and docker unit files and configuration to all nodes

cd /k8s/
scp -r kubernetes 192.168.4.21:/k8s/
scp -r kubernetes 192.168.4.56:/k8s/
scp /k8s/kubernetes/cfg/flanneld 192.168.4.21:/k8s/kubernetes/cfg/flanneld
scp /k8s/kubernetes/cfg/flanneld 192.168.4.56:/k8s/kubernetes/cfg/flanneld
scp /usr/lib/systemd/system/docker.service  192.168.4.21:/usr/lib/systemd/system/docker.service 
scp /usr/lib/systemd/system/docker.service  192.168.4.56:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/flanneld.service  192.168.4.21:/usr/lib/systemd/system/flanneld.service 
scp /usr/lib/systemd/system/flanneld.service  192.168.4.56:/usr/lib/systemd/system/flanneld.service 

# Start the services
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

Verify that it took effect

[root@node ssl]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:99:6a brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.21/16 brd 192.168.255.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::93dc:dfaf:2ddf:1aa9/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:5a:29:34:85 brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.1/24 brd 172.18.58.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 16:6e:22:47:d0:cd brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever

4. Deploy the master node

The Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one active process while the other processes stay in standby (the current leader can be checked as sketched below).
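
In Kubernetes 1.13 the elected leader is recorded in the control-plane.alpha.kubernetes.io/leader annotation on an Endpoints object in the kube-system namespace, so once the cluster is up the current leader can be checked with kubectl; a sketch:

kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity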

Extract the binaries and copy them onto the master node

tar -xvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

Copy the certificates

cp *pem /k8s/kubernetes/ssl/

Deploy the kube-apiserver component

Create the TLS bootstrapping token

[root@master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
91af09d8720f467def95b65704862025

[root@master ~]# cat /k8s/kubernetes/cfg/token.csv 
91af09d8720f467def95b65704862025,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
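
The two steps above can be combined into a small script so that the random token and token.csv stay consistent; a minimal sketch (the same token must later be reused as BOOTSTRAP_TOKEN in environment.sh below):

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /k8s/kubernetes/cfg/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo "BOOTSTRAP_TOKEN=${BOOTSTRAP_TOKEN}"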

Create the kube-apiserver configuration file

vim /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 \
--bind-address=192.168.4.100 \
--secure-port=6443 \
--advertise-address=192.168.4.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the kube-apiserver systemd unit file

vim /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Check that kube-apiserver is running

[root@master ~]# ps -ef |grep kube-apiserver
root      90572 118543  0 10:27 pts/0    00:00:00 grep --color=auto kube-apiserver
root     119804      1  1 Feb26 ?        00:22:45 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 --bind-address=192.168.4.100 --secure-port=6443 --advertise-address=192.168.4.100 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem

 

Deploy kube-scheduler

Create the kube-scheduler configuration file

vim  /k8s/kubernetes/cfg/kube-scheduler 

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
  • --address: serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https (a quick health check is sketched below);
  • --kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: run in clustered mode with leader election enabled; the node elected as leader does the work while the others stay in standby;
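
Besides checking the process, the scheduler's local health endpoint on port 10251 can be queried directly; a quick check (sketch):

curl http://127.0.0.1:10251/healthz
# expected output: ok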

Create the kube-scheduler systemd unit file

vim /usr/lib/systemd/system/kube-scheduler.service 

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl restart kube-scheduler.service

Check that kube-scheduler is running

[root@master ~]# ps -ef |grep kube-scheduler 
root       3591      1  0 Feb25 ?        00:16:17 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      90724 118543  0 10:28 pts/0    00:00:00 grep --color=auto kube-scheduler
[root@master ~]# 
[root@master ~]# systemctl status kube-scheduler 
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 14:58:31 CST; 1 day 19h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3591 (kube-scheduler)
   Memory: 36.9M
   CGroup: /system.slice/kube-scheduler.service
           └─3591 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Feb 27 10:22:54 master kube-scheduler[3591]: I0227 10:22:54.611139    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:01 master kube-scheduler[3591]: I0227 10:23:01.496338    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:02 master kube-scheduler[3591]: I0227 10:23:02.346595    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:19 master kube-scheduler[3591]: I0227 10:23:19.677905    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:26:36 master kube-scheduler[3591]: I0227 10:26:36.850715    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:21 master kube-scheduler[3591]: I0227 10:27:21.523891    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:22 master kube-scheduler[3591]: I0227 10:27:22.520733    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:12 master kube-scheduler[3591]: I0227 10:28:12.498729    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:33 master kube-scheduler[3591]: I0227 10:28:33.519011    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:50 master kube-scheduler[3591]: I0227 10:28:50.573353    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Hint: Some lines were ellipsized, use -l to show in full.

 

Deploy kube-controller-manager

Create the kube-controller-manager configuration file

vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

 

Create the kube-controller-manager systemd unit file

vim /usr/lib/systemd/system/kube-controller-manager.service 

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

Check that kube-controller-manager is running

[root@master ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-26 14:14:18 CST; 20h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 120023 (kube-controller)
   Memory: 76.2M
   CGroup: /system.slice/kube-controller-manager.service
           └─120023 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elec...

Feb 27 10:31:30 master kube-controller-manager[120023]: I0227 10:31:30.722696  120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.088697  120023 gc_controller.go:144] GC'ing orphaned
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.094678  120023 gc_controller.go:173] GC'ing unsche...ting.
Feb 27 10:31:34 master kube-controller-manager[120023]: I0227 10:31:34.271634  120023 attach_detach_controller.go:634] pr...4.21"
Feb 27 10:31:35 master kube-controller-manager[120023]: I0227 10:31:35.723490  120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.377876  120023 attach_detach_controller.go:634] pr....100"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.498005  120023 attach_detach_controller.go:634] pr...4.56"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.500915  120023 cronjob_controller.go:111] Found 0 jobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505005  120023 cronjob_controller.go:119] Found 0 cronjobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505021  120023 cronjob_controller.go:122] Found 0 groups
Hint: Some lines were ellipsized, use -l to show in full.
[root@master ~]# 
[root@master ~]# ps -ef|grep  kube-controller-manager
root      90967 118543  0 10:31 pts/0    00:00:00 grep --color=auto kube-controller-manager
root     120023      1  0 Feb26 ?        00:08:42 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem

 

Add the executable path /k8s/kubernetes/bin to the PATH variable

vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH:$HOME/bin

# Reload the profile
source /etc/profile

Check the master cluster status

[root@master ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}

 

5. Deploy the node (worker) nodes

The Kubernetes worker nodes run the following components:

  • docker (already deployed above)
  • kubelet
  • kube-proxy

Deploy the kubelet component

  • kubelet runs on every worker node; it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs;
  • on startup, kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage;
  • for security, this document only opens the secure (https) port, which authenticates and authorizes requests and rejects unauthorized access (e.g., from apiserver or heapster).

Copy the kubelet and kube-proxy binaries to the nodes

cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.21:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.56:/k8s/kubernetes/bin/

Create the kubelet bootstrap kubeconfig file (on the master node)

# On the master node
cd /k8s/kubernetes/ssl/

# Edit and run this script
vim environment.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=91af09d8720f467def95b65704862025
KUBE_APISERVER="https://192.168.4.100:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to all nodes (from the master node)

cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.21:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.56:/k8s/kubernetes/cfg/

Create the kubelet parameter configuration file and copy it to all nodes

Create the kubelet parameter configuration template file

# Node 1
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.21
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true


# Node 2
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.56
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Create the kubelet configuration file

# Node 1
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.21 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

# Node 2
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.56 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create the kubelet systemd unit file (all nodes)

vim /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Bind the kubelet-bootstrap user to the system cluster role (run once on the master node)

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
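
A ClusterRoleBinding is a cluster-scoped object, so creating it once is sufficient; to confirm it exists, a sketch:

kubectl describe clusterrolebinding kubelet-bootstrap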

Start the service (all nodes)

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

Approve the kubelet CSR requests

CSR requests can be approved manually or automatically. The automatic approach is recommended, because starting with v1.8 the certificates issued after a CSR is approved can be rotated automatically.
Manually approve the CSR requests.
List the CSRs:

# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   39m    kubelet-bootstrap   Pending
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   5m5s   kubelet-bootstrap   Pending

# kubectl certificate approve node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs
certificatesigningrequest.certificates.k8s.io/node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs approved

# kubectl certificate approve node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s  
certificatesigningrequest.certificates.k8s.io/node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s approved
# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   41m     kubelet-bootstrap   Approved,Issued
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   7m32s   kubelet-bootstrap   Approved,Issued
  • Requesting User: the user that submitted the CSR, which kube-apiserver authenticates and authorizes;
  • Subject: the certificate information being requested;
  • the certificate CN is system:node:kube-node2 and the Organization is system:nodes; kube-apiserver's Node authorization mode grants the corresponding permissions to this certificate (a batch-approval sketch follows this list);
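
When several nodes join at once, the pending CSRs can also be approved in a batch rather than one by one; a sketch (review the list first, since this approves every pending request):

kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve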

Check the cluster status

[root@master ssl]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.4.100   Ready             43h   v1.13.0
192.168.4.21    Ready             20h   v1.13.0
192.168.4.56    Ready             20h   v1.13.0

Deploy the kube-proxy component

kube-proxy runs on all nodes. It watches the apiserver for changes to Service and Endpoint objects and creates routing rules to load-balance traffic across services.

Create the kube-proxy configuration file

vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.100 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
  • bindAddress: listen address;
  • clientConnection.kubeconfig: kubeconfig file used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster; only when --cluster-cidr or --masquerade-all is specified does kube-proxy SNAT requests to Service IPs;
  • hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules;
  • mode: use ipvs mode (see the rule-inspection sketch after this list);
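
Note that the flag-based configuration above does not explicitly set --proxy-mode, so kube-proxy falls back to its default (iptables) unless ipvs is configured separately. Either way, the rules it creates can be inspected on the node; a sketch (ipvsadm must be installed for the ipvs case):

# default iptables mode
iptables -t nat -L KUBE-SERVICES -n | head
# ipvs mode
ipvsadm -Ln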

Create the kube-proxy systemd unit file

vim /usr/lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
[root@node ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 15:38:16 CST; 1 day 19h ago
 Main PID: 2887 (kube-proxy)
   Memory: 8.2M
   CGroup: /system.slice/kube-proxy.service
           ‣ 2887 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.4.100 --cluster-cidr=10....

Feb 27 11:06:44 node kube-proxy[2887]: I0227 11:06:44.625875    2887 config.go:141] Calling handler.OnEndpointsUpdate

Cluster status

Apply role labels to the master and node nodes

kubectl label node 192.168.4.100  node-role.kubernetes.io/master='master'
kubectl label node 192.168.4.21  node-role.kubernetes.io/node='node'
kubectl label node 192.168.4.56  node-role.kubernetes.io/node='node'
[root@master ~]# kubectl get node,cs
NAME                 STATUS   ROLES    AGE   VERSION
node/192.168.4.100   Ready    master   43h   v1.13.0
node/192.168.4.21    Ready    node     20h   v1.13.0
node/192.168.4.56    Ready    node     20h   v1.13.0

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-1               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"}