Kubernetes HA binary deployment with the Calico network plugin

Server planning

192.168.30.24 k8s-master1
192.168.30.25 k8s-master2
192.168.30.26 k8s-node1
192.168.30.30 k8s-node2
192.168.30.31 k8s-node3
192.168.30.32 k8s-slb1
192.168.30.33 k8s-slb2

Production high-availability cluster
Sizing: 3/5/7 masters, a 3/5/7-member etcd cluster, 3/5/7 nginx instances load-balancing the API, and 1 SLB acting as the HA entry point to the Kubernetes API

Reference sizing from Alibaba Cloud:

Node count       Master spec
1-5 nodes        4C8G (2C4G not recommended)
6-20 nodes       4C16G
21-100 nodes     8C32G
100-200 nodes    16C64G

Deployment steps

1. System initialization
2. Issue the etcd certificates
3. Deploy the etcd cluster
4. Issue the Kubernetes certificates
5. Deploy the master components
6. Deploy the node components
7. Deploy the CNI plugin (Calico)
8. Deploy CoreDNS
9. Scale out a node
10. Scale in a node
11. Deploy HA

1. System initialization

Disable the firewall:
# systemctl stop firewalld
# systemctl disable firewalld

Disable SELinux:
# setenforce 0 # temporary
# sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent

Disable swap:
# swapoff -a  # temporary
# vim /etc/fstab  # permanent (comment out the swap line)
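If you prefer not to edit fstab by hand, a sed one-liner (shown only as an example; it assumes the swap entry is an uncommented line containing the word swap) makes the same permanent change:
# sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap line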

Sync the system time:
# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime 
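Copying the zoneinfo only sets the time zone; to actually keep the clocks in sync a common approach (an example, assuming the hosts can reach a public NTP server) is:
# yum install -y ntpdate
# ntpdate time.windows.com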

Add hosts entries:
# vim /etc/hosts
192.168.30.24 k8s-master1
192.168.30.25 k8s-master2
192.168.30.26 k8s-node1
192.168.30.30 k8s-node2

Set the hostname:
hostnamectl set-hostname k8s-master1

2. Issuing the etcd certificates

A Kubernetes deployment uses two sets of certificates, one for Kubernetes itself and one for etcd.

Certificates can be issued in two ways: self-signed, or issued by a public certificate authority.
Self-signed: issued by a CA we create ourselves.
Public CA: vendors such as Symantec charge roughly 3,000 RMB for a single-domain certificate, while a wildcard certificate such as *.zhaocheng.com usually runs from tens of thousands to hundreds of thousands of RMB.

Either way there is a root certificate, and verification is done against that root: anything issued by it is trusted, anything else is not.

For a public website you would normally buy a certificate from a CA. Two files are issued, a crt (the digital certificate) and a key (the private key); a self-signed CA produces the same two files, it is simply our own CA doing the issuing.

All installation packages and YAML files are available on a cloud drive.
Link: https://pan.baidu.com/s/1dbgUyudy_6yhSI6jlcaEZQ

Extraction code: ask in the comments

2.1 Generate the etcd certificates

[root@k8s-master1 ~]# ls
TLS.tar.gz
[root@k8s-master1 ~]# tar xf TLS.tar.gz 
[root@k8s-master1 ~]# cd TLS/

There are two directories here, etcd and k8s, i.e. we issue one set of certificates for etcd and one for Kubernetes.

[root@k8s-master1 TLS]# ls
cfssl  cfssl-certinfo  cfssljson  cfssl.sh  etcd  k8s

Certificates are issued with the cfssl tool (openssl is another option); both are mainly used for self-signing.
Run cfssl.sh; the download commands are already written into the script:

[root@k8s-master1 TLS]# more cfssl.sh 
#curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
#curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
#curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
cp -rf cfssl cfssl-certinfo cfssljson /usr/local/bin
chmod +x /usr/local/bin/cfssl*
[root@k8s-master1 TLS]# bash cfssl.sh

The binaries are placed under /usr/local/bin, so the tool can now be used to sign certificates:

[root@k8s-master1 TLS]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson

Create our own CA and use it to issue certificates:

[root@k8s-master1 TLS]# cd etcd/
[root@k8s-master1 etcd]# ls
ca-config.json  ca-csr.json  generate_etcd_cert.sh  server-csr.json
[root@k8s-master1 etcd]# more generate_etcd_cert.sh 
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@k8s-master1 etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s-master1 etcd]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  generate_etcd_cert.sh  server-csr.json

After this step you can see ca.pem (and ca-key.pem); these are what we will use to issue certificates from now on.
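If you want to inspect what the CA certificate contains (subject, validity, and so on), the cfssl-certinfo tool installed earlier can decode it, for example:
[root@k8s-master1 etcd]# cfssl-certinfo -cert ca.pem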

To request a certificate, the requester writes a CSR file stating which domain or service the certificate is for.
Now we issue a certificate for etcd, described by server-csr.json. In production, if machines are plentiful, etcd can be deployed on dedicated servers.

[root@k8s-master1 etcd]# more server-csr.json 
{
    "CN": "etcd",
    "hosts": [
        "192.168.30.24",
        "192.168.30.26",
        "192.168.30.30"
        ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

Now use this file to request a certificate from our CA; this generates the server*.pem certificate and key:
[root@k8s-master1 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

We will use the .pem files; these are the certificates issued for etcd.
[root@k8s-master1 etcd]# ls *.pem
ca-key.pem ca.pem server-key.pem server.pem

3. Deploying the etcd cluster

Official site: https://etcd.io/
etcd is a key-value store open-sourced by CoreOS, originally used for service registration, service discovery, and shared configuration that other services read. As the etcd and Kubernetes projects evolved, etcd also became the storage backend of Kubernetes. An etcd cluster is made up of several nodes that talk to each other to serve requests; every node holds a copy of the data, and the Raft protocol keeps the nodes consistent. The official recommendation is a cluster of 3, 5, or 7 members, i.e. an odd number: 3 members tolerate 1 failure, 5 tolerate 2, and 7 tolerate 3. Three members are usually enough; deploy five if the read/write load is heavy. Among the members one is elected leader, and it handles all writes: for example, if etcd-1 is elected leader the others become followers, every write goes to the leader and is then replicated to the followers. When the leader dies a new election is held; if a majority (quorum) cannot be formed, no leader can be elected, which is why the cluster is deployed with an odd number of members: when the leader fails, another member is elected to keep serving writes.

3.1 Deploy the etcd cluster

The archive contains two parts: etcd.service, the systemd unit used to manage and start etcd via systemctl (we are running CentOS 7), and the etcd working directory.

[root@k8s-master1 ~]# ls
etcd.tar.gz  TLS  TLS.tar.gz
[root@k8s-master1 ~]# tar xf etcd.tar.gz 
[root@k8s-master1 ~]# ls
etcd  etcd.service  etcd.tar.gz  TLS  TLS.tar.gz

The binaries are already included; other versions can be downloaded from the official site, and to switch versions you simply replace the two binaries.

[root@k8s-master1 ~]# cd etcd/
[root@k8s-master1 etcd]# ls
bin  cfg  ssl
[root@k8s-master1 etcd]# cd bin/
[root@k8s-master1 bin]# ls
etcd  etcdctl

The old certificates also need to be deleted and replaced with the etcd certificates we just generated:

[root@k8s-master1 ssl]# ls
ca.pem  server-key.pem  server.pem
[root@k8s-master1 ssl]# rm -rf *

There is also the etcd.conf configuration file.
etcd has two important ports:

ETCD_LISTEN_PEER_URLS="https://192.168.31.61:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.61:2379"

The first (2380) is the address and port for peer communication inside the etcd cluster, i.e. member-to-member traffic; it runs over HTTPS, so certificates must be configured here as well.
The second (2379) is the client listen address, which other programs use to connect and read or write data; client connections are likewise authenticated with certificates.
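Once etcd has been started later in this section, a quick way to confirm that both ports are listening on the expected addresses is, for example:
# ss -lntp | grep etcd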

Now edit the listen addresses in our etcd.conf:

[root@k8s-master1 etcd]# more cfg/etcd.conf 

#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.30.24:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.30.24:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.24:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.24:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.30.24:2380,etcd-2=https://192.168.30.26:2380,etcd-3=https://192.168.30.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Now copy the certificates into the ssl directory:

[root@k8s-master1 etcd]# cp /root/TLS/etcd/{ca,server,server-key}.pem ssl/
[root@k8s-master1 etcd]# ls
bin  cfg  ssl
[root@k8s-master1 etcd]# cd ssl/
[root@k8s-master1 ssl]# ls
ca.pem  server-key.pem  server.pem

Distribute the configured etcd directory and the systemd unit file to the other etcd node servers:

[root@k8s-master1 ~]# scp -r etcd 192.168.30.24:/opt
[root@k8s-master1 ~]# scp -r etcd 192.168.30.26:/opt
[root@k8s-master1 ~]# scp -r etcd 192.168.30.30:/opt
Then copy the systemd unit file into the systemd directory on each node:
[root@k8s-master1 ~]# scp -r etcd.service 192.168.30.24:/usr/lib/systemd/system
[root@k8s-master1 ~]# scp -r etcd.service 192.168.30.26:/usr/lib/systemd/system
[root@k8s-master1 ~]# scp -r etcd.service 192.168.30.30:/usr/lib/systemd/system

Edit the configuration on the other nodes: change the member name and the listen addresses to the local IP.

[root@k8s-node1 ~]# more /opt/etcd/cfg/etcd.conf 

#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.30.26:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.30.26:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.26:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.26:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.30.24:2380,etcd-2=https://192.168.30.26:2380,etcd-3=https://192.168.30.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Change the member name and the local listen addresses here as well:

[root@k8s-node2 ~]# more /opt/etcd/cfg/etcd.conf 

#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.30.30:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.30.30:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.30:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.30:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.30.24:2380,etcd-2=https://192.168.30.26:2380,etcd-3=https://192.168.30.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start etcd on every node:

[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start etcd
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl start etcd
[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl start etcd

Enable etcd at boot on every node:

[root@k8s-master1 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

Check the etcd logs with journalctl; you can see the etcd version and the peer address of every cluster member:

[root@k8s-master1 ~]# journalctl -u etcd
Mar 29 20:58:20 k8s-master1 etcd[52701]: etcd Version: 3.3.13
Mar 29 20:58:20 k8s-master1 etcd[52701]: Git SHA: 98d3084
Mar 29 20:58:20 k8s-master1 etcd[52701]: Go Version: go1.10.8
Mar 29 20:58:20 k8s-master1 etcd[52701]: Go OS/Arch: linux/amd64

Mar 29 20:58:20 k8s-master1 etcd[52701]: added member 7d0b0924d5dc6c42 [https://192.168.30.24:2380] to cluster 5463d984b27d1295
Mar 29 20:58:20 k8s-master1 etcd[52701]: added member 976cfd3f7cca5aa2 [https://192.168.30.30:2380] to cluster 5463d984b27d1295
Mar 29 20:58:20 k8s-master1 etcd[52701]: added member f2f52c31a7a3af4c [https://192.168.30.26:2380] to cluster 5463d984b27d1295

Check the health of the etcd cluster:

[root@k8s-master1 ~]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.30.24:2379,https://192.168.30.26:2379,https://192.168.30.30:2379" cluster-health
member 7d0b0924d5dc6c42 is healthy: got healthy result from https://192.168.30.24:2379
member 976cfd3f7cca5aa2 is healthy: got healthy result from https://192.168.30.30:2379
member f2f52c31a7a3af4c is healthy: got healthy result from https://192.168.30.26:2379
cluster is healthy
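If you also want to see which member is currently the leader, the v3 API exposes it (a sketch; the bundled etcdctl is 3.3, so the v3 API has to be selected explicitly):
[root@k8s-master1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.30.24:2379,https://192.168.30.26:2379,https://192.168.30.30:2379" endpoint status -w table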

4. Issuing the Kubernetes certificates

4.1 Start with the apiserver, since it is the access entry point of the cluster; Kubernetes also communicates over certificates, so we now issue certificates for Kubernetes as well.
There is a separate CA here, and it must not be shared with the etcd CA; each is independent. There are also two certificate request files: kube-proxy-csr.json, the certificate prepared for the worker (node) side and signed by this same CA, and server-csr.json, the certificate for the apiserver itself, used to serve HTTPS.

[root@k8s-master1 TLS]# cd k8s/
[root@k8s-master1 k8s]# ls
ca-config.json  ca-csr.json  generate_k8s_cert.sh  kube-proxy-csr.json  server-csr.json

In other words, applications reach the HTTPS API (with its self-signed certificate) through a server IP. The servers that interact with this certificate include the VIP (the keepalived address), the master addresses, and the SLB load-balancer addresses, so all of them must be written into the hosts list; it is good practice to reserve a few extra entries.
Edit the trusted IPs:

[root@k8s-master1 k8s]# more server-csr.json 

{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local",
      "192.168.30.20",
      "192.168.30.24",
      "192.168.30.25",
      "192.168.30.32",
      "192.168.30.33",
      "192.168.30.34"
(note: no comma after the last entry)

Generate the Kubernetes certificates:

[root@k8s-master1 k8s]# more generate_k8s_cert.sh 
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@k8s-master1 k8s]# bash generate_k8s_cert.sh

This produces the CA certificate plus the certificates used by kube-proxy and the apiserver:

[root@k8s-master1 k8s]# ls *pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

5. Deploying the master components

5.1 Deploy the apiserver, controller-manager, and scheduler
The binaries ship inside the archive; if you download a newer release, place kube-apiserver, kube-controller-manager, kubectl, and kube-scheduler into the kubernetes/bin directory.

[root@k8s-master1 ~]# tar xf k8s-master.tar.gz 
[root@k8s-master1 ~]# ls
etcd          etcd.tar.gz        kube-apiserver.service           kubernetes              TLS
etcd.service  k8s-master.tar.gz  kube-controller-manager.service  kube-scheduler.service  TLS.tar.gz
[root@k8s-master1 ~]# cd kubernetes/
[root@k8s-master1 kubernetes]# ls
bin  cfg  logs  ssl
[root@k8s-master1 kubernetes]# cd bin/
[root@k8s-master1 bin]# ls
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler

The directory layout is as follows: bin holds the executables, cfg the startup configuration of each component, logs the log files, and ssl the certificates.

[root@k8s-master1 kubernetes]# tree
.
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-scheduler.conf
│   └── token.csv
├── logs
└── ssl

4 directories, 8 files

Copy our certificate files into the ssl directory:

[root@k8s-master1 kubernetes]# cp /root/TLS/k8s/*.pem ssl/
[root@k8s-master1 kubernetes]# ls ssl/
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

Delete the certificates that are not needed here:

[root@k8s-master1 ssl]# rm -rf kube-proxy-key.pem kube-proxy.pem 
[root@k8s-master1 ssl]# ls
ca-key.pem  ca.pem  server-key.pem  server.pem

Go into the cfg directory and update the connection addresses.

[root@k8s-master1 cfg]# ls
kube-apiserver.conf  kube-controller-manager.conf  kube-scheduler.conf  token.csv
Update the etcd endpoints and the apiserver addresses:
[root@k8s-master1 cfg]# vim kube-apiserver.conf 
--etcd-servers=https://192.168.30.24:2379,https://192.168.30.26:2379,https://192.168.30.30:2379 \
--bind-address=192.168.30.24 \
--secure-port=6443 \
--advertise-address=192.168.30.24 \

Then move the configured directory into the working directory /opt and the unit files into the systemd directory:

[root@k8s-master1 ~]# mv kubernetes/ /opt/
[root@k8s-master1 ~]# ls
etcd          etcd.tar.gz        kube-apiserver.service           kube-scheduler.service  TLS.tar.gz
etcd.service  k8s-master.tar.gz  kube-controller-manager.service  TLS
[root@k8s-master1 ~]# mv kube-apiserver.service kube-scheduler.service kube-controller-manager.service /usr/lib/systemd/system

Start kube-apiserver:

[root@k8s-master1 ~]# systemctl start kube-apiserver.service 
[root@k8s-master1 ~]# ps -ef |grep kube
root      53921      1 99 22:24 ?        00:00:06 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.30.24:2379,https://192.168.30.26:2379,https://192.168.30.30:2379 --bind-address=192.168.30.24 --secure-port=6443 --advertise-address=192.168.30.24 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/opt/kubernetes/logs/k8s-audit.log
root      53937  50851  0 22:24 pts/1    00:00:00 grep --color=auto kube

Log file locations (ERROR, INFO, and WARNING logs):

[root@k8s-master1 ~]# ls /opt/kubernetes/logs/
kube-apiserver.ERROR                                             kube-apiserver.k8s-master1.root.log.INFO.20200329-222418.53921
kube-apiserver.INFO                                              kube-apiserver.k8s-master1.root.log.WARNING.20200329-222420.53921
kube-apiserver.k8s-master1.root.log.ERROR.20200329-222424.53921  kube-apiserver.WARNING

Start the other two components; their logs also land in the logs directory:

[root@k8s-master1 ~]# systemctl start kube-controller-manager.service 
[root@k8s-master1 ~]# systemctl start kube-scheduler.service

Enable the services at boot (kubectl is not a service, so enable only the three components):
[root@k8s-master1 ~]# for i in kube-apiserver kube-controller-manager kube-scheduler; do systemctl enable $i; done

Put the kubectl binary on the PATH:

[root@k8s-master1 ~]# mv /opt/kubernetes/bin/kubectl /usr/local/bin/
[root@k8s-master1 ~]# kubectl get node
No resources found in default namespace.

Check the cluster status:

[root@k8s-master1 ~]# kubectl get cs
NAME                 AGE
scheduler            <unknown>
controller-manager   <unknown>
etcd-0               <unknown>
etcd-2               <unknown>
etcd-1               <unknown>
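The <unknown> values appear to be a display quirk of kubectl get cs in v1.16; the underlying health conditions can still be read from the full objects, for example:
[root@k8s-master1 ~]# kubectl get cs -o yaml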

5.2 Enable TLS bootstrapping
This automatically issues certificates for kubelet.
Format: token,user,uid,group

[root@k8s-master1 ~]# cat /opt/kubernetes/cfg/token.csv 
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"

Authorize kubelet-bootstrap by binding the user to the cluster role:

[root@k8s-master1 ~]#  kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
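To confirm the binding exists (a quick check, not part of the original steps):
[root@k8s-master1 ~]# kubectl get clusterrolebinding kubelet-bootstrap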

6. Deploying the node components

1. The Docker container engine
2. kubelet
3. kube-proxy
Startup flow: configuration file -> systemd-managed unit -> start

6.1 Now switch to the node
1. Install Docker from the binary package
Binary package download: https://download.docker.com/linux/static/stable/x86_64/
Unpack the node archive and the Docker package:

[root@k8s-node1 ~]# tar xf k8s-node.tar.gz 
[root@k8s-node1 ~]# tar xf docker-18.09.6.tgz

Move everything under docker/ onto the PATH so the docker commands can be used:

[root@k8s-node1 ~]# mv docker/* /usr/bin
[root@k8s-node1 ~]# docker
docker        dockerd       docker-init   docker-proxy  
[root@k8s-node1 ~]# mv docker.service /usr/lib/systemd/system
[root@k8s-node1 ~]# mkdir /etc/docker

Add the registry mirror (accelerator) configuration:

[root@k8s-node1 ~]# mv daemon.json /etc/docker/
[root@k8s-node1 ~]# systemctl start docker.service
[root@k8s-node1 ~]# systemctl enable docker.service

Check the Docker version and details:
docker info

2. Install kubelet

[root@k8s-node1 kubernetes]# tree
.
├── bin
│   ├── kubelet
│   └── kube-proxy
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   └── kube-proxy.kubeconfig
├── logs
└── ssl

4 directories, 8 files

The token specified in the node's bootstrap config must match the one on the master.
On the master, replace this token with a new one; for cluster security we regenerate it:

[root@k8s-master1 cfg]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
cac60aa54b4f2582023b99e819c033d2
[root@k8s-master1 cfg]# vim token.csv 
[root@k8s-master1 cfg]# cat token.csv 
cac60aa54b4f2582023b99e819c033d2,kubelet-bootstrap,10001,"system:node-bootstrapper"

Put this token into bootstrap.kubeconfig on node1, and also change the server field to the master's address:

[root@k8s-node1 cfg]# vim bootstrap.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.30.24:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: cac60aa54b4f2582023b99e819c033d2

How a node joins:
kubelet starts and uses the bootstrap kubeconfig to send a request to the apiserver; the apiserver checks whether the token is valid, and only after it passes does it issue a certificate to this kubelet, at which point kubelet starts successfully.

If kubelet fails to start, it is usually because the token is wrong, the certificates do not match, or the bootstrap config is wrong.
Update the apiserver address in kube-proxy.kubeconfig as well:

[root@k8s-node1 cfg]# more kube-proxy.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.30.24:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem

After editing the kubelet configuration, move everything into the working directory and install the unit files so kubelet can be started:

[root@k8s-node1 ~]# mv kubernetes/ /opt
[root@k8s-node1 ~]# ls
cni-plugins-linux-amd64-v0.8.2.tgz  docker  docker-18.09.6.tgz  k8s-node.tar.gz  kubelet.service  kube-proxy.service
[root@k8s-node1 ~]# mv *service /usr/lib/systemd/system

From the master, copy the certificates into the node's working directory:
[root@k8s-master1 ~]# scp /root/TLS/k8s/{ca,kube-proxy-key,kube-proxy}.pem 192.168.30.26:/opt/kubernetes/ssl/

Because the token was replaced, kube-apiserver must be restarted:

[root@k8s-master1 ~]# systemctl restart kube-apiserver
[root@k8s-node1 ~]# systemctl start kubelet
[root@k8s-node1 ~]# systemctl enable kubelet

When kubelet starts, a client certificate is issued for it automatically:

[root@k8s-node1 ssl]# ls
ca.pem  kubelet-client-2020-03-30-00-22-59.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key  kube-proxy-key.pem  kube-proxy.pem

Check the logs for errors.
Whenever the token is replaced, kube-apiserver has to be restarted so that it validates the token presented by the node:

[root@k8s-node1 cfg]# tail /opt/kubernetes/logs/kubelet.INFO 
I0330 00:16:28.853703   63824 feature_gate.go:216] feature gates: &{map[]}
I0330 00:16:28.853767   63824 plugins.go:100] No cloud provider specified.
I0330 00:16:28.853777   63824 server.go:526] No cloud provider specified: "" from the config file: ""
I0330 00:16:28.853798   63824 bootstrap.go:119] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
I0330 00:16:28.855492   63824 bootstrap.go:150] No valid private key and/or certificate found, reusing existing private key or creating a new one
I0330 00:16:28.879242   63824 csr.go:69] csr for this node already exists, reusing
I0330 00:16:28.881728   63824 csr.go:77] csr for this node is still valid

Check for the pending CSR join request and approve it:

[root@k8s-master1 cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-ltMSc51cdCz2-pZlVbe1FX4MUsZ8pr84KKJG_ttajoI   2m20s   kubelet-bootstrap   Pending

[root@k8s-master1 cfg]# kubectl certificate approve node-csr-ltMSc51cdCz2-pZlVbe1FX4MUsZ8pr84KKJG_ttajoI
certificatesigningrequest.certificates.k8s.io/node-csr-ltMSc51cdCz2-pZlVbe1FX4MUsZ8pr84KKJG_ttajoI approved
[root@k8s-master1 cfg]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-ltMSc51cdCz2-pZlVbe1FX4MUsZ8pr84KKJG_ttajoI   7m57s   kubelet-bootstrap   Approved,Issued

The node has joined the cluster:

[root@k8s-master1 cfg]# kubectl get node
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>   40s   v1.16.0

Now deploy another node.
Copy the package onto node2.

Install Docker:
[root@k8s-node2 ~]# ls
k8s-node.zip
[root@k8s-node2 ~]# unzip k8s-node.zip 
[root@k8s-node2 ~]# cd k8s-node/
[root@k8s-node2 k8s-node]# mv *.service /usr/lib/systemd/system
[root@k8s-node2 k8s-node]# tar xf docker-18.09.6.tgz 
[root@k8s-node2 k8s-node]# mv docker/* /usr/bin/
[root@k8s-node2 k8s-node]# mkdir /etc/docker
[root@k8s-node2 k8s-node]# mv daemon.json /etc/docker/
[root@k8s-node2 k8s-node]# systemctl start docker.service
[root@k8s-node2 k8s-node]# systemctl enable docker.service

Kubernetes calls the Docker API through its socket, i.e. /var/run/docker.sock.
Update the apiserver address and token for kubelet and kube-proxy, and change the hostname override from k8s-node1 to k8s-node2:

[root@localhost ]# cp -r kubernetes/ /opt

[root@k8s-node2 opt]# vim kubernetes/cfg/bootstrap.kubeconfig 
[root@k8s-node2 opt]# vim kubernetes/cfg/kube-proxy.kubeconfig 
[root@k8s-node2 opt]# vim kubernetes/cfg/kubelet.conf 
[root@k8s-node2 opt]# vim kubernetes/cfg/kube-proxy-config.yml 

[root@k8s-node2 opt]# grep 192 kubernetes/cfg/*
kubernetes/cfg/bootstrap.kubeconfig:    server: https://192.168.30.24:6443
kubernetes/cfg/kube-proxy.kubeconfig:    server: https://192.168.30.24:6443

Copy the certificates to node2 as well:
[root@k8s-master1 ~]# scp /root/TLS/k8s/{ca,kube-proxy-key,kube-proxy}.pem 192.168.30.30:/opt/kubernetes/ssl/

Now kubelet and kube-proxy can be started:

[root@k8s-node2 opt]# systemctl restart kubelet
[root@k8s-node2 opt]# systemctl restart kube-proxy

The master receives the join request and approves it:

[root@k8s-master1 ~]# kubectl certificate approve node-csr-s4JhRFW5ncRhGL3jaO5btQLaYI89eUhJAy6P8FA6d18 
certificatesigningrequest.certificates.k8s.io/node-csr-s4JhRFW5ncRhGL3jaO5btQLaYI89eUhJAy6P8FA6d18 approved
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-ltMSc51cdCz2-pZlVbe1FX4MUsZ8pr84KKJG_ttajoI   31m   kubelet-bootstrap   Approved,Issued
node-csr-s4JhRFW5ncRhGL3jaO5btQLaYI89eUhJAy6P8FA6d18   23s   kubelet-bootstrap   Approved,Issued
[root@k8s-master1 ~]# kubectl get node
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>   24m   v1.16.0
k8s-node2   NotReady   <none>   53s   v1.16.0

In the error messages you can see that the CNI plugin is not ready, because kubelet reads the CNI network configuration from /etc/cni/net.d:

[root@k8s-node2 ~]# tail /opt/kubernetes/logs/kubelet.INFO 
E0330 00:49:09.558366   64374 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
W0330 00:49:13.935566   64374 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

7. Deploying the CNI network plugin (Calico)


Binary package download: https://github.com/containernetworking/plugins/releases
The plugins are included in the archive, so just unpack them:

[root@k8s-node1 ~]# tar xf cni-plugins-linux-amd64-v0.8.2.tgz 
[root@k8s-node1 ~]# mkdir /opt/cni/bin /etc/cni/net.d -p
[root@k8s-node1 ~]# tar xf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/

Copy them to node2 as well:

[root@k8s-node1 ~]# scp -r /opt/cni/ 192.168.30.30:/opt
[root@k8s-node2 ~]# mkdir /etc/cni/net.d -p

The CNI interface is now enabled on every node; it is the hook used to plug in a third-party network.
Make sure CNI is enabled in the kubelet configuration on every node:

[root@k8s-node2 ~]# more /opt/kubernetes/cfg/kubelet.conf 
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-node2 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=zhaocheng172/pause-amd64:3.0"

Deploy the Calico network
7.1 Calico deployment
git clone git@gitee.com:zhaocheng172/calico.git

(You need to send me your public key before you can pull this repository, otherwise you will not have permission.)
After downloading, the configuration still has to be modified:
Calico stores its policies, network configuration, and other attributes in etcd. Since etcd is already part of this Kubernetes cluster, we simply reuse it; because it uses HTTPS we also have to configure certificates, and then choose the pod network and the working mode.
The concrete steps are:

Configure the etcd endpoints; if HTTPS is used, also configure the certificates (ConfigMap, Secret).
Adjust the Pod CIDR (CALICO_IPV4POOL_CIDR) to match your actual network plan.
Choose the working mode (CALICO_IPV4POOL_IPIP); BGP and IPIP are supported.

Calico also keeps its configuration in a ConfigMap, while the Secret stores the etcd HTTPS certificates, split into three items:

etcd-key: null
etcd-cert: null
etcd-ca: null

Specify the etcd endpoints: etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
When the Secret is mounted into the container, specify here which files the items land in:

  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

Now switch the cluster network over to Calico.
1. etcd settings: three places to modify in total
(1) The etcd certificates
My certificates are under /opt/etcd/ssl, but they have to go into the Secret, which means they must be base64-encoded first; the encoding has to be produced without line breaks so that each value is one continuous string:
[root@k8s-master1 ~]# cat /opt/etcd/ssl/ca.pem |base64 -w 0

Paste each value in and remove the comment markers:

  # etcd-key: null     <- replace with the base64-encoded content of the matching file under ssl and uncomment
  # etcd-cert: null
  # etcd-ca: null
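To avoid encoding the three files one at a time, a small loop (an illustrative sketch, assuming the certificates live under /opt/etcd/ssl as above; ca.pem maps to etcd-ca, server.pem to etcd-cert, server-key.pem to etcd-key) prints each value ready to paste:
[root@k8s-master1 ~]# for f in ca server server-key; do echo "== $f.pem =="; base64 -w 0 /opt/etcd/ssl/$f.pem; echo; done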

(2) The paths where the Secret items are mounted inside the container; simply uncomment them:

  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

(3) The etcd connection string, which is the same one the Kubernetes apiserver uses.
You can find it with cat /opt/kubernetes/cfg/kube-apiserver.conf on the master; since every cluster is deployed differently, the location may vary.
etcd_endpoints: "https://192.168.30.24:2379,https://192.168.30.26:2379,https://192.168.30.30:2379"

With the certificates in the Calico configuration, move on to the next change.
2. Adjust the Pod CIDR to the actual network plan
The default value below must be changed to your own:

            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"

The value already configured in the controller-manager is 10.244.0.0/16:

[root@k8s-master1 ~]# cat /opt/kubernetes/cfg/kube-controller-manager.conf
--cluster-cidr=10.244.0.0/16 \
Change it in the Calico configuration to:
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

3. Choose the working mode
IPIP

            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Never"

This variable asks whether to enable IPIP. There are two modes, IPIP and BGP; in practice BGP is used most, so changing Always to Never switches IPIP off (pure BGP).

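With the three changes made, apply the manifest (assuming the file in the cloned calico directory is named calico.yaml):
[root@k8s-master1 calico]# kubectl apply -f calico.yaml

After a short wait the Calico pods come up and the nodes turn Ready: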
[root@k8s-master1 calico]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77c84fb6b6-th2bk   1/1     Running   0          29s
kube-system   calico-node-27g8b                          1/1     Running   0          3m48s
kube-system   calico-node-wnc5f                          1/1     Running   0          3m48s
[root@k8s-master1 calico]# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    <none>   75m   v1.16.0
k8s-node2   Ready    <none>   51m   v1.16.0

7.2 The Calico management tool
calicoctl is used to manage Calico configuration, for example to switch to IPIP mode.
There are two ways for it to read Calico's network state: the first goes through the locally running calico-node over its TCP listener, the second reads the subnet information directly from etcd.
Since kubelet is not deployed on the masters in this environment, install the tool on a node.
Download: https://github.com/projectcalico/calicoctl/releases

# wget -O /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v3.9.1/calicoctl
# chmod +x /usr/local/bin/calicoctl

Once the tool is installed you can check the BGP peer status of the current node:

[root@localhost ~]# calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.30.30 | node-to-node mesh | up    | 05:01:03 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

You can also check the BGP (bird) listener with netstat:

[root@localhost ~]# netstat -anpt |grep bird
tcp        0      0 0.0.0.0:179             0.0.0.0:*               LISTEN      74244/bird          
tcp        0      0 192.168.30.26:179       192.168.30.30:50692     ESTABLISHED 74244/bird

Viewing pod logs requires authorization by default; for security, kubelet refuses anonymous access, so the apiserver-to-kubelet RBAC rule must be applied first:

[root@k8s-master1 calico]# kubectl logs calico-node-jq86m   -n kube-system
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log calico-node-jq86m)
[root@k8s-master1 ~]# kubectl apply -f apiserver-to-kubelet-rbac.yaml

The other way is to read the data from etcd:

[root@k8s-master1 ~]# mkdir /etc/calico
# vim /etc/calico/calicoctl.cfg  
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "etcdv3"
  etcdEndpoints: "https://192.168.30.24:2379,https://192.168.30.26:2379,https://192.168.30.30:2379"
  etcdKeyFile: "/opt/etcd/ssl/server-key.pem"
  etcdCertFile: "/opt/etcd/ssl/server.pem"
  etcdCACertFile: "/opt/etcd/ssl/ca.pem"

Now calicoctl get node pulls its data from etcd:

[root@k8s-master ~]# calicoctl get node
NAME        
k8s-node1   
k8s-node2

Check the IPAM IP address pool:

[root@k8s-master ~]# calicoctl get ippool -o wide
NAME                  CIDR            NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR   
default-ipv4-ippool   10.244.0.0/16   true   Never      Never       false      all()

8. Deploying CoreDNS

Deploy CoreDNS:
[root@k8s-master1 calico]# kubectl apply -f coredns.yaml

Test DNS resolution and cross-host container communication:

[root@localhost ~]# more busybox.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

Test that names resolve correctly:

[root@localhost ~]# kubectl exec -it busybox sh
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.0.0.96 nginx.default.svc.cluster.local
/ # ping 192.168.30.24

Test direct container-to-container communication across hosts:

[root@localhost ~]# kubectl exec -it busybox sh
/ # ping 10.244.36.64
PING 10.244.36.64 (10.244.36.64): 56 data bytes
64 bytes from 10.244.36.64: seq=0 ttl=62 time=0.712 ms
64 bytes from 10.244.36.64: seq=1 ttl=62 time=0.582 ms
^C
--- 10.244.36.64 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.582/0.647/0.712 ms
/ # ping 10.244.36.67
PING 10.244.36.67 (10.244.36.67): 56 data bytes
64 bytes from 10.244.36.67: seq=0 ttl=62 time=0.385 ms
64 bytes from 10.244.36.67: seq=1 ttl=62 time=0.424 ms
^C
--- 10.244.36.67 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.385/0.404/0.424 ms
/ # ping 10.244.169.130
PING 10.244.169.130 (10.244.169.130): 56 data bytes
64 bytes from 10.244.169.130: seq=0 ttl=63 time=0.118 ms
64 bytes from 10.244.169.130: seq=1 ttl=63 time=0.097 ms
^C
--- 10.244.169.130 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.107/0.118 ms
/ # exit

Test that deploying pods works:

[root@k8s-master ~]#  kubectl create deployment nginx --image=nginx
[root@k8s-master ~]#  kubectl expose deployment nginx --port=80 --type=NodePort

Access test:

[root@k8s-master1 ~]# kubectl get pod,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod/busybox                  1/1     Running   0          32m   10.244.169.132   k8s-node2   <none>           <none>
pod/nginx-86c57db685-h4gwh   1/1     Running   0          65m   10.244.169.131   k8s-node2   <none>           <none>
pod/nginx-86c57db685-jzcnn   1/1     Running   0          65m   10.244.36.66     k8s-node1   <none>           <none>
pod/nginx-86c57db685-ms8g7   1/1     Running   0          74m   10.244.36.64     k8s-node1   <none>           <none>
pod/nginx-86c57db685-nzzgh   1/1     Running   0          63m   10.244.36.67     k8s-node1   <none>           <none>
pod/nginx-86c57db685-w89gq   1/1     Running   0          65m   10.244.169.130   k8s-node2   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        143m   <none>
service/nginx        NodePort    10.0.0.96    <none>        80:30562/TCP   90m    app=nginx

Access any node IP plus the NodePort.
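For example, from any machine that can reach the nodes (using node1's IP and the NodePort 30562 shown above purely as an illustration):
# curl -I http://192.168.30.26:30562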

9. Scaling out a node

Add a new machine and run the same system initialization on it.
Scaling out a node takes two steps: first deploy the node components, then join the container network.

Copy the package onto the new node (k8s-node3).

1. Deploy the node components and install Docker

[root@k8s-node2 ~]# ls
k8s-node.zip
[root@k8s-node3 ~]# unzip k8s-node.zip 
[root@k8s-node3 ~]# cd k8s-node/
[root@k8s-node3 k8s-node]# mv *.service /usr/lib/systemd/system
[root@k8s-node3 k8s-node]# tar xf docker-18.09.6.tgz 
[root@k8s-node3 k8s-node]# mv docker/* /usr/bin/
[root@k8s-node3 k8s-node]# mkdir /etc/docker
[root@k8s-node3 k8s-node]# mv daemon.json /etc/docker/
[root@k8s-node3 k8s-node]# systemctl start docker.service
[root@k8s-node3 k8s-node]# systemctl enable docker.service

Kubernetes calls the Docker API through its socket, i.e. /var/run/docker.sock.
Update the apiserver address and token for kubelet and kube-proxy, and change the hostname override to k8s-node3:

[root@k8s-node3 ]# cp -r kubernetes/ /opt

[root@k8s-node3 opt]# vim kubernetes/cfg/bootstrap.kubeconfig 
[root@k8s-node3 opt]# vim kubernetes/cfg/kube-proxy.kubeconfig 
[root@k8s-node3 opt]# vim kubernetes/cfg/kubelet.conf 
[root@k8s-node3 opt]# vim kubernetes/cfg/kube-proxy-config.yml 

[root@k8s-node3 opt]# grep 192 kubernetes/cfg/*
kubernetes/cfg/bootstrap.kubeconfig:    server: https://192.168.30.24:6443
kubernetes/cfg/kube-proxy.kubeconfig:    server: https://192.168.30.24:6443

Copy the certificates to node3 as well:

[root@k8s-master1 ~]#  scp /root/TLS/k8s/{ca,kube-proxy-key,kube-proxy}.pem 192.168.30.31:/opt/kubernetes/ssl/

Now kubelet and kube-proxy can be started:

[root@k8s-node3 opt]# systemctl restart kubelet
[root@k8s-node3 opt]# systemctl restart kube-proxy

The master receives the join request and approves it:

[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-yMrN2KoD8sEi2rssHCWxyFUdqmngvXodCtnKXrfoIMU   15s   kubelet-bootstrap   Pending
[root@k8s-master1 ~]# kubectl certificate approve node-csr-yMrN2KoD8sEi2rssHCWxyFUdqmngvXodCtnKXrfoIMU  
certificatesigningrequest.certificates.k8s.io/node-csr-yMrN2KoD8sEi2rssHCWxyFUdqmngvXodCtnKXrfoIMU approved
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-yMrN2KoD8sEi2rssHCWxyFUdqmngvXodCtnKXrfoIMU   41s   kubelet-bootstrap   Approved,Issued
[root@k8s-master1 ~]# kubectl get node
NAME        STATUS     ROLES    AGE    VERSION
k8s-node1   Ready      <none>   165m   v1.16.0
k8s-node2   Ready      <none>   153m   v1.16.0
k8s-node3   NotReady   <none>   5s     v1.16.0

2. Join the container network
Because we use a CNI plugin (Calico runs as a DaemonSet), the newly added node is brought into the network automatically:

[root@k8s-master1 ~]# kubectl get node
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    <none>   168m    v1.16.0
k8s-node2   Ready    <none>   156m    v1.16.0
k8s-node3   Ready    <none>   3m43s   v1.16.0
[root@k8s-master1 ~]# kubectl get pod -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
default       busybox                                    1/1     Running   1          66m    10.244.169.132   k8s-node2   <none>           <none>
default       nginx-86c57db685-ms8g7                     1/1     Running   0          109m   10.244.36.64     k8s-node1   <none>           <none>
kube-system   calico-kube-controllers-77c84fb6b6-nggl5   1/1     Running   0          121m   192.168.30.30    k8s-node2   <none>           <none>
kube-system   calico-node-4xx8g                          1/1     Running   0          121m   192.168.30.30    k8s-node2   <none>           <none>
kube-system   calico-node-9bw46                          1/1     Running   0          4m4s   192.168.30.31    k8s-node3   <none>           <none>
kube-system   calico-node-zfmtt                          1/1     Running   0          121m   192.168.30.26    k8s-node1   <none>           <none>
kube-system   coredns-59fb8d54d6-pq2bt                   1/1     Running   0          139m   10.244.169.128   k8s-node2   <none>
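Before testing connectivity you can also verify, on an existing node, that the new node has joined the BGP mesh (assuming calicoctl is installed there as in section 7.2):
[root@k8s-node1 ~]# calicoctl node status

The new peer 192.168.30.31 should show up as Established.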

Test container connectivity:

[root@k8s-master1 ~]# kubectl exec -it busybox sh
/ # ping 10.244.107.192
PING 10.244.107.192 (10.244.107.192): 56 data bytes
64 bytes from 10.244.107.192: seq=0 ttl=62 time=1.023 ms
64 bytes from 10.244.107.192: seq=1 ttl=62 time=0.454 ms
^C
--- 10.244.107.192 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.454/0.738/1.023 ms
/ # ping 10.244.36.68
PING 10.244.36.68 (10.244.36.68): 56 data bytes
64 bytes from 10.244.36.68: seq=0 ttl=62 time=0.387 ms
64 bytes from 10.244.36.68: seq=1 ttl=62 time=0.350 ms
^C
--- 10.244.36.68 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.350/0.368/0.387 ms
/ # ping 192.168.30.26
PING 192.168.30.26 (192.168.30.26): 56 data bytes
64 bytes from 192.168.30.26: seq=0 ttl=63 time=0.359 ms
64 bytes from 192.168.30.26: seq=1 ttl=63 time=0.339 ms
^C
--- 192.168.30.26 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.339/0.349/0.359 ms
/ # ping 192.168.30.30
PING 192.168.30.30 (192.168.30.30): 56 data bytes
64 bytes from 192.168.30.30: seq=0 ttl=64 time=0.075 ms
^C
--- 192.168.30.30 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.075/0.075/0.075 ms
/ # ping 192.168.30.31
PING 192.168.30.31 (192.168.30.31): 56 data bytes
64 bytes from 192.168.30.31: seq=0 ttl=63 time=0.377 ms
64 bytes from 192.168.30.31: seq=1 ttl=63 time=0.358 ms

10. Scaling in a node

The correct procedure for removing a node from a Kubernetes cluster:

1. Get the node list
   kubectl get node
2. Mark the node unschedulable
   kubectl cordon $node_name
3. Evict the pods on the node
   kubectl drain $node_name --ignore-daemonsets
4. Remove the node
   Once the node no longer holds any resources, it can be deleted:
   kubectl delete node $node_name
This removes a Kubernetes node gracefully. Walking through it:
[root@k8s-master1 ~]# kubectl get node
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    <none>   179m   v1.16.0
k8s-node2   Ready    <none>   167m   v1.16.0
k8s-node3   Ready    <none>   14m    v1.16.0
[root@k8s-master1 ~]# kubectl cordon k8s-node3
node/k8s-node3 cordoned
[root@k8s-master1 ~]# kubectl get node
NAME        STATUS                     ROLES    AGE    VERSION
k8s-node1   Ready                      <none>   3h     v1.16.0
k8s-node2   Ready                      <none>   167m   v1.16.0
k8s-node3   Ready,SchedulingDisabled   <none>   14m    v1.16.0
[root@k8s-master1 ~]# kubectl drain k8s-node3
node/k8s-node3 already cordoned
error: unable to drain node "k8s-node3", aborting command...

There are pending nodes to be drained:
 k8s-node3
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-9bw46
[root@k8s-master1 ~]# kubectl drain k8s-node3 --ignore-daemonsets
node/k8s-node3 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-9bw46
evicting pod "nginx-86c57db685-gjswt"
evicting pod "nginx-86c57db685-8cks8"
pod/nginx-86c57db685-gjswt evicted
pod/nginx-86c57db685-8cks8 evicted
node/k8s-node3 evicted
[root@k8s-master1 ~]# kubectl get node
NAME        STATUS                     ROLES    AGE    VERSION
k8s-node1   Ready                      <none>   3h1m   v1.16.0
k8s-node2   Ready                      <none>   169m   v1.16.0
k8s-node3   Ready,SchedulingDisabled   <none>   16m    v1.16.0
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
busybox                  1/1     Running   1          79m     10.244.169.132   k8s-node2   <none>           <none>
nginx-86c57db685-b6xjn   1/1     Running   0          9m23s   10.244.36.68     k8s-node1   <none>           <none>
nginx-86c57db685-mrffs   1/1     Running   0          39s     10.244.36.69     k8s-node1   <none>           <none>
nginx-86c57db685-ms8g7   1/1     Running   0          122m    10.244.36.64     k8s-node1   <none>           <none>
nginx-86c57db685-qfl2f   1/1     Running   0          39s     10.244.169.134   k8s-node2   <none>           <none>
nginx-86c57db685-xwxzv   1/1     Running   0          9m23s   10.244.169.133   k8s-node2   <none>           <none>
[root@k8s-master1 ~]# kubectl delete node k8s-node3
node "k8s-node3" deleted
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
busybox                  1/1     Running   1          80m     10.244.169.132   k8s-node2   <none>           <none>
nginx-86c57db685-b6xjn   1/1     Running   0          9m35s   10.244.36.68     k8s-node1   <none>           <none>
nginx-86c57db685-mrffs   1/1     Running   0          51s     10.244.36.69     k8s-node1   <none>           <none>
nginx-86c57db685-ms8g7   1/1     Running   0          122m    10.244.36.64     k8s-node1   <none>           <none>
nginx-86c57db685-qfl2f   1/1     Running   0          51s     10.244.169.134   k8s-node2   <none>           <none>
nginx-86c57db685-xwxzv   1/1     Running   0          9m35s   10.244.169.133   k8s-node2   <none>           <none>
[root@k8s-master1 ~]# kubectl get node
NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    <none>   3h2m   v1.16.0
k8s-node2   Ready    <none>   170m   v1.16.0

11. Deploying the high-availability setup

In Kubernetes, high availability targets the master nodes. With a single master, anything that schedules workloads, pulls images, or drives the controllers becomes a single point of failure, so we deploy an architecture with no single point: multiple masters. With multiple masters, the nodes still need one API address to connect to. A single nginx could provide that load balancing, but if that nginx died the cluster could no longer be served, so we need an active/standby pair, adding a backup machine for nginx. Here we use layer-4 load balancing: nginx supports both layer 4 and layer 7; layer 7 mainly proxies HTTP, while layer 4 proxies TCP and UDP. Layer 4 does not inspect the application protocol and only forwards, so it performs better; layer 7 parses the application protocol, which costs some performance but enables things like host-based routing and traffic analysis.

For high availability we use keepalived, which provides health checking and failover. With two machines in active/standby, users normally reach machine A; when A fails they transparently reach B, so service continues. From the user's point of view, access is via a domain name or an IP, and a domain resolves to a single IP. keepalived runs on both machines and they probe each other; when A fails, B takes over that IP. This is the virtual IP (VIP): it is not permanently bound to either machine, it is managed by keepalived and normally lives on the MASTER node, while the other machine acts as BACKUP. Users always access the VIP; when the active machine fails, the backup takes over the VIP and its nginx handles the requests, so the failure of either machine has no impact.

With that in place, every node connects to the VIP instead of directly to a single master, and the high-availability architecture is complete.
Copy the Kubernetes files from master1 to the new master2:
[root@k8s-master1 ~]# scp -r /opt/kubernetes/ 192.168.30.25:/opt
On master2 create the etcd directory, then copy over the etcd certificates and the systemd unit files:

mkdir /opt/etcd/ -pv
[root@k8s-master1 ~]# scp -r /opt/etcd/ssl/ 192.168.30.25:/opt/etcd
[root@k8s-master1 ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.30.25:/usr/lib/systemd/system

Edit the apiserver listen addresses on master2, changing them to 192.168.30.25.
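A targeted sed (shown as an example; it only touches the bind and advertise addresses, leaving the etcd endpoints untouched) can make the change:
[root@k8s-master2 ~]# sed -i 's#--bind-address=192.168.30.24#--bind-address=192.168.30.25#;s#--advertise-address=192.168.30.24#--advertise-address=192.168.30.25#' /opt/kubernetes/cfg/kube-apiserver.conf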

Start every component:
[root@k8s-master2 opt]# for i in $(ls /opt/kubernetes/bin/); do systemctl start $i; systemctl enable $i;done

Make sure every component is running:
ps -ef | grep kube

Copy kubectl to master2 as well:

[root@k8s-master1 ~]# scp /usr/local/bin/kubectl 192.168.30.25:/usr/local/bin/
root@192.168.30.25's password: 
kubectl

master2 can now query the cluster state:

[root@k8s-master2 ~]# kubectl get node
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    <none>   5h52m   v1.16.0
k8s-node2   Ready    <none>   5h39m   v1.16.0

Install the nginx load balancer on both SLB nodes, with identical configuration on each.
nginx RPM packages: http://nginx.org/packages/rhel/7/x86_64/RPMS/
[root@slb1 ~]# rpm -vih http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm

[root@slb2 ~]# rpm -vih http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm

If there were three master nodes, you would simply add the third master to the upstream block and let nginx balance across all of them.

[root@slb1 ~]# cat /etc/nginx/nginx.conf 

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
                server 192.168.30.24:6443;
                server 192.168.30.25:6443;
            }

    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

Start nginx and enable it at boot:

[root@slb1 ~]# systemctl start nginx
[root@slb1 ~]# systemctl enable nginx
[root@slb2 ~]# systemctl start nginx
[root@slb2 ~]# systemctl enable nginx

Now set up keepalived for active/standby.
(On Alibaba Cloud servers you would simply put an SLB in front of nginx as the entry point instead.)

[root@slb1 ~]# yum -y install keepalived
[root@slb2 ~]# yum -y install keepalived

The configuration files were prepared in advance and are uploaded here:

[root@slb1 ~]# rz -E
rz waiting to receive.
[root@slb1 ~]# unzip HA.zip          
[root@slb1 ~]# cd HA/
[root@slb1 HA]# ls
check_nginx.sh  keepalived-backup.conf  keepalived-master.conf  nginx.conf
[root@slb1 HA]# mv keepalived-master.conf keepalived.conf
[root@slb1 HA]# mv keepalived.conf /etc/keepalived/
Edit the VIP address and set the network interface name to match your own NIC; the master side is given priority 100 and the backup 90.
[root@slb1 HA]# vim /etc/keepalived/keepalived.conf

This block of the configuration declares the nginx health check used to decide whether nginx is working: while it works no failover happens, and when it fails the backup machine takes over the VIP and keeps serving it.

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
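For reference, a minimal master-side keepalived.conf along these lines (a sketch only, assuming the interface is ens33 as seen in the ip a output below and the VIP is 192.168.30.20; the file shipped in HA.zip may differ in details):

global_defs {
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER            # the backup uses BACKUP
    interface ens33         # NIC name, adjust to your own
    virtual_router_id 51
    priority 100            # the backup uses 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.30.20/24    # the VIP
    }
    track_script {
        check_nginx
    }
}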

Put the check script into the target directory.
The script's exit code is what keepalived evaluates: exit code 1 means the check failed (nginx is down) and triggers the failover, while 0 means everything is normal; in other words, any non-zero status is treated as nginx being down.

[root@slb1 HA]# ls
check_nginx.sh  keepalived-backup.conf  nginx.conf
[root@slb1 HA]# mv check_nginx.sh /etc/keepalived/
[root@slb1 HA]# more /etc/keepalived/check_nginx.sh 
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
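You can run the script by hand to confirm it returns the expected exit code (0 while nginx is running, 1 after it is stopped):
[root@slb1 HA]# bash /etc/keepalived/check_nginx.sh; echo $?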

Now on the backup server, put the prepared files under /etc/keepalived and set the VIP to 192.168.30.20:

[root@slb1 HA]# ls
keepalived-backup.conf  nginx.conf
[root@slb1 HA]# scp keepalived-backup.conf 192.168.30.33:/etc/keepalived/
[root@slb1 HA]# scp /etc/keepalived/check_nginx.sh 192.168.30.33:/etc/keepalived/
[root@slb2 keepalived]# vim keepalived.conf

With both configurations in place, make the check script executable and start keepalived:

[root@slb1 keepalived]# chmod +x check_nginx.sh 
[root@slb2 keepalived]# chmod +x check_nginx.sh 
[root@slb1 keepalived]# systemctl start keepalived.service 
[root@slb1 keepalived]# systemctl enable keepalived.service 
[root@slb1 keepalived]# ps -ef |grep keepalived
root      60856      1  0 18:41 ?        00:00:00 /usr/sbin/keepalived -D
root      60857  60856  0 18:41 ?        00:00:00 /usr/sbin/keepalived -D
root      60858  60856  0 18:41 ?        00:00:00 /usr/sbin/keepalived -D
root      61792  12407  0 18:43 pts/1    00:00:00 grep --color=auto keepalived
[root@slb2 keepalived]# ps -ef |grep keepalived
root      60816      1  0 18:43 ?        00:00:00 /usr/sbin/keepalived -D
root      60817  60816  0 18:43 ?        00:00:00 /usr/sbin/keepalived -D
root      60820  60816  0 18:43 ?        00:00:00 /usr/sbin/keepalived -D
root      60892  12595  0 18:43 pts/1    00:00:00 grep --color=auto keepalived

On the MASTER side (slb1) you can see the VIP:

[root@slb1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b9:6f:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.32/24 brd 192.168.30.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.30.20/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::921:4cfb:400e:c875/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:bf:3f:61 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:bf:3f:61 brd ff:ff:ff:ff:ff:ff

Verify that the VIP can fail over.
Stop nginx on slb1:
[root@slb1 ~]# systemctl stop nginx

The VIP has moved to slb2; access to nginx through the VIP still works, with roughly a 2-second interruption during the switchover.

[root@slb2 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e9:ce:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.33/24 brd 192.168.30.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.30.20/24 scope global secondary ens33

With the VIP in place, the only remaining step is to point the nodes at the VIP instead of the single master:

[root@k8s-node1 ~]# cd /opt/kubernetes/cfg/
[root@k8s-node1 cfg]# sed -i 's#192.168.30.24#192.168.30.20#g' *
[root@k8s-node1 cfg]# grep 192 *
bootstrap.kubeconfig:    server: https://192.168.30.20:6443
kubelet.kubeconfig:    server: https://192.168.30.20:6443
kube-proxy.kubeconfig:    server: https://192.168.30.20:6443

[root@k8s-node2 cfg]# sed -i 's#192.168.30.24#192.168.30.20#g' *
[root@k8s-node2 cfg]# grep 192 *
bootstrap.kubeconfig:    server: https://192.168.30.20:6443
kubelet.kubeconfig:    server: https://192.168.30.20:6443
kube-proxy.kubeconfig:    server: https://192.168.30.20:6443

Tail the SLB access log in real time; once kubelet is restarted, the log lines will appear:

[root@slb1 ~]# tail /var/log/nginx/k8s-access.log  -f

192.168.30.26 192.168.30.24:6443 - [30/Mar/2020:19:10:51 +0800] 200 1160
192.168.30.26 192.168.30.25:6443 - [30/Mar/2020:19:10:51 +0800] 200 1159
192.168.30.30 192.168.30.25:6443 - [30/Mar/2020:19:11:09 +0800] 200 1160
192.168.30.30 192.168.30.24:6443 - [30/Mar/2020:19:11:09 +0800] 200 1160

Restart kubelet and kube-proxy; log entries show up as expected:

[root@k8s-node1 ~]# systemctl restart kubelet
[root@k8s-node1 ~]# systemctl restart kube-proxy
[root@k8s-node2 cfg]# systemctl restart kubelet
[root@k8s-node2 cfg]# systemctl restart kube-proxy

The cluster keeps running normally:

[root@k8s-master1 ~]# kubectl get node
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    <none>   7h      v1.16.0
k8s-node2   Ready    <none>   6h48m   v1.16.0
[root@k8s-master1 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       busybox                                    1/1     Running   5          5h18m
default       nginx-86c57db685-b6xjn                     1/1     Running   0          4h7m
default       nginx-86c57db685-mrffs                     1/1     Running   0          3h59m
default       nginx-86c57db685-ms8g7                     1/1     Running   0          6h
default       nginx-86c57db685-qfl2f                     1/1     Running   0          3h59m
default       nginx-86c57db685-xwxzv                     1/1     Running   0          4h7m
kube-system   calico-kube-controllers-77c84fb6b6-nggl5   1/1     Running   0          6h12m
kube-system   calico-node-4xx8g                          1/1     Running   0          6h12m
kube-system   calico-node-zfmtt                          1/1     Running   0          6h12m
kube-system   coredns-59fb8d54d6-pq2bt                   1/1     Running   0          6h30m

Final verification: access the VIP, which indirectly reaches the Kubernetes API; the token here is the bootstrap token:

[root@k8s-node1 ~]# curl -k --header "Authorization: Bearer 79ed30201a4d72d11ce020c2efbd721e" https://192.168.30.20:6443/version
{
  "major": "1",
  "minor": "16",
  "gitVersion": "v1.16.0",
  "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
  "gitTreeState": "clean",
  "buildDate": "2019-09-18T14:27:17Z",
  "goVersion": "go1.12.9",
  "compiler": "gc",
  "platform": "linux/amd64"