K8s Getting Started Series: Binary Cluster Deployment --> Master (Part 2)

Component Versions and Configuration Strategy

Component versions

  • Kubernetes 1.16.2
  • Docker 19.03-ce
  • Etcd 3.3.17 https://github.com/etcd-io/etcd/releases/
  • Flanneld 0.11.0 https://github.com/coreos/flannel/releases/

  • Add-ons:
    Coredns
    Dashboard
    Heapster (influxdb, grafana)
    Metrics-Server
    EFK (elasticsearch, fluentd, kibana)

  • Image registries:
    docker registry
    harbor

Main configuration strategy

  • kube-apiserver:
    3-node high availability using keepalived and haproxy;
    the insecure port 8080 and anonymous access are disabled;
    https requests are served on the secure port 6443;
    strict authentication and authorization policies (x509, token, RBAC);
    bootstrap token authentication is enabled to support kubelet TLS bootstrapping;
    kubelet and etcd are accessed over https, so traffic is encrypted;

  • kube-controller-manager:
    3-node high availability;
    the insecure port is disabled; https requests are served on the secure port 10252;
    the apiserver's secure port is accessed via a kubeconfig file;
    kubelet certificate signing requests (CSRs) are approved automatically, and certificates are rotated automatically before expiry;
    each controller uses its own ServiceAccount to access the apiserver;

  • kube-scheduler:
    3-node high availability;
    the apiserver's secure port is accessed via a kubeconfig file;

  • kubelet:
    bootstrap tokens are created dynamically with kubeadm (they can also be configured statically in the apiserver);
    client and server certificates are generated automatically via the TLS bootstrap mechanism and rotated automatically before expiry;
    the main parameters are configured in a JSON file of type KubeletConfiguration;
    the read-only port is disabled; https requests are served on the secure port 10250 with authentication and authorization, rejecting anonymous and unauthorized access;
    the apiserver's secure port is accessed via a kubeconfig file;

  • kube-proxy:
    the apiserver's secure port is accessed via a kubeconfig file;
    the main parameters are configured in a JSON file of type KubeProxyConfiguration;
    the ipvs proxy mode is used;

  • Cluster add-ons:
    DNS: coredns, which offers better functionality and performance;
    Dashboard: with login authentication;
    Metrics: heapster and metrics-server, accessing the kubelet secure port over https;
    Logging: Elasticsearch, Fluentd, Kibana;
    Registry: docker-registry, harbor;

1. System Initialization

1.01 System environment && basic configuration

[root@localhost ~]# uname -a
Linux localhost.localdomain 4.18.0-80.11.2.el8_0.x86_64 #1 SMP Tue Sep 24 11:32:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/redhat-release 
CentOS Linux release 8.0.1905 (Core)

1.02 Set the hostname on each node and add all of them to /etc/hosts

hostnamectl set-hostname k8s-master01
...
# Write to hosts --> note that >> appends without overwriting the existing content!
cat>> /etc/hosts <<EOF

192.168.2.201 k8s-master01
192.168.2.202 k8s-master02
192.168.2.203 k8s-master03
192.168.2.11 k8s-node01
192.168.2.12 k8s-node02
EOF

1.03 Install dependency packages and common tools

yum install wget vim yum-utils net-tools tar chrony curl jq ipvsadm ipset conntrack iptables sysstat libseccomp -y

1.04 Disable firewalld, dnsmasq, selinux and swap on all nodes

# Stop the firewall and flush all firewall rules
systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT

# Stop dnsmasq, otherwise Docker containers may fail to resolve domain names! (not present on CentOS 8!)
systemctl disable --now dnsmasq 

# Disable selinux ---> SELINUX=disabled requires a reboot to take full effect!
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Disable swap ---> the swap line in /etc/fstab is commented out; a reboot is needed for it to take effect!
swapoff -a && sed -i '/ swap / s/^\(.*\)$/# \1/g' /etc/fstab

1.05 Configure time synchronization on all nodes

timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0

yum install chrony -y
systemctl enable chronyd && systemctl start chronyd && systemctl status chronyd

1.06 Tune the kernel parameters required by Kubernetes!

cat> kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv6.conf.all.disable_ipv6=1
EOF
cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
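
The bridge-nf-call-* keys above only exist once the br_netfilter kernel module is loaded, and most Kubernetes nodes also enable IP forwarding. A minimal sketch of those extra, commonly used settings (they are not part of the original file above, so treat them as an assumption and adapt them to your environment):

# Load br_netfilter so the bridge-nf-call-* keys exist (assumed extra step)
modprobe br_netfilter
# Enable IP forwarding (assumed extra setting)
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf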

1.07 Create the k8s working directories and set the environment variables on all nodes!

# Create the directories on every machine:
mkdir -p /opt/k8s/{bin,cert,script,kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy}
mkdir -p /opt/etcd/{bin,cert}
mkdir -p /opt/lib/etcd
mkdir -p /opt/flanneld/{bin,cert}
mkdir -p /var/log/kubernetes

# Add the environment variables on every machine:
sh -c "echo 'PATH=/opt/k8s/bin:/opt/etcd/bin:/opt/flanneld/bin:$PATH:$HOME/bin:$JAVA_HOME/bin' >> /etc/profile.d/k8s.sh"

source /etc/profile.d/k8s.sh

1.08 Passwordless SSH login to the other nodes (for deployment convenience!!!)

Generate a key pair

[root@k8s-master01 ~]# ssh-keygen

Copy the public key to the other servers

[root@k8s-master01 ~]# ssh-copy-id root@k8s-master01
[root@k8s-master01 ~]# ssh-copy-id root@k8s-master02
[root@k8s-master01 ~]# ssh-copy-id root@k8s-master03

2. Create the CA Root Certificate and Key

  • For security, the Kubernetes components use x509 certificates to encrypt and authenticate their communication.
  • The CA (Certificate Authority) is a self-signed root certificate used to sign every other certificate created later.

2.01 Install the cfssl toolset

[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master01 ~]# mv cfssl_linux-amd64 /opt/k8s/bin/cfssl

[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master01 ~]# mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson

[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master01 ~]# mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo

chmod +x /opt/k8s/bin/*

2.02 Create the root CA certificate

  • The CA certificate is shared by every node in the cluster; only one is needed, and all certificates created afterwards are signed by it.

2.02.01 Create the configuration file

  • The CA configuration file defines the usage profiles of the root certificate and their parameters (usages, expiry, server auth, client auth, encryption, etc.); a specific profile is selected later when signing other certificates.
[root@k8s-master01 ~]# cd /opt/k8s/cert/
[root@k8s-master01 cert]# cat> ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "876000h"
        },
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ],
                "expiry": "876000h"
            }
        }
    }
}
EOF
  • signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
  • server auth: a client can use this CA to verify the certificate presented by a server;
  • client auth: a server can use this CA to verify the certificate presented by a client;

2.02.02 Create the certificate signing request file

[root@k8s-master01 cert]# cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "steams"
        }
    ]
}
EOF
  • CN: Common Name. kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use it to check whether a website is legitimate;
  • O: Organization. kube-apiserver extracts this field from the certificate as the Group the requesting user belongs to;
  • kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;

2.02.03 Generate the CA certificate and key

[root@k8s-master01 cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Verify the files were generated!
[root@k8s-master01 cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
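
If you want to inspect the generated CA before using it, the cfssl-certinfo binary installed in 2.01 can dump the certificate fields (an optional quick check; it prints the subject, validity period and usages as JSON):

[root@k8s-master01 cert]# /opt/k8s/bin/cfssl-certinfo -cert ca.pem
# You should see CN=kubernetes and O=k8s in the subject fields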

2.02.04 Distribute the certificate files

  • A simple script; mind the arguments! It can be reused later if you want to write a combined deployment script.
  • Copy the generated CA certificate, key and configuration file to the /opt/k8s/cert directory on all nodes:
[root@k8s-master01 cert]# vi /opt/k8s/script/scp_k8s_cacert.sh 

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /opt/k8s/cert/ca*.pem /opt/k8s/cert/ca-config.json root@${master_ip}:/opt/k8s/cert
done
[root@k8s-master01 cert]# bash /opt/k8s/script/scp_k8s_cacert.sh 192.168.2.201 192.168.2.202 192.168.2.203

3. Deploy the etcd Cluster

  • etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration and concurrency control (leader election, distributed locks, etc.)
  • Kubernetes stores all of its runtime data in etcd, so we deploy a highly available three-node cluster!

3.01 Download the binaries

[root@k8s-master01 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.17/etcd-v3.3.17-linux-amd64.tar.gz
[root@k8s-master01 ~]# tar -xvf etcd-v3.3.17-linux-amd64.tar.gz

3.02 Create the etcd certificate and key

  • The etcd cluster has to talk to the k8s apiserver, so it needs its own signed certificate!

3.02.01 Create the certificate signing request

[root@k8s-master01 cert]# cat > /opt/etcd/cert/etcd-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.2.201",
        "192.168.2.202",
        "192.168.2.203"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "steams"
        }
    ]
}
EOF
  • The hosts field lists the IPs or domain names of the etcd nodes authorized to use this certificate; the IPs of all three etcd nodes are listed here

3.02.02 Generate the certificate and private key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem -ca-key=/opt/k8s/cert/ca-key.pem -config=/opt/k8s/cert/ca-config.json -profile=kubernetes /opt/etcd/cert/etcd-csr.json | cfssljson -bare /opt/etcd/cert/etcd

# Verify the files were generated!
[root@k8s-master01 ~]# ls /opt/etcd/cert/*
etcd.csr       etcd-csr.json  etcd-key.pem   etcd.pem

3.02.03 Distribute the generated certificate, private key and etcd binaries to each etcd node

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_etcd.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /root/etcd-v3.3.17-linux-amd64/etcd* root@${master_ip}:/opt/etcd/bin
        ssh root@${master_ip} "chmod +x /opt/etcd/bin/*"
        scp /opt/etcd/cert/etcd*.pem root@${master_ip}:/opt/etcd/cert/
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_etcd.sh 192.168.2.201 192.168.2.202 192.168.2.203

3.03 Create the etcd systemd unit template and etcd configuration

Create the etcd systemd unit template

[root@k8s-master01 ~]# vi /opt/etcd/etcd.service.template

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
User=root
Type=notify
WorkingDirectory=/opt/lib/etcd/
ExecStart=/opt/etcd/bin/etcd \
    --data-dir=/opt/lib/etcd \
    --name ##ETCD_NAME## \
    --cert-file=/opt/etcd/cert/etcd.pem \
    --key-file=/opt/etcd/cert/etcd-key.pem \
    --trusted-ca-file=/opt/k8s/cert/ca.pem \
    --peer-cert-file=/opt/etcd/cert/etcd.pem \
    --peer-key-file=/opt/etcd/cert/etcd-key.pem \
    --peer-trusted-ca-file=/opt/k8s/cert/ca.pem \
    --peer-client-cert-auth \
    --client-cert-auth \
    --listen-peer-urls=https://##MASTER_IP##:2380 \
    --initial-advertise-peer-urls=https://##MASTER_IP##:2380 \
    --listen-client-urls=https://##MASTER_IP##:2379,http://127.0.0.1:2379 \
    --advertise-client-urls=https://##MASTER_IP##:2379 \
    --initial-cluster-token=etcd-cluster-0    \
    --initial-cluster=etcd0=https://192.168.2.201:2380,etcd1=https://192.168.2.202:2380,etcd2=https://192.168.2.203:2380 \
    --initial-cluster-state=new    
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • This example uses a script to substitute the variables ##ETCD_NAME## and ##MASTER_IP##
  • WorkingDirectory, --data-dir: the working directory and data directory are /opt/lib/etcd; the directory must be created before starting the service;
  • --name: the name of each node; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
  • --cert-file, --key-file: the certificate and private key the etcd server uses when talking to clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
  • --peer-cert-file, --peer-key-file: the certificate and private key etcd uses for peer communication;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;

3.04 Create and distribute an etcd systemd unit file for each node

[root@k8s-master01 ~]# vi /opt/k8s/script/etcd_service.sh

ETCD_NAMES=("etcd0" "etcd1" "etcd2")
MASTER_IPS=("$1" "$2" "$3")
# Substitute the variables in the template to create a systemd unit file for each node
for (( i=0; i < 3; i++ ));do
        sed -e "s/##ETCD_NAME##/${ETCD_NAMES[i]}/g" -e "s/##MASTER_IP##/${MASTER_IPS[i]}/g" /opt/etcd/etcd.service.template > /opt/etcd/etcd-${MASTER_IPS[i]}.service
done
# Distribute the generated systemd unit files:
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /opt/etcd/etcd-${master_ip}.service root@${master_ip}:/etc/systemd/system/etcd.service
done
[root@k8s-master01 ~]# bash /opt/k8s/script/etcd_service.sh 192.168.2.201 192.168.2.202 192.168.2.203

3.05 Start the etcd service

[root@k8s-master01 ~]# vi /opt/k8s/script/etcd.sh

MASTER_IPS=("$1" "$2" "$3")
# Start the etcd service
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        ssh root@${master_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd"
done
# Check the result; make sure the status is active (running)
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        ssh root@${master_ip} "systemctl status etcd|grep Active"
done
# Verify the service status; the cluster is healthy when every endpoint reports healthy
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
            --endpoints=https://${master_ip}:2379 \
            --cacert=/opt/k8s/cert/ca.pem \
            --cert=/opt/etcd/cert/etcd.pem \
            --key=/opt/etcd/cert/etcd-key.pem endpoint health
done
[root@k8s-master01 ~]# bash /opt/k8s/script/etcd.sh 192.168.2.201 192.168.2.202 192.168.2.203
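
Once all endpoints report healthy, it can also be useful to confirm cluster membership and which member is currently the leader. An optional check using the same certificates:

[root@k8s-master01 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem member list

# 'endpoint status' additionally shows the raft term and which member is the leader:
[root@k8s-master01 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem endpoint status --write-out=table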

4. Deploy the Flannel Network

  • Kubernetes requires every node in the cluster (masters included) to be reachable over the Pod network. flannel uses vxlan to build an interconnected Pod network across the nodes; it uses UDP port 8472, which must be opened (e.g. on public clouds such as AWS).
  • On first start, flannel reads the Pod network configuration from etcd, allocates an unused /24 subnet for the local node and then creates the flannel.1 interface (the name may differ).
  • flannel writes the allocated Pod subnet information to /run/flannel/docker; docker later uses the environment variables in that file to configure the docker0 bridge.

4.01 Download the flannel binaries

[root@k8s-master01 ~]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s-master01 ~]# mkdir flanneld
[root@k8s-master01 ~]# tar -xvf flannel-v0.11.0-linux-amd64.tar.gz -C flanneld

4.02 Create the flannel certificate and key

  • flannel reads and writes subnet allocation data in the etcd cluster, and etcd has mutual x509 authentication enabled, so flanneld needs its own certificate and private key.

4.02.01 Create the certificate signing request

[root@k8s-master01 ~]# cat > /opt/flanneld/cert/flanneld-csr.json <<EOF
{
    "CN": "flanneld",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "steams"
        }
    ]
}
EOF
  • This certificate is only used by flanneld as a client certificate, so the hosts field is empty;

4.02.02 Generate the certificate and key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem -ca-key=/opt/k8s/cert/ca-key.pem -config=/opt/k8s/cert/ca-config.json -profile=kubernetes /opt/flanneld/cert/flanneld-csr.json | cfssljson -bare /opt/flanneld/cert/flanneld

[root@k8s-master01 ~]# ll /opt/flanneld/cert/flanneld*
flanneld.csr       flanneld-csr.json  flanneld-key.pem   flanneld.pem

4.02.03 Distribute the flanneld binaries and the generated certificate and private key to all nodes (node machines included!)

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_flanneld.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /root/flanneld/flanneld /root/flanneld/mk-docker-opts.sh root@${master_ip}:/opt/flanneld/bin/
    ssh root@${master_ip} "chmod +x /opt/flanneld/bin/*"
    scp /opt/flanneld/cert/flanneld*.pem root@${master_ip}:/opt/flanneld/cert
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_flanneld.sh 192.168.2.201 192.168.2.202 192.168.2.203

4.03 Write the cluster Pod network configuration into etcd

  • The current flanneld version (v0.11.0) does not support the etcd v3 API, so the configuration key and subnet data must be written via the etcd v2 API;
  • Pay close attention to etcd version compatibility!!!

Write the cluster Pod network configuration into etcd (this only needs to be done on one node!)

[root@k8s-master01 ~]# ETCDCTL_API=2 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flanneld/cert/flanneld.pem \
--key-file=/opt/flanneld/cert/flanneld-key.pem \
set /atomic.io/network/config '{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

# It returns the following (the Pod network "Network" written here must be a /16 range and must match kube-controller-manager's --cluster-cidr parameter)
{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}

4.04 Create the flanneld systemd unit file

[root@k8s-master01 ~]# vi /opt/flanneld/flanneld.service.template

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/flanneld/bin/flanneld \
-etcd-cafile=/opt/k8s/cert/ca.pem \
-etcd-certfile=/opt/flanneld/cert/flanneld.pem \
-etcd-keyfile=/opt/flanneld/cert/flanneld-key.pem \
-etcd-endpoints=https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379 \
-etcd-prefix=/atomic.io/network \
-iface=eth0
ExecStartPost=/opt/flanneld/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
  • The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into /run/flannel/docker; when docker starts later it uses the environment variables in this file to configure the docker0 bridge (see the drop-in sketch after this list);
  • flanneld communicates with other nodes over the interface carrying the system default route; on nodes with several interfaces (e.g. internal and public), use the -iface parameter to pick the interface, as with the eth0 interface above;
  • flanneld must run as root;
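
For reference, one common way to make docker actually consume /run/flannel/docker is a systemd drop-in that loads it as an EnvironmentFile and passes $DOCKER_NETWORK_OPTIONS to dockerd. The drop-in path and ExecStart below are an assumed example, not part of this deployment; adjust them to match your own docker unit:

# /etc/systemd/system/docker.service.d/flannel.conf   (assumed example path)
[Service]
# Load the variables written by mk-docker-opts.sh (DOCKER_NETWORK_OPTIONS carries --bip/--mtu etc.)
EnvironmentFile=-/run/flannel/docker
# Reset the packaged ExecStart, then start dockerd with the flannel-provided options
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

# Apply it:
systemctl daemon-reload && systemctl restart docker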

4.05 Distribute the flanneld systemd unit file to all nodes, then start and check the flanneld service

[root@k8s-master01 ~]# vi /opt/k8s/script/flanneld_service.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
 # Distribute the flanneld systemd unit file to all nodes
    scp /opt/flanneld/flanneld.service.template root@${master_ip}:/etc/systemd/system/flanneld.service
 # Start the flanneld service
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
 # Check the result
    ssh root@${master_ip} "systemctl status flanneld|grep Active"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/flanneld_service.sh 192.168.2.201 192.168.2.202 192.168.2.203

4.06 Check the Pod subnets allocated to each flanneld

# View the cluster Pod network (/16)
[root@k8s-master01 ~]# ETCDCTL_API=2 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flanneld/cert/flanneld.pem \
--key-file=/opt/flanneld/cert/flanneld-key.pem \
get /atomic.io/network/config
# Output:
{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}

# View the list of allocated Pod subnets (/24)
[root@k8s-master01 ~]# ETCDCTL_API=2 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flanneld/cert/flanneld.pem \
--key-file=/opt/flanneld/cert/flanneld-key.pem \
ls /atomic.io/network/subnets
# Output:
/atomic.io/network/subnets/10.30.34.0-24
/atomic.io/network/subnets/10.30.41.0-24
/atomic.io/network/subnets/10.30.7.0-24

# View the node IP and flannel interface address for a given Pod subnet
[root@k8s-master01 ~]# ETCDCTL_API=2 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flanneld/cert/flanneld.pem \
--key-file=/opt/flanneld/cert/flanneld-key.pem \
get /atomic.io/network/subnets/10.30.34.0-24
# Output:
{"PublicIP":"192.168.2.202","BackendType":"vxlan","BackendData":{"VtepMAC":"e6:b2:85:07:9f:c0"}}

# Verify that the nodes can reach each other over the Pod network; note the Pod subnets printed above!
[root@k8s-master01 ~]# vi /opt/k8s/script/ping_flanneld.sh
MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
 # After deploying flannel on each node, check that the flannel interface was created (its name may be flannel0, flannel.0, flannel.1, etc.)
    ssh ${master_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
 # From each node, ping every flannel interface IP to make sure they are reachable
    ssh ${master_ip} "ping -c 1 10.30.34.0"
    ssh ${master_ip} "ping -c 1 10.30.41.0"
    ssh ${master_ip} "ping -c 1 10.30.7.0"
done
# Run it!
[root@k8s-master01 ~]# bash /opt/k8s/script/ping_flanneld.sh 192.168.2.201 192.168.2.202 192.168.2.203

5. Deploy the kubectl Command-Line Tool

  • kubectl is the command-line management tool for a Kubernetes cluster
  • kubectl reads the kube-apiserver address, certificates, user name and so on from ~/.kube/config by default; without that file, kubectl commands will likely fail
  • This only needs to be done once; the generated kubeconfig file is not tied to a particular machine.

5.01 Download the kubectl binary and distribute it to all nodes (node machines included!)

  • kubernetes-server-linux-amd64.tar.gz contains all of the components!
[root@k8s-master01 ~]# wget https://dl.k8s.io/v1.16.2/kubernetes-server-linux-amd64.tar.gz

[root@k8s-master01 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 ~]# vi /opt/k8s/script/kubectl_environment.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /root/kubernetes/server/bin/kubectl root@${master_ip}:/opt/k8s/bin/
done
[root@k8s-master01 ~]# bash /opt/k8s/script/kubectl_environment.sh 192.168.2.201 192.168.2.202 192.168.2.203

5.02 Create the admin certificate and private key

  • kubectl talks to the apiserver over the https secure port; the apiserver authenticates and authorizes the presented certificate.
  • As the cluster management tool, kubectl needs the highest privileges, so here we create an admin certificate with full permissions.

Create the certificate signing request

[root@k8s-master01 ~]# cat > /opt/k8s/cert/admin-csr.json <<EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:masters",
            "OU": "steams"
        }
    ]
}
EOF
  • O is system:masters; when kube-apiserver receives a request with this certificate it sets the request's Group to system:masters;
  • The predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants access to all APIs;
  • This certificate is only used by kubectl as a client certificate, so the hosts field is empty;

Generate the certificate and private key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/admin-csr.json | cfssljson -bare /opt/k8s/cert/admin

[root@k8s-master01 ~]# ll /opt/k8s/cert/admin*
admin.csr       admin-csr.json  admin-key.pem   admin.pem

5.03 Create and distribute the kubeconfig file

5.03.01 Create the kubeconfig file

  • The kubeconfig file is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate and the client's own certificate;

step.1 Set the cluster parameters. --server=${KUBE_APISERVER} specifies the IP and port; this document uses the haproxy VIP and port. Without a haproxy proxy, use the IP and port of the actual service!

[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.2.210:8443 \
--kubeconfig=/root/.kube/kubectl.kubeconfig

step.2 Set the client authentication parameters

[root@k8s-master01 ~]# kubectl config set-credentials kube-admin \
--client-certificate=/opt/k8s/cert/admin.pem \
--client-key=/opt/k8s/cert/admin-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kubectl.kubeconfig

step.3 Set the context parameters

[root@k8s-master01 ~]#  kubectl config set-context kube-admin@kubernetes \
--cluster=kubernetes \
--user=kube-admin \
--kubeconfig=/root/.kube/kubectl.kubeconfig

step.4 Set the default context

[root@k8s-master01 ~]# kubectl config use-context kube-admin@kubernetes --kubeconfig=/root/.kube/kubectl.kubeconfig

--certificate-authority: the root certificate used to verify the kube-apiserver certificate;
--client-certificate, --client-key: the admin certificate and private key just generated, used when connecting to kube-apiserver;
--embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (without this flag, only the certificate file paths are written);

5.03.02 Verify the kubeconfig file

[root@k8s-master01 ~]# kubectl config view --kubeconfig=/root/.kube/kubectl.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.2.210:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-admin
  name: kube-admin@kubernetes
current-context: kube-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kube-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

5.03.03 Distribute kubectl and the kubeconfig file to every node that will run kubectl commands

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_kubectl_config.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /root/kubernetes/server/bin/kubectl root@${master_ip}:/opt/k8s/bin/
    ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
    scp /root/.kube/kubectl.kubeconfig root@${master_ip}:/root/.kube/config
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_kubectl_config.sh 192.168.2.201 192.168.2.202 192.168.2.203

6. Deploy the Master Nodes

  • A Kubernetes master node runs the following components:
    kube-apiserver
    kube-scheduler
    kube-controller-manager

  • kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the others block.
  • kube-apiserver can run as multiple instances, but the other components need a single, highly available address to reach it. This document uses keepalived and haproxy to provide a highly available, load-balanced VIP for kube-apiserver.
  • Because the masters sit behind keepalived, any of the three servers may become the active master (if the primary fails, a backup is promoted), so every master step must be performed on all three servers.

Download the latest binaries (you will need to find a way to get them!!!)

[root@k8s-master01 ~]# wget https://dl.k8s.io/v1.16.2/kubernetes-server-linux-amd64.tar.gz

Copy the binaries to all master nodes

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_master.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /root/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler} root@${master_ip}:/opt/k8s/bin/
    ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_master.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.01 Deploy the high-availability components

  • This section describes how to make kube-apiserver highly available with keepalived and haproxy:
    keepalived provides the VIP through which kube-apiserver is exposed;
    haproxy listens on the VIP and connects to all kube-apiserver instances behind it, providing health checks and load balancing;
  • The nodes running keepalived and haproxy are called LB nodes. keepalived runs in a one-master-multiple-backup mode, so at least two LB nodes are required.
  • This document reuses the three master machines; the port haproxy listens on (8443) must differ from kube-apiserver's port 6443 to avoid a conflict.
  • keepalived periodically checks the local haproxy process; if it detects a problem, a new election is triggered and the VIP floats to the newly elected master, keeping the VIP highly available.
  • All components (kubectl, apiserver, controller-manager, scheduler, etc.) reach the kube-apiserver service through the VIP and the haproxy port 8443.

6.01.01 Install the packages and write the haproxy configuration file

[root@k8s-master01 ~]# yum install keepalived haproxy -y

[root@k8s-master01 ~]# vi /etc/haproxy/haproxy.cfg 
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1
defaults
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m
listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth haproxy:123456
    stats hide-version
    stats admin if TRUE
listen k8s-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 192.168.2.201 192.168.2.201:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.2.202 192.168.2.202:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.2.203 192.168.2.203:6443 check inter 2000 fall 2 rise 2 weight 1
  • haproxy serves its status page on port 10080;
  • haproxy listens on port 8443 on all interfaces; this port must match the one in the ${KUBE_APISERVER} environment variable;
  • the server lines list the IP and port of every kube-apiserver instance;
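
Before copying this file to the other servers, haproxy can validate it locally; an optional sanity check:

[root@k8s-master01 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid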

6.01.02 Install haproxy on the other servers and push the configuration file; then start and check the haproxy service

[root@k8s-master01 ~]# vi /opt/k8s/script/haproxy.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
 # Install haproxy
    ssh root@${master_ip} "yum install -y keepalived haproxy"
 # Push the configuration file
    scp /etc/haproxy/haproxy.cfg root@${master_ip}:/etc/haproxy
 # Start and check the haproxy service
    ssh root@${master_ip} "systemctl restart haproxy"
    ssh root@${master_ip} "systemctl enable haproxy.service"
    ssh root@${master_ip} "systemctl status haproxy|grep Active"
 # Check that haproxy is listening on port 8443
    ssh root@${master_ip} "netstat -lnpt|grep haproxy"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/haproxy.sh 192.168.2.201 192.168.2.202 192.168.2.203

The output should look like:

Active: active (running) since Tue 2019-11-12 01:54:41 CST; 543ms ago
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      4995/haproxy        
tcp        0      0 0.0.0.0:10080           0.0.0.0:*               LISTEN      4995/haproxy

6.01.03 Configure and start the keepalived service

  • keepalived runs in a one-master (MASTER), multiple-backup (BACKUP) mode, so there are two kinds of configuration files.
  • There is a single master configuration; the number of backup configurations depends on the number of nodes. The plan for this document:
    master: 192.168.2.201
    backup: 192.168.2.202, 192.168.2.203

Configuration file on the master service, 192.168.2.201:

[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf

global_defs {
    router_id keepalived_ha_121
}
vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}
vrrp_instance VI-k8s-master {
    state MASTER
    priority 120    # the first backup uses 110, and so on!
    dont_track_primary
    interface eth0
    virtual_router_id 121
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        192.168.2.210
    }
}
  • The interface carrying my VIP is eth0; change it to match your environment
  • The killall -0 haproxy command checks whether the haproxy process on this node is healthy; if not, the weight is reduced (-30), which triggers a new master election;
  • router_id and virtual_router_id identify the keepalived instances belonging to this HA set; if you run several keepalived HA sets they must all be different;

Configuration file on the two backup services, 192.168.2.202 and 192.168.2.203:

[root@k8s-master02 ~]# vi /etc/keepalived/keepalived.conf

global_defs {
        router_id keepalived_ha_122_123
}
vrrp_script check-haproxy {
        script "killall -0 haproxy"
        interval 5
        weight -30
}
vrrp_instance VI-k8s-master {
        state BACKUP
        priority 110   # the second backup uses 100
        dont_track_primary
        interface eth0
        virtual_router_id 121
        advert_int 3
        track_script {
        check-haproxy
        }
        virtual_ipaddress {
            192.168.2.210
        }
}
  • priority must be lower than the master's value, and the two backups must also use different values;

Start the keepalived service

[root@k8s-master01 ~]# systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived
[root@k8s-master01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:00:68:05 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.101/24 brd 192.168.2.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.2.10/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f726:9d22:2b89:694c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether aa:43:e5:bb:88:28 brd ff:ff:ff:ff:ff:ff
    inet 10.30.34.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::a843:e5ff:febb:8828/64 scope link 
       valid_lft forever preferred_lft forever
  • On the master server you can see that the VIP address is now present on the eth0 interface
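
To confirm that failover actually works, you can stop keepalived (or haproxy) on the current master and watch the VIP move to one of the backups; a small optional test, using this document's planned VIP 192.168.2.210:

# On the current master: drop out of the election
[root@k8s-master01 ~]# systemctl stop keepalived
# On the backups: the VIP should appear on one of them within a few seconds
[root@k8s-master02 ~]# ip addr show eth0 | grep 192.168.2.210
# Finally, bring keepalived back up on the original master
[root@k8s-master01 ~]# systemctl start keepalived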

6.01.04 View the haproxy status page

  • Open 192.168.2.210:10080/status in a browser

6.02 Deploy the kube-apiserver component

Download the binaries

  • Included in the kubernetes_server package, already extracted to /opt/k8s/bin

6.02.01 Create the kube-apiserver certificate and private key

Create the certificate signing request

[root@k8s-master01 ~]# cat > /opt/k8s/cert/kube-apiserver-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "192.168.2.201",
        "192.168.2.202",
        "192.168.2.203",
        "192.168.2.210",
        "10.96.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "steams"
        }
    ]
}
EOF
  • The hosts field lists the IPs and domain names authorized to use this certificate; it contains the VIP, the apiserver node IPs, and the kubernetes service IP and domain names;
  • A domain name must not end with a dot (e.g. it cannot be kubernetes.default.svc.cluster.local.), otherwise resolution fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
  • If you use a domain other than cluster.local, e.g. opsnull.com, change the last two names in the list to kubernetes.default.svc.opsnull and kubernetes.default.svc.opsnull.com
  • The kubernetes service IP is created automatically by the apiserver; it is usually the first IP of the range given by --service-cluster-ip-range and can later be retrieved with:
    kubectl get svc kubernetes

Generate the certificate and private key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/kube-apiserver-csr.json | cfssljson -bare /opt/k8s/cert/kube-apiserver

[root@k8s-master01 ~]# ll /opt/k8s/cert/kube-apiserver*
kube-apiserver.csr      kube-apiserver-csr.json  kube-apiserver-key.pem  kube-apiserver.pem

6.02.02 Create the encryption configuration file

Generate a key used to encrypt data stored in etcd:

[root@k8s-master01 ~]# head -c 32 /dev/urandom | base64
# It returns a key; every master node must use the same key!!!
muqIUutYDd5ARLtsg/W1CYWs3g8Fq9uJO/lDpSsv9iw=

Create the encryption configuration file with this key

[root@k8s-master01 ~]# vi encryption-config.yaml

kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: muqIUutYDd5ARLtsg/W1CYWs3g8Fq9uJO/lDpSsv9iw=
      - identity: {}

6.02.03 Copy the generated certificate, private key and encryption configuration file to the /opt/k8s directory on the master nodes

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_apiserver.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo  ">>> ${master_ip}"
    scp /opt/k8s/cert/kube-apiserver*.pem root@${master_ip}:/opt/k8s/cert/
    scp /root/encryption-config.yaml root@${master_ip}:/opt/k8s/
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_apiserver.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.02.04 Create the kube-apiserver systemd unit template file

[root@k8s-master01 ~]# vi /opt/k8s/kube-apiserver/kube-apiserver.service.template

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--experimental-encryption-provider-config=/opt/k8s/encryption-config.yaml \
--advertise-address=##MASTER_IP## \
--bind-address=##MASTER_IP## \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/k8s/cert/kube-apiserver.pem \
--tls-private-key-file=/opt/k8s/cert/kube-apiserver-key.pem \
--client-ca-file=/opt/k8s/cert/ca.pem \
--kubelet-client-certificate=/opt/k8s/cert/kube-apiserver.pem \
--kubelet-client-key=/opt/k8s/cert/kube-apiserver-key.pem \
--service-account-key-file=/opt/k8s/cert/ca-key.pem \
--etcd-cafile=/opt/k8s/cert/ca.pem \
--etcd-certfile=/opt/k8s/cert/kube-apiserver.pem \
--etcd-keyfile=/opt/k8s/cert/kube-apiserver-key.pem \
--etcd-servers=https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • --experimental-encryption-provider-config: enables the encryption-at-rest feature;
  • --authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes and rejects unauthorized requests;
  • --enable-admission-plugins: enables the listed admission plugins, including ServiceAccount and NodeRestriction;
  • --service-account-key-file: the public key used to verify ServiceAccount tokens; it pairs with the private key given to kube-controller-manager via --service-account-private-key-file;
  • --tls-*-file: the certificate, private key and CA file used by the apiserver. --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
  • --kubelet-client-certificate, --kubelet-client-key: if set, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the user of that certificate (the kube-apiserver certificate above uses the user kubernetes), otherwise calls to the kubelet API are rejected as unauthorized;
  • --bind-address: must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from outside;
  • --insecure-port=0: disables the insecure port (8080);
  • --service-cluster-ip-range: the Service Cluster IP range;
  • --service-node-port-range: the NodePort port range;
  • --runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
  • --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
  • --apiserver-count=3: tells the apiserver how many instances are running so the kubernetes service endpoints are reconciled correctly (all apiserver instances serve traffic concurrently);

6.02.05 Create and distribute the kube-apiserver systemd unit file on each master node; start and check the kube-apiserver service

[root@k8s-master01 ~]# vi /opt/k8s/script/apiserver_service.sh

MASTER_IPS=("$1" "$2" "$3")
# Substitute the variables in the template to create a systemd unit file for each node
for (( i=0; i < 3; i++ ));do
    sed "s/##MASTER_IP##/${MASTER_IPS[i]}/" /opt/k8s/kube-apiserver/kube-apiserver.service.template > /opt/k8s/kube-apiserver/kube-apiserver-${MASTER_IPS[i]}.service
done
# Start and check the kube-apiserver service
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /opt/k8s/kube-apiserver/kube-apiserver-${master_ip}.service root@${master_ip}:/etc/systemd/system/kube-apiserver.service
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
    ssh root@${master_ip} "systemctl status kube-apiserver |grep 'Active:'"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/apiserver_service.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.02.06 Print the data written to etcd by kube-apiserver

[root@k8s-master01 ~]# ETCDCTL_API=3 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
get /registry/ --prefix --keys-only
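
If you want to confirm that the encryption configuration from 6.02.02 is actually in effect, a common spot check is to create a throwaway secret and read its raw value from etcd: the stored payload should carry the k8s:enc:aescbc:v1: prefix instead of plain text. The secret name below is just an example:

[root@k8s-master01 ~]# kubectl create secret generic enc-test --from-literal=foo=bar
[root@k8s-master01 ~]# ETCDCTL_API=3 etcdctl \
--endpoints="https://192.168.2.201:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
get /registry/secrets/default/enc-test | hexdump -C | head
# The value should contain "k8s:enc:aescbc:v1:key1" rather than the plain "bar"
[root@k8s-master01 ~]# kubectl delete secret enc-test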

6.02.07 Check the cluster information

[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.2.210:8443

[root@k8s-master01 ~]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   49m

# 6443: the secure port that receives https requests; all requests are authenticated and authorized;
# Since the insecure port is disabled, nothing is listening on 8080;
[root@k8s-master01 ~]# ss -nutlp |grep apiserver
tcp    LISTEN   0        128         192.168.2.201:6443           0.0.0.0:*      users:(("kube-apiserver",pid=4425,fd=6))
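
Because anonymous access is disabled, probing the secure port needs a client certificate; the admin certificate created in 5.02 works for a quick health check (an optional verification using this document's VIP and haproxy port):

[root@k8s-master01 ~]# curl -s --cacert /opt/k8s/cert/ca.pem \
--cert /opt/k8s/cert/admin.pem \
--key /opt/k8s/cert/admin-key.pem \
https://192.168.2.210:8443/healthz
ok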

6.03 Deploy the highly available kube-controller-manager cluster

  • The cluster has 3 nodes. After startup, leader election picks one leader node and the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.
  • To secure communication, this document first generates an x509 certificate and private key; kube-controller-manager uses this certificate in two cases:
    when talking to kube-apiserver's secure port;
    when serving prometheus-format metrics on the secure port (https, 10252);

Preparation: download the kube-controller-manager binary (included in the kubernetes-server package, already extracted and distributed)

6.03.01 Create the kube-controller-manager certificate and private key

Create the certificate signing request:

[root@k8s-master01 ~]# cat > /opt/k8s/cert/kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
        "127.0.0.1",
        "192.168.2.201",
        "192.168.2.202",
        "192.168.2.203"
    ],
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:kube-controller-manager",
            "OU": "steams"
        }
    ]
}
EOF
  • The hosts list contains all kube-controller-manager node IPs;
  • CN is system:kube-controller-manager and O is system:kube-controller-manager (the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.)

Generate the certificate and private key

cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/kube-controller-manager-csr.json | cfssljson -bare /opt/k8s/cert/kube-controller-manager

[root@k8s-master01 ~]# ll /opt/k8s/cert/kube-controller-manager*
kube-controller-manager.csr       kube-controller-manager-csr.json  kube-controller-manager-key.pem   kube-controller-manager.pem

6.03.02 Create the kubeconfig file

  • The kubeconfig file contains everything needed to reach the apiserver: the apiserver address, the CA certificate and the client's own certificate;

Run the following commands to generate the kube-controller-manager.kubeconfig file

# step.1 Set the cluster parameters:
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.2.210:8443 \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig

# step.2 Set the client authentication parameters
[root@k8s-master01 ~]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/opt/k8s/cert/kube-controller-manager.pem \
--client-key=/opt/k8s/cert/kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig

# step.3 Set the context parameters
[root@k8s-master01 ~]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig

# step.4 Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig

Verify the kube-controller-manager.kubeconfig file

[root@k8s-master01 ~]# kubectl config view --kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.2.210:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Distribute the generated certificate, private key and kubeconfig file to all master nodes

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_controller_manager.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /opt/k8s/cert/kube-controller-manager*.pem root@${master_ip}:/opt/k8s/cert/
    scp /opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig root@${master_ip}:/opt/k8s/kube-controller-manager/
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_controller_manager.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.03.03 Create and distribute the kube-controller-manager systemd unit file

[root@k8s-master01 ~]# vi /opt/k8s/kube-controller-manager/kube-controller-manager.service.template

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \
--port=0 \
--secure-port=10252 \
--bind-address=127.0.0.1 \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.96.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/k8s/cert/ca.pem \
--cluster-signing-key-file=/opt/k8s/cert/ca-key.pem \
--experimental-cluster-signing-duration=8760h \
--root-ca-file=/opt/k8s/cert/ca.pem \
--service-account-private-key-file=/opt/k8s/cert/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/opt/k8s/cert/kube-controller-manager.pem \
--tls-private-key-file=/opt/k8s/cert/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  • --port=0: disables the http /metrics listener; --address then has no effect and --bind-address is used;
  • --secure-port=10252, --bind-address=127.0.0.1: serve https /metrics requests on 127.0.0.1:10252;
  • --kubeconfig: path of the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
  • --cluster-signing-*-file: the CA used to sign the certificates created by TLS Bootstrap;
  • --experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates;
  • --root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify kube-apiserver's certificate;
  • --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key given to kube-apiserver via --service-account-key-file;
  • --service-cluster-ip-range: the Service Cluster IP range; it must match the same parameter on kube-apiserver;
  • --leader-elect=true: cluster mode with leader election enabled; the node elected as leader does the work while the others block;
  • --feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
  • --controllers=*,bootstrapsigner,tokencleaner: the list of controllers to enable; tokencleaner automatically cleans up expired bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom-metrics-related parameters, supporting autoscaling/v2alpha1;
  • --tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
  • --use-service-account-credentials=true: each controller uses its own ServiceAccount credentials to talk to the apiserver (see 6.03.04);

6.03.04 kube-controller-manager permissions

  • The ClusterRole system:kube-controller-manager has very limited permissions; it can only create secrets, serviceaccounts and a few other resources. The permissions of each controller are split out into the ClusterRoles system:controller:XXX.
  • The --use-service-account-credentials=true flag must be added to kube-controller-manager's startup parameters so that the main controller creates a ServiceAccount XXX-controller for each controller.
  • The built-in ClusterRoleBinding system:controller:XXX then grants each XXX-controller ServiceAccount the corresponding ClusterRole system:controller:XXX permissions; these roles and bindings can be inspected as shown below.
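
Once the cluster is up, these built-in roles and bindings can be listed directly (optional, just for illustration):

[root@k8s-master01 ~]# kubectl get clusterroles | grep '^system:controller:' | head
[root@k8s-master01 ~]# kubectl get clusterrolebindings | grep '^system:controller:' | head
[root@k8s-master01 ~]# kubectl describe clusterrole system:kube-controller-manager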

6.03.05 Distribute the systemd unit file to all master nodes; start and check the kube-controller-manager service

[root@k8s-master01 ~]# vi /opt/k8s/script/controller_manager_service.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /opt/k8s/kube-controller-manager/kube-controller-manager.service.template root@${master_ip}:/etc/systemd/system/kube-controller-manager.service
        ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager "
        ssh root@${master_ip} "systemctl status kube-controller-manager|grep Active"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/controller_manager_service.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.03.06 View the exported metrics

[root@k8s-master01 ~]# ss -nutlp |grep kube-controll
tcp    LISTEN   0        128             127.0.0.1:10252          0.0.0.0:*      users:(("kube-controller",pid=9382,fd=6))
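
Since --port=0 disables the plain-http listener, metrics are only served over https on 127.0.0.1:10252 and require client credentials. A hedged sketch using the admin certificate (depending on the authentication/authorization delegation settings this request may still be rejected with 403; at minimum the TLS handshake confirms the listener is up, and the controller-manager logs will show why a request was refused):

[root@k8s-master01 ~]# curl -s --cacert /opt/k8s/cert/ca.pem \
--cert /opt/k8s/cert/admin.pem \
--key /opt/k8s/cert/admin-key.pem \
https://127.0.0.1:10252/metrics | head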

6.03.07 Test the high availability of the kube-controller-manager cluster

  • Stop the kube-controller-manager service on one or two nodes and watch the logs on the other nodes to see whether one of them acquires the leader lock.
  • View the current leader
[root@k8s-master02 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml

6.04 Deploy the highly available kube-scheduler cluster

  • The cluster has 3 nodes. After startup, leader election picks one leader node and the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.
  • To secure communication, this document first generates an x509 certificate and private key; kube-scheduler uses this certificate in two cases:
    when talking to kube-apiserver's secure port;
    when serving prometheus-format metrics (in this setup they are exposed over plain http on port 10251; see 6.04.06);

Preparation: download the kube-scheduler binary (same kubernetes-server package as above)

6.04.01 Create the kube-scheduler certificate and private key

Create the certificate signing request:

[root@k8s-master01 ~]# cat > /opt/k8s/cert/kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.2.201",
      "192.168.2.202",
      "192.168.2.203"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "steams"
      }
    ]
}
EOF
  • The hosts list contains all kube-scheduler node IPs;
  • CN is system:kube-scheduler and O is system:kube-scheduler (the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.)

Generate the certificate and private key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/kube-scheduler-csr.json | cfssljson -bare /opt/k8s/cert/kube-scheduler

[root@k8s-master01 ~]# ll /opt/k8s/cert/kube-scheduler*
kube-scheduler.csr       kube-scheduler-csr.json  kube-scheduler-key.pem   kube-scheduler.pem

6.04.02 Create the kubeconfig file

  • The kubeconfig file contains everything needed to reach the apiserver: the apiserver address, the CA certificate and the client's own certificate;

Run the following commands to generate the kube-scheduler.kubeconfig file

# step.1 Set the cluster parameters
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.2.210:8443 \
--kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig

# step.2 Set the client authentication parameters
[root@k8s-master01 ~]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=/opt/k8s/cert/kube-scheduler.pem \
--client-key=/opt/k8s/cert/kube-scheduler-key.pem \
--embed-certs=true  \
--kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig

# step.3 Set the context parameters
[root@k8s-master01 ~]# kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig

# step.4 Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig

Verify the kube-scheduler.kubeconfig file

[root@k8s-master01 ~]# kubectl config view --kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.2.210:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

6.04.03 Distribute the generated certificate, private key and kubeconfig file to all master nodes

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_scheduler.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /opt/k8s/cert/kube-scheduler*.pem root@${master_ip}:/opt/k8s/cert/
        scp /opt/k8s/kube-scheduler/kube-scheduler.kubeconfig root@${master_ip}:/opt/k8s/kube-scheduler/
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_scheduler.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.04.04 Create the kube-scheduler systemd unit file

[root@k8s-master01 ~]# vi /opt/k8s/kube-scheduler/kube-scheduler.service.template

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  • Note: this kube-scheduler configuration only serves plain http, so a lot of the security-related flags used by the other components are absent!
  • --address: serve http /metrics requests on 127.0.0.1:10251; this setup does not configure the scheduler's https serving;
  • --kubeconfig: path of the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: cluster mode with leader election enabled; the node elected as leader does the work while the others block;

6.04.05 Distribute the systemd unit file to all master nodes; start and check the kube-scheduler service

[root@k8s-master01 ~]# vi /opt/k8s/script/scheduler_service.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /opt/k8s/kube-scheduler/kube-scheduler.service.template root@${master_ip}:/etc/systemd/system/kube-scheduler.service
        ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler|grep Active"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scheduler_service.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.04.06 View the exported metrics

  • kube-scheduler listens on port 10251 and serves http requests:
[root@k8s-master01 ~]# ss -nutlp |grep kube-scheduler
tcp    LISTEN   0        128             127.0.0.1:10251          0.0.0.0:*      users:(("kube-scheduler",pid=8584,fd=6))                                       
tcp    LISTEN   0        128                     *:10259                *:*      users:(("kube-scheduler",pid=8584,fd=7))   
                                    
[root@k8s-master01 ~]# curl -s http://127.0.0.1:10251/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

6.04.07 Test the high availability of the kube-scheduler cluster

  • Stop the kube-scheduler service on one or two nodes and watch the logs on the other nodes to see whether one of them acquires the leader lock.
  • View the current leader
[root@k8s-master02 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

The master cluster is now fully deployed!
