Kubernetes Manual Deployment

01. System initialization and global variables

Host allocation

Hostname           OS           IP address        VIP
dev-k8s-master1    CentOS 7.6   172.19.201.244    172.19.201.242
dev-k8s-master2    CentOS 7.6   172.19.201.249    172.19.201.242
dev-k8s-master3    CentOS 7.6   172.19.201.248    172.19.201.242
dev-k8s-node1      CentOS 7.6   172.19.201.247
dev-k8s-node2      CentOS 7.6   172.19.201.246
dev-k8s-node3      CentOS 7.6   172.19.201.243

flannel network:   10.10.0.0/16
docker network:    10.10.1.1/24


Hostname

Set a permanent hostname, then log in again:

hostnamectl set-hostname dev-k8s-master1

The hostname is saved in /etc/hostname.


Passwordless SSH login to the other nodes

Unless stated otherwise, all operations in this document are performed on dev-k8s-master1, and files and commands are then distributed to the other nodes, so that node needs an SSH trust relationship with all of them.

Allow the root account of dev-k8s-master1 to log in to every node without a password:

ssh-keygen -t rsa

ssh-copy-id root@dev-k8s-master1

...
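If the other nodes allow root password login, a small loop (node names taken from the host table above) can push the key everywhere; ssh-copy-id will still prompt once per node:

for node in dev-k8s-master1 dev-k8s-master2 dev-k8s-master3 dev-k8s-node1 dev-k8s-node2 dev-k8s-node3
do
  ssh-copy-id root@${node}   # prompts for that node's root password once
done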

 

Update the PATH variable

Add the executable directory to the PATH environment variable:

echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc

source /root/.bashrc

 

Install dependencies

Install the dependency packages on every machine:

CentOS:

yum install -y epel-release

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget

 

 

Disable the firewall

On every machine stop the firewall, flush its rules, and set the default forward policy:

systemctl stop firewalld

systemctl disable firewalld

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat

iptables -P FORWARD ACCEPT

 

Disable the swap partition

If swap is enabled, kubelet fails to start (this can be ignored by setting --fail-swap-on=false), so swap must be disabled on every machine. Also comment out the corresponding entry in /etc/fstab so the swap partition is not mounted again at boot:

 

swapoff -a

sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Disable SELinux

Disable SELinux, otherwise Kubernetes may later report "Permission denied" when mounting directories:

setenforce 0

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Load kernel modules

modprobe ip_vs_rr

modprobe br_netfilter

 

Tune kernel parameters

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf

sysctl -p /etc/sysctl.d/kubernetes.conf
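To confirm the values were applied (a quick spot check; assumes br_netfilter was loaded above so the bridge keys exist):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# expected:
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1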

Set the system time zone

# adjust the system TimeZone

timedatectl set-timezone Asia/Shanghai

 

Stop unrelated services

systemctl stop postfix && systemctl disable postfix

Configure rsyslogd and systemd journald

systemd's journald is the default logging facility on CentOS 7; it records the logs of the whole system, the kernel, and every service unit.

Compared with the traditional syslog setup, journald has the following advantages:

  • it can log to memory or to the file system (by default it logs to memory, under /run/log/journal);

  • it can cap the disk space it uses and guarantee free disk space;

  • it can limit log file size and retention time.

By default journald also forwards logs to rsyslog, so every entry is written twice, /var/log/messages fills with irrelevant noise, and performance suffers; forwarding is therefore disabled below.

# directory for persistent logs

mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF

 

systemctl restart systemd-journald

 

Create the required directories

Create the directories:

mkdir -p  /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert
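The same directories are needed on every node; a sketch that creates them remotely (IP list taken from the host table above):

for node_ip in 172.19.201.244 172.19.201.249 172.19.201.248 172.19.201.247 172.19.201.246 172.19.201.243
do
  ssh root@${node_ip} "mkdir -p /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert"
done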

Upgrade the kernel

yum -y update

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

yum --enablerepo=elrepo-kernel install kernel-lt.x86_64 -y

sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg

(screenshot of the grub menuentry list omitted)

sudo grub2-set-default 0

Install the kernel source files (optional; run after upgrading the kernel and rebooting the machine).

 

 

02. Create the CA certificate and private key

Install the cfssl toolset

sudo mkdir -p /opt/k8s/cert && cd /opt/k8s

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

mv cfssl_linux-amd64 /opt/k8s/bin/cfssl

 

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson

 

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo

 

chmod +x /opt/k8s/bin/*

export PATH=/opt/k8s/bin:$PATH

 

Create the root certificate (CA)

The CA certificate is shared by every node in the cluster; only one needs to be created, and all certificates created later are signed by it.

Create the configuration file

The CA configuration file defines the signing profiles and their parameters for the root certificate (usages, expiry, server auth, client auth, encryption, etc.); a specific profile is selected later when signing other certificates.

cd /opt/k8s/work
cat > ca-config.json <<EOF
{
 "signing": {
   "default": {
     "expiry": "87600h"
   },
   "profiles": {
     "kubernetes": {
       "usages": [
           "signing",
           "key encipherment",
           "server auth",
           "client auth"
       ],
       "expiry": "87600h"
     }
   }
 }
}
EOF

 

 

Create the certificate signing request file

cd /opt/k8s/work
cat > ca-csr.json <<EOF
{
 "CN": "kubernetes",
 "key": {
   "algo": "rsa",
   "size": 2048
 },
 "names": [
   {
     "C": "CN",
     "ST": "BeiJing",
     "L": "BeiJing",
     "O": "k8s",
     "OU": "4Paradigm"
   }
 ],
 "ca": {
   "expiry": "876000h"
}
}
EOF

 

 

Generate the CA certificate and private key

cd /opt/k8s/work

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

ls ca*

 

Distribute the certificate files

Copy the generated CA certificate, private key, and configuration file to /etc/kubernetes/cert on every node:

mkdir -p /etc/kubernetes/cert

scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert
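${node_ip} above is a placeholder; a sketch that loops over every node from the host table:

cd /opt/k8s/work
for node_ip in 172.19.201.244 172.19.201.249 172.19.201.248 172.19.201.247 172.19.201.246 172.19.201.243
do
  ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
  scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert
done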

 

 


03. Deploy the kubectl command-line tool

Download and distribute the kubectl binary

Download and unpack:

cd /opt/k8s/work

wget https://dl.k8s.io/v1.14.2/kubernetes-client-linux-amd64.tar.gz

tar -xzvf kubernetes-client-linux-amd64.tar.gz

 

Distribute it to every node that will use kubectl:

 

cd /opt/k8s/work

scp kubernetes/client/bin/kubectl root@dev-k8s-master1:/opt/k8s/bin/

chmod +x /opt/k8s/bin/*

Create the admin certificate and private key

kubectl talks to the apiserver over the HTTPS secure port, and the apiserver authenticates and authorizes the certificate it presents.

As the cluster management tool, kubectl needs the highest privileges, so an admin certificate with full privileges is created here.

Create the certificate signing request:

cd /opt/k8s/work
cat > admin-csr.json <<EOF
{
 "CN": "admin",
 "hosts": [],
 "key": {
   "algo": "rsa",
   "size": 2048
 },
 "names": [
   {
     "C": "CN",
     "ST": "BeiJing",
     "L": "BeiJing",
     "O": "system:masters",
     "OU": "4Paradigm"
   }
 ]
}
EOF

 

 

Generate the certificate and private key:

cd /opt/k8s/work

cfssl gencert -ca=/opt/k8s/work/ca.pem \

 -ca-key=/opt/k8s/work/ca-key.pem \

 -config=/opt/k8s/work/ca-config.json \

 -profile=kubernetes admin-csr.json | cfssljson -bare admin

 

 

Create the kubeconfig file

kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver, such as the apiserver address, the CA certificate, and the client certificate to use.

cd /opt/k8s/work

 

# set cluster parameters

kubectl config set-cluster kubernetes \

 --certificate-authority=/opt/k8s/work/ca.pem \

 --embed-certs=true \

 --server="https://172.19.201.242:8443" \

 --kubeconfig=kubectl.kubeconfig

 

# set client credentials

kubectl config set-credentials admin \

 --client-certificate=/opt/k8s/work/admin.pem \

 --client-key=/opt/k8s/work/admin-key.pem \

 --embed-certs=true \

 --kubeconfig=kubectl.kubeconfig

 

# set context parameters

kubectl config set-context kubernetes \

 --cluster=kubernetes \

 --user=admin \

 --kubeconfig=kubectl.kubeconfig

 

# set the default context

kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

 

 

Distribute the kubeconfig file

Distribute it to every node that will run kubectl commands:

cd /opt/k8s/work

mkdir -p ~/.kube

scp kubectl.kubeconfig root@dev-k8s-master1:/root/.kube/config
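The apiserver is not deployed yet at this point, so kubectl cannot reach the cluster, but the generated kubeconfig can already be sanity-checked:

kubectl config view --kubeconfig=/root/.kube/config
kubectl config current-context --kubeconfig=/root/.kube/config
# current-context should print "kubernetes" and the server should be https://172.19.201.242:8443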

 

 

 

 

04. Deploy haproxy + keepalived

Deploy keepalived [all masters]

keepalived provides the VIP (172.19.201.242) for haproxy and runs master/backup failover across the three haproxy instances, reducing the impact on the service when one haproxy instance fails.

Install keepalived

yum install -y keepalived

Configure keepalived:

[Note: check that the VIP address is correct and that each node uses a different priority; the master1 node is MASTER and the remaining nodes are BACKUP. "killall -0" checks by process name whether the process is alive.]

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
  router_id LVS_DEVEL
}
vrrp_script check_haproxy {
   script "killall -0 haproxy"
   interval 3
   weight -2
   fall 10
   rise 2
}
vrrp_instance VI_1 {
   state MASTER
   interface eno1
   virtual_router_id 51
   priority 100
   advert_int 1
   authentication {
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {
       172.19.201.242
   }
}
EOF

 

scp -pr /etc/keepalived/keepalived.conf root@dev-k8s-master2:/etc/keepalived/   (repeat for each master node)

1. killall -0 checks by process name whether a process is alive; if the command is missing, install it with yum install psmisc -y.

2. The first master node uses state MASTER; the other master nodes use state BACKUP.

3. priority is the node's priority, in the range 0-250 (the exact values are not mandatory).

 

Start and check the service

systemctl enable keepalived.service 

systemctl start keepalived.service

systemctl status keepalived.service 

ip address show eno1
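Only the node currently in MASTER state should hold the VIP; a quick check on each master (interface name eno1 as in the keepalived config above):

ip address show eno1 | grep 172.19.201.242
# the VIP should appear only on the node currently holding the MASTER role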

 

Deploy haproxy [all masters]

haproxy reverse-proxies the apiserver and forwards all requests to the master nodes in round-robin fashion. Compared with a pure keepalived master/backup setup, where a single master carries all the traffic, this is more balanced and more robust.

Install haproxy

yum install -y haproxy

Configure haproxy [identical on all three master nodes]

cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
 
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
   # to have these messages end up in /var/log/haproxy.log you will
   # need to:
   #
   # 1) configure syslog to accept network log events.  This is done
   #    by adding the '-r' option to the SYSLOGD_OPTIONS in
   #    /etc/sysconfig/syslog
   #
   # 2) configure local2 events to go to the /var/log/haproxy.log
   #   file. A line like the following can be added to
   #   /etc/sysconfig/syslog
   #
   #    local2.*                       /var/log/haproxy.log
   #
   log         127.0.0.1 local2
 
   chroot      /var/lib/haproxy
   pidfile     /var/run/haproxy.pid
   maxconn     4000
   user        haproxy
   group       haproxy
   daemon
 
   # turn on stats unix socket
   stats socket /var/lib/haproxy/stats
 
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
   mode                    http
   log                     global
   option                  httplog
   option                  dontlognull
   option http-server-close
   option forwardfor       except 127.0.0.0/8
   option                  redispatch
   retries                 3
   timeout http-request    10s
   timeout queue           1m
   timeout connect         10s
   timeout client          1m
   timeout server          1m
   timeout http-keep-alive 10s
   timeout check           10s
   maxconn                 3000
 
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  kubernetes-apiserver
   mode                 tcp
   bind                 *:8443
   option               tcplog
   default_backend      kubernetes-apiserver
 
 
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
   mode        tcp
   balance     roundrobin
   server  dev-k8s-master1 172.19.201.244:6443 check
   server  dev-k8s-master2 172.19.201.249:6443 check
   server  dev-k8s-master3 172.19.201.248:6443 check
 
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
   bind                 *:1080
   stats auth           admin:awesomePassword
   stats refresh        5s
   stats realm          HAProxy\ Statistics
   stats uri            /admin?stats
EOF
 

 

Copy the configuration file to the other two master nodes

scp -pr /etc/haproxy/haproxy.cfg root@dev-k8s-master2:/etc/haproxy/

 

Start and check the service

 systemctl enable haproxy.service 

 systemctl start haproxy.service 

 systemctl status haproxy.service 

 ss -lnt | grep -E "8443|1080"

 

05. Deploy the etcd cluster

Download and distribute the etcd binaries

Download the release package from the etcd releases page:

cd /opt/k8s/work

wget https://github.com/coreos/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz

tar -xvf etcd-v3.3.13-linux-amd64.tar.gz

 

Distribute the binaries to all cluster nodes:

cd /opt/k8s/work

scp etcd-v3.3.13-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin

chmod +x /opt/k8s/bin/*

Create the etcd certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > etcd-csr.json <<EOF
{
 "CN": "etcd",
 "hosts": [
   "127.0.0.1",
   "172.19.201.244",
   "172.19.201.249",
   "172.19.201.248",
   "172.19.201.242"
 ],
 "key": {
   "algo": "rsa",
   "size": 2048
 },
 "names": [
   {
     "C": "CN",
     "ST": "BeiJing",
     "L": "BeiJing",
     "O": "k8s",
     "OU": "4Paradigm"
   }
 ]
}
EOF

 

 

Generate the certificate and private key:

cd /opt/k8s/work

cfssl gencert -ca=/opt/k8s/work/ca.pem \

   -ca-key=/opt/k8s/work/ca-key.pem \

   -config=/opt/k8s/work/ca-config.json \

   -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

 

Distribute the generated certificate and private key to every etcd node:

cd /opt/k8s/work

mkdir -p /etc/etcd/cert

scp etcd*.pem root@dev-k8s-master1:/etc/etcd/cert/

 

Create the etcd systemd unit file

vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
 
[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd/data \
  --wal-dir=/data/k8s/etcd/wal \
  --name=dev-k8s-master1 \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://172.19.201.244:2380 \
  --initial-advertise-peer-urls=https://172.19.201.244:2380 \
  --listen-client-urls=https://172.19.201.244:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.19.201.244:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=dev-k8s-master1=https://172.19.201.244:2380,dev-k8s-master2=https://172.19.201.249:2380,dev-k8s-master3=https://172.19.201.248:2380 \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
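The unit file above is written for dev-k8s-master1; on master2 and master3 the --name flag and the four listen/advertise URLs must carry that node's own name and IP, while --initial-cluster stays identical on all three. After adjusting, create the data directories and start etcd on every master (a sketch based on the WorkingDirectory and --wal-dir above):

mkdir -p /data/k8s/etcd/data /data/k8s/etcd/wal
systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
systemctl status etcd | grep Active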

 

 

Verify the service status

After the etcd cluster is deployed, run the following on any etcd node:

cd /opt/k8s/work

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \

   --endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

   --cacert=/opt/k8s/work/ca.pem \

   --cert=/etc/etcd/cert/etcd.pem \

   --key=/etc/etcd/cert/etcd-key.pem endpoint health

 

Output: when every endpoint reports healthy, the cluster service is working normally.

 

Check the current leader

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \

 -w table --cacert=/opt/k8s/work/ca.pem \

 --cert=/etc/etcd/cert/etcd.pem \

 --key=/etc/etcd/cert/etcd-key.pem \

 --endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 endpoint status

 

 

(screenshot of the etcdctl endpoint status table omitted)

 

 

06. Deploy the flannel network

Download and distribute the flanneld binaries

Download the release package from the flannel releases page:

cd /opt/k8s/work

mkdir flannel

wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

tar -xzvf flannel-v0.11.0-linux-amd64.tar.gz -C flannel

 

Distribute the binaries to all cluster nodes:

cd /opt/k8s/work

source /opt/k8s/bin/environment.sh

scp flannel/{flanneld,mk-docker-opts.sh} root@dev-k8s-node1:/opt/k8s/bin/

chmod +x /opt/k8s/bin/*

 

 

Create the flannel certificate and private key

flanneld reads and writes subnet allocation data in the etcd cluster, and the etcd cluster has mutual x509 certificate authentication enabled, so a certificate and private key must be generated for flanneld.

Create the certificate signing request:

cd /opt/k8s/work
cat > flanneld-csr.json <<EOF
{
 "CN": "flanneld",
 "hosts": [],
 "key": {
   "algo": "rsa",
   "size": 2048
 },
 "names": [
   {
     "C": "CN",
     "ST": "BeiJing",
     "L": "BeiJing",
     "O": "k8s",
     "OU": "4Paradigm"
   }
 ]
}
EOF

 

 

  • this certificate is only used by flanneld as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

 

cfssl gencert -ca=/opt/k8s/work/ca.pem \

 -ca-key=/opt/k8s/work/ca-key.pem \

 -config=/opt/k8s/work/ca-config.json \

 -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

 

Distribute the generated certificate and private key to all nodes (masters and workers):

cd /opt/k8s/work

mkdir -p /etc/flanneld/cert

scp flanneld*.pem root@dev-k8s-master1:/etc/flanneld/cert

 

 

Write the cluster Pod network configuration into etcd

Note: this step only needs to be executed once.

cd /opt/k8s/work

etcdctl \

 --endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

 --ca-file=/opt/k8s/work/ca.pem \

 --cert-file=/opt/k8s/work/flanneld.pem \

 --key-file=/opt/k8s/work/flanneld-key.pem \

 set /kubernetes/network/config '{"Network":"'10.10.0.0/16'", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'

 

  • the current flanneld version (v0.11.0) does not support etcd v3, so the configuration key and subnet data are written with the etcd v2 API;

  • the prefix of the Pod network ${CLUSTER_CIDR} written here (e.g. /16) must be smaller than SubnetLen, and the value must match the --cluster-cidr flag of kube-controller-manager;

 

Create the flanneld systemd unit file

cat /etc/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
 
[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \
  -etcd-endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
  -etcd-prefix=/kubernetes/network \
  -iface=eno1 \
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0
 
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

 

Start the flanneld service

systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld

 

Check the Pod subnets assigned to each flanneld

View the cluster Pod network (/16):

etcdctl \

 --endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

 --ca-file=/etc/kubernetes/cert/ca.pem \

 --cert-file=/etc/flanneld/cert/flanneld.pem \

 --key-file=/etc/flanneld/cert/flanneld-key.pem \

 get /kubernetes/network/config

 

Output:

{"Network":"10.10.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}

 

 

View the list of Pod subnets already allocated (/21):

etcdctl \

 --endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

 --ca-file=/etc/kubernetes/cert/ca.pem \

 --cert-file=/etc/flanneld/cert/flanneld.pem \

 --key-file=/etc/flanneld/cert/flanneld-key.pem \

 ls /kubernetes/network/subnets

 

Output (results depend on the deployment):

(screenshot of the subnet list omitted)

 

View the node IP and flannel interface address corresponding to one Pod subnet:

 

etcdctl \

 --endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

 --ca-file=/etc/kubernetes/cert/ca.pem \

 --cert-file=/etc/flanneld/cert/flanneld.pem \

 --key-file=/etc/flanneld/cert/flanneld-key.pem \

 get  /kubernetes/network/subnets/10.10.80.0-21

 

Output (results depend on the deployment):

(output screenshot omitted)

Check the node's flannel network information

(screenshot omitted)

 

The flannel.1 interface address is the first IP (.0) of the Pod subnet assigned to that node, and it is a /32 address.

[root@dev-k8s-node1 ~]# ip route show |grep flannel.1

(route output screenshot omitted)

 

 

Verify that the nodes can reach each other over the Pod network

After flannel is deployed on every node, check that a flannel interface was created (its name may be flannel0, flannel.0, flannel.1, etc.):

 

ssh dev-k8s-node2 "/usr/sbin/ip addr show flannel.1|grep -w inet"

(output screenshot omitted)

 

From each node, ping all the flannel interface IPs and make sure they are reachable:

 

ssh dev-k8s-node2 "ping -c 2 10.10.176.0"

(output screenshot omitted)

 

 

07. Deploy the highly available kube-apiserver cluster

Create the kubernetes certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > kubernetes-csr.json <<EOF
{
 "CN": "kubernetes",
 "hosts": [
   "127.0.0.1",
   "172.19.201.244",
   "172.19.201.249",
   "172.19.201.248",
   "172.19.201.242",
   "kubernetes",
   "kubernetes.default",
   "kubernetes.default.svc",
   "kubernetes.default.svc.cluster",
   "kubernetes.default.svc.cluster.local."
 ],
 "key": {
   "algo": "rsa",
   "size": 2048
 },
 "names": [
   {
     "C": "CN",
     "ST": "BeiJing",
     "L": "BeiJing",
     "O": "k8s",
     "OU": "4Paradigm"
   }
 ]
}
EOF

 

The kubernetes service IP is created automatically by the apiserver; it is normally the first IP of the range given by --service-cluster-ip-range and can later be retrieved with:

kubectl get svc kubernetes

(output screenshot omitted)

 

 

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \

 -ca-key=/opt/k8s/work/ca-key.pem \

 -config=/opt/k8s/work/ca-config.json \

 -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

ls kubernetes*pem

 

Copy the generated certificate and private key files to every master node:

 

cd /opt/k8s/work

mkdir -p /etc/kubernetes/cert

scp kubernetes*.pem root@dev-k8s-master1:/etc/kubernetes/cert/

 

 

Create the encryption configuration file

cd /opt/k8s/work
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
 - resources:
     - secrets
   providers:
     - aescbc:
         keys:
           - name: key1
             secret: $(head -c 32 /dev/urandom | base64)
     - identity: {}
EOF

 

Copy the encryption configuration file to /etc/kubernetes on the master nodes:

 

cd /opt/k8s/work

scp encryption-config.yaml root@dev-k8s-master1:/etc/kubernetes/

 

 

Create the audit policy file

cd /opt/k8s/work
cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
 # The following requests were manually identified as high-volume and low-risk, so drop them.
 - level: None
   resources:
     - group: ""
       resources:
         - endpoints
         - services
         - services/status
   users:
     - 'system:kube-proxy'
   verbs:
     - watch
 
 - level: None
   resources:
     - group: ""
       resources:
         - nodes
         - nodes/status
   userGroups:
     - 'system:nodes'
   verbs:
     - get
 
 - level: None
   namespaces:
     - kube-system
   resources:
     - group: ""
       resources:
         - endpoints
   users:
     - 'system:kube-controller-manager'
     - 'system:kube-scheduler'
     - 'system:serviceaccount:kube-system:endpoint-controller'
   verbs:
     - get
     - update
 
 - level: None
   resources:
     - group: ""
       resources:
         - namespaces
         - namespaces/status
         - namespaces/finalize
   users:
     - 'system:apiserver'
   verbs:
     - get
 
 # Don't log HPA fetching metrics.
 - level: None
   resources:
     - group: metrics.k8s.io
   users:
     - 'system:kube-controller-manager'
   verbs:
     - get
     - list
 
 # Don't log these read-only URLs.
 - level: None
   nonResourceURLs:
     - '/healthz*'
     - /version
     - '/swagger*'
 
 # Don't log events requests.
 - level: None
   resources:
     - group: ""
       resources:
         - events
 
 # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
 - level: Request
   omitStages:
     - RequestReceived
   resources:
     - group: ""
       resources:
         - nodes/status
         - pods/status
   users:
     - kubelet
     - 'system:node-problem-detector'
     - 'system:serviceaccount:kube-system:node-problem-detector'
   verbs:
     - update
     - patch
 
 - level: Request
   omitStages:
     - RequestReceived
   resources:
     - group: ""
       resources:
         - nodes/status
         - pods/status
   userGroups:
     - 'system:nodes'
   verbs:
     - update
     - patch
 
 # deletecollection calls can be large, don't log responses for expected namespace deletions
 - level: Request
   omitStages:
     - RequestReceived
   users:
     - 'system:serviceaccount:kube-system:namespace-controller'
   verbs:
     - deletecollection
 
 # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
 # so only log at the Metadata level.
 - level: Metadata
   omitStages:
     - RequestReceived
   resources:
     - group: ""
       resources:
         - secrets
         - configmaps
     - group: authentication.k8s.io
       resources:
         - tokenreviews
 # Get repsonses can be large; skip them.
 - level: Request
   omitStages:
     - RequestReceived
   resources:
     - group: ""
     - group: admissionregistration.k8s.io
     - group: apiextensions.k8s.io
     - group: apiregistration.k8s.io
     - group: apps
     - group: authentication.k8s.io
     - group: authorization.k8s.io
     - group: autoscaling
     - group: batch
     - group: certificates.k8s.io
     - group: extensions
     - group: metrics.k8s.io
     - group: networking.k8s.io
     - group: policy
     - group: rbac.authorization.k8s.io
     - group: scheduling.k8s.io
     - group: settings.k8s.io
     - group: storage.k8s.io
   verbs:
     - get
     - list
     - watch
 
 # Default level for known APIs
 - level: RequestResponse
   omitStages:
     - RequestReceived
   resources:
     - group: ""
     - group: admissionregistration.k8s.io
     - group: apiextensions.k8s.io
     - group: apiregistration.k8s.io
     - group: apps
     - group: authentication.k8s.io
     - group: authorization.k8s.io
     - group: autoscaling
     - group: batch
     - group: certificates.k8s.io
     - group: extensions
     - group: metrics.k8s.io
     - group: networking.k8s.io
     - group: policy
     - group: rbac.authorization.k8s.io
     - group: scheduling.k8s.io
     - group: settings.k8s.io
     - group: storage.k8s.io
     
 # Default level for all other requests.
 - level: Metadata
   omitStages:
     - RequestReceived
EOF

 

Distribute the audit policy file:

 

cd /opt/k8s/work

scp audit-policy.yaml root@dev-k8s-master1:/etc/kubernetes/audit-policy.yaml

 

 

Create the certificate used later to access metrics-server

Create the certificate signing request:

cat > proxy-client-csr.json <<EOF
{
 "CN": "aggregator",
 "hosts": [],
 "key": {
   "algo": "rsa",
   "size": 2048
 },
 "names": [
   {
     "C": "CN",
     "ST": "BeiJing",
     "L": "BeiJing",
     "O": "k8s",
     "OU": "4Paradigm"
   }
 ]
}
EOF


 

 

 

Generate the certificate and private key:

 

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

 -ca-key=/etc/kubernetes/cert/ca-key.pem  \

 -config=/etc/kubernetes/cert/ca-config.json  \

 -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client

 

ls proxy-client*.pem

Copy the generated certificate and private key files to every master node:

 

scp proxy-client*.pem root@dev-k8s-master1:/etc/kubernetes/cert/

 

 

Create the kube-apiserver systemd unit file

cd /opt/k8s/work

vim /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/data/k8s/k8s/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \
  --advertise-address=172.19.201.244 \
  --default-not-ready-toleration-seconds=360 \
  --default-unreachable-toleration-seconds=360 \
  --feature-gates=DynamicAuditing=true \
  --max-mutating-requests-inflight=2000 \
  --max-requests-inflight=4000 \
  --default-watch-cache-size=200 \
  --delete-collection-workers=2 \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
  --bind-address=172.19.201.244 \
  --secure-port=6443 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --insecure-port=0 \
  --audit-dynamic-configuration \
  --audit-log-maxage=15 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-truncate-enabled \
  --audit-log-path=/data/k8s/k8s/kube-apiserver/audit.log \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --profiling \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --enable-bootstrap-token-auth \
  --requestheader-allowed-names="aggregator" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-admission-plugins=NodeRestriction \
  --allow-privileged=true \
  --apiserver-count=3 \
  --event-ttl=168h \
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --kubelet-https=true \
  --kubelet-timeout=10s \
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32767 \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
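This unit file is for dev-k8s-master1 (172.19.201.244); on the other masters only --advertise-address and --bind-address change. A hedged sed sketch for dev-k8s-master2 (it matches the full flag strings on purpose so the --etcd-servers list is left untouched):

sed -i \
  -e 's/--advertise-address=172.19.201.244/--advertise-address=172.19.201.249/' \
  -e 's/--bind-address=172.19.201.244/--bind-address=172.19.201.249/' \
  /etc/systemd/system/kube-apiserver.service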

 

 

 

 

Start the kube-apiserver service

The working directory must be created before starting the service:

mkdir -p /data/k8s/k8s/kube-apiserver

systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver

 

Print the data kube-apiserver has written to etcd

ETCDCTL_API=3 etcdctl \

   --endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

   --cacert=/opt/k8s/work/ca.pem \

   --cert=/opt/k8s/work/etcd.pem \

   --key=/opt/k8s/work/etcd-key.pem \

   get /registry/ --prefix --keys-only
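With the EncryptionConfig in place, secrets should be stored in etcd with an aescbc prefix instead of plain JSON; a quick check (certificate paths reused from the command above; the secret name here is just an example):

kubectl create secret generic enc-test --from-literal=foo=bar
ETCDCTL_API=3 etcdctl \
   --endpoints=https://172.19.201.244:2379 \
   --cacert=/opt/k8s/work/ca.pem \
   --cert=/opt/k8s/work/etcd.pem \
   --key=/opt/k8s/work/etcd-key.pem \
   get /registry/secrets/default/enc-test | hexdump -C | head
# the stored value should begin with k8s:enc:aescbc:v1:key1, not readable JSON
kubectl delete secret enc-test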

 

Check the cluster information

$ kubectl cluster-info

Kubernetes master is running at https://172.19.201.242:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

 

$ kubectl get all --all-namespaces

NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE

default     service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   12m

 

$ kubectl get componentstatuses

(output screenshot omitted)

Check the ports kube-apiserver is listening on

sudo netstat -lnpt|grep kube

(output screenshot omitted)

 

Grant kube-apiserver access to the kubelet API

When commands such as kubectl exec, run, or logs are executed, the apiserver forwards the request to the kubelet's HTTPS port. The RBAC rule below authorizes the user of the certificate presented by the apiserver (kubernetes.pem, CN: kubernetes) to access the kubelet API:

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

(output screenshot omitted)

 

 

 

08. Deploy the highly available kube-controller-manager cluster

Create the kube-controller-manager certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-controller-manager-csr.json <<EOF
{
   "CN": "system:kube-controller-manager",
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "hosts": [
     "127.0.0.1",
     "172.19.201.244",
     "172.19.201.249",
"172.19.201.248",
     "172.19.201.242"
   ],
   "names": [
     {
       "C": "CN",
       "ST": "BeiJing",
       "L": "BeiJing",
       "O": "system:kube-controller-manager",
       "OU": "4Paradigm"
     }
   ]
}
EOF

 

Generate the certificate and private key:

cd /opt/k8s/work

cfssl gencert -ca=/opt/k8s/work/ca.pem \

 -ca-key=/opt/k8s/work/ca-key.pem \

 -config=/opt/k8s/work/ca-config.json \

 -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

 

ls kube-controller-manager*pem

Distribute the generated certificate and private key to every master node:

 

cd /opt/k8s/work

scp kube-controller-manager*.pem root@dev-k8s-master1:/etc/kubernetes/cert/

 

 

Create and distribute the kubeconfig file

kube-controller-manager uses a kubeconfig file to access the apiserver; it provides the apiserver address, the embedded CA certificate, and the kube-controller-manager certificate:

 

cd /opt/k8s/work

kubectl config set-cluster kubernetes \

 --certificate-authority=/opt/k8s/work/ca.pem \

 --embed-certs=true \

 --server=https://172.19.201.242:8443 \

 --kubeconfig=kube-controller-manager.kubeconfig

 

kubectl config set-credentials system:kube-controller-manager \

 --client-certificate=kube-controller-manager.pem \

 --client-key=kube-controller-manager-key.pem \

 --embed-certs=true \

 --kubeconfig=kube-controller-manager.kubeconfig

 

kubectl config set-context system:kube-controller-manager \

 --cluster=kubernetes \

 --user=system:kube-controller-manager \

 --kubeconfig=kube-controller-manager.kubeconfig

 

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

 

Distribute the kubeconfig to every master node:

 

cd /opt/k8s/work

scp kube-controller-manager.kubeconfig root@dev-k8s-master1:/etc/kubernetes/

Create the kube-controller-manager systemd unit file

cd /opt/k8s/work

cat /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
 
[Service]
WorkingDirectory=/data/k8s/k8s/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --leader-elect=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --kube-api-qps=1000 \
  --kube-api-burst=2000 \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

 

 

 

Create the directory

mkdir -p /data/k8s/k8s/kube-controller-manager

 

Start the service

systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager

 

kube-controller-manager listens on port 10252 and serves HTTPS requests:

sudo netstat -lnpt | grep kube-cont

(output screenshot omitted)

 

 

Grant kube-controller-manager the required permissions

kubectl create clusterrolebinding controller-manager:system:auth-delegator --user system:kube-controller-manager --clusterrole system:auth-delegator

 

 

kubectl describe clusterrole system:kube-controller-manager

(output screenshot omitted)

 

kubectl get clusterrole|grep controller

(output screenshot omitted)

 

kubectl describe clusterrole system:controller:deployment-controller

(output screenshot omitted)

 

Check the current leader

kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml

(output screenshot omitted)

 

 

09. Deploy the highly available kube-scheduler cluster

Create the kube-scheduler certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-scheduler-csr.json <<EOF
{
   "CN": "system:kube-scheduler",
   "hosts": [
     "127.0.0.1",
     "172.19.201.244",
     "172.19.201.249",
      "172.19.201.248",
     "172.19.201.242"
   ],
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
     {
       "C": "CN",
       "ST": "BeiJing",
       "L": "BeiJing",
       "O": "system:kube-scheduler",
       "OU": "4Paradigm"
     }
   ]
}
EOF

 

Generate the certificate and private key:

cd /opt/k8s/work

 

cfssl gencert -ca=/opt/k8s/work/ca.pem \

 -ca-key=/opt/k8s/work/ca-key.pem \

 -config=/opt/k8s/work/ca-config.json \

 -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

 

 

Distribute the generated certificate and private key to every master node:

cd /opt/k8s/work

scp kube-scheduler*.pem root@dev-k8s-master1:/etc/kubernetes/cert/

 

 

Create and distribute the kubeconfig file

kube-scheduler uses a kubeconfig file to access the apiserver; it provides the apiserver address, the embedded CA certificate, and the kube-scheduler certificate:

 

cd /opt/k8s/work

kubectl config set-cluster kubernetes \

 --certificate-authority=/opt/k8s/work/ca.pem \

 --embed-certs=true \

 --server=https://172.19.201.242:8443 \

 --kubeconfig=kube-scheduler.kubeconfig

 

kubectl config set-credentials system:kube-scheduler \

 --client-certificate=kube-scheduler.pem \

 --client-key=kube-scheduler-key.pem \

 --embed-certs=true \

 --kubeconfig=kube-scheduler.kubeconfig

 

kubectl config set-context system:kube-scheduler \

 --cluster=kubernetes \

 --user=system:kube-scheduler \

 --kubeconfig=kube-scheduler.kubeconfig

 

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

 

Distribute the kubeconfig to every master node:

cd /opt/k8s/work

scp kube-scheduler.kubeconfig root@dev-k8s-master1:/etc/kubernetes/

Create the kube-scheduler configuration file

cd /opt/k8s/work
cat <<EOF | sudo tee kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
 kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
 leaderElect: true
EOF

 

Distribute the kube-scheduler configuration file to every master node:

scp kube-scheduler.yaml root@dev-k8s-master1:/etc/kubernetes/

 

 

 

Create the kube-scheduler systemd unit file

cat /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
 
[Service]
WorkingDirectory=/data/k8s/k8s/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \
  --config=/etc/kubernetes/kube-scheduler.yaml \
  --address=127.0.0.1 \
  --kube-api-qps=100 \
  --logtostderr=true \
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0
 
[Install]
WantedBy=multi-user.target

 

 

 

Create the directory

mkdir -p /data/k8s/k8s/kube-scheduler

 

Start the kube-scheduler service

systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler

 

Check the service status

systemctl status kube-scheduler

 

View the exported metrics

sudo netstat -lnpt |grep kube-sch

(output screenshot omitted)

 

curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.19.201.244:10259/metrics |head

(output screenshot omitted)

 

Check the current leader

$ kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml

(output screenshot omitted)

 

 

10. Deploy the docker component

Download and distribute the docker binaries

Download the release package from the docker download page:

cd /opt/k8s/work

wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz

tar -xvf docker-18.09.6.tgz

 

Distribute the binaries to all worker nodes:

cd /opt/k8s/work

scp docker/*  root@dev-k8s-node1:/opt/k8s/bin/

ssh root@dev-k8s-node1 "chmod +x /opt/k8s/bin/*"

 

 

Create and distribute the systemd unit file on the worker nodes

cat /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
 
[Service]
WorkingDirectory=/data/k8s/docker
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
 
[Install]
WantedBy=multi-user.target

 

 

 

sudo iptables -P FORWARD ACCEPT

/sbin/iptables -P FORWARD ACCEPT

 

Distribute the systemd unit file to all worker machines:

cd /opt/k8s/work

scp docker.service root@dev-k8s-node1:/etc/systemd/system/

 

Configure and distribute the docker configuration file

Use domestic registry mirrors to speed up image pulls and raise the download concurrency (dockerd must be restarted for the change to take effect):

cd /opt/k8s/work

Create the docker configuration file:

mkdir -p  /etc/docker/ /data/k8s/docker/{data,exec}

cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
    "insecure-registries": ["docker02:35000"],
    "max-concurrent-downloads": 20,
    "live-restore": true,
    "max-concurrent-uploads": 10,
    "debug": true,
    "data-root": "/data/k8s/docker/data",
    "exec-root": "/data/k8s/docker/exec",
    "log-opts": {
      "max-size": "100m",
      "max-file": "5"
    }
}

 

 

Distribute the docker configuration file to all worker nodes:

mkdir -p  /etc/docker/ /data/k8s/docker/{data,exec}

scp /etc/docker/daemon.json root@dev-k8s-node1:/etc/docker/daemon.json

 

Start the docker service

systemctl daemon-reload && systemctl enable docker && systemctl restart docker

 

 

Check the service status

systemctl status docker|grep active
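Because dockerd reads DOCKER_NETWORK_OPTIONS from /run/flannel/docker, the docker0 bridge should come up inside the /21 subnet flannel assigned to that node; a quick check on a worker:

cat /run/flannel/docker
ip addr show docker0 | grep -w inet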

 


 


12. Deploy the kubelet component

Create the kubelet bootstrap kubeconfig files

cd /opt/k8s/work
vim /opt/k8s/bin/environment.sh
#!/bin/bash
KUBE_APISERVER="https://172.19.201.242:8443"
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
NODE_NAMES=(dev-k8s-node1 dev-k8s-node2 dev-k8s-node3)
 
 
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
 do
   echo ">>> ${node_name}"
 
    # create a bootstrap token
   export BOOTSTRAP_TOKEN=$(kubeadm token create \
     --description kubelet-bootstrap-token \
     --groups system:bootstrappers:${node_name} \
     --kubeconfig ~/.kube/config)
 
    # set cluster parameters
   kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/cert/ca.pem \
     --embed-certs=true \
     --server=${KUBE_APISERVER} \
     --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
 
    # set client credentials
   kubectl config set-credentials kubelet-bootstrap \
     --token=${BOOTSTRAP_TOKEN} \
     --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
 
    # set context parameters
   kubectl config set-context default \
     --cluster=kubernetes \
     --user=kubelet-bootstrap \
     --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
 
    # set the default context
   kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
 done

 

The kubeconfig written here contains a token rather than a certificate; the client certificate is created later by kube-controller-manager.

View the tokens kubeadm created for each node:

kubeadm token list --kubeconfig ~/.kube/config

(output screenshot omitted)

 

Distribute the bootstrap kubeconfig files to all worker nodes

scp -pr kubelet-bootstrap-dev-k8s-master1.kubeconfig root@dev-k8s-master1:/etc/kubernetes/kubelet-bootstrap.kubeconfig

Note: copy each generated file to its corresponding host.

Create and distribute the kubelet configuration file

cat > /etc/kubernetes/kubelet-config.yaml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
 anonymous:
   enabled: false
 webhook:
   enabled: true
 x509:
   clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
 mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
 - "10.254.0.2"
podCIDR: ""
maxPods: 220
serializeImagePulls: false
hairpinMode: promiscuous-bridge
cgroupDriver: cgroupfs
runtimeRequestTimeout: "15m"
rotateCertificates: true
serverTLSBootstrap: true
readOnlyPort: 0
port: 10250
address: "172.19.201.247"
EOF

 

 

Create and distribute the kubelet configuration file for each node (distribute to the worker nodes):

scp -pr /etc/kubernetes/kubelet-config.yaml root@dev-k8s-master2:/etc/kubernetes/
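The example above carries dev-k8s-node1's IP (172.19.201.247) in the address field; after copying, change it to each node's own address. For dev-k8s-node2 (172.19.201.246 in the host table) this is roughly:

sed -i 's/address: "172.19.201.247"/address: "172.19.201.246"/' /etc/kubernetes/kubelet-config.yaml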

 

Create and distribute the kubelet systemd unit file

cat  /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
 
[Service]
WorkingDirectory=/data/k8s/k8s/kubelet
ExecStart=/opt/k8s/bin/kubelet \
 --root-dir=/data/k8s/k8s/kubelet \
 --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
 --cert-dir=/etc/kubernetes/cert \
 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
 --config=/etc/kubernetes/kubelet-config.yaml \
 --hostname-override=dev-k8s-node1 \
 --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \
 --allow-privileged=true \
 --event-qps=0 \
 --kube-api-qps=1000 \
 --kube-api-burst=2000 \
 --registry-qps=0 \
 --image-pull-progress-deadline=30m \
 --logtostderr=true \
 --v=2
Restart=always
RestartSec=5
StartLimitInterval=0
 
[Install]
WantedBy=multi-user.target

 

 

Create and distribute the kubelet systemd unit file for each node:

scp -pr /etc/systemd/system/kubelet.service root@dev-k8s-node1:/etc/systemd/system/

 

Bootstrap token auth and granting permissions:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

 

Create the working directory

mkdir -p /data/k8s/k8s/kubelet

 

Turn swap off, otherwise kubelet will fail to start:

/usr/sbin/swapoff -a

 

Start the kubelet service

systemctl daemon-reload  && systemctl restart kubelet && systemctl enable kubelet

 

Check that the service has started

systemctl status kubelet |grep active

 

Automatically approve CSR requests

Create three ClusterRoleBindings, used respectively to auto-approve client certificates and to renew client and server certificates:

cd /opt/k8s/work

cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
 resources: ["certificatesigningrequests/selfnodeserver"]
 verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

 

Apply the configuration:

kubectl apply -f csr-crb.yaml

 

Check the kubelet status

After a short wait (1-10 minutes), the client CSRs of the three nodes are all approved automatically.

Manually approve the server cert CSRs

For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests; these must be approved manually.

kubectl get csr

(output screenshot omitted)

kubectl certificate approve csr-bjtp4
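If several server CSRs are waiting, they can also be approved in one pass; review the list first, since this approves everything currently Pending:

kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve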

 

13. Deploy the kube-proxy component

kube-proxy runs on all worker nodes; it watches the apiserver for changes to services and endpoints and creates routing rules to load-balance service traffic.

Create the kube-proxy certificate

Create the certificate signing request:

cd /opt/k8s/work

cat > kube-proxy-csr.json <<EOF
{
 "CN": "system:kube-proxy",
 "key": {
   "algo": "rsa",
   "size": 2048
 },
 "names": [
   {
     "C": "CN",
     "ST": "BeiJing",
     "L": "BeiJing",
     "O": "k8s",
     "OU": "4Paradigm"
   }
 ]
}
EOF

 

 

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \

 -ca-key=/opt/k8s/work/ca-key.pem \

 -config=/opt/k8s/work/ca-config.json \

 -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

 

 

Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \

 --certificate-authority=/opt/k8s/work/ca.pem \

 --embed-certs=true \

 --server=https://172.19.201.242:8443 \

 --kubeconfig=kube-proxy.kubeconfig

 

kubectl config set-credentials kube-proxy \

 --client-certificate=kube-proxy.pem \

 --client-key=kube-proxy-key.pem \

 --embed-certs=true \

 --kubeconfig=kube-proxy.kubeconfig

 

kubectl config set-context default \

 --cluster=kubernetes \

 --user=kube-proxy \

 --kubeconfig=kube-proxy.kubeconfig

 

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

 

Distribute the kubeconfig file (copy it to the worker nodes):

scp kube-proxy.kubeconfig root@dev-k8s-node1:/etc/kubernetes/

 

 

Create the kube-proxy configuration file

cat >  /etc/kubernetes/kube-proxy-config.yaml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
bindAddress: 172.19.201.247
clusterCIDR: 10.10.0.0/16
healthzBindAddress: 172.19.201.247:10256
hostnameOverride: dev-k8s-node1
metricsBindAddress: 172.19.201.247:10249
mode: "ipvs"
EOF

 

Note: adjust the configuration file on each node, filling in that node's own hostname and addresses.

Create and distribute the kube-proxy configuration file for each node (copy it to all worker nodes):

scp -pr /etc/kubernetes/kube-proxy-config.yaml root@dev-k8s-node1:/etc/kubernetes/

   

 

Create and distribute the kube-proxy systemd unit file

cat  /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/data/k8s/k8s/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy-config.yaml \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

 

The working directory must be created first:

mkdir -p /data/k8s/k8s/kube-proxy

 

Start the kube-proxy service

systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy

 

Check the result and make sure the status is active (running):

systemctl status kube-proxy|grep active

 

Check the listening ports and metrics

netstat -lnpt|grep kube-proxy
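Since mode is set to "ipvs", the virtual servers kube-proxy creates for each Service can be inspected with ipvsadm (installed during system initialization); the 10.254.0.1:443 entry should list the three apiserver endpoints:

/usr/sbin/ipvsadm -ln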

 

14. Deploy the coredns add-on

Modify the configuration file

Unpack the downloaded kubernetes-server-linux-amd64.tar.gz, then unpack the kubernetes-src.tar.gz inside it.

cd /opt/k8s/work/kubernetes/

tar -xzvf kubernetes-src.tar.gz

 

The coredns directory is cluster/addons/dns:

cd /opt/k8s/work/kubernetes/cluster/addons/dns/coredns

cp coredns.yaml.base coredns.yaml

source /opt/k8s/bin/environment.sh

sed -i -e "s/__PILLAR__DNS__DOMAIN__/${CLUSTER_DNS_DOMAIN}/" -e "s/__PILLAR__DNS__SERVER__/${CLUSTER_DNS_SVC_IP}/" coredns.yaml

Create coredns

kubectl create -f coredns.yaml

Check that coredns works

[root@dev-k8s-master1 test]# kubectl get pods  -n kube-system

NAME                                    READY   STATUS    RESTARTS   AGE

coredns-6dcf4d5b7b-tvn26                1/1     Running   5          17h
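A quick functional test is to resolve the kubernetes service from a short-lived pod (busybox:1.28 is used here on purpose, since newer busybox tags have a broken nslookup; the image tag is an assumption, not part of the original steps):

kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default
# expected: Server 10.254.0.2, and kubernetes.default resolving to 10.254.0.1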

 

15. Deploy the ingress add-on

Download the source package

wget https://github.com/kubernetes/ingress-nginx/archive/nginx-0.20.0.tar.gz

tar -zxvf nginx-0.20.0.tar.gz

 

Enter the working directory

cd ingress-nginx-nginx-0.20.0/deploy

Create the ingress controller

kubectl create -f mandatory.yaml

 

cd /opt/k8s/work/ingress-nginx-nginx-0.20.0/deploy/provider/baremetal

Create the ingress service

kubectl create -f service-nodeport.yaml

Verify that ingress-nginx has started

kubectl get pods -n ingress-nginx

 

(output screenshot omitted)

 

16. Deploy the dashboard add-on

Modify the configuration file

cd /opt/k8s/work/kubernetes/cluster/addons/dashboard

 

Modify the service definition to use type NodePort so that the dashboard can be reached from outside at NodeIP:NodePort:

 

cat dashboard-service.yaml

apiVersion: v1
kind: Service
metadata:
 name: kubernetes-dashboard
 namespace: kube-system
 labels:
   k8s-app: kubernetes-dashboard
   kubernetes.io/cluster-service: "true"
   addonmanager.kubernetes.io/mode: Reconcile
spec:
 type: NodePort # add this line
 selector:
   k8s-app: kubernetes-dashboard
 ports:
 - port: 443
   targetPort: 8443

 

Apply all the definition files

$ ls *.yaml

dashboard-configmap.yaml  dashboard-controller.yaml  dashboard-rbac.yaml  dashboard-secret.yaml  dashboard-service.yaml

 

$ kubectl apply -f  .

 

View the assigned NodePort

$ kubectl get deployment kubernetes-dashboard  -n kube-system

(output screenshot omitted)

 

Create the token and kubeconfig file used to log in to the Dashboard

The dashboard only supports token authentication by default (client certificate authentication is not supported), so when a kubeconfig file is used the token has to be written into it.

 

Create a login token

kubectl create sa dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')

DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')

echo ${DASHBOARD_LOGIN_TOKEN}

Use the token printed above to log in to the Dashboard.

 

Create a kubeconfig file that uses the token

kubectl config set-cluster kubernetes \

 --certificate-authority=/etc/kubernetes/cert/ca.pem \

 --embed-certs=true \

 --server=https://172.19.201.242:8443 \

 --kubeconfig=dashboard.kubeconfig

 

# set client credentials using the token created above

kubectl config set-credentials dashboard_user \

 --token=${DASHBOARD_LOGIN_TOKEN} \

 --kubeconfig=dashboard.kubeconfig

 

# set context parameters

kubectl config set-context default \

 --cluster=kubernetes \

 --user=dashboard_user \

 --kubeconfig=dashboard.kubeconfig

 

# set the default context

kubectl config use-context default --kubeconfig=dashboard.kubeconfig

Log in to the Dashboard with the generated dashboard.kubeconfig.

 

 


 

             

 

 

 

17. Troubleshooting

When a new node is added to the cluster, pods created on the new node cannot be assigned an IP, with the following error:

Warning  FailedCreatePodSandBox  72s (x26 over 6m40s)  kubelet, dev-k8s-master2  Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1": Error response from daemon: pull access denied for registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64, repository does not exist or may require 'docker login'

 

Fix:

On the node:

docker pull lc13579443/pause-amd64

docker tag lc13579443/pause-amd64 registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1

Restart kubelet:

systemctl daemon-reload && systemctl restart kubelet
