Almost all operations on a Kubernetes cluster go through the kube-apiserver component, which exposes an HTTP RESTful API to clients inside and outside the cluster. Note that authentication and authorization only apply to the HTTPS form of the API: if a client connects to kube-apiserver over plain HTTP, no authentication or authorization is performed. One possible setup is therefore to use HTTP between in-cluster components and HTTPS for everything outside the cluster, which improves security without adding too much complexity.
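As a quick illustration of the HTTPS path (a hedged sketch: it assumes the admin certificate generated later in this guide and the kube-nginx proxy address https://127.0.0.1:8443), an authenticated request to the secure API looks like this:

# Smoke test against the secure port, using the admin client cert created later in this guide
curl --cacert /etc/kubernetes/cert/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://127.0.0.1:8443/version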
Kubernetes offers several authentication mechanisms. For intra-cluster communication you can use mutual TLS (HTTPS) authentication, or one-way TLS combined with token- or username/password-based authentication. Kubernetes is usually deployed on an internal network using private IP addresses, and public CAs only sign certificates for domain names, so this guide uses a self-built CA.
Kubernetes 1.12.3
Docker 18.09.0-ce
Etcd 3.3.10
Flanneld 0.10.0
Plugins: CoreDNS, Dashboard, Heapster (influxdb, grafana), Metrics-Server, EFK (elasticsearch, fluentd, kibana)
Image registry: docker registry, harbor
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@k8s-master ~]# uname -r
3.10.0-327.el7.x86_64
[root@k8s-master ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
[root@k8s-master ~]# getenforce
Disabled
| Node roles                 | Hostname   | IP         |
| Master, etcd, registry     | K8s-node-1 | 10.0.0.206 |
| Node1: kube-proxy, kubelet | K8s-node-2 | 10.0.0.207 |
| Node2: kube-proxy, kubelet | K8s-node-3 | 10.0.0.208 |
cat >> /etc/hosts <<EOF
10.0.0.206 K8s-node-1
10.0.0.207 K8s-node-2
10.0.0.208 K8s-node-3
EOF
ssh-keygen -t rsa
ssh-copy-id root@K8s-node-1
ssh-copy-id root@K8s-node-2
ssh-copy-id root@K8s-node-3
echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc
yum install -y epel-release
yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp
/usr/sbin/modprobe ip_vs
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Disable swap; it may only be used when the system OOMs
vm.swappiness=0
# Do not check whether physical memory is sufficient before overcommitting
vm.overcommit_memory=1
# Do not panic on OOM (let the OOM killer handle it)
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
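To confirm the settings took effect, the keys can be queried directly (a quick check, not in the original steps; the net.bridge keys require the br_netfilter module to be loaded):

/usr/sbin/modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness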
yum -y install ntp ntpdate
[root@k8s-master ~]# crontab -l
#sync time
5 * * * * /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1
systemd's journald is the default logging tool on CentOS 7; it records logs for the whole system, the kernel, and every service unit. Compared with rsyslog, the logs recorded by journald have the following advantages:

It can log to memory or to the filesystem (by default it logs to memory, under /run/log/journal);
It can cap the disk space used and guarantee remaining free disk space;
It can limit log file size and retention time.

By default journald also forwards logs to rsyslog, which writes the same logs multiple times: /var/log/messages fills with irrelevant entries, making later inspection inconvenient and also hurting system performance.
mkdir /var/log/journal # directory for persisted logs
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage at 10G
SystemMaxUse=10G
# Cap a single log file at 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
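To verify that journald is now persisting to disk, check the journal directory and its disk usage (an optional check):

journalctl --disk-usage
ls /var/log/journal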
mkdir -p /opt/k8s/{bin,work} /etc/kubernetes/cert /etc/etcd/cert
#!/usr/bin/bash
# Encryption key used to generate the EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# Array of cluster machine IPs
export NODE_IPS=(10.0.0.206 10.0.0.207 10.0.0.208)
# Array of hostnames matching the IPs above
export NODE_NAMES=(K8s-node-1 K8s-node-2 K8s-node-3)
# etcd cluster client endpoints
export ETCD_ENDPOINTS="https://10.0.0.206:2379,https://10.0.0.207:2379,https://10.0.0.208:2379"
# IPs and ports used for etcd inter-cluster communication
export ETCD_NODES="K8s-node-1=https://10.0.0.206:2380,K8s-node-2=https://10.0.0.207:2380,K8s-node-3=https://10.0.0.208:2380"
# Address and port of the kube-apiserver reverse proxy (kube-nginx)
export KUBE_APISERVER="https://127.0.0.1:8443"
# Network interface used for inter-node traffic
export IFACE="eth0"
# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"
# etcd WAL directory; ideally an SSD partition, or at least a different partition from ETCD_DATA_DIR
export ETCD_WAL_DIR="/data/k8s/etcd/wal"
# Data directory for the k8s components
export K8S_DIR="/data/k8s/k8s"
# docker data directory
export DOCKER_DIR="/data/k8s/docker"

## The parameters below normally do not need to be changed

# Token used for TLS Bootstrapping; can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"

# Use currently unused ranges for the service and Pod networks.
# Service network: unroutable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy)
SERVICE_CIDR="10.254.0.0/16"
# Pod network: a /16 is recommended; unroutable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"
# Service (NodePort) port range
export NODE_PORT_RANGE="30000-32767"
# flanneld network configuration prefix
export FLANNEL_ETCD_PREFIX="/kubernetes/network"
# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"
# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"
# Cluster DNS domain (without trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"
# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH
Copy the global-variables script to the /opt/k8s/bin directory on all nodes:
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp /opt/k8s/bin/environment.sh root@${node_ip}:/opt/k8s/bin/
  ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
The base environment is now ready. Unless otherwise noted, all remaining operations are executed on K8s-node-1 and then distributed to the other nodes.
Install CFSSL
mkdir -p /opt/k8s/cert && cd /opt/k8s
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 /opt/k8s/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo
chmod +x /opt/k8s/bin/*
export PATH=/opt/k8s/bin:$PATH
The CA certificate is shared by all nodes in the cluster; only one CA certificate needs to be created, and every certificate created later is signed by it.
Create the config file
cd /opt/k8s/work
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
server auth: a client can use this certificate to verify certificates presented by servers;
client auth: a server can use this certificate to verify certificates presented by clients;
Create the certificate signing request (CSR) file
cd /opt/k8s/work
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cd /opt/k8s/work
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
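Before distributing the CA it is worth inspecting it (an optional check, using the cfssl-certinfo binary installed above):

cfssl-certinfo -cert ca.pem
# the output should show CN=kubernetes and the certificate's validity period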
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh # import the NODE_IPS variable
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
  scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert
done
cd /opt/k8s/work
wget https://dl.k8s.io/v1.12.3/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kubernetes/client/bin/kubectl root@${node_ip}:/opt/k8s/bin/
  ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
kubectl talks to the apiserver over the secure HTTPS port, and the apiserver authenticates and authorizes the certificate the client presents.
As the cluster management tool, kubectl needs to be granted the highest privileges, so here we create an admin certificate with full privileges.
cd /opt/k8s/work
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubectl.kubeconfig
# Set client credentials
kubectl config set-credentials admin \
  --client-certificate=/opt/k8s/work/admin.pem \
  --client-key=/opt/k8s/work/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig
# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
--certificate-authority: the root certificate used to verify the kube-apiserver certificate;
--client-certificate, --client-key: the admin certificate and private key just generated, used when connecting to kube-apiserver;
--embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (without it, only the certificate file paths are written).
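You can confirm the certificates really were embedded (an optional check):

kubectl config view --kubeconfig=kubectl.kubeconfig
# the certificate fields should appear as redacted/embedded data rather than file paths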
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ~/.kube"
  scp kubectl.kubeconfig root@${node_ip}:~/.kube/config
done
Each node now has the admin kubeconfig saved as the ~/.kube/config file.

Download and unpack etcd:

cd /opt/k8s/work
wget https://github.com/coreos/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp etcd-v3.3.10-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
  ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
cd /opt/k8s/work
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.0.0.206",
    "10.0.0.207",
    "10.0.0.208"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*pem
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
  scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
done
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
WorkingDirectory, --data-dir: set the working directory and data directory to ${ETCD_DATA_DIR}; this directory must be created before the service starts;
--wal-dir: the WAL directory; for better performance, usually an SSD or a different disk from --data-dir;
--name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
--cert-file, --key-file: the certificate and private key used for communication between the etcd server and clients;
--trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
--peer-cert-file, --peer-key-file: the certificate and private key used for communication between etcd peers;
--peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them.

Render a unit file for each node:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service
done
ls *.service
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd" &
done
systemctl start etcd will hang for a while on the first node; this is normal, as it is waiting for the other cluster members to come up.

Check the service status:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status etcd|grep Active"
done
If startup fails, check the logs:

journalctl -u etcd
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/opt/k8s/work/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
done
Output:
>>> 10.0.0.206
https://10.0.0.206:2379 is healthy: successfully committed proposal: took = 4.622709ms
>>> 10.0.0.207
https://10.0.0.207:2379 is healthy: successfully committed proposal: took = 3.621197ms
>>> 10.0.0.208
https://10.0.0.208:2379 is healthy: successfully committed proposal: took = 3.186656ms
Check the current leader
source /opt/k8s/bin/environment.sh
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  -w table --cacert=/opt/k8s/work/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status
Output:
+-------------------------+------------------+---------+---------+-----------+-----------+------------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+-------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://10.0.0.206:2379 | 23d31ba59ca79fa0 | 3.3.10 | 20 kB | false | 2 | 8 |
| https://10.0.0.207:2379 | 2323451019f6428d | 3.3.10 | 20 kB | false | 2 | 8 |
| https://10.0.0.208:2379 | 7a42012c95def99e | 3.3.10 | 20 kB | true | 2 | 8 |
+-------------------------+------------------+---------+---------+-----------+-----------+------------+
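The cluster membership can be listed with the same flags (an extra check, not in the original steps):

source /opt/k8s/bin/environment.sh
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  -w table --cacert=/opt/k8s/work/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} member list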
cd /opt/k8s/work
mkdir flannel
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C flannel
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp flannel/{flanneld,mk-docker-opts.sh} root@${node_ip}:/opt/k8s/bin/
  ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
cd /opt/k8s/work
cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
ls flanneld*pem
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/flanneld/cert"
  scp flanneld*.pem root@${node_ip}:/etc/flanneld/cert
done
Note: this step only needs to be performed once. flanneld v0.10 reads its configuration through the etcd v2 API, which is why the v2-style etcdctl invocation (set, --ca-file, and so on) is used here.
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/opt/k8s/work/ca.pem \
  --cert-file=/opt/k8s/work/flanneld.pem \
  --key-file=/opt/k8s/work/flanneld-key.pem \
  set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'
The Pod network written here (${CLUSTER_CIDR}) must match the value of kube-controller-manager's --cluster-cidr parameter.

Create the flanneld systemd unit file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \\
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \\
  -etcd-endpoints=${ETCD_ENDPOINTS} \\
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \\
  -iface=${IFACE} \\
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp flanneld.service root@${node_ip}:/etc/systemd/system/
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status flanneld|grep Active"
done
If startup fails, check the logs:

journalctl -u flanneld
Check the network configuration stored in etcd:

source /opt/k8s/bin/environment.sh
etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/config
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh ${node_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
done
Output:
>>> 10.0.0.206
    inet 172.30.168.0/32 scope global flannel.1
>>> 10.0.0.207
    inet 172.30.160.0/32 scope global flannel.1
>>> 10.0.0.208
    inet 172.30.112.0/32 scope global flannel.1
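A simple cross-node connectivity check (a sketch: 172.30.160.0 is the flannel.1 address of 10.0.0.207 in the sample output above; substitute the addresses from your own output):

source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  # each node should be able to reach the flannel.1 address of the other nodes
  ssh ${node_ip} "ping -c 1 172.30.160.0"
done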
cd /opt/k8s/work
wget http://nginx.org/download/nginx-1.15.3.tar.gz
tar -xzvf nginx-1.15.3.tar.gz
cd /opt/k8s/work/nginx-1.15.3
mkdir nginx-prefix
./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
cd /opt/k8s/work/nginx-1.15.3
make && make install
--with-stream: enables the layer-4 transparent forwarding (TCP proxy) module;
--without-xxx: disables all other modules, minimizing the dynamic-link dependencies of the resulting binary.

mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}
cp /opt/k8s/work/nginx-1.15.3/nginx-prefix/sbin/nginx /opt/k8s/kube-nginx/sbin/kube-nginx
chmod a+x /opt/k8s/kube-nginx/sbin/*
cat > /opt/k8s/kube-nginx/conf/kube-nginx.conf <<'EOF'
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 10.0.0.206:6443 max_fails=3 fail_timeout=30s;
        server 10.0.0.207:6443 max_fails=3 fail_timeout=30s;
        server 10.0.0.208:6443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF

The EOF delimiter is quoted so that bash does not expand $remote_addr when writing the file.
cat > /etc/systemd/system/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Start the kube-nginx service
systemctl daemon-reload && systemctl enable kube-nginx && systemctl restart kube-nginx
systemctl status kube-nginx |grep 'Active:'
journalctl -u kube-nginx
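To confirm the proxy is listening on the local port defined above (an optional check):

ss -lnt | grep 8443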
The Kubernetes master nodes run the following components: kube-apiserver, kube-controller-manager, and kube-scheduler.
cd /opt/k8s/work
wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf kubernetes-src.tar.gz
Distribute the files to the cluster nodes
cd /opt/k8s/work/kubernetes
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp server/bin/* root@${node_ip}:/opt/k8s/bin/
  ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
Create the certificate signing request
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.0.0.206",
    "10.0.0.207",
    "10.0.0.208",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
The hosts field specifies the IPs and domain names authorized to use the certificate; here it lists the VIP, the apiserver node IPs, and the kubernetes service IP and domain names.
The last character of a domain name must not be a dot (e.g. kubernetes.default.svc.cluster.local. is invalid), otherwise parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.".
If you use a domain other than cluster.local, e.g. opsnull.com, change the last two domain names in the list to kubernetes.default.svc.opsnull and kubernetes.default.svc.opsnull.com.
The kubernetes service IP is created automatically by the apiserver; it is usually the first IP of the range given by --service-cluster-ip-range and can be obtained later with:

kubectl get svc kubernetes
Generate the certificate and private key
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
ls kubernetes*pem
Distribute the certificate and private key to the master nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/kubernetes/cert"
  scp kubernetes*.pem root@${node_ip}:/etc/kubernetes/cert/
done
Create the encryption config file
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
Distribute the encryption config file to the /etc/kubernetes directory on the cluster nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp encryption-config.yaml root@${node_ip}:/etc/kubernetes/
done
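Once the apiserver is running (later in this section), encryption at rest can be verified — a hedged sketch that creates a throwaway secret named test-secret and reads its raw record from etcd:

kubectl create secret generic test-secret -n default --from-literal=foo=bar
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=https://10.0.0.206:2379 \
  --cacert=/opt/k8s/work/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  get /registry/secrets/default/test-secret | hexdump -C | head
# the stored value should begin with k8s:enc:aescbc:v1:key1 rather than plaintext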
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --anonymous-auth=false \\
  --experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --advertise-address=##NODE_IP## \\
  --bind-address=##NODE_IP## \\
  --insecure-port=0 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
  --kubelet-https=true \\
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \\
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --max-mutating-requests-inflight=2000 \\
  --max-requests-inflight=4000 \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
  --event-ttl=168h \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Create a systemd unit file for each node
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${NODE_IPS[i]}.service
done
ls kube-apiserver*.service
Distribute the generated systemd unit files
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
done
Check the kube-apiserver status
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
done
If startup fails, check the logs
journalctl -u kube-apiserver
[root@k8s-node-1 work]# kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-node-1 work]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   3m37s
[root@k8s-node-1 work]# kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
etcd-2               Healthy     {"health":"true"}
etcd-1               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}
[root@k8s-node-1 ~]# netstat -lnpt|grep kube
tcp        0      0 10.0.0.206:6443         0.0.0.0:*               LISTEN      26023/kube-apiserve
When you run kubectl exec, run, logs, and similar commands, the apiserver forwards the request to the kubelet. Here we define an RBAC rule that authorizes the apiserver to call the kubelet API.
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Create the certificate signing request
cd /opt/k8s/work
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.0.0.206",
    "10.0.0.207",
    "10.0.0.208"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "4Paradigm"
    }
  ]
}
EOF
Generate the certificate and private key
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*pem
Distribute the certificate and private key to the cluster nodes:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager*.pem root@${node_ip}:/etc/kubernetes/cert/
done
The kubeconfig file contains all the information needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate.
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Distribute the kubeconfig to the cluster nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager.kubeconfig root@${node_ip}:/etc/kubernetes/
done
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --port=0 \\
  --secure-port=10252 \\
  --bind-address=127.0.0.1 \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=8760h \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --leader-elect=true \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --horizontal-pod-autoscaler-use-rest-clients=true \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  --use-service-account-credentials=true \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Distribute the systemd unit file to all cluster nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager.service root@${node_ip}:/etc/systemd/system/
done
Grant the kube-controller-manager user permission to delegate authentication and authorization decisions to the apiserver:

kubectl create clusterrolebinding controller-manager:system:auth-delegator --user system:kube-controller-manager --clusterrole system:auth-delegator
Start the service
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done
Check the service status across the cluster
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-controller-manager|grep Active"
done
Make sure the status is active (running); otherwise check the logs to find the cause:
journalctl -u kube-controller-manager
Stop the kube-controller-manager service on one or two nodes and watch the logs on the remaining nodes to see whether one of them acquires leadership.
systemctl stop kube-controller-manager.service
Check the current leader
[root@k8s-node-1 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-node-1_e1878012-7c9f-11e9-904c-000c290e828c","leaseDurationSeconds":15,"acquireTime":"2019-05-22T14:43:10Z","renewTime":"2019-05-22T14:44:39Z","leaderTransitions":3}'
  creationTimestamp: 2019-05-22T14:23:04Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "1706"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 1ce1e9ba-7c9d-11e9-b7a4-000c290e828c
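To extract just the leader record (an optional convenience; the dots in the annotation key are escaped for jsonpath):

kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'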
Create the certificate signing request
cd /opt/k8s/work
cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "10.0.0.206",
    "10.0.0.207",
    "10.0.0.208"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "4Paradigm"
    }
  ]
}
EOF
Generate the certificate and private key:
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
ls kube-scheduler*pem
Create the kubeconfig
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Distribute the kubeconfig to all nodes:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-scheduler.kubeconfig root@${node_ip}:/etc/kubernetes/
done
cat <<EOF | sudo tee kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
Distribute the kube-scheduler config file to all nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-scheduler.yaml root@${node_ip}:/etc/kubernetes/
done
cd /opt/k8s/work
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --config=/etc/kubernetes/kube-scheduler.yaml \\
  --address=127.0.0.1 \\
  --kube-api-qps=100 \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
Distribute the systemd unit file to all nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-scheduler.service root@${node_ip}:/etc/systemd/system/
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-scheduler|grep Active"
done
Make sure the status is active (running); otherwise check the logs
journalctl -u kube-scheduler
Stop the kube-scheduler service on any node and check (via the systemd logs) whether another node acquires leadership.
systemctl stop kube-scheduler.service
Check the current leader
kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
The Kubernetes worker nodes run the following components: docker, kubelet, and kube-proxy.
Download the latest release from https://download.docker.com/linux/static/stable/x86_64/
Download and unpack
cd /opt/k8s/work
wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.0.tgz
tar -xvf docker-18.09.0.tgz
Distribute the binaries to all worker nodes:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp docker/* root@${node_ip}:/opt/k8s/bin/
  ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
cd /opt/k8s/work
cat > docker.service <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
WorkingDirectory=##DOCKER_DIR##
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
The EOF delimiter is quoted so that bash does not expand variables in the document, such as $DOCKER_NETWORK_OPTIONS;
dockerd invokes other docker binaries at runtime, such as docker-proxy, so the directory containing them must be added to the PATH environment variable;
flanneld writes its network configuration to the /run/flannel/docker file at startup; before starting, dockerd reads the DOCKER_NETWORK_OPTIONS environment variable from that file and uses it to set the docker0 bridge subnet;
if multiple EnvironmentFile options are specified, /run/flannel/docker must come last, to make sure docker0 uses the bip parameter generated by flanneld;
docker must run as root;
starting with version 1.13, docker may set the default policy of the iptables FORWARD chain to DROP, which breaks pinging Pod IPs on other nodes; when this happens, set the policy back to ACCEPT manually:
iptables -P FORWARD ACCEPT
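The ACCEPT policy is lost whenever docker rewrites the chain or the host reboots; one common workaround (a sketch, not part of the original text) is a systemd drop-in that re-applies it every time docker starts:

# /etc/systemd/system/docker.service.d/10-forward-accept.conf (hypothetical drop-in path)
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT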
Distribute the systemd unit file to all worker nodes (see the loop sketched below).
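The docker.service file above still contains the ##DOCKER_DIR## placeholder, so it must be substituted before copying; the original omits this loop, so the following is reconstructed from the same pattern used in the other distribution steps of this guide:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
# replace the placeholder with the real docker data directory
sed -i -e "s|##DOCKER_DIR##|${DOCKER_DIR}|" docker.service
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp docker.service root@${node_ip}:/etc/systemd/system/
done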
Use domestic registry mirror servers to speed up image pulls, and increase the download concurrency (restart dockerd to take effect).
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > docker-daemon.json <<EOF
{
  "registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
  "insecure-registries": ["docker02:35000"],
  "max-concurrent-downloads": 20,
  "live-restore": true,
  "max-concurrent-uploads": 10,
  "debug": true,
  "data-root": "${DOCKER_DIR}/data",
  "exec-root": "${DOCKER_DIR}/exec",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
EOF
Distribute the docker config file to all worker nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/docker/ ${DOCKER_DIR}/{data,exec}"
  scp docker-daemon.json root@${node_ip}:/etc/docker/daemon.json
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl stop firewalld && systemctl disable firewalld"
  ssh root@${node_ip} "/usr/sbin/iptables -F && /usr/sbin/iptables -X && /usr/sbin/iptables -F -t nat && /usr/sbin/iptables -X -t nat"
  ssh root@${node_ip} "/usr/sbin/iptables -P FORWARD ACCEPT"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
  ssh root@${node_ip} "sysctl -p /etc/sysctl.d/kubernetes.conf"
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status docker|grep Active"
done
Make sure the status is active (running); otherwise check the logs
journalctl -u docker
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
done
Confirm that on each worker node the docker0 bridge and the flannel.1 interface are in the same subnet (in the output below, 172.30.168.0/32 lies within 172.30.168.1/21).
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 96:f0:62:fb:38:4b brd ff:ff:ff:ff:ff:ff
    inet 172.30.168.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:df:b8:e8:8d brd ff:ff:ff:ff:ff:ff
    inet 172.30.168.1/21 brd 172.30.175.255 scope global docker0
       valid_lft forever preferred_lft forever
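The docker0 options come from the file that mk-docker-opts.sh writes at flanneld startup; it can be inspected directly (values vary per node):

cat /run/flannel/docker
# e.g. DOCKER_NETWORK_OPTIONS=" --bip=172.30.168.1/21 --ip-masq=false --mtu=1450"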
kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"
  # Create a token
  export BOOTSTRAP_TOKEN=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${node_name} \
    --kubeconfig ~/.kube/config)
  # Set cluster parameters
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  # Set client credentials
  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  # Set context parameters
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  # Set the default context
  kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
done
Distribute the bootstrap kubeconfig files to all worker nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"
  scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done
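The bootstrap tokens just created can be listed to confirm they were registered (an optional check):

kubeadm token list --kubeconfig ~/.kube/config
# they are stored as secrets in the kube-system namespace
kubectl get secrets -n kube-system | grep bootstrap-token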
Create the kubelet config template file
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat <<EOF | tee kubelet-config.yaml.template
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
podCIDR: "${CLUSTER_CIDR}"
maxPods: 220
serializeImagePulls: false
hairpinMode: promiscuous-bridge
cgroupDriver: cgroupfs
runtimeRequestTimeout: "15m"
rotateCertificates: true
serverTLSBootstrap: true
readOnlyPort: 0
port: 10250
address: "##NODE_IP##"
EOF
Create and distribute the kubelet config file for each node
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
  scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
done
cd /opt/k8s/work
cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --root-dir=${K8S_DIR}/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --allow-privileged=true \\
  --event-qps=0 \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --registry-qps=0 \\
  --image-pull-progress-deadline=30m \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
Create and distribute the kubelet systemd unit file for each node
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"
  sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
  scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet"
  ssh root@${node_ip} "/usr/sbin/swapoff -a"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done
Create three ClusterRoleBindings, used respectively to auto-approve client CSRs, renew client certificates, and renew server certificates:
cd /opt/k8s/work
cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
Apply the configuration:
kubectl apply -f csr-crb.yaml
Check the kubelet CSRs
kubectl get csr
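Because serverTLSBootstrap is enabled in the kubelet config, the kubelet's serving-certificate CSRs are not covered by the auto-approve bindings above and stay Pending until approved by hand (csr-xxxxx below is a placeholder name taken from the kubectl get csr output):

kubectl get csr
# approve a pending CSR by name
kubectl certificate approve csr-xxxxx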