Hostname | IP | Notes |
---|---|---|
k8s-master1 | 192.168.0.216 | Master1, etcd1, node |
k8s-master2 | 192.168.0.217 | Master2, etcd2, node |
k8s-master3 | 192.168.0.218 | Master3, etcd3, node |
slb | lb.ypvip.com.cn | Public Alibaba Cloud SLB domain |
This environment runs on Alibaba Cloud. API Server high availability is provided by an Alibaba Cloud SLB; if your environment is not in the cloud, you can achieve the same with Nginx + Keepalived or HAProxy + Keepalived.
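For reference, below is a minimal sketch of the non-cloud alternative: an Nginx `stream` block that load-balances TCP 6443 across the three masters. This assumes nginx is built with the stream module; the upstream name and the config path are illustrative, and Keepalived would float a VIP in front of two such nginx nodes.

```bash
# Minimal sketch, assuming nginx with the stream module is installed
cat <<EOF > /etc/nginx/nginx.conf
events {}

stream {
    upstream kube_apiserver {
        server 192.168.0.216:6443;
        server 192.168.0.217:6443;
        server 192.168.0.218:6443;
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}
EOF
```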
- The Alibaba SLB has a TCP listener on port 6443 (layer-4 load balancing to the master apiservers).
- The Alibaba Cloud ECS hosts run CentOS 7.6.1810, with all kernels upgraded to 5.x.
- kube-proxy runs in iptables mode (an IPVS-mode configuration is kept commented out in the kube-proxy config).
- Calico runs in IPIP mode.
- Cluster domain: svc.cluster.local.
- 10.10.0.1 is the cluster IP of the kubernetes Service:

```bash
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.10.0.1    <none>        443/TCP   6d23h
```

PS: all components above use the latest versions available at the time of writing.
Name | CIDR | Notes |
---|---|---|
service-cluster-ip | 10.10.0.0/16 | 65,534 usable addresses |
pods-ip | 10.20.0.0/16 | 65,534 usable addresses |
Cluster DNS | 10.10.0.2 | Used for cluster Service DNS resolution |
k8s svc | 10.10.0.1 | Cluster IP of the kubernetes Service |
All cluster servers must be initialized as follows.
```bash
# Stop and disable the firewall
$ systemctl stop firewalld
$ systemctl disable firewalld

# Turn off swap
$ swapoff -a
$ sed -i 's/.*swap.*/#&/' /etc/fstab

# Disable SELinux
$ setenforce 0
$ sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
$ sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
$ sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
$ sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
```
Run the init.sh shell script below. It performs four tasks:

- Set the hostname
- Install the dependencies Kubernetes needs
- Upgrade the system kernel (upgrades the CentOS 7 kernel to resolve Docker CE compatibility issues)
- Install Docker CE 19.03.6

Run the init.sh script on every machine, as in the example below.

PS: the init.sh script only supports CentOS and can safely be run repeatedly.
```bash
# Run on k8s-master1; the argument passed to init.sh sets the server's hostname
$ chmod +x init.sh && ./init.sh k8s-master1

# After init.sh finishes, reboot the server
$ reboot
```
```bash
#!/usr/bin/env bash

function Check_linux_system(){
    linux_version=`cat /etc/redhat-release`
    if [[ ${linux_version} =~ "CentOS" ]];then
        echo -e "\033[32;32m System is ${linux_version} \033[0m \n"
    else
        echo -e "\033[32;32m System is not CentOS; this script only supports CentOS \033[0m \n"
        exit 1
    fi
}

function Set_hostname(){
    if [ -n "$HostName" ];then
      grep $HostName /etc/hostname && echo -e "\033[32;32m Hostname already set, skipping this step \033[0m \n" && return
      case $HostName in
      help)
        echo -e "\033[32;32m Usage: bash init.sh <hostname> \033[0m \n"
        exit 1
      ;;
      *)
        hostname $HostName
        echo "$HostName" > /etc/hostname
        echo "`ifconfig eth0 | grep inet | awk '{print $2}'` $HostName" >> /etc/hosts
      ;;
      esac
    else
      echo -e "\033[32;32m Empty input; usage: bash init.sh <hostname> \033[0m \n"
      exit 1
    fi
}

function Install_depend_environment(){
    rpm -qa | grep nfs-utils &> /dev/null && echo -e "\033[32;32m Dependencies already installed, skipping this step \033[0m \n" && return
    yum install -y nfs-utils curl yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl telnet
    echo -e "\033[32;32m Upgrading the CentOS 7 kernel to 5.x to resolve Docker CE compatibility issues \033[0m \n"
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && \
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm && \
    yum --disablerepo=\* --enablerepo=elrepo-kernel repolist && \
    yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64 && \
    yum remove -y kernel-tools-libs.x86_64 kernel-tools.x86_64 && \
    yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml-tools.x86_64 && \
    grub2-set-default 0
    modprobe br_netfilter
    cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
    sysctl -p /etc/sysctl.d/k8s.conf
    ls /proc/sys/net/bridge
}

function Install_docker(){
    rpm -qa | grep docker && echo -e "\033[32;32m Docker already installed, skipping this step \033[0m \n" && return
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum makecache fast
    yum -y install docker-ce-19.03.6 docker-ce-cli-19.03.6
    systemctl enable docker.service
    systemctl start docker.service
    systemctl stop docker.service
    echo '{"registry-mirrors": ["https://4xr1qpsp.mirror.aliyuncs.com"], "log-opts": {"max-size":"500m", "max-file":"3"}}' > /etc/docker/daemon.json
    systemctl daemon-reload
    systemctl start docker
}

# Initialization order
HostName=$1
Check_linux_system && \
Set_hostname && \
Install_depend_environment && \
Install_docker
```
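After the reboot, a quick sanity check confirms the kernel upgrade, the pinned Docker version, and the bridge sysctl took effect; this is a sketch, and the exact output format may vary by version:

```bash
# Kernel should now report 5.x
$ uname -r

# Docker should be running at the pinned 19.03.6 version
$ docker version --format '{{.Server.Version}}'

# Bridge traffic must be visible to iptables (expect: ... = 1)
$ sysctl net.bridge.bridge-nf-call-iptables
```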
On k8s-master1, install the certificate generation tool cfssl and generate the required certificates.
```bash
# Create a directory to store the SSL certificates
$ mkdir /data/ssl -p

# Download the certificate tools
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

# Add execute permission
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

# Move them into the PATH
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
```
```bash
# Enter the certificate directory
$ cd /data/ssl/

# Create the certificate.sh script
$ vim certificate.sh
```
PS: the certificates are valid for 10 years (87600h).
```bash
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.0.216",
    "192.168.0.217",
    "192.168.0.218",
    "10.10.0.1",
    "lb.ypvip.com.cn",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```
Modify the certificate.sh script for your own environment — in particular, the hosts list in server-csr.json:

```
    "192.168.0.216",
    "192.168.0.217",
    "192.168.0.218",
    "10.10.0.1",
    "lb.ypvip.com.cn",
```
After editing the script, run it:
```bash
$ bash certificate.sh
```
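Optionally, verify the generated certificates before distributing them; a sketch using openssl (any tool that prints the SAN list works):

```bash
# List the generated certificates and keys
$ ls *.pem

# Check that the apiserver certificate carries the expected SANs
$ openssl x509 -in server.pem -noout -text | grep -A 1 "Subject Alternative Name"
```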
Operate on k8s-master1; the binaries will later be copied to k8s-master2 and k8s-master3.
Binary package download: https://github.com/etcd-io/et...
```bash
# Create the etcd data directory
$ mkdir /data/etcd/

# Create the k8s cluster configuration directories
$ mkdir /opt/kubernetes/{bin,cfg,ssl} -p

# Download the etcd binary package and place the binaries in /opt/kubernetes/bin/
$ cd /data/etcd/
$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.7/etcd-v3.4.7-linux-amd64.tar.gz
$ tar zxvf etcd-v3.4.7-linux-amd64.tar.gz
$ cd etcd-v3.4.7-linux-amd64
$ cp -a etcd etcdctl /opt/kubernetes/bin/

# Add /opt/kubernetes/bin to PATH
$ echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
$ source /etc/profile
```
Log in to k8s-master2 and k8s-master3 and run:
```bash
# Create the k8s cluster configuration directories
$ mkdir /data/etcd
$ mkdir /opt/kubernetes/{bin,cfg,ssl} -p

# Add /opt/kubernetes/bin to PATH
$ echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
$ source /etc/profile
```
Log in to k8s-master1:
```bash
# Enter the cluster certificate directory
$ cd /data/ssl

# Copy the certificates to /opt/kubernetes/ssl/ on k8s-master1
$ cp ca*pem server*pem /opt/kubernetes/ssl/

# Copy the etcd binaries and certificates to k8s-master2 and k8s-master3
$ scp -r /opt/kubernetes/* root@k8s-master2:/opt/kubernetes
$ scp -r /opt/kubernetes/* root@k8s-master3:/opt/kubernetes
```
```bash
$ cd /data/etcd

# Write the etcd configuration script
$ vim etcd.sh
```
```bash
#!/bin/bash

ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"127.0.0.1"}
ETCD_CLUSTER=${3:-"etcd01=https://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/etcd.yml
name: ${ETCD_NAME}
data-dir: /var/lib/etcd/default.etcd
listen-peer-urls: https://${ETCD_IP}:2380
listen-client-urls: https://${ETCD_IP}:2379,https://127.0.0.1:2379

advertise-client-urls: https://${ETCD_IP}:2379
initial-advertise-peer-urls: https://${ETCD_IP}:2380
initial-cluster: ${ETCD_CLUSTER}
initial-cluster-token: etcd-cluster
initial-cluster-state: new

client-transport-security:
  cert-file: /opt/kubernetes/ssl/server.pem
  key-file: /opt/kubernetes/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: /opt/kubernetes/ssl/ca.pem
  auto-tls: false

peer-transport-security:
  cert-file: /opt/kubernetes/ssl/server.pem
  key-file: /opt/kubernetes/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: /opt/kubernetes/ssl/ca.pem
  auto-tls: false

debug: false
logger: zap
log-outputs: [stderr]
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/etcd-io/etcd
Conflicts=etcd.service
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
LimitNOFILE=65536
Restart=on-failure
RestartSec=5s
TimeoutStartSec=0
ExecStart=/opt/kubernetes/bin/etcd --config-file=/opt/kubernetes/cfg/etcd.yml

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
```
```bash
# Run etcd.sh to generate the configuration and start etcd
$ chmod +x etcd.sh
$ ./etcd.sh etcd01 192.168.0.216 etcd01=https://192.168.0.216:2380,etcd02=https://192.168.0.217:2380,etcd03=https://192.168.0.218:2380

# Check that etcd started correctly
$ ps -ef | grep etcd
$ netstat -ntplu | grep etcd

tcp   0   0 192.168.0.216:2379   0.0.0.0:*   LISTEN   1558/etcd
tcp   0   0 127.0.0.1:2379       0.0.0.0:*   LISTEN   1558/etcd
tcp   0   0 192.168.0.216:2380   0.0.0.0:*   LISTEN   1558/etcd

# Copy the etcd.sh script to k8s-master2 and k8s-master3
$ scp /data/etcd/etcd.sh root@k8s-master2:/data/etcd/
$ scp /data/etcd/etcd.sh root@k8s-master3:/data/etcd/
```
Log in to k8s-master2:
```bash
# Run etcd.sh to generate the configuration and start etcd
$ chmod +x etcd.sh
$ ./etcd.sh etcd02 192.168.0.217 etcd01=https://192.168.0.216:2380,etcd02=https://192.168.0.217:2380,etcd03=https://192.168.0.218:2380

# Check that etcd started correctly
$ ps -ef | grep etcd
$ netstat -ntplu | grep etcd
```
Log in to k8s-master3:
```bash
# Run etcd.sh to generate the configuration and start etcd
$ chmod +x etcd.sh
$ ./etcd.sh etcd03 192.168.0.218 etcd01=https://192.168.0.216:2380,etcd02=https://192.168.0.217:2380,etcd03=https://192.168.0.218:2380

# Check that etcd started correctly
$ ps -ef | grep etcd
$ netstat -ntplu | grep etcd
```
```bash
# Log in to any master and check that the etcd cluster is healthy
$ ETCDCTL_API=3 etcdctl --write-out=table \
--cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem \
--endpoints=https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379 endpoint health

+---------------------------------+--------+-------------+-------+
|            ENDPOINT             | HEALTH |    TOOK     | ERROR |
+---------------------------------+--------+-------------+-------+
|      https://192.168.0.216:2379 |   true | 38.721248ms |       |
|      https://192.168.0.217:2379 |   true | 38.621248ms |       |
|      https://192.168.0.218:2379 |   true | 38.821248ms |       |
+---------------------------------+--------+-------------+-------+
```
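You can also list the cluster members to confirm all three peers joined; a sketch using the same TLS flags:

```bash
# Expect three members, one per master
$ ETCDCTL_API=3 etcdctl --write-out=table \
--cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/kubernetes/ssl/server.pem --key=/opt/kubernetes/ssl/server-key.pem \
--endpoints=https://192.168.0.216:2379 member list
```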
Create the certificate used by metrics-server.
Log in to k8s-master1:
```bash
$ cd /data/ssl/

# Note: "CN": "system:metrics-server" must be exactly this name, because it is
# referenced later when granting authorization; otherwise requests are rejected
# as forbidden anonymous access
$ cat > metrics-server-csr.json <<EOF
{
  "CN": "system:metrics-server",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
```
Generate the metrics-server certificate and private key.
```bash
# Generate the certificate (ca-config.json lives in the current /data/ssl directory)
$ cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=ca-config.json -profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server

# Copy it to /opt/kubernetes/ssl
$ cp metrics-server-key.pem metrics-server.pem /opt/kubernetes/ssl/

# Copy it to k8s-master2 and k8s-master3
$ scp metrics-server-key.pem metrics-server.pem root@k8s-master2:/opt/kubernetes/ssl/
$ scp metrics-server-key.pem metrics-server.pem root@k8s-master3:/opt/kubernetes/ssl/
```
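A quick sanity check that the CN is exactly system:metrics-server, since the later RBAC binding depends on it (a sketch using openssl):

```bash
# Expect: subject= ... CN = system:metrics-server
$ openssl x509 -in /opt/kubernetes/ssl/metrics-server.pem -noout -subject
```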
Log in to k8s-master1. The v1.18 download page is at https://github.com/kubernetes...
```bash
# Create a directory for the k8s binary packages
$ mkdir /data/k8s-package
$ cd /data/k8s-package

# Download the v1.18.2 binary package
# The author also uploaded the package to a CDN: https://cdm.yp14.cn/k8s-package/kubernetes-server-v1.18.2-linux-amd64.tar.gz
$ wget https://dl.k8s.io/v1.18.2/kubernetes-server-linux-amd64.tar.gz
$ tar xf kubernetes-server-linux-amd64.tar.gz
```
The master nodes need: kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl.

The node nodes need: kubelet and kube-proxy.

PS: in this guide the master nodes also act as node nodes, so the kubelet and kube-proxy binaries are needed on them too.
```bash
# Enter the bin directory of the unpacked binary package
$ cd /data/k8s-package/kubernetes/server/bin

# Copy the binaries to /opt/kubernetes/bin
$ cp -a kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy /opt/kubernetes/bin

# Copy the binaries to /opt/kubernetes/bin on k8s-master2 and k8s-master3
$ scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@k8s-master2:/opt/kubernetes/bin/
$ scp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy root@k8s-master3:/opt/kubernetes/bin/
```
Log in to k8s-master1:
```bash
$ cd /data/ssl/

# Edit the KUBE_APISERVER address (line 10 of the script)
$ vim kubeconfig.sh
```
```bash
# Create the TLS Bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://lb.ypvip.com.cn:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
```bash
# Generate the kubeconfig files
$ sh kubeconfig.sh

# The directory now contains:
kubeconfig.sh   kube-proxy-csr.json   kube-proxy.kubeconfig
kube-proxy.csr  kube-proxy-key.pem    kube-proxy.pem   bootstrap.kubeconfig
```
```bash
# Copy the *kubeconfig files to /opt/kubernetes/cfg
$ cp *kubeconfig /opt/kubernetes/cfg

# Copy them to k8s-master2 and k8s-master3
$ scp *kubeconfig root@k8s-master2:/opt/kubernetes/cfg
$ scp *kubeconfig root@k8s-master3:/opt/kubernetes/cfg
```
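To confirm the generated kubeconfig points at the SLB endpoint rather than a single master, you can inspect it (a sketch; secrets are redacted by default):

```bash
# The server field should read https://lb.ypvip.com.cn:6443
$ kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
```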
Log in to k8s-master1, k8s-master2, and k8s-master3:
```bash
# Create /data/k8s-master to hold the master configuration scripts
$ mkdir /data/k8s-master
```
Log in to k8s-master1:
```bash
$ cd /data/k8s-master

# Create the script that generates the kube-apiserver configuration
$ vim apiserver.sh
```
```bash
#!/bin/bash

MASTER_ADDRESS=${1:-"192.168.0.216"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.0.0/16 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem \\
--runtime-config=api/all=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-truncate-enabled=true \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
```
```bash
# Create the script that generates the kube-controller-manager configuration
$ vim controller-manager.sh
```
```bash
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=2 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--bind-address=0.0.0.0 \\
--service-cluster-ip-range=10.10.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s \\
--feature-gates=RotateKubeletServerCertificate=true \\
--feature-gates=RotateKubeletClientCertificate=true \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.20.0.0/16 \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
```
```bash
# Create the script that generates the kube-scheduler configuration
$ vim scheduler.sh
```
```bash
#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=2 \\
--master=${MASTER_ADDRESS}:8080 \\
--address=0.0.0.0 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
```
```bash
# Add execute permission
$ chmod +x *.sh
$ cp /data/ssl/token.csv /opt/kubernetes/cfg/

# Copy token.csv and the master scripts to k8s-master2 and k8s-master3
$ scp /data/ssl/token.csv root@k8s-master2:/opt/kubernetes/cfg
$ scp /data/ssl/token.csv root@k8s-master3:/opt/kubernetes/cfg

$ scp apiserver.sh controller-manager.sh scheduler.sh root@k8s-master2:/data/k8s-master
$ scp apiserver.sh controller-manager.sh scheduler.sh root@k8s-master3:/data/k8s-master

# Generate the master configuration files and start the services
$ ./apiserver.sh 192.168.0.216 https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379
$ ./controller-manager.sh 127.0.0.1
$ ./scheduler.sh 127.0.0.1

# Check that the three master services are running
$ ps -ef | grep kube
$ netstat -ntpl | grep kube-
```
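As an extra check, the apiserver's health endpoint should answer locally; a sketch, assuming the default 1.18 RBAC setup where /healthz is readable without client authentication:

```bash
# Expect the literal response: ok
$ curl -k https://127.0.0.1:6443/healthz
```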
Log in to k8s-master2:
```bash
$ cd /data/k8s-master

# Generate the master configuration files and start the services
$ ./apiserver.sh 192.168.0.217 https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379
$ ./controller-manager.sh 127.0.0.1
$ ./scheduler.sh 127.0.0.1

# Check that the three master services are running
$ ps -ef | grep kube
$ netstat -ntpl | grep kube-
```
Log in to k8s-master3:
```bash
$ cd /data/k8s-master

# Generate the master configuration files and start the services
$ ./apiserver.sh 192.168.0.218 https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379
$ ./controller-manager.sh 127.0.0.1
$ ./scheduler.sh 127.0.0.1

# Check that the three master services are running
$ ps -ef | grep kube
$ netstat -ntpl | grep kube-
```
```bash
# Log in to any master and check cluster health
$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```
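kubectl cluster-info and version give one more confirmation that the control plane answers and that client and server versions match:

```bash
$ kubectl cluster-info
$ kubectl version --short
```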
Log in to k8s-master1:
Deploy the Node components. First, authorize the kubelet-bootstrap user so nodes can request certificates:
```bash
$ kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
```
Create the ClusterRole used to auto-approve the related CSR requests:
```bash
# Create a directory for the certificate-rotation configuration
$ mkdir -p ~/yaml/kubelet-certificate-rotating
$ cd ~/yaml/kubelet-certificate-rotating
$ vim tls-instructs-csr.yaml
```
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
rules:
  - apiGroups: ["certificates.k8s.io"]
    resources: ["certificatesigningrequests/selfnodeserver"]
    verbs: ["create"]
```
```bash
# Deploy it
$ kubectl apply -f tls-instructs-csr.yaml
```
Automatically approve the first CSR the kubelet-bootstrap user submits during TLS bootstrapping:
```bash
$ kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --user=kubelet-bootstrap
```
Automatically approve CSRs from the system:nodes group that renew the kubelet client certificate used to talk to the apiserver:
```bash
$ kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
```
Automatically approve CSRs from the system:nodes group that renew the kubelet's serving certificate for the port 10250 API:
```bash
$ kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes
```
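If a CSR is ever left pending despite these bindings, it can be inspected and approved by hand; the CSR name below is a placeholder:

```bash
# List certificate signing requests and their state
$ kubectl get csr

# Manually approve one if needed (replace <csr-name> with a real name)
$ kubectl certificate approve <csr-name>
```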
First, let's look at how the kubelet.kubeconfig file used by the kubelet is generated. It is produced by the TLS Bootstrapping mechanism: the kubelet first connects with bootstrap.kubeconfig (the bootstrap token), submits a CSR, and once the CSR is approved the signed certificate is written out and kubelet.kubeconfig is created from it.
Log in to k8s-master1, k8s-master2, and k8s-master3:
```bash
# Create a directory for the node configuration scripts
$ mkdir /data/k8s-node
```
Log in to k8s-master1:
```bash
# Create the script that generates the kubelet configuration
$ vim kubelet.sh
```
```bash
#!/bin/bash

DNS_SERVER_IP=${1:-"10.10.0.2"}
HOSTNAME=${2:-"`hostname`"}
CLUSTERDOMAIN=${3:-"cluster.local"}

cat <<EOF >/opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=true \\
--v=2 \\
--hostname-override=${HOSTNAME} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--network-plugin=cni \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/cni/bin \\
--pod-infra-container-image=yangpeng2468/google_containers-pause-amd64:3.2"
EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration          # object kind
apiVersion: kubelet.config.k8s.io/v1beta1    # API version
address: 0.0.0.0                    # listen address
port: 10250                         # kubelet port
readOnlyPort: 10255                 # read-only port exposed by the kubelet
cgroupDriver: cgroupfs              # must match the driver shown by docker info
clusterDNS:
  - ${DNS_SERVER_IP}
clusterDomain: ${CLUSTERDOMAIN}     # cluster domain
failSwapOn: false                   # do not fail when swap is on
# authentication
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
# authorization
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
# node resource reservation
evictionHard:
  imagefs.available: 15%
  memory.available: 1G
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
# image garbage collection policy
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
# certificate rotation
rotateCertificates: true            # rotate the kubelet client certificate
featureGates:
  RotateKubeletServerCertificate: true
  RotateKubeletClientCertificate: true
maxOpenFiles: 1000000
maxPods: 110
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
```
```bash
# Create the script that generates the kube-proxy configuration
$ vim proxy.sh
```
```bash
#!/bin/bash

HOSTNAME=${1:-"`hostname`"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=2 \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

cat <<EOF >/opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0                     # listen address
metricsBindAddress: 0.0.0.0:10249    # metrics endpoint; monitoring scrapes here
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig    # kubeconfig to read
hostnameOverride: ${HOSTNAME}        # node name registered in k8s; must be unique
clusterCIDR: 10.20.0.0/16            # Pod CIDR; traffic from outside it to Services is masqueraded
mode: iptables                       # use iptables mode
# To use ipvs mode instead:
#mode: ipvs
#ipvs:
#  scheduler: "rr"
#iptables:
#  masqueradeAll: true
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
```
```bash
# Generate the node configuration and start the services
$ ./kubelet.sh 10.10.0.2 k8s-master1 cluster.local
$ ./proxy.sh k8s-master1

# Check that the services started
$ netstat -ntpl | egrep "kubelet|kube-proxy"

# Copy kubelet.sh and proxy.sh to k8s-master2 and k8s-master3
$ scp kubelet.sh proxy.sh root@k8s-master2:/data/k8s-node
$ scp kubelet.sh proxy.sh root@k8s-master3:/data/k8s-node
```
Log in to k8s-master2:
```bash
$ cd /data/k8s-node

# Generate the node configuration and start the services
$ ./kubelet.sh 10.10.0.2 k8s-master2 cluster.local
$ ./proxy.sh k8s-master2

# Check that the services started
$ netstat -ntpl | egrep "kubelet|kube-proxy"
```
Log in to k8s-master3:
```bash
$ cd /data/k8s-node

# Generate the node configuration and start the services
$ ./kubelet.sh 10.10.0.2 k8s-master3 cluster.local
$ ./proxy.sh k8s-master3

# Check that the services started
$ netstat -ntpl | egrep "kubelet|kube-proxy"
```
```bash
# Log in to any master and check whether the nodes registered
$ kubectl get node
NAME          STATUS     ROLES    AGE    VERSION
k8s-master1   NotReady   <none>   4d4h   v1.18.2
k8s-master2   NotReady   <none>   4d4h   v1.18.2
k8s-master3   NotReady   <none>   4d4h   v1.18.2
```
The nodes above are in NotReady state because no network component has been installed yet; it is installed below.
Authorize the apiserver to access the kubelet API (needed for commands such as kubectl logs and kubectl exec):

```bash
$ vim ~/yaml/apiserver-to-kubelet-rbac.yml
```
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api-admin
subjects:
  - kind: User
    name: kubernetes
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
  apiGroup: rbac.authorization.k8s.io
```
```bash
# Apply it
$ kubectl apply -f ~/yaml/apiserver-to-kubelet-rbac.yml
```
Log in to k8s-master1:
Download the Calico v3.14.0 YAML file:
```bash
# Directory for the Calico yaml files
$ mkdir -p ~/yaml/calico
$ cd ~/yaml/calico

# Note: this manifest uses a self-managed etcd as the datastore
$ curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -O
```
The following settings in calico-etcd.yaml need to be modified.

The Secret holding the etcd TLS material:

```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Paste the output of each command into the corresponding field
  etcd-key:  (cat /opt/kubernetes/ssl/server-key.pem | base64 -w 0)
  etcd-cert: (cat /opt/kubernetes/ssl/server.pem | base64 -w 0)
  etcd-ca:   (cat /opt/kubernetes/ssl/ca.pem | base64 -w 0)
```
ConfigMap changes:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  etcd_endpoints: "https://192.168.0.216:2379,https://192.168.0.217:2379,https://192.168.0.218:2379"
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"
```
The main ConfigMap parameters:

- etcd_endpoints: Calico stores network topology and state in etcd; this parameter specifies the etcd addresses. You can reuse the etcd that the K8S masters use, or run a separate one.
- calico_backend: the Calico backend, bird by default.
- cni_network_config: a CNI-compliant network configuration. type=calico tells the kubelet to look in CNI_PATH (default /opt/cni/bin) for the calico executable that allocates container IP addresses.
- If etcd uses TLS, the corresponding ca, cert, and key files must also be specified.

Modify the Pod IP pool; the default is the 192.168.0.0/16 network:

```yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.20.0.0/16"
```
In the DaemonSet calico-node env section, add the NIC discovery rules:

```yaml
# ipv4 NIC auto-discovery rule
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth.*"
# ipv6 NIC auto-discovery rule
- name: IP6_AUTODETECTION_METHOD
  value: "interface=eth.*"
```
```yaml
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
```
Calico has two network modes: BGP and IPIP.

- For IPIP mode, set CALICO_IPV4POOL_IPIP="Always". IPIP builds a tunnel between the routes of the Nodes and connects the two networks through it; with IPIP enabled, Calico creates a virtual network interface named tunl0 on every Node.
- For BGP mode, set CALICO_IPV4POOL_IPIP="Off".
Error:

```
ERROR startup/startup.go 146: failed to query kubeadm's config map error=Get https://10.10.0.1:443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=2s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```

Cause: the worker node cannot reach the apiserver address. Check the calico configuration and set the apiserver IP and port explicitly; if they are not set, calico falls back to the default kubernetes Service address and port 443. The relevant fields are KUBERNETES_SERVICE_HOST, KUBERNETES_SERVICE_PORT, and KUBERNETES_SERVICE_PORT_HTTPS.
Fix: add the following environment variables to the DaemonSet calico-node env section:
```yaml
- name: KUBERNETES_SERVICE_HOST
  value: "lb.ypvip.com.cn"
- name: KUBERNETES_SERVICE_PORT
  value: "6443"
- name: KUBERNETES_SERVICE_PORT_HTTPS
  value: "6443"
```
After modifying calico-etcd.yaml, deploy it:
```bash
# Deploy
$ kubectl apply -f calico-etcd.yaml

# Check the calico pods
$ kubectl get pods -n kube-system | grep calico

# Check the nodes; they should now be Ready
$ kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    <none>   4d4h   v1.18.2
k8s-master2   Ready    <none>   4d4h   v1.18.2
k8s-master3   Ready    <none>   4d4h   v1.18.2
```
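Since IPIP mode is enabled, each node should now carry a tunl0 interface with an address from the 10.20.0.0/16 pool; a quick check (sketch):

```bash
# tunl0 is created by calico when IPIP is enabled
$ ip addr show tunl0

# Routes to the other nodes' pod CIDRs should go via tunl0
$ ip route | grep tunl0
```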
Log in to k8s-master1:
deploy.sh is a convenience script that generates the coredns yaml configuration.
```bash
# Install the jq dependency
$ yum install jq -y

$ cd ~/yaml
$ mkdir coredns
$ cd coredns

# Download the CoreDNS deployment project
$ git clone https://github.com/coredns/deployment.git
$ cd deployment/kubernetes
```
By default, CLUSTER_DNS_IP is discovered automatically from the kube-dns Service's cluster IP, but since kube-dns is not deployed here, a cluster IP has to be specified manually. Edit deploy.sh around line 111:
```bash
111 if [[ -z $CLUSTER_DNS_IP ]]; then
112   # Default IP to kube-dns IP
113   # CLUSTER_DNS_IP=$(kubectl get service --namespace kube-system kube-dns -o jsonpath="{.spec.clusterIP}")
114   CLUSTER_DNS_IP=10.10.0.2
```
```bash
# Preview the rendered manifest; nothing is deployed yet
$ ./deploy.sh

# Deploy
$ ./deploy.sh | kubectl apply -f -

# Check CoreDNS
$ kubectl get svc,pods -n kube-system | grep coredns
```
Test CoreDNS resolution:

```bash
# Create a busybox Pod
$ vim busybox.yaml
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - name: busybox
      image: busybox:1.28.4
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
```
```bash
# Deploy
$ kubectl apply -f busybox.yaml

# Test resolution; the output below shows it working
$ kubectl exec -i busybox -n default nslookup kubernetes
Server:    10.10.0.2
Address 1: 10.10.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.10.0.1 kubernetes.default.svc.cluster.local
```
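A fully qualified cluster name and an external name are also worth testing; this is a sketch, and external resolution depends on the upstream forwarder in the CoreDNS configuration:

```bash
# FQDN within the cluster domain
$ kubectl exec -i busybox -n default nslookup kubernetes.default.svc.cluster.local

# External name, to confirm upstream forwarding
$ kubectl exec -i busybox -n default nslookup www.aliyun.com
```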
Log in to k8s-master1:
```bash
$ cd ~/yaml

# Clone the v0.3.6 tag
$ git clone https://github.com/kubernetes-sigs/metrics-server.git -b v0.3.6
$ cd metrics-server/deploy/1.8+
```
Only the metrics-server-deployment.yaml configuration file needs to be modified.
```bash
# Diff between the original and the modified file
$ git diff metrics-server-deployment.yaml
diff --git a/deploy/1.8+/metrics-server-deployment.yaml b/deploy/1.8+/metrics-server-deployment.yaml
index 2393e75..2139e4a 100644
--- a/deploy/1.8+/metrics-server-deployment.yaml
+++ b/deploy/1.8+/metrics-server-deployment.yaml
@@ -29,8 +29,19 @@ spec:
         emptyDir: {}
       containers:
       - name: metrics-server
-        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
-        imagePullPolicy: Always
+        image: yangpeng2468/metrics-server-amd64:v0.3.6
+        imagePullPolicy: IfNotPresent
+        resources:
+          limits:
+            cpu: 400m
+            memory: 1024Mi
+          requests:
+            cpu: 50m
+            memory: 50Mi
+        command:
+        - /metrics-server
+        - --kubelet-insecure-tls
+        - --kubelet-preferred-address-types=InternalIP
         volumeMounts:
         - name: tmp-dir
           mountPath: /tmp
```
```bash
# Deploy
$ kubectl apply -f .

# Verify
$ kubectl top node
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master1   72m          7%     1002Mi          53%
k8s-master2   121m         3%     1852Mi          12%
k8s-master3   300m         3%     1852Mi          20%

# Memory units: Mi = 1024*1024 bytes, M = 1000*1000 bytes
# CPU units: 1 core = 1000m, so 250m = 1/4 core
```
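Pod-level metrics should come up shortly after node metrics do; a quick check:

```bash
$ kubectl top pod -n kube-system
```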
For the Kubernetes Dashboard, see the separate K8S Dashboard 2.0 deployment article.
This Kubernetes v1.18.2 binary deployment was tested end to end by the author without pitfalls. The walkthrough covers the full set of Kubernetes components and can be used directly for a production deployment.
Published by YP小站.