1. Overview
This article is based primarily on a deployment guide from the Kubernetes Chinese community ("CentOS 使用二進制部署 Kubernetes 1.13集羣"), with more detailed notes recorded here for future reference.
2. Deployment Environment
2.1 Kubernetes version: 1.13, deployed from binary packages. Download links are provided in the referenced article.
2.2 Local deployment environment
ip         | hostname   | kernel version            | components
10.0.3.107 | manager107 | 3.10.0-957.1.3.el7.x86_64 | api-server, scheduler, controller-manager, etcd, kubelet, kube-proxy, flannel
10.0.3.68  | worker68   | 3.10.0-957.1.3.el7.x86_64 | kubelet, kube-proxy, flannel
10.0.3.80  | worker80   | 3.10.0-957.1.3.el7.x86_64 | kubelet, kube-proxy, flannel
2.3 Network layout
See "CentOS 使用二進制部署 Kubernetes 1.13集羣" for details.
3. Kubernetes Installation and Configuration
3.1 Create temporary directories
# Directory for the etcd certificates and config files
[root@manager107 ~]# mkdir -p /home/workspace/etcd
# Directory for the Kubernetes certificates and config files
[root@manager107 ~]# mkdir -p /home/workspace/k8s
# Directory for the Kubernetes installation packages
[root@manager107 ~]# mkdir -p /home/workspace/packages
3.2 Disable the firewall, swap, and SELinux
Run on all three servers:
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
swapoff -a && sysctl -w vm.swappiness=0
# Make SELinux stay disabled after reboot: set the following in /etc/selinux/config
vi /etc/selinux/config
SELINUX=disabled
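Note that swapoff -a only disables swap until the next reboot. As a small addition of my own (not in the referenced article), commenting out the swap entry in /etc/fstab makes the change persistent:

# Comment out any fstab line mentioning swap so it is not re-enabled on boot
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Verify: the swap line should now start with '#'
grep swap /etc/fstab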
3.3 Install Docker
Omitted in the original notes.
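For completeness, a minimal installation on CentOS 7 might look like the sketch below. This is my addition and makes two assumptions: the hosts have internet access, and the official Docker CE yum repository is acceptable (the original author may have used a different method or version):

# Add the Docker CE repo and install (assumption: docker-ce from download.docker.com)
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker && systemctl start docker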
3.4 Create installation directories
[root@manager107 ~]# mkdir -p /k8s/etcd/{bin,cfg,ssl}
[root@manager107 ~]# mkdir -p /k8s/kubernetes/{bin,cfg,ssl}
3.5 Install and configure CFSSL
[root@manager107 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@manager107 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@manager107 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@manager107 ~]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@manager107 ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@manager107 ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@manager107 ~]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
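A quick verification step (my addition, not in the original walkthrough) to confirm the tool is installed and on the PATH:

# Should print the cfssl version and revision
[root@manager107 ~]# cfssl version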
3.6 Create certificates
[root@manager107 ~]# cd /home/workspace/etcd

# Create the etcd CA config
[root@manager107 etcd]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# Create the etcd CA CSR
[root@manager107 etcd]# cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen"
    }
  ]
}
EOF

# Create the etcd server CSR
[root@manager107 etcd]# cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "10.0.3.107",
    "10.0.3.68",
    "10.0.3.80"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen"
    }
  ]
}
EOF

# Generate the etcd CA certificate and key, then sign the server certificate
[root@manager107 etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@manager107 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@manager107 etcd]# cd /home/workspace/k8s/

# Create the Kubernetes CA config and CSR
[root@manager107 k8s]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
[root@manager107 k8s]# cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[root@manager107 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Generate the API server certificate
[root@manager107 k8s]# cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "10.0.3.107",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[root@manager107 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

# Create the kube-proxy client certificate
[root@manager107 k8s]# cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[root@manager107 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
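Optionally, the certificates can be inspected with cfssl-certinfo (installed in section 3.5). This sanity check is my addition; it should show the SAN entries from server-csr.json:

# Dump the generated API server certificate, including its hosts/SANs
[root@manager107 k8s]# cfssl-certinfo -cert server.pem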
3.7 Set up SSH key authentication
[root@manager107 ~]# ssh-keygen
[root@manager107 ~]# ssh-copy-id 10.0.3.68
[root@manager107 ~]# ssh-copy-id 10.0.3.80
3.8 Deploy etcd
[root@manager107 workspace]# cd /home/workspace/packages/k8s1.13-centos
[root@manager107 k8s1.13-centos]# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
[root@manager107 k8s1.13-centos]# cd etcd-v3.3.10-linux-amd64/
[root@manager107 etcd-v3.3.10-linux-amd64]# cp etcd etcdctl /k8s/etcd/bin/
[root@manager107 etcd-v3.3.10-linux-amd64]# vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.3.107:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.3.107:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.3.107:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.3.107:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.0.3.107:2380,etcd02=https://10.0.3.68:2380,etcd03=https://10.0.3.80:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Create the etcd systemd unit file
[root@manager107 etcd-v3.3.10-linux-amd64]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/k8s/etcd/ssl/server.pem \
--peer-key-file=/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Copy the certificate files into place
[root@manager107 etcd-v3.3.10-linux-amd64]# cd /home/workspace/etcd/
[root@manager107 etcd]# cp ca*pem server*pem /k8s/etcd/ssl

# Copy the binaries, config, and unit file to nodes 68 and 80
[root@manager107 etcd]# cd /k8s/
[root@manager107 k8s]# scp -r etcd 10.0.3.68:/k8s/etcd/
[root@manager107 k8s]# scp -r etcd 10.0.3.80:/k8s/etcd/
[root@manager107 k8s]# scp /usr/lib/systemd/system/etcd.service 10.0.3.68:/usr/lib/systemd/system/etcd.service
[root@manager107 k8s]# scp /usr/lib/systemd/system/etcd.service 10.0.3.80:/usr/lib/systemd/system/etcd.service

# On node 68, adjust the etcd config
[root@worker68 ~]# vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.3.68:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.3.68:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.3.68:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.3.68:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.0.3.107:2380,etcd02=https://10.0.3.68:2380,etcd03=https://10.0.3.80:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# On node 80, adjust the etcd config
[root@worker80 ~]# vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.3.80:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.3.80:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.3.80:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.3.80:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.0.3.107:2380,etcd02=https://10.0.3.68:2380,etcd03=https://10.0.3.80:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start etcd on node 80
[root@worker80 ~]# systemctl daemon-reload
[root@worker80 ~]# systemctl enable etcd
[root@worker80 ~]# systemctl start etcd
# Start etcd on node 68
[root@worker68 ~]# systemctl daemon-reload
[root@worker68 ~]# systemctl enable etcd
[root@worker68 ~]# systemctl start etcd
# Start etcd on node 107
[root@manager107 ~]# systemctl daemon-reload
[root@manager107 ~]# systemctl enable etcd
[root@manager107 ~]# systemctl start etcd

# Verify the cluster is running correctly
[root@manager107 ~]# /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--endpoints="https://10.0.3.107:2379,https://10.0.3.68:2379,https://10.0.3.80:2379" \
cluster-health
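The cluster-health command above uses the legacy v2 client API. The etcdctl shipped with etcd 3.3 also speaks the v3 API; an equivalent health check under ETCDCTL_API=3 looks like the sketch below (my addition, assuming the same certificate paths):

# Per-endpoint health check via the v3 API
ETCDCTL_API=3 /k8s/etcd/bin/etcdctl \
  --cacert=/k8s/etcd/ssl/ca.pem \
  --cert=/k8s/etcd/ssl/server.pem \
  --key=/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://10.0.3.107:2379,https://10.0.3.68:2379,https://10.0.3.80:2379" \
  endpoint health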
3.9 Deploy the Flannel network
# Write the cluster Pod network configuration into etcd
[root@manager107 ssl]# cd /k8s/etcd/ssl/
[root@manager107 ssl]# /k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://10.0.3.107:2379,https://10.0.3.68:2379,https://10.0.3.80:2379" \
set /coreos.com/network/config '{ "Network": "172.20.0.0/16", "Backend": {"Type": "vxlan"}}'

# Unpack and install Flannel
[root@manager107 ssl]# cd /home/workspace/packages/k8s1.13-centos
[root@manager107 k8s1.13-centos]# tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
[root@manager107 k8s1.13-centos]# mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/

# Configure Flannel
[root@manager107 k8s1.13-centos]# vim /k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.3.107:2379,https://10.0.3.68:2379,https://10.0.3.80:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

# Create the flanneld systemd unit file
[root@manager107 k8s1.13-centos]# vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

# Configure Docker to start on the Flannel-assigned subnet. The same
# docker.service is used unchanged on all three nodes; it is written once here
# and copied to 68 and 80 below.
[root@manager107 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
# ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix://
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

# Distribute the flanneld binary, config, and unit files to all nodes
[root@manager107 ~]# cd /k8s
[root@manager107 k8s]# scp -r kubernetes 10.0.3.68:/k8s/kubernetes
[root@manager107 k8s]# scp -r kubernetes 10.0.3.80:/k8s/kubernetes
[root@manager107 k8s]# scp /k8s/kubernetes/cfg/flanneld 10.0.3.68:/k8s/kubernetes/cfg/flanneld
[root@manager107 k8s]# scp /k8s/kubernetes/cfg/flanneld 10.0.3.80:/k8s/kubernetes/cfg/flanneld
[root@manager107 k8s]# scp /usr/lib/systemd/system/docker.service 10.0.3.68:/usr/lib/systemd/system/docker.service
[root@manager107 k8s]# scp /usr/lib/systemd/system/docker.service 10.0.3.80:/usr/lib/systemd/system/docker.service
[root@manager107 k8s]# scp /usr/lib/systemd/system/flanneld.service 10.0.3.68:/usr/lib/systemd/system/flanneld.service
[root@manager107 k8s]# scp /usr/lib/systemd/system/flanneld.service 10.0.3.80:/usr/lib/systemd/system/flanneld.service

# Start flannel and restart Docker on 107
[root@manager107 ~]# systemctl daemon-reload
[root@manager107 ~]# systemctl enable flanneld
[root@manager107 ~]# systemctl start flanneld
[root@manager107 ~]# systemctl restart docker
# Start flannel and restart Docker on 68
[root@worker68 ~]# systemctl daemon-reload
[root@worker68 ~]# systemctl enable flanneld
[root@worker68 ~]# systemctl start flanneld
[root@worker68 ~]# systemctl restart docker
# Start flannel and restart Docker on 80
[root@worker80 ~]# systemctl daemon-reload
[root@worker80 ~]# systemctl enable flanneld
[root@worker80 ~]# systemctl start flanneld
[root@worker80 ~]# systemctl restart docker

# Check that the change took effect
[root@manager107 ~]# ip addr
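To confirm Flannel and Docker actually landed on the same overlay network, a few extra checks (my addition) can be run on any node. flanneld writes the allocated subnet to /run/flannel/subnet.env, which the docker.service above consumes via DOCKER_NETWORK_OPTIONS:

# The per-node subnet handed out by flannel
cat /run/flannel/subnet.env
# flannel.1 and docker0 should both sit inside 172.20.0.0/16
ip addr show flannel.1
ip addr show docker0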
3.10 Deploy the master node
The Kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager.
kube-scheduler and kube-controller-manager can run in clustered mode: leader election selects one working process while the other instances block in standby.
# Unpack the binaries and copy them onto the master node
[root@manager107 ~]# cd /home/workspace/packages/k8s1.13-centos
[root@manager107 k8s1.13-centos]# tar -xvf kubernetes-server-linux-amd64.tar.gz
[root@manager107 k8s1.13-centos]# cd kubernetes/server/bin/
[root@manager107 bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

# Copy the certificates into place
[root@manager107 bin]# cd /home/workspace/k8s/
[root@manager107 k8s]# cp *pem /k8s/kubernetes/ssl/

# Deploy the kube-apiserver component
## Create the TLS bootstrapping token
[root@manager107 k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
e9ca0f3e1b66c9bef910b47171490c53
[root@manager107 k8s]# vim /k8s/kubernetes/cfg/token.csv
e9ca0f3e1b66c9bef910b47171490c53,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

## Create the apiserver config file
[root@manager107 k8s]# vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.0.3.107:2379,https://10.0.3.68:2379,https://10.0.3.80:2379 \
--bind-address=10.0.3.107 \
--secure-port=6443 \
--advertise-address=10.0.3.107 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

## Create the kube-apiserver systemd unit file
[root@manager107 k8s]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

## Start the service and check it is running
[root@manager107 k8s]# systemctl daemon-reload
[root@manager107 k8s]# systemctl enable kube-apiserver
[root@manager107 k8s]# systemctl restart kube-apiserver
[root@manager107 k8s]# systemctl status kube-apiserver

# Deploy kube-scheduler
## Create the kube-scheduler config file
[root@manager107 k8s]# vim /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

## Create the kube-scheduler systemd unit file
[root@manager107 k8s]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

## Start the service and check it is running
[root@manager107 k8s]# systemctl daemon-reload
[root@manager107 k8s]# systemctl enable kube-scheduler.service
[root@manager107 k8s]# systemctl start kube-scheduler.service
[root@manager107 k8s]# systemctl status kube-scheduler.service

# Deploy kube-controller-manager
## Create the kube-controller-manager config file
[root@manager107 k8s]# vim /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

## Create the kube-controller-manager systemd unit file
[root@manager107 k8s]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

## Start the service and check it is running
[root@manager107 k8s]# systemctl daemon-reload
[root@manager107 k8s]# systemctl enable kube-controller-manager
[root@manager107 k8s]# systemctl start kube-controller-manager
[root@manager107 k8s]# systemctl status kube-controller-manager

# Add the executable path /k8s/kubernetes/bin to the PATH variable
[root@manager107 k8s]# vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH:$HOME/bin
[root@manager107 k8s]# source /etc/profile

# Check the master's status
[root@manager107 k8s]# kubectl get cs,nodes
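As an extra sanity check (my addition, assuming openssl is installed), the certificate the API server presents on port 6443 can be compared against the SANs declared in server-csr.json:

# Fetch the serving certificate and print its Subject Alternative Names
echo | openssl s_client -connect 10.0.3.107:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'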
3.11 Deploy the worker nodes
A Kubernetes worker node runs the following components: kubelet and kube-proxy.
kubelet runs on every worker node: it receives requests from kube-apiserver, manages the node's Pod containers, and executes interactive commands such as exec, run, and logs.
On startup, kubelet automatically registers its node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage.
For security, this document only opens kubelet's secure HTTPS port; requests are authenticated and authorized, and unauthorized access (e.g. from apiserver or heapster) is rejected.
# Copy the kubelet and kube-proxy binaries to the worker nodes
[root@manager107 bin]# cd /home/workspace/packages/k8s1.13-centos/kubernetes/server/bin
[root@manager107 bin]# scp kubelet kube-proxy 10.0.3.68:/k8s/kubernetes/bin/
[root@manager107 bin]# scp kubelet kube-proxy 10.0.3.80:/k8s/kubernetes/bin/

# Create a working directory
[root@manager107 bin]# mkdir /home/workspace/kubelet_bootstrap_config
[root@manager107 bin]# cd /home/workspace/kubelet_bootstrap_config

# Create the kubelet bootstrap kubeconfig and kube-proxy kubeconfig files
[root@manager107 kubelet_bootstrap_config]# vim environment.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=e9ca0f3e1b66c9bef910b47171490c53
KUBE_APISERVER="https://10.0.3.107:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/home/workspace/k8s/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/home/workspace/k8s/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/home/workspace/k8s/kube-proxy.pem \
  --client-key=/home/workspace/k8s/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

[root@manager107 kubelet_bootstrap_config]# sh environment.sh

# Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to all nodes
[root@manager107 kubelet_bootstrap_config]# cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
[root@manager107 kubelet_bootstrap_config]# scp bootstrap.kubeconfig kube-proxy.kubeconfig 10.0.3.68:/k8s/kubernetes/cfg/
[root@manager107 kubelet_bootstrap_config]# scp bootstrap.kubeconfig kube-proxy.kubeconfig 10.0.3.80:/k8s/kubernetes/cfg/

# Create the kubelet parameter file on 107
[root@manager107 ~]# vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.3.107
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

# Create the kubelet config file on 107
[root@manager107 ~]# vim /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.3.107 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

# Create the kubelet systemd unit file on 107
[root@manager107 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

# On worker68 and worker80, create the same three files, changing only the
# node-specific values: set address in kubelet.config and --hostname-override
# in /k8s/kubernetes/cfg/kubelet to 10.0.3.68 and 10.0.3.80 respectively.
# The kubelet.service unit file is identical on all three nodes.

# Bind the kubelet-bootstrap user to the system cluster role
[root@manager107 ~]# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

# Start the service on each node
[root@manager107 ~]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
[root@worker68 ~]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
[root@worker80 ~]# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet
When kubelet starts for the first time, it sends a certificate signing request (CSR) to kube-apiserver; Kubernetes adds the node to the cluster only after the request has been approved.
# List the pending CSR requests
[root@manager107 ~]# kubectl get csr
# Approve a CSR request (the argument is the CSR name, e.g. node-csr-...)
[root@manager107 ~]# kubectl certificate approve <csr-name>
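When several nodes bootstrap at once, each pending CSR can be approved in one pass. This convenience one-liner is my addition, not from the referenced article:

# Approve every CSR returned by kubectl get csr
kubectl get csr -o name | xargs -r kubectl certificate approve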
# Check the cluster state
[root@manager107 ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
node/10.0.3.107   Ready    master   20h   v1.13.0
node/10.0.3.68    Ready    node     20h   v1.13.0
node/10.0.3.80    Ready    node     20h   v1.13.0
# Deploy the kube-proxy component
# kube-proxy runs on all nodes; it watches the apiserver for changes to services
# and endpoints and creates routing rules to load-balance service traffic.

# Create the kube-proxy config file on 107
[root@manager107 ~]# vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.3.107 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

# Create the kube-proxy systemd unit file on 107
[root@manager107 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

# On worker68 and worker80, create the same two files, changing only
# --hostname-override to 10.0.3.68 and 10.0.3.80 respectively; the unit file
# is identical on all three nodes.

# Start the service on each node
[root@manager107 ~]# systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy
[root@worker68 ~]# systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy
[root@worker80 ~]# systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy
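To verify that kube-proxy is actually programming rules, the nat table can be inspected on any node (an optional check I added; kube-proxy here runs in its default iptables mode and maintains a KUBE-SERVICES chain):

# Should list a rule per service, starting with the default kubernetes service
iptables -t nat -nL KUBE-SERVICES | head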
# Label the master and worker nodes
[root@manager107 ~]# kubectl label node 10.0.3.107 node-role.kubernetes.io/master='master'
[root@manager107 ~]# kubectl label node 10.0.3.68 node-role.kubernetes.io/node='node'
[root@manager107 ~]# kubectl label node 10.0.3.80 node-role.kubernetes.io/node='node'
# Check the cluster state
[root@manager107 ~]# kubectl get node,cs
NAME              STATUS   ROLES    AGE   VERSION
node/10.0.3.107   Ready    master   21h   v1.13.0
node/10.0.3.68    Ready    node     21h   v1.13.0
node/10.0.3.80    Ready    node     21h   v1.13.0

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
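As a final smoke test (my addition; it assumes the nodes can pull images from Docker Hub), a small nginx deployment exercises scheduling, the Flannel network, and the 30000-50000 NodePort range configured on the apiserver:

# Run two nginx replicas and expose them via a NodePort service
kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide
# Then fetch the page through any node: curl http://<node-ip>:<assigned-node-port>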
4. References
CentOS 使用二進制部署 Kubernetes 1.13集羣 (Kubernetes 中文社區)