Most engineers working with Kubernetes should be using kubeadm. It is the key tool for managing the cluster lifecycle, from creation through configuration to upgrades, and it is now officially GA. kubeadm handles bootstrapping a production cluster on existing hardware and configures the core Kubernetes components following best practices, providing a secure yet simple join flow for new nodes and supporting easy upgrades. Notable in this GA release are the advanced features that have now graduated, in particular pluggability and configurability. kubeadm's scope is to be a toolbox for administrators, automation, and higher-level systems, and this release is an important step in that direction.
The Container Storage Interface (CSI) is now GA; it was introduced as alpha in v1.9 and as beta in v1.10. With CSI, the Kubernetes volume layer becomes truly extensible, giving third-party storage providers the opportunity to write plugins that interoperate with Kubernetes without touching the core code. The specification itself has also reached 1.0 status.
In 1.11 we announced that CoreDNS had reached general availability for DNS-based service discovery. In 1.13, CoreDNS now replaces kube-dns as the default DNS server for Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backward-compatible yet extensible integration with Kubernetes. CoreDNS has fewer moving parts than the previous DNS server, since it is a single executable and a single process, and it supports flexible use cases through custom DNS entries. It is also written in Go, making it memory-safe.
Client Binaries: https://dl.k8s.io/v1.13.1/kubernetes-client-linux-amd64.tar.gz
Server Binaries: https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
Node Binaries: https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz
etcd: https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
flannel: https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
k8s-master1  10.2.8.44  k8s-master  etcd, kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1    10.2.8.65  k8s-node    etcd, kubelet, docker, kube-proxy
k8s-node2    10.2.8.34  k8s-node    etcd, kubelet, docker, kube-proxy
wget https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.13.1/kubernetes-client-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
mkdir -p /k8s/etcd/{bin,cfg,ssl}
mkdir -p /k8s/kubernetes/{bin,cfg,ssl}
cd /k8s/etcd/ssl/
1) etcd CA configuration
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
2) etcd CA certificate
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
3) etcd server certificate
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "10.2.8.44",
    "10.2.8.65",
    "10.2.8.34"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
4) Generate the etcd CA certificate and private key
Initialize the CA
cfssl gencert -initca ca-csr.json | cfssljson -bare ca [root@elasticsearch01 ssl]# ls ca-config.json ca-csr.json server-csr.json [root@elasticsearch01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca 2018/12/26 16:13:54 [INFO] generating a new CA key and certificate from CSR 2018/12/26 16:13:54 [INFO] generate received request 2018/12/26 16:13:54 [INFO] received CSR 2018/12/26 16:13:54 [INFO] generating key: rsa-2048 2018/12/26 16:13:54 [INFO] encoded CSR 2018/12/26 16:13:54 [INFO] signed certificate with serial number 144752911121073185391033754516204538929473929443 [root@elasticsearch01 ssl]# ls ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server-csr.json
Generate the server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server [root@elasticsearch01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server 2018/12/26 16:18:53 [INFO] generate received request 2018/12/26 16:18:53 [INFO] received CSR 2018/12/26 16:18:53 [INFO] generating key: rsa-2048 2018/12/26 16:18:54 [INFO] encoded CSR 2018/12/26 16:18:54 [INFO] signed certificate with serial number 388122587040599986639159163167557684970159030057 2018/12/26 16:18:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements"). [root@elasticsearch01 ssl]# ls ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
1) Unpack
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
2) Configure the main etcd file
vim /k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://10.2.8.44:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.2.8.44:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.2.8.44:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.2.8.44:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.2.8.44:2380,etcd02=https://10.2.8.65:2380,etcd03=https://10.2.8.34:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
3) Configure the etcd systemd unit
mkdir /data1/etcd
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
4) Start
Note: before starting, apply the same configuration on etcd02 and etcd03; only the member name and addresses change, as sketched below.
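As a sketch (values taken from the host table above), etcd.conf on etcd02 (10.2.8.65) only differs in the following fields; etcd03 (10.2.8.34) is analogous, and the /k8s/etcd/ssl certificates and the systemd unit can stay identical on every node since the server certificate covers all three IPs:

# /k8s/etcd/cfg/etcd.conf on etcd02 -- the remaining fields stay as on etcd01
ETCD_NAME="etcd02"
ETCD_LISTEN_PEER_URLS="https://10.2.8.65:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.2.8.65:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.2.8.65:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.2.8.65:2379"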
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
5) Service check
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379" cluster-health member c21df2258ce015e6 is healthy: got healthy result from https://10.2.8.34:2379 member d427109ed3caf9c3 is healthy: got healthy result from https://10.2.8.44:2379 member ec8c40660d3c1192 is healthy: got healthy result from https://10.2.8.65:2379 cluster is healthy
1)製做kubernetes ca證書
cd /k8s/kubernetes/ssl
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca - [root@elasticsearch01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca - 2018/12/27 09:47:08 [INFO] generating a new CA key and certificate from CSR 2018/12/27 09:47:08 [INFO] generate received request 2018/12/27 09:47:08 [INFO] received CSR 2018/12/27 09:47:08 [INFO] generating key: rsa-2048 2018/12/27 09:47:08 [INFO] encoded CSR 2018/12/27 09:47:08 [INFO] signed certificate with serial number 156611735285008649323551446985295933852737436614 [root@elasticsearch01 ssl]# ls ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
2)製做apiserver證書
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.254.0.1",
    "127.0.0.1",
    "10.2.8.44",
    "10.2.8.65",
    "10.2.8.34",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server [root@elasticsearch01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server 2018/12/27 09:51:56 [INFO] generate received request 2018/12/27 09:51:56 [INFO] received CSR 2018/12/27 09:51:56 [INFO] generating key: rsa-2048 2018/12/27 09:51:56 [INFO] encoded CSR 2018/12/27 09:51:56 [INFO] signed certificate with serial number 399376216731194654868387199081648887334508501005 2018/12/27 09:51:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements"). [root@elasticsearch01 ssl]# ls ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
3)製做kube-proxy證書
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy [root@elasticsearch01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy 2018/12/27 09:52:40 [INFO] generate received request 2018/12/27 09:52:40 [INFO] received CSR 2018/12/27 09:52:40 [INFO] generating key: rsa-2048 2018/12/27 09:52:40 [INFO] encoded CSR 2018/12/27 09:52:40 [INFO] signed certificate with serial number 633932731787505365511506755558794469389165123417 2018/12/27 09:52:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements"). [root@elasticsearch01 ssl]# ls ca-config.json ca-csr.json ca.pem kube-proxy-csr.json kube-proxy.pem server-csr.json server.pem ca.csr ca-key.pem kube-proxy.csr kube-proxy-key.pem server.csr server-key.pem
The Kubernetes master node runs the following components:
kube-apiserver
kube-scheduler
kube-controller-manager
kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the others block, which is what enables a highly available three-master setup.
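Once these components are running (they are deployed below), one hedged way to see which instance currently holds the lock is to look at the leader-election annotation that, in 1.13, is recorded on an Endpoints object in kube-system:

kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep control-plane.alpha.kubernetes.io/leader
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader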
1) Unpack the files
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
2) Deploy the kube-apiserver component
Create the TLS bootstrapping token
[root@elasticsearch01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
f2c50331f07be89278acdaf341ff1ecc

vim /k8s/kubernetes/cfg/token.csv
f2c50331f07be89278acdaf341ff1ecc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Create the apiserver configuration file
vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 \
--bind-address=10.2.8.44 \
--secure-port=6443 \
--advertise-address=10.2.8.44 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
Create the apiserver systemd unit
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload systemctl enable kube-apiserver systemctl start kube-apiserver [root@elasticsearch01 bin]# systemctl status kube-apiserver ● kube-apiserver.service - Kubernetes API Server Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-12-27 14:41:22 CST; 20s ago Docs: https://github.com/kubernetes/kubernetes Main PID: 22060 (kube-apiserver) CGroup: /system.slice/kube-apiserver.service └─22060 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.2.8.44:2379,https://10.2.... [root@elasticsearch01 bin]# ps -ef |grep kube-apiserver root 22060 1 5 14:41 ? 00:00:14 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 --bind-address=10.2.8.44 --secure-port=6443 --advertise-address=10.2.8.44 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem [root@elasticsearch01 bin]# netstat -tulpn |grep kube-apiserve tcp 0 0 10.2.8.44:6443 0.0.0.0:* LISTEN 22060/kube-apiserve tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 22060/kube-apiserve
3) Deploy the kube-scheduler component
Create the kube-scheduler configuration file
vim /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
Parameter notes:
--address: listens on 127.0.0.1:10251 for http /metrics requests; kube-scheduler does not yet support https (a quick local check is sketched after these notes);
--kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: cluster run mode with leader election enabled; the node elected as leader does the work while the other instances block.
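Because the scheduler serves plain http on 127.0.0.1:10251, a quick local sanity check (a sketch, run after the service below is up) is:

curl -s http://127.0.0.1:10251/healthz
# expected output: ok
curl -s http://127.0.0.1:10251/metrics | head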
Create the kube-scheduler systemd unit
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload systemctl enable kube-scheduler.service systemctl start kube-scheduler.service [root@elasticsearch01 bin]# systemctl status kube-scheduler.service ● kube-scheduler.service - Kubernetes Scheduler Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-12-27 15:16:51 CST; 17s ago Docs: https://github.com/kubernetes/kubernetes Main PID: 29026 (kube-scheduler) CGroup: /system.slice/kube-scheduler.service └─29026 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
4) Deploy the kube-controller-manager component
Create the kube-controller-manager configuration file
vim /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
Create the kube-controller-manager systemd unit
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload systemctl enable kube-controller-manager systemctl start kube-controller-manager [root@elasticsearch01 bin]# systemctl status kube-controller-manager ● kube-controller-manager.service - Kubernetes Controller Manager Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-12-27 15:19:19 CST; 11s ago Docs: https://github.com/kubernetes/kubernetes Main PID: 29510 (kube-controller) CGroup: /system.slice/kube-controller-manager.service └─29510 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=tru..
Set environment variables
vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH

source /etc/profile
Check the master service status
kubectl get cs,nodes [root@elasticsearch01 bin]# kubectl get cs,nodes NAME STATUS MESSAGE ERROR componentstatus/controller-manager Healthy ok componentstatus/scheduler Healthy ok componentstatus/etcd-0 Healthy {"health":"true"} componentstatus/etcd-1 Healthy {"health":"true"} componentstatus/etcd-2 Healthy {"health":"true"}
The Kubernetes worker nodes run the following components:
docker
kubelet
kube-proxy
flannel
System environment
CentOS Linux release 7.4.1708 (Core)
Docker version
Server Version: 18.09.0
Cgroup Driver: cgroupfs
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker
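The kubelet configuration created later sets cgroupDriver: cgroupfs, which has to match Docker's cgroup driver; a quick hedged check after installing Docker:

docker info | grep -i "cgroup driver"
# expected: Cgroup Driver: cgroupfs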
kubelet runs on every worker node: it receives requests sent by kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs;
On startup, kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage;
For security, only the secure port accepting https requests is opened; requests are authenticated and authorized, and unauthorized access (e.g. from apiserver or heapster) is rejected.
1) Install the binaries
wget https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
2) Copy the relevant certificates to the node
[root@elasticsearch01 ssl]# scp *.pem 10.2.8.65:$PWD root@10.2.8.65's password: ca-key.pem 100% 1679 914.6KB/s 00:00 ca.pem 100% 1359 1.0MB/s 00:00 kube-proxy-key.pem 100% 1675 1.2MB/s 00:00 kube-proxy.pem 100% 1403 1.1MB/s 00:00 server-key.pem 100% 1679 809.1KB/s 00:00 server.pem
3) Create the kubelet bootstrap kubeconfig file
This is done with a script:
vim /k8s/kubernetes/cfg/environment.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=f2c50331f07be89278acdaf341ff1ecc
KUBE_APISERVER="https://10.2.8.44:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Run the script
[root@elasticsearch02 cfg]# sh environment.sh Cluster "kubernetes" set. User "kubelet-bootstrap" set. Context "default" created. Switched to context "default". Cluster "kubernetes" set. User "kube-proxy" set. Context "default" created. Switched to context "default". [root@elasticsearch02 cfg]# ls bootstrap.kubeconfig environment.sh kube-proxy.kubeconfig
4) Create the kubelet parameter configuration template
vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.2.8.65
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
5) Create the kubelet configuration file
vim /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.2.8.65 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
6) Create the kubelet systemd unit
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
7) Bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
Note that this connects to the default localhost:8080 port, so it can be run on the master.
[root@elasticsearch01 ssl]# kubectl create clusterrolebinding kubelet-bootstrap \ > --clusterrole=system:node-bootstrapper \ > --user=kubelet-bootstrap clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
8) Start the service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
[root@elasticsearch02 cfg]# systemctl status kubelet ● kubelet.service - Kubernetes Kubelet Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-12-27 17:34:30 CST; 18s ago Main PID: 24676 (kubelet) Memory: 88.6M CGroup: /system.slice/kubelet.service └─24676 /k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.2.8.44 --kubeconfig=/k8s/kubernetes...
9) The master approves the kubelet CSR request
CSR requests can be approved manually or automatically. The automatic approach is recommended, because from v1.8 onward the certificates generated after a CSR is approved can be rotated automatically. The following shows how to approve a CSR request manually.
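For the recommended automatic approach, a hedged sketch (not used in the manual flow below; the binding names are illustrative, the cluster roles are built in) is to bind the CSR-approving cluster roles to the bootstrap user and to the node group:

# auto-approve node-client CSRs from the bootstrap user defined in token.csv above
kubectl create clusterrolebinding auto-approve-csrs-for-bootstrap \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap
# auto-approve certificate renewal CSRs from registered nodes
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes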
View the CSR list
[root@elasticsearch01 ssl]# kubectl get csr NAME AGE REQUESTOR CONDITION node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc 102s kubelet-bootstrap Pending
Approve the node
[root@elasticsearch01 ssl]# kubectl certificate approve node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc certificatesigningrequest.certificates.k8s.io/node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc approved
Check the CSR again
[root@elasticsearch01 ssl]# kubectl get csr NAME AGE REQUESTOR CONDITION node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc 5m13s kubelet-bootstrap Approved,Issued
kube-proxy runs on all nodes; it watches the apiserver for changes to services and endpoints and creates routing rules to load-balance traffic across a service's endpoints (a spot check of these rules is sketched after the service is started).
1) Create the kube-proxy configuration file
vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.2.8.65 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
2) Create the kube-proxy systemd unit
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
3) Start the service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
[root@elasticsearch02 cfg]# systemctl status kube-proxy ● kube-proxy.service - Kubernetes Proxy Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-12-27 18:31:42 CST; 11s ago Main PID: 5376 (kube-proxy) Memory: 40.9M CGroup: /system.slice/kube-proxy.service ‣ 5376 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.2.8.44 --cluster-cidr=10.254.0.0/...
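Once kube-proxy is running you can spot-check the rules it writes; in the default iptables mode it maintains a KUBE-SERVICES chain in the nat table (a hedged check, run on the node):

iptables -t nat -nL KUBE-SERVICES | head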
4) Check the cluster status
[root@elasticsearch01 cfg]# kubectl get nodes NAME STATUS ROLES AGE VERSION 10.2.8.65 Ready <none> 9m15s v1.13.1
5) Deploy node 10.2.8.34 the same way and approve its CSR; after approval the kubelet-client certificate is generated
Note: if kubelet or kube-proxy is misconfigured during this process (for example a wrong listen IP or hostname causing "node not found"), delete the kubelet-client certificates, restart the kubelet service, and re-approve the CSR (a command sketch follows).
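A rough sketch of that recovery, assuming the paths used in this setup:

# on the affected node: fix --hostname-override / address first, then
systemctl stop kubelet
rm -f /k8s/kubernetes/ssl/kubelet-client* /k8s/kubernetes/ssl/kubelet.crt /k8s/kubernetes/ssl/kubelet.key
systemctl start kubelet
# on the master: approve the freshly created CSR again
kubectl get csr
kubectl certificate approve <csr-name>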
[root@elasticsearch03 kubernetes]# ls ssl ca-key.pem kubelet-client-2018-12-27-20-13-52.pem kubelet.crt kube-proxy-key.pem server-key.pem ca.pem kubelet-client-current.pem kubelet.key kube-proxy.pem server.pem [root@elasticsearch01 ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION 10.2.8.34 Ready <none> 13h v1.13.1 10.2.8.65 Ready <none> 14h v1.13.1
By default there is no flanneld network, so Pods on different nodes cannot communicate with each other; they can only communicate within a node. To keep the deployment steps clear, flanneld is installed last.
The flannel service must start before docker. When the flannel service starts it mainly does the following:
Fetches the network configuration from etcd
Allocates a subnet and registers it in etcd
Records the subnet information in /run/flannel/subnet.env
[root@elasticsearch02 cfg]# /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379" set /k8s/network/config '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}' { "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}
The current flanneld version (v0.10.0) does not support etcd v3, so the configuration key and subnet data are written with the etcd v2 API (a read-back check is sketched after these notes);
The Pod network ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager.
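To read the key back and confirm what flanneld will see (same etcdctl v2 client and certificates as above):

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379" get /k8s/network/config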
1) Unpack and install
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
2) Configure flanneld
vim /k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"
Create the flanneld systemd unit
vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
Notes
The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into the /run/flannel/docker file; when docker starts later it uses the environment variables in this file to configure the docker0 bridge;
flanneld uses the interface of the system default route to communicate with other nodes; for nodes with multiple network interfaces (e.g. internal and public), the -iface parameter can be used to specify the interface (see the example after these notes);
flanneld requires root privileges to run.
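For example, to pin flanneld to eth0 on a multi-homed node (an illustrative tweak to the config above, not required in this setup):

FLANNEL_OPTIONS="--etcd-endpoints=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 -iface=eth0 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"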
3) Configure Docker to start with the assigned subnet
Simply change EnvironmentFile to /run/flannel/subnet.env and ExecStart to /usr/bin/dockerd $DOCKER_NETWORK_OPTIONS:
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
4) Start the services
Note: stop docker and the related kubelet before starting flannel, so that flannel can take over the docker0 bridge.
systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker
systemctl restart kubelet
systemctl restart kube-proxy
5) Verify the service
[root@elasticsearch02 bin]# cat /run/flannel/subnet.env DOCKER_OPT_BIP="--bip=10.254.35.1/24" DOCKER_OPT_IPMASQ="--ip-masq=false" DOCKER_OPT_MTU="--mtu=1450" DOCKER_NETWORK_OPTIONS=" --bip=10.254.35.1/24 --ip-masq=false --mtu=1450"
[root@elasticsearch02 bin]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 link/ether 52:54:00:a4:ca:ff brd ff:ff:ff:ff:ff:ff inet 10.2.8.65/24 brd 10.2.8.255 scope global eth0 valid_lft forever preferred_lft forever 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 02:42:06:0a:ab:32 brd ff:ff:ff:ff:ff:ff inet 10.254.35.1/24 brd 10.254.35.255 scope global docker0 valid_lft forever preferred_lft forever 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN link/ether 72:59:dc:2b:0a:21 brd ff:ff:ff:ff:ff:ff inet 10.254.35.0/32 scope global flannel.1 valid_lft forever preferred_lft forever
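As a final cross-node check (a hedged example; substitute the docker0 address the other node actually received, visible in its /run/flannel/subnet.env):

# from 10.2.8.65, assuming 10.2.8.34 was assigned e.g. 10.254.57.1/24
ping -c 3 10.254.57.1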
[root@elasticsearch01 k8s]# kubectl get nodes NAME STATUS ROLES AGE VERSION 10.2.8.34 Ready <none> 16h v1.13.1 10.2.8.65 Ready <none> 18h v1.13.1