Kubernetes 1.13 has been released, the fourth and final release of 2018. With only ten weeks since the previous version, it is one of the shortest release cycles to date. The release focuses on the stability and extensibility of Kubernetes, with three major features around storage and cluster lifecycle graduating to general availability.
The core features of Kubernetes 1.13 are: simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS server.
Simplified cluster management with kubeadm
Most people who work with Kubernetes regularly have used kubeadm at some point. It is the key tool for managing the cluster lifecycle, covering everything from creation to configuration to upgrades. With the 1.13 release, kubeadm has graduated to GA and is now generally available. kubeadm handles bootstrapping production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure yet simple join flow for new nodes and supporting easy upgrades.
The most notable aspect of this GA release is the set of advanced features that have graduated, in particular pluggability and configurability. kubeadm aims to provide a toolbox for both administrators and higher-level automation systems, and this release is a major step in that direction.
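For comparison with the manual binary installation this guide performs, a minimal kubeadm workflow looks roughly like the following sketch; the pod CIDR matches the flannel network used later in this guide, and the join token/hash are placeholders:

```
# Bootstrap a control plane with kubeadm (sketch; not used in this guide)
kubeadm init --pod-network-cidr=172.18.0.0/16

# Print the join command for worker nodes
kubeadm token create --print-join-command

# On each worker node (token and hash are placeholders):
# kubeadm join 192.168.4.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```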
Container Storage Interface (CSI)
The Container Storage Interface was first introduced as an alpha feature in 1.9, moved to beta in 1.10, and is now GA. With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage vendors can write code that interoperates with Kubernetes without touching any Kubernetes core code. The CSI specification itself has also reached 1.0.
With CSI now stable, plugin authors can develop out-of-tree storage plugins at their own pace; see the CSI documentation for details.
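As an illustration (not part of the original announcement), consuming a CSI driver from the cluster side amounts to referencing its driver name in a StorageClass; `csi.example.com` below is a hypothetical driver name, not a real plugin:

```
# Hypothetical StorageClass backed by a CSI driver named csi.example.com
cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-example-sc
provisioner: csi.example.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
```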
CoreDNS becomes the default DNS server for Kubernetes
In 1.11, the team announced that CoreDNS had reached general availability for DNS-based service discovery. In the latest 1.13 release, CoreDNS officially replaces kube-dns as the default DNS server in Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backward-compatible and extensible integration with Kubernetes. Because CoreDNS runs as a single executable in a single process, it has fewer moving parts than the previous DNS server, and it supports flexible use cases through custom DNS entries. In addition, since CoreDNS is written in Go, it benefits from strong memory safety.
CoreDNS is now the recommended DNS solution for Kubernetes 1.13 and later. Kubernetes has already switched its common test infrastructure to use CoreDNS by default, and the team recommends that users switch as well. kube-dns will still be supported for at least one more release, but now is the time to start planning the migration. Many OSS installer tools, including kubeadm as of 1.11, have already made the switch.
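If you want to confirm which DNS server a given cluster is running, the CoreDNS deployment can be inspected in the kube-system namespace; this assumes your installer deployed CoreDNS (it intentionally keeps the `k8s-app=kube-dns` label for compatibility):

```
# CoreDNS pods still carry the k8s-app=kube-dns label for compatibility
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide

# Its configuration lives in the coredns ConfigMap
kubectl -n kube-system get configmap coredns -o yaml
```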
| IP address | Hostname | CPU | Memory | Disk |
| --- | --- | --- | --- | --- |
| 192.168.4.100 | master | 1C | 1G | 40G |
| 192.168.4.21 | node | 1C | 1G | 40G |
| 192.168.4.56 | node1 | 1C | 1G | 40G |
Download link: https://pan.baidu.com/s/1wO6T7byhaJYBuu2JlhZvkQ
Extraction code: pm9u
Overview of the cluster components:
Master node:
The master node runs four main components: the APIServer, the scheduler, the controller-manager, and etcd.
APIServer: the APIServer exposes the RESTful Kubernetes API and is the unified entry point for administrative commands. Every create, delete, update, or query of a resource goes through the APIServer before being persisted to etcd. kubectl, the client tool shipped with Kubernetes, is internally just a wrapper around the Kubernetes API and talks directly to the APIServer (a quick way to observe this is shown after these component descriptions).
scheduler: the scheduler assigns Pods to suitable Nodes. Treated as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with a default scheduling algorithm, but it also exposes an interface so users can plug in a scheduling algorithm of their own.
controller-manager: if the APIServer does the front-office work, the controller-manager takes care of the back office. Each resource type has a corresponding controller, and the controller-manager is responsible for running these controllers. For example, when we create a Pod through the APIServer, the APIServer's job is done as soon as the Pod object has been created; it is the controllers that then drive the Pod toward its desired state.
etcd: etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what backs the RESTful API.
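To make the kubectl-to-APIServer relationship described above concrete, you can raise kubectl's verbosity and watch the REST calls it issues; this works against any reachable cluster:

```
# -v=8 logs the HTTP requests kubectl sends to the APIServer
kubectl get pods -v=8

# Or hit a REST path directly through the APIServer
kubectl get --raw /api/v1/namespaces/default/pods
```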
Node:
Each node runs two main components: the kubelet and kube-proxy.
kube-proxy: this component implements service discovery and reverse proxying in Kubernetes. kube-proxy forwards TCP and UDP connections and by default uses a round-robin algorithm to distribute client traffic across the backend Pods of a Service. For service discovery, kube-proxy uses etcd's watch mechanism to monitor changes to Service and Endpoint objects in the cluster and maintains a mapping from Services to Endpoints, so changes to backend Pod IPs are transparent to callers. kube-proxy also supports session affinity (a Service example follows these component descriptions).
kubelet: the kubelet is the master's agent on each node and the most important component on the node. It maintains and manages all containers on that node, except containers that were not created through Kubernetes. In essence, it is responsible for reconciling the actual state of Pods with their desired state.
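For illustration, session affinity is enabled per Service; the Service below is a hypothetical example (name, selector, and ports are placeholders), not part of this deployment:

```
# Hypothetical Service demonstrating kube-proxy session affinity (ClientIP)
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
EOF
```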
```
# Disable the firewall and SELinux on all nodes
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
```
```
# Disable swap (required by the kubelet) and comment out the swap entry in /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap swap defaults 0 0
```
```
# Enable IP forwarding and bridged traffic filtering
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
```
```
# Install and start Docker CE
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker
```
```
# Create the working directories for etcd and kubernetes
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
```
```
# Install the cfssl toolchain for generating certificates
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
```
```
# CA config for the etcd certificates
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
```
# CSR for the etcd CA
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen"
    }
  ]
}
EOF
```
```
# CSR for the etcd server certificate; hosts lists all etcd member IPs
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.4.100",
    "192.168.4.21",
    "192.168.4.56"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen"
    }
  ]
}
EOF
```
```
# Generate the etcd CA and server certificates
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```
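Before distributing the certificates, it can be worth inspecting what was generated; `cfssl-certinfo` prints the SANs and validity period:

```
# Inspect the etcd server certificate (SANs should list all three member IPs)
cfssl-certinfo -cert server.pem
```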
```
# CA config for the Kubernetes certificates
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
```
# CSR for the Kubernetes CA, then generate the CA certificate
cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```
```
# CSR for the kube-apiserver certificate; hosts covers the service VIP,
# the master IP, and the in-cluster DNS names
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.4.100",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
```
```
# CSR for the kube-proxy client certificate
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```
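As an optional sanity check, the client certificate's CN must be `system:kube-proxy` for RBAC to recognize kube-proxy; `openssl` can confirm it:

```
# Confirm the subject CN and validity window of the kube-proxy client certificate
openssl x509 -in kube-proxy.pem -noout -subject -dates
```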
```
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FQjjiRDp8IKGT+UDM+GbQLBzF3DqDJ+pKnMIcHGyO/o root@qas-k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
|o.==o o. ..      |
|ooB+o+ o.  .     |
|B++@o o .        |
|=X**o    .       |
|o=O. .  S        |
|..+              |
|oo .             |
|* .              |
|o+E              |
+----[SHA256]-----+

# Copy the SSH key to the target hosts to enable passwordless SSH login
# ssh-copy-id 192.168.4.21
# ssh-copy-id 192.168.4.56
```
```
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
```
```
vim /k8s/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.100:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```
```
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/k8s/etcd/ssl/server.pem \
  --key-file=/k8s/etcd/ssl/server-key.pem \
  --peer-cert-file=/k8s/etcd/ssl/server.pem \
  --peer-key-file=/k8s/etcd/ssl/server-key.pem \
  --trusted-ca-file=/k8s/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
```
cp ca*pem server*pem /k8s/etcd/ssl
```
```
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
```
```
cd /k8s/
scp -r etcd 192.168.4.21:/k8s/
scp -r etcd 192.168.4.56:/k8s/
scp /usr/lib/systemd/system/etcd.service 192.168.4.21:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service 192.168.4.56:/usr/lib/systemd/system/etcd.service

#-- Node 1
vim /k8s/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.21:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.21:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.21:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.21:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#-- Node 2
vim /k8s/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.56:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.56:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.56:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.56:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```
```
[root@master ~]# cd /k8s/etcd/bin/
[root@master bin]# ./etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.100:2379,\
> https://192.168.4.21:2379,\
> https://192.168.4.56:2379" cluster-health
member 2345cdd5020eb294 is healthy: got healthy result from https://192.168.4.100:2379
member 91d74712f79e544f is healthy: got healthy result from https://192.168.4.21:2379
member b313b7e8d0a528cc is healthy: got healthy result from https://192.168.4.56:2379
cluster is healthy
```

Note: start at least two etcd nodes at the same time. The cluster cannot come up from a single node alone (etcd will hang in the activating state).
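Besides `cluster-health`, `etcdctl member list` shows each member's name and peer/client URLs, which is useful when a node fails to join; a sketch using the same TLS flags as above:

```
# List the etcd cluster members and their advertised URLs
/k8s/etcd/bin/etcdctl \
  --ca-file=/k8s/etcd/ssl/ca.pem \
  --cert-file=/k8s/etcd/ssl/server.pem \
  --key-file=/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.4.100:2379" \
  member list
```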
```
cd /k8s/etcd/ssl/

# Write the flannel network configuration into etcd
/k8s/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379" \
  set /coreos.com/network/config '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
```
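To confirm the write succeeded, read the key back (flanneld on each node reads this same key at startup):

```
/k8s/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.4.100:2379" \
  get /coreos.com/network/config
```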
```
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
```
```
vim /k8s/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
```
```
vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
# Configure Docker to pick up the network options generated by flannel
vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
```
```
cd /k8s/
scp -r kubernetes 192.168.4.21:/k8s/
scp -r kubernetes 192.168.4.56:/k8s/
scp /k8s/kubernetes/cfg/flanneld 192.168.4.21:/k8s/kubernetes/cfg/flanneld
scp /k8s/kubernetes/cfg/flanneld 192.168.4.56:/k8s/kubernetes/cfg/flanneld
scp /usr/lib/systemd/system/docker.service 192.168.4.21:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service 192.168.4.56:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/flanneld.service 192.168.4.21:/usr/lib/systemd/system/flanneld.service
scp /usr/lib/systemd/system/flanneld.service 192.168.4.56:/usr/lib/systemd/system/flanneld.service

# Start the services
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
```
Verify that the flannel network is in effect:
```
[root@node ssl]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:99:6a brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.21/16 brd 192.168.255.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::93dc:dfaf:2ddf:1aa9/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:5a:29:34:85 brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.1/24 brd 172.18.58.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 16:6e:22:47:d0:cd brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
```
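As an optional connectivity check (the address below is taken from the output above and will differ per node), hosts and containers should now be able to reach the flannel subnets of other nodes:

```
# From another node: ping this node's docker0 gateway across the overlay
ping -c 3 172.18.58.1

# Or test container-to-container reachability across hosts
docker run --rm busybox ping -c 3 172.18.58.1
```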
The kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager.
```
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
```
```
cp *pem /k8s/kubernetes/ssl/
```
```
[root@master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
91af09d8720f467def95b65704862025
[root@master ~]# cat /k8s/kubernetes/cfg/token.csv
91af09d8720f467def95b65704862025,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
```
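A minimal sketch for writing the file in one step, reusing the token generated above:

```
BOOTSTRAP_TOKEN=91af09d8720f467def95b65704862025

# Format: token,user,uid,"group"
cat > /k8s/kubernetes/cfg/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```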
```
vim /k8s/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 \
--bind-address=192.168.4.100 \
--secure-port=6443 \
--advertise-address=192.168.4.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
```
```
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
```
```
[root@master ~]# ps -ef | grep kube-apiserver
root      90572 118543  0 10:27 pts/0    00:00:00 grep --color=auto kube-apiserver
root     119804      1  1 Feb26 ?        00:22:45 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 --bind-address=192.168.4.100 --secure-port=6443 --advertise-address=192.168.4.100 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem
```
```
vim /k8s/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
```
```
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
```
```
[root@master ~]# ps -ef | grep kube-scheduler
root       3591      1  0 Feb25 ?        00:16:17 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      90724 118543  0 10:28 pts/0    00:00:00 grep --color=auto kube-scheduler

[root@master ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 14:58:31 CST; 1 day 19h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3591 (kube-scheduler)
   Memory: 36.9M
   CGroup: /system.slice/kube-scheduler.service
           └─3591 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Feb 27 10:22:54 master kube-scheduler[3591]: I0227 10:22:54.611139    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:01 master kube-scheduler[3591]: I0227 10:23:01.496338    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:02 master kube-scheduler[3591]: I0227 10:23:02.346595    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:19 master kube-scheduler[3591]: I0227 10:23:19.677905    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:26:36 master kube-scheduler[3591]: I0227 10:26:36.850715    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:21 master kube-scheduler[3591]: I0227 10:27:21.523891    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:22 master kube-scheduler[3591]: I0227 10:27:22.520733    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:12 master kube-scheduler[3591]: I0227 10:28:12.498729    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:33 master kube-scheduler[3591]: I0227 10:28:33.519011    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:50 master kube-scheduler[3591]: I0227 10:28:50.573353    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Hint: Some lines were ellipsized, use -l to show in full.
```
```
vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
```
```
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
```
```
[root@master ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-26 14:14:18 CST; 20h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 120023 (kube-controller)
   Memory: 76.2M
   CGroup: /system.slice/kube-controller-manager.service
           └─120023 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elec...

Feb 27 10:31:30 master kube-controller-manager[120023]: I0227 10:31:30.722696  120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.088697  120023 gc_controller.go:144] GC'ing orphaned
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.094678  120023 gc_controller.go:173] GC'ing unsche...ting.
Feb 27 10:31:34 master kube-controller-manager[120023]: I0227 10:31:34.271634  120023 attach_detach_controller.go:634] pr...4.21"
Feb 27 10:31:35 master kube-controller-manager[120023]: I0227 10:31:35.723490  120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.377876  120023 attach_detach_controller.go:634] pr....100"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.498005  120023 attach_detach_controller.go:634] pr...4.56"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.500915  120023 cronjob_controller.go:111] Found 0 jobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505005  120023 cronjob_controller.go:119] Found 0 cronjobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505021  120023 cronjob_controller.go:122] Found 0 groups
Hint: Some lines were ellipsized, use -l to show in full.

[root@master ~]# ps -ef | grep kube-controller-manager
root      90967 118543  0 10:31 pts/0    00:00:00 grep --color=auto kube-controller-manager
root     120023      1  0 Feb26 ?        00:08:42 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem
```
```
vim /etc/profile

PATH=/k8s/kubernetes/bin:$PATH:$HOME/bin

# Reload the profile to apply the new PATH
source /etc/profile
```
```
[root@master ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
```
The kubernetes worker nodes run the following components: kubelet and kube-proxy.
```
cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.21:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.56:/k8s/kubernetes/bin/
```
```
# On the master node
cd /k8s/kubernetes/ssl/

# Edit and run this script
vim environment.sh

# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=91af09d8720f467def95b65704862025
KUBE_APISERVER="https://192.168.4.100:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
```
cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.21:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.56:/k8s/kubernetes/cfg/
```
Create the kubelet configuration template file:
```
# Node 1
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.21
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

# Node 2
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.56
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
```
```
# Node 1
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.21 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

# Node 2
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.56 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
```
```
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```
```
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```
```
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
```
CSR requests can be approved manually or automatically. The automatic approach is recommended, because starting with v1.8 the certificates generated from approved CSRs can be rotated automatically.
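A common way to enable automatic approval is to bind the built-in CSR-approval ClusterRoles; this is a sketch that assumes the kubelet-bootstrap user configured earlier in this guide:

```
# Auto-approve the initial client CSRs submitted by the bootstrap user
kubectl create clusterrolebinding auto-approve-csrs \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap

# Auto-approve certificate renewal CSRs submitted by the nodes themselves
kubectl create clusterrolebinding auto-approve-renewals \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes
```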
Manually approving CSR requests
View the CSR list:
```
# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   39m     kubelet-bootstrap   Pending
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   5m5s    kubelet-bootstrap   Pending

# kubectl certificate approve node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs
certificatesigningrequest.certificates.k8s.io/node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs approved

# kubectl certificate approve node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s
certificatesigningrequest.certificates.k8s.io/node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s approved

# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   41m     kubelet-bootstrap   Approved,Issued
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   7m32s   kubelet-bootstrap   Approved,Issued
```
```
[root@master ssl]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.4.100   Ready    <none>   43h   v1.13.0
192.168.4.21    Ready    <none>   20h   v1.13.0
192.168.4.56    Ready    <none>   20h   v1.13.0
```
kube-proxy runs on all nodes. It watches the apiserver for changes to Service and Endpoint objects and creates routing rules to load-balance traffic across services.
```
vim /k8s/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.100 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
```
```
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
```
```
[root@node ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 15:38:16 CST; 1 day 19h ago
 Main PID: 2887 (kube-proxy)
   Memory: 8.2M
   CGroup: /system.slice/kube-proxy.service
           ‣ 2887 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.4.100 --cluster-cidr=10....

Feb 27 11:06:44 node kube-proxy[2887]: I0227 11:06:44.625875    2887 config.go:141] Calling handler.OnEndpointsUpdate
```
Label the node and master nodes:
```
kubectl label node 192.168.4.100 node-role.kubernetes.io/master='master'
kubectl label node 192.168.4.21 node-role.kubernetes.io/node='node'
kubectl label node 192.168.4.56 node-role.kubernetes.io/node='node'
```
```
[root@master ~]# kubectl get node,cs
NAME                 STATUS   ROLES    AGE   VERSION
node/192.168.4.100   Ready    master   43h   v1.13.0
node/192.168.4.21    Ready    node     20h   v1.13.0
node/192.168.4.56    Ready    node     20h   v1.13.0

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
```
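As a final smoke test (the deployment name and image below are illustrative, not part of the original steps), you can schedule a workload and expose it through a NodePort:

```
# Create a test deployment, expose it, and check that pods land on the nodes
kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide
```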