- kube-apiserver: the core of the cluster. It exposes the cluster API, acts as the communication hub for all cluster components, and enforces cluster security controls.
- etcd: the cluster's data store, holding cluster configuration and state. It is critical: if its data is lost the cluster cannot be recovered, so building an HA cluster starts with an HA etcd cluster.
- kube-scheduler: the Pod scheduling center of the cluster. With a default kubeadm installation the --leader-elect flag is already set to true, so only one kube-scheduler in the master cluster is active at any time.
- kube-controller-manager: the cluster state manager. When the actual state diverges from the desired state, kcm works to bring the cluster back; for example, when a Pod dies, kcm creates a new one to restore the desired state of the corresponding ReplicaSet. With a default kubeadm installation the --leader-elect flag is already set to true, so only one kube-controller-manager in the master cluster is active at any time.
- kubelet: the Kubernetes node agent, responsible for talking to the Docker engine on its node.
- kube-proxy: runs on every node and forwards traffic from a Service VIP to its endpoint Pods, currently mainly by programming iptables rules.

The keepalived cluster provides a virtual IP address that fronts k8s-master1, k8s-master2 and k8s-master3.

nginx load-balances the apiservers of k8s-master1, k8s-master2 and k8s-master3. External kubectl clients and the nodes can then reach the master cluster's apiserver through the keepalived virtual IP (192.168.60.80) and the nginx port (8443), as sketched below.
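As a hedged illustration of that access path (not a step from the original guide): once the HA pieces are in place and the apiserver certificates include the virtual IP (handled later in this guide), a client talks to the masters only through the VIP and the nginx port, for example:

```
# Hedged sketch: query the cluster through the keepalived VIP (192.168.60.80)
# and the nginx load-balancer port (8443) instead of a single master's 6443.
$ kubectl --kubeconfig=/etc/kubernetes/admin.conf \
    --server=https://192.168.60.80:8443 get nodes
```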
Host name | IP address | Description | Components |
---|---|---|---|
k8s-master1 | 192.168.60.71 | master node 1 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster |
k8s-master2 | 192.168.60.72 | master node 2 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster |
k8s-master3 | 192.168.60.73 | master node 3 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster |
none | 192.168.60.80 | keepalived virtual IP | none |
k8s-node1 ~ 8 | 192.168.60.81 ~ 88 | 8 worker nodes | kubelet, kube-proxy |
```
$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
```
```
$ docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64
```
```
$ kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```
```
$ kubelet --version
Kubernetes v1.6.4
```
A Docker registry accelerator (for example the DaoCloud mirror) can be configured on the nodes to speed up image pulls: https://www.daocloud.io/mirror#accelerator-doc
```
$ docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kube-proxy-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
$ docker pull quay.io/coreos/flannel:v0.7.1-amd64
$ docker pull gcr.io/google_containers/heapster-amd64:v1.3.0
$ docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
$ docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
$ docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
$ docker pull gcr.io/google_containers/etcd-amd64:3.0.17
$ docker pull gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
$ docker pull gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
$ docker pull nginx:latest
$ docker pull gcr.io/google_containers/pause-amd64:3.0
```
```
$ git clone https://github.com/cookeem/kubeadm-ha
$ cd kubeadm-ha
```
```
$ mkdir -p docker-images
$ docker save -o docker-images/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.6.4
$ docker save -o docker-images/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.4
$ docker save -o docker-images/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4
$ docker save -o docker-images/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.6.4
$ docker save -o docker-images/kubernetes-dashboard-amd64 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
$ docker save -o docker-images/flannel quay.io/coreos/flannel:v0.7.1-amd64
$ docker save -o docker-images/heapster-amd64 gcr.io/google_containers/heapster-amd64:v1.3.0
$ docker save -o docker-images/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
$ docker save -o docker-images/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
$ docker save -o docker-images/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
$ docker save -o docker-images/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17
$ docker save -o docker-images/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
$ docker save -o docker-images/heapster-influxdb-amd64 gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
$ docker save -o docker-images/pause-amd64 gcr.io/google_containers/pause-amd64:3.0
$ docker save -o docker-images/nginx nginx:latest
```
```
$ scp -r * root@k8s-master1:/root/kubeadm-ha
$ scp -r * root@k8s-master2:/root/kubeadm-ha
$ scp -r * root@k8s-master3:/root/kubeadm-ha
$ scp -r * root@k8s-node1:/root/kubeadm-ha
$ scp -r * root@k8s-node2:/root/kubeadm-ha
$ scp -r * root@k8s-node3:/root/kubeadm-ha
$ scp -r * root@k8s-node4:/root/kubeadm-ha
$ scp -r * root@k8s-node5:/root/kubeadm-ha
$ scp -r * root@k8s-node6:/root/kubeadm-ha
$ scp -r * root@k8s-node7:/root/kubeadm-ha
$ scp -r * root@k8s-node8:/root/kubeadm-ha
```
All of the following operations on the kubernetes nodes are performed as the root user.
Add the kubernetes yum repository on all kubernetes nodes:
```
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```
```
$ yum update -y
```
```
$ systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld
```
```
$ vi /etc/selinux/config
SELINUX=permissive
```
```
$ vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```
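These settings take effect after the reboot below; as a hedged shortcut they can also be applied immediately:

```
# Hedged sketch: load the bridge netfilter module and apply the sysctl settings
# without waiting for the reboot (the reboot is still needed for the SELinux change).
$ modprobe br_netfilter
$ sysctl --system
```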
```
$ reboot
```
After the reboot, confirm that SELinux is in permissive mode:
```
$ getenforce
Permissive
```
```
$ yum install -y docker kubelet kubeadm kubernetes-cni
$ systemctl enable docker && systemctl start docker
$ systemctl enable kubelet && systemctl start kubelet
```
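Because the yum repository may by now carry packages newer than the v1.6.4 this guide targets, a hedged alternative is to pin the versions explicitly (the exact package version strings are an assumption, not from the original guide):

```
# Hedged sketch: pin kubelet/kubeadm to the guide's version instead of
# installing whatever is latest in the repository.
$ yum install -y docker kubelet-1.6.4 kubeadm-1.6.4 kubernetes-cni
```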
```
$ docker load -i /root/kubeadm-ha/docker-images/etcd-amd64
$ docker load -i /root/kubeadm-ha/docker-images/flannel
$ docker load -i /root/kubeadm-ha/docker-images/heapster-amd64
$ docker load -i /root/kubeadm-ha/docker-images/heapster-grafana-amd64
$ docker load -i /root/kubeadm-ha/docker-images/heapster-influxdb-amd64
$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-dnsmasq-nanny-amd64
$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-kube-dns-amd64
$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-sidecar-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-apiserver-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-controller-manager-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-proxy-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kubernetes-dashboard-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-scheduler-amd64
$ docker load -i /root/kubeadm-ha/docker-images/pause-amd64
$ docker load -i /root/kubeadm-ha/docker-images/nginx

$ docker images
REPOSITORY                                               TAG            IMAGE ID       CREATED         SIZE
gcr.io/google_containers/kube-apiserver-amd64            v1.6.4         4e3810a19a64   5 weeks ago     150.6 MB
gcr.io/google_containers/kube-proxy-amd64                v1.6.4         e073a55c288b   5 weeks ago     109.2 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.6.4         0ea16a85ac34   5 weeks ago     132.8 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.6.4         1fab9be555e1   5 weeks ago     76.75 MB
gcr.io/google_containers/kubernetes-dashboard-amd64      v1.6.1         71dfe833ce74   6 weeks ago     134.4 MB
quay.io/coreos/flannel                                   v0.7.1-amd64   cd4ae0be5e1b   10 weeks ago    77.76 MB
gcr.io/google_containers/heapster-amd64                  v1.3.0         f9d33bedfed3   3 months ago    68.11 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64           1.14.1         fc5e302d8309   4 months ago    44.52 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64          1.14.1         f8363dbf447b   4 months ago    52.36 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64     1.14.1         1091847716ec   4 months ago    44.84 MB
gcr.io/google_containers/etcd-amd64                      3.0.17         243830dae7dd   4 months ago    168.9 MB
gcr.io/google_containers/heapster-grafana-amd64          v4.0.2         a1956d2a1a16   5 months ago    131.5 MB
gcr.io/google_containers/heapster-influxdb-amd64         v1.1.1         d3fccbedd180   5 months ago    11.59 MB
nginx                                                    latest         01f818af747d   6 months ago    181.6 MB
gcr.io/google_containers/pause-amd64                     3.0            99e59f495ffa   14 months ago   746.9 kB
```
On k8s-master1, remove any previous etcd state and start etcd0:
```
$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
    --restart always \
    -v /etc/ssl/certs:/etc/ssl/certs \
    -v /var/lib/etcd-cluster:/var/lib/etcd \
    -p 4001:4001 \
    -p 2380:2380 \
    -p 2379:2379 \
    --name etcd \
    gcr.io/google_containers/etcd-amd64:3.0.17 \
    etcd --name=etcd0 \
    --advertise-client-urls=http://192.168.60.71:2379,http://192.168.60.71:4001 \
    --listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
    --initial-advertise-peer-urls=http://192.168.60.71:2380 \
    --listen-peer-urls=http://0.0.0.0:2380 \
    --initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
    --initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \
    --initial-cluster-state=new \
    --auto-tls \
    --peer-auto-tls \
    --data-dir=/var/lib/etcd
```
On k8s-master2, do the same for etcd1:
```
$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
    --restart always \
    -v /etc/ssl/certs:/etc/ssl/certs \
    -v /var/lib/etcd-cluster:/var/lib/etcd \
    -p 4001:4001 \
    -p 2380:2380 \
    -p 2379:2379 \
    --name etcd \
    gcr.io/google_containers/etcd-amd64:3.0.17 \
    etcd --name=etcd1 \
    --advertise-client-urls=http://192.168.60.72:2379,http://192.168.60.72:4001 \
    --listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
    --initial-advertise-peer-urls=http://192.168.60.72:2380 \
    --listen-peer-urls=http://0.0.0.0:2380 \
    --initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
    --initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \
    --initial-cluster-state=new \
    --auto-tls \
    --peer-auto-tls \
    --data-dir=/var/lib/etcd
```
On k8s-master3, do the same for etcd2:
```
$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
    --restart always \
    -v /etc/ssl/certs:/etc/ssl/certs \
    -v /var/lib/etcd-cluster:/var/lib/etcd \
    -p 4001:4001 \
    -p 2380:2380 \
    -p 2379:2379 \
    --name etcd \
    gcr.io/google_containers/etcd-amd64:3.0.17 \
    etcd --name=etcd2 \
    --advertise-client-urls=http://192.168.60.73:2379,http://192.168.60.73:4001 \
    --listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
    --initial-advertise-peer-urls=http://192.168.60.73:2380 \
    --listen-peer-urls=http://0.0.0.0:2380 \
    --initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
    --initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \
    --initial-cluster-state=new \
    --auto-tls \
    --peer-auto-tls \
    --data-dir=/var/lib/etcd
```
```
$ docker exec -ti etcd ash

$ etcdctl member list
1a32c2d3f1abcad0: name=etcd2 peerURLs=http://192.168.60.73:2380 clientURLs=http://192.168.60.73:2379,http://192.168.60.73:4001 isLeader=false
1da4f4e8b839cb79: name=etcd1 peerURLs=http://192.168.60.72:2380 clientURLs=http://192.168.60.72:2379,http://192.168.60.72:4001 isLeader=false
4238bcb92d7f2617: name=etcd0 peerURLs=http://192.168.60.71:2380 clientURLs=http://192.168.60.71:2379,http://192.168.60.71:4001 isLeader=true

$ etcdctl cluster-health
member 1a32c2d3f1abcad0 is healthy: got healthy result from http://192.168.60.73:2379
member 1da4f4e8b839cb79 is healthy: got healthy result from http://192.168.60.72:2379
member 4238bcb92d7f2617 is healthy: got healthy result from http://192.168.60.71:2379
cluster is healthy

$ exit
```
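As an optional, hedged sanity check (not part of the original steps), a key can be written through one member and read back through another to confirm replication:

```
# Hedged sketch using the etcd v2 API that this image's etcdctl defaults to:
# write via etcd0's client URL, then read via etcd1's client URL.
$ docker exec etcd etcdctl --endpoints=http://192.168.60.71:2379 set /ha-test ok
$ docker exec etcd etcdctl --endpoints=http://192.168.60.72:2379 get /ha-test
```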
```
$ vi /root/kubeadm-ha/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.6.4
networking:
  podSubnet: 10.244.0.0/16
etcd:
  endpoints:
  - http://192.168.60.71:2379
  - http://192.168.60.72:2379
  - http://192.168.60.73:2379
```
```
$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
```
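The kubelet's cgroup driver must match Docker's. A quick, hedged check:

```
# The --cgroup-driver value above must match whatever Docker reports here;
# if it reports "systemd", keep the systemd line instead of the cgroupfs one.
$ docker info 2>/dev/null | grep -i "cgroup driver"
```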
```
$ kubeadm init --config=/root/kubeadm-ha/kubeadm-init.yaml
```
```
$ vi ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf

$ source ~/.bashrc
```
```
$ kubectl create -f /root/kubeadm-ha/kube-flannel
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
```
```
$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   kube-apiserver-k8s-master1            1/1       Running   0          3m        192.168.60.71   k8s-master1
kube-system   kube-controller-manager-k8s-master1   1/1       Running   0          3m        192.168.60.71   k8s-master1
kube-system   kube-dns-3913472980-k9mt6             3/3       Running   0          4m        10.244.0.104    k8s-master1
kube-system   kube-flannel-ds-3hhjd                 2/2       Running   0          1m        192.168.60.71   k8s-master1
kube-system   kube-proxy-rzq3t                      1/1       Running   0          4m        192.168.60.71   k8s-master1
kube-system   kube-scheduler-k8s-master1            1/1       Running   0          3m        192.168.60.71   k8s-master1
```
```
$ kubectl create -f /root/kubeadm-ha/kube-dashboard/
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
```
```
$ kubectl proxy --address='0.0.0.0' &
```
The dashboard is now accessible at: http://k8s-master1:30000
```
$ kubectl taint nodes --all node-role.kubernetes.io/master-
node "k8s-master1" tainted
```
```
$ kubectl create -f /root/kubeadm-ha/kube-heapster
```
```
$ systemctl restart docker kubelet
```
```
$ kubectl get all --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   heapster-783524908-kn6jd                1/1       Running   1          9m        10.244.0.111    k8s-master1
kube-system   kube-apiserver-k8s-master1              1/1       Running   1          15m       192.168.60.71   k8s-master1
kube-system   kube-controller-manager-k8s-master1     1/1       Running   1          15m       192.168.60.71   k8s-master1
kube-system   kube-dns-3913472980-k9mt6               3/3       Running   3          16m       10.244.0.110    k8s-master1
kube-system   kube-flannel-ds-3hhjd                   2/2       Running   3          13m       192.168.60.71   k8s-master1
kube-system   kube-proxy-rzq3t                        1/1       Running   1          16m       192.168.60.71   k8s-master1
kube-system   kube-scheduler-k8s-master1              1/1       Running   1          15m       192.168.60.71   k8s-master1
kube-system   kubernetes-dashboard-2039414953-d46vw   1/1       Running   1          11m       10.244.0.109    k8s-master1
kube-system   monitoring-grafana-3975459543-8l94z     1/1       Running   1          9m        10.244.0.112    k8s-master1
kube-system   monitoring-influxdb-3480804314-72ltf    1/1       Running   1          9m        10.244.0.113    k8s-master1
```
Revisit http://k8s-master1:30000 — with heapster running, the dashboard also shows CPU and memory usage.
Copy the kubernetes configuration and certificates from k8s-master1 to the other masters:
```
$ scp -r /etc/kubernetes/ k8s-master2:/etc/
$ scp -r /etc/kubernetes/ k8s-master3:/etc/
```
```
$ systemctl daemon-reload && systemctl restart kubelet

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2017-06-27 16:24:22 CST; 1 day 17h ago
     Docs: http://kubernetes.io/docs/
 Main PID: 2780 (kubelet)
   Memory: 92.9M
   CGroup: /system.slice/kubelet.service
           ├─2780 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-...
           └─2811 journalctl -k -f
```
```
$ vi ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf

$ source ~/.bashrc
```
```
$ kubectl get nodes -o wide
NAME          STATUS    AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION
k8s-master1   Ready     26m       v1.6.4    <none>        CentOS Linux 7 (Core)   3.10.0-514.6.1.el7.x86_64
k8s-master2   Ready     2m        v1.6.4    <none>        CentOS Linux 7 (Core)   3.10.0-514.21.1.el7.x86_64
k8s-master3   Ready     2m        v1.6.4    <none>        CentOS Linux 7 (Core)   3.10.0-514.21.1.el7.x86_64
```
Replace `${HOST_IP}` below with the local master's own IP address:
```
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
    - --advertise-address=${HOST_IP}
```
```
$ vi /etc/kubernetes/kubelet.conf
    server: https://${HOST_IP}:6443
```
```
$ systemctl daemon-reload && systemctl restart docker kubelet
```
Inspect the apiserver certificate generated by kubeadm; note that its subject alternative names do not include the keepalived virtual IP, which is why new certificates are created below:
```
$ openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 9486057293403496063 (0x83a53ed95c519e7f)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jun 22 16:22:44 2017 GMT
            Not After : Jun 22 16:22:44 2018 GMT
        Subject: CN=kube-apiserver,
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b:
                    53:4b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:k8s-master1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.71
    Signature Algorithm: sha1WithRSAEncryption
         dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91:
         9e:78:ab:ce
```
```
$ mkdir -p /etc/kubernetes/pki-local
$ cd /etc/kubernetes/pki-local
```
```
$ openssl genrsa -out apiserver.key 2048
```
```
$ openssl req -new -key apiserver.key -subj "/CN=kube-apiserver," -out apiserver.csr
```
```
$ vi apiserver.ext
subjectAltName = DNS:${HOST_NAME},DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,IP:10.96.0.1,IP:${HOST_IP},IP:${VIRTUAL_IP}
```
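As a hedged, concrete illustration, on k8s-master1 the file would end up looking like this (the IPs come from the cluster table above; adjust the host name and host IP on each master):

```
# Illustrative example for k8s-master1 only; 192.168.60.80 is the keepalived virtual IP.
$ cat > /etc/kubernetes/pki-local/apiserver.ext <<EOF
subjectAltName = DNS:k8s-master1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,IP:10.96.0.1,IP:192.168.60.71,IP:192.168.60.80
EOF
```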
```
$ openssl x509 -req -in apiserver.csr \
    -CA /etc/kubernetes/pki/ca.crt \
    -CAkey /etc/kubernetes/pki/ca.key \
    -CAcreateserial \
    -out apiserver.crt -days 365 \
    -extfile /etc/kubernetes/pki-local/apiserver.ext
```
```
$ openssl x509 -noout -text -in apiserver.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 9486057293403496063 (0x83a53ed95c519e7f)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jun 22 16:22:44 2017 GMT
            Not After : Jun 22 16:22:44 2018 GMT
        Subject: CN=kube-apiserver,
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b:
                    53:4b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:k8s-master3, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.73, IP Address:192.168.60.80
    Signature Algorithm: sha1WithRSAEncryption
         dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91:
         9e:78:ab:ce
```
```
$ cp apiserver.crt apiserver.key /etc/kubernetes/pki/
```
Again substituting the local master's IP for `${HOST_IP}`, update the remaining kubeconfig files (a combined substitution sketch follows):
```
$ vi /etc/kubernetes/admin.conf
    server: https://${HOST_IP}:6443
```
```
$ vi /etc/kubernetes/controller-manager.conf
    server: https://${HOST_IP}:6443
```
```
$ vi /etc/kubernetes/scheduler.conf
    server: https://${HOST_IP}:6443
```
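A hedged way to make all three edits at once on k8s-master2 (assuming the copied files still point at k8s-master1's address, https://192.168.60.71:6443):

```
# Hypothetical one-shot substitution; use 192.168.60.73 for HOST_IP on k8s-master3.
$ export HOST_IP=192.168.60.72
$ sed -i "s#server: https://192.168.60.71:6443#server: https://${HOST_IP}:6443#" \
    /etc/kubernetes/admin.conf \
    /etc/kubernetes/controller-manager.conf \
    /etc/kubernetes/scheduler.conf
```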
```
$ systemctl daemon-reload && systemctl restart docker kubelet
```
```
$ kubectl get pod --all-namespaces -o wide | grep k8s-master2
kube-system   kube-apiserver-k8s-master2            1/1       Running   1          55s       192.168.60.72   k8s-master2
kube-system   kube-controller-manager-k8s-master2   1/1       Running   2          18m       192.168.60.72   k8s-master2
kube-system   kube-flannel-ds-t8gkh                 2/2       Running   4          18m       192.168.60.72   k8s-master2
kube-system   kube-proxy-bpgqw                      1/1       Running   1          18m       192.168.60.72   k8s-master2
kube-system   kube-scheduler-k8s-master2            1/1       Running   2          18m       192.168.60.72   k8s-master2

$ kubectl get pod --all-namespaces -o wide | grep k8s-master3
kube-system   kube-apiserver-k8s-master3            1/1       Running   1          1m        192.168.60.73   k8s-master3
kube-system   kube-controller-manager-k8s-master3   1/1       Running   2          18m       192.168.60.73   k8s-master3
kube-system   kube-flannel-ds-tmqmx                 2/2       Running   4          18m       192.168.60.73   k8s-master3
kube-system   kube-proxy-4stg3                      1/1       Running   1          18m       192.168.60.73   k8s-master3
kube-system   kube-scheduler-k8s-master3            1/1       Running   2          18m       192.168.60.73   k8s-master3
```
```
$ kubectl logs -n kube-system kube-controller-manager-k8s-master1
$ kubectl logs -n kube-system kube-controller-manager-k8s-master2
$ kubectl logs -n kube-system kube-controller-manager-k8s-master3
$ kubectl logs -n kube-system kube-scheduler-k8s-master1
$ kubectl logs -n kube-system kube-scheduler-k8s-master2
$ kubectl logs -n kube-system kube-scheduler-k8s-master3
```
```
$ kubectl get deploy --all-namespaces
NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   heapster               1         1         1            1           41m
kube-system   kube-dns               1         1         1            1           48m
kube-system   kubernetes-dashboard   1         1         1            1           43m
kube-system   monitoring-grafana     1         1         1            1           41m
kube-system   monitoring-influxdb    1         1         1            1           41m
```
```
$ kubectl scale --replicas=3 -n kube-system deployment/kube-dns
$ kubectl get pods --all-namespaces -o wide | grep kube-dns

$ kubectl scale --replicas=3 -n kube-system deployment/kubernetes-dashboard
$ kubectl get pods --all-namespaces -o wide | grep kubernetes-dashboard

$ kubectl scale --replicas=3 -n kube-system deployment/heapster
$ kubectl get pods --all-namespaces -o wide | grep heapster

$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-grafana
$ kubectl get pods --all-namespaces -o wide | grep monitoring-grafana

$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-influxdb
$ kubectl get pods --all-namespaces -o wide | grep monitoring-influxdb
```
```
$ yum install -y keepalived
$ systemctl enable keepalived && systemctl restart keepalived
```
```
$ mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
```
```
$ vi /etc/keepalived/check_apiserver.sh
#!/bin/bash

# If kube-apiserver is not found running for 10 consecutive checks (5s apart),
# stop keepalived so the virtual IP fails over to another master.
err=0
for k in $(seq 1 10)
do
    # When only the grep process itself matches, the count is 1,
    # meaning kube-apiserver is not running.
    check_code=$(ps -ef | grep kube-apiserver | wc -l)
    if [ "$check_code" = "1" ]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done

if [ "$err" != "0" ]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
```
```
$ chmod a+x /etc/keepalived/check_apiserver.sh
```
Find the local network interface that carries the 192.168.60.x address; it is needed for the keepalived configuration below:
```
$ ip a | grep 192.168.60
```
Then create the keepalived configuration. `${STATE}` is MASTER on one master and BACKUP on the other two, `${INTERFACE_NAME}` is the local interface name (as shown by the `ip a` command above), `${HOST_IP}` is the local IP address, `${PRIORITY}` is highest on the MASTER node, and `${VIRTUAL_IP}` is 192.168.60.80. A concrete substitution example follows the configuration.
```
$ vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}

vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state ${STATE}
    interface ${INTERFACE_NAME}
    mcast_src_ip ${HOST_IP}
    virtual_router_id 51
    priority ${PRIORITY}
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 4be37dc3b4c90194d1600c483e10ad1d
    }
    virtual_ipaddress {
        ${VIRTUAL_IP}
    }
    track_script {
        chk_apiserver
    }
}
```
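As a hedged, concrete illustration for k8s-master1 (assuming the placeholders above were typed into the file literally): the MASTER/BACKUP split and the 102/101/100 priorities are assumptions rather than values from the original guide, while ens160 is the interface name that appears later in this guide's keepalived status output.

```
# Hypothetical substitution for k8s-master1; the other masters would use
# state BACKUP with priorities 101 and 100 and their own host IPs.
$ sed -i -e 's/${STATE}/MASTER/' \
         -e 's/${INTERFACE_NAME}/ens160/' \
         -e 's/${HOST_IP}/192.168.60.71/' \
         -e 's/${PRIORITY}/102/' \
         -e 's/${VIRTUAL_IP}/192.168.60.80/' \
         /etc/keepalived/keepalived.conf
```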
```
$ systemctl restart keepalived
$ ping 192.168.60.80
```
The upstream entries are the apiserver endpoints of the three masters. Because this file is mounted as the container's main nginx.conf in the next step, it also needs an `events` block:
```
$ vi /root/kubeadm-ha/nginx-default.conf
# required when this file is used as the main nginx.conf
events {
}

stream {
    upstream apiserver {
        server 192.168.60.71:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.60.72:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.60.73:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 8443;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass apiserver;
    }
}
```
```
$ docker run -d -p 8443:8443 \
    --name nginx-lb \
    --restart always \
    -v /root/kubeadm-ha/nginx-default.conf:/etc/nginx/nginx.conf \
    nginx
```
```
$ curl -L 192.168.60.80:8443 | wc -l
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    14    0    14    0     0  18324      0 --:--:-- --:--:-- --:--:-- 14000
1
```
```
$ systemctl restart keepalived
```
```
$ systemctl status keepalived -l
VRRP_Instance(VI_1) Sending gratuitous ARPs on ens160 for 192.168.60.80
```
```
$ kubectl get -n kube-system configmap
NAME                                 DATA      AGE
extension-apiserver-authentication   6         4h
kube-flannel-cfg                     2         4h
kube-proxy                           1         4h
```
```
$ kubectl edit -n kube-system configmap/kube-proxy
        server: https://192.168.60.80:8443
```
```
$ kubectl get -n kube-system configmap/kube-proxy -o yaml
```
```
$ kubectl get pods --all-namespaces -o wide | grep proxy
```
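The running kube-proxy pods keep using the old apiserver address until they are recreated. A hedged way to force this (not necessarily how the original guide did it):

```
# Hedged sketch: delete the existing kube-proxy pods so the DaemonSet recreates
# them with the updated kubeconfig from the configmap.
$ kubectl get pods --all-namespaces -o wide | grep proxy | awk '{print $2}' | \
    xargs kubectl delete pods -n kube-system
```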
```
$ systemctl restart docker kubelet keepalived
```
```
$ kubectl get pods --all-namespaces -o wide | grep k8s-master1
$ kubectl get pods --all-namespaces -o wide | grep k8s-master2
$ kubectl get pods --all-namespaces -o wide | grep k8s-master3
```
```
$ kubectl patch node k8s-master1 -p '{"spec":{"unschedulable":true}}'
$ kubectl patch node k8s-master2 -p '{"spec":{"unschedulable":true}}'
$ kubectl patch node k8s-master3 -p '{"spec":{"unschedulable":true}}'
```
```
$ kubeadm token list
TOKEN           TTL         EXPIRES   USAGES                   DESCRIPTION
xxxxxx.yyyyyy   <forever>   <never>   authentication,signing   The default bootstrap token generated by 'kubeadm init'
```
```
$ kubeadm join --token ${TOKEN} ${VIRTUAL_IP}:8443
```
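For example, with the token placeholder from the listing above and the keepalived virtual IP:

```
# Illustrative only: substitute the real token from `kubeadm token list`.
$ kubeadm join --token xxxxxx.yyyyyy 192.168.60.80:8443
```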
```
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2017-06-27 16:23:43 CST; 1 day 18h ago
     Docs: http://kubernetes.io/docs/
 Main PID: 1146 (kubelet)
   Memory: 204.9M
   CGroup: /system.slice/kubelet.service
           ├─ 1146 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require...
           ├─ 2553 journalctl -k -f
           ├─ 4988 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl...
           └─14720 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl...
```
```
$ kubectl get nodes -o wide
NAME          STATUS                     AGE       VERSION
k8s-master1   Ready,SchedulingDisabled   5h        v1.6.4
k8s-master2   Ready,SchedulingDisabled   4h        v1.6.4
k8s-master3   Ready,SchedulingDisabled   4h        v1.6.4
k8s-node1     Ready                      6m        v1.6.4
k8s-node2     Ready                      4m        v1.6.4
k8s-node3     Ready                      4m        v1.6.4
k8s-node4     Ready                      3m        v1.6.4
k8s-node5     Ready                      3m        v1.6.4
k8s-node6     Ready                      3m        v1.6.4
k8s-node7     Ready                      3m        v1.6.4
k8s-node8     Ready                      3m        v1.6.4
```
```
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created

$ kubectl get pod -o wide -l=run=nginx
NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-2662403697-pbmwt   1/1       Running   0          5m        10.244.7.6   k8s-node5
```
```
$ kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service "nginx" exposed

$ kubectl get svc -l=run=nginx
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx     10.105.151.69   <nodes>       80:31639/TCP   43s

$ curl k8s-master2:31639
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```