Deploying a highly available Kubernetes cluster with kubeadm

kubeadm-highavailiability - kubeadm-based Kubernetes high-availability cluster deployment

k8s logo



Table of Contents

  1. Deployment architecture
    1. Overview of deployment architecture
    2. Detailed deployment architecture
    3. Host node list
  2. Pre-installation preparation
    1. Version information
    2. Required docker images
    3. System settings
  3. Kubernetes installation
    1. Installing the kubernetes services
    2. Importing the docker images
  4. Initializing the first master
    1. Deploying the standalone etcd cluster
    2. kubeadm initialization
    3. Installing the flannel network component
    4. Installing the dashboard component
    5. Installing the heapster component
  5. Master cluster high-availability setup
    1. Copying the configuration
    2. Creating certificates
    3. Modifying the configuration
    4. Verifying the high-availability installation
    5. Installing and configuring keepalived
    6. Configuring nginx load balancing
    7. Configuring kube-proxy
    8. Verifying master cluster high availability
  6. Adding nodes to the high-availability cluster
    1. Joining the cluster with kubeadm
    2. Deploying an application to verify the cluster

Deployment architecture

Overview of deployment architecture

ha logo

  • The core of the Kubernetes high-availability architecture is making the masters highly available; kubectl, other clients, and the nodes access the apiservers through a load balancer to achieve high availability.

Back to TOC

Detailed deployment architecture

k8s ha

  • Kubernetes components

kube-apiserver: the core of the cluster; it exposes the cluster API, serves as the communication hub for all cluster components, and enforces cluster security control.

etcd: the data store of the cluster, holding the cluster's configuration and state. It is critical: if the data is lost, the cluster cannot be recovered. A highly available deployment therefore starts with making etcd itself a highly available cluster.

kube-scheduler: the scheduler for the cluster's Pods. In a default kubeadm installation the --leader-elect flag is already set to true, guaranteeing that only one kube-scheduler in the master cluster is active at a time.

kube-controller-manager: the cluster state manager. When the actual state differs from the desired state, it works to bring the cluster back to the desired state; for example, when a pod dies it creates a new one to restore the replica count expected by the corresponding ReplicaSet. In a default kubeadm installation the --leader-elect flag is already set to true, guaranteeing that only one kube-controller-manager in the master cluster is active at a time.

kubelet: the Kubernetes node agent, responsible for talking to the docker engine on its node.

kube-proxy: one instance runs per node; it forwards traffic from service VIPs to endpoint pods, currently mostly by programming iptables rules.

  • Load balancing

keepalived provides a virtual IP address that points to k8s-master1, k8s-master2, and k8s-master3.

nginx load-balances across the apiservers of k8s-master1, k8s-master2, and k8s-master3. External kubectl clients and the nodes can then reach the master cluster's apiserver through keepalived's virtual IP (192.168.60.80) and the nginx port (8443).
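For clients this simply means pointing at the virtual IP and the nginx port instead of at any single master. A minimal sketch of the cluster entry in a client kubeconfig (the certificate data is elided; the address and port come from the setup above):

clusters:
- cluster:
    certificate-authority-data: <elided>
    server: https://192.168.60.80:8443
  name: kubernetes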


Back to TOC

Host node list

Hostname        IP address            Role                    Components
k8s-master1     192.168.60.71         master node 1           keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster
k8s-master2     192.168.60.72         master node 2           keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster
k8s-master3     192.168.60.73         master node 3           keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster
-               192.168.60.80         keepalived virtual IP
k8s-node1 ~ 8   192.168.60.81 ~ 88    8 worker nodes          kubelet, kube-proxy

Back to TOC

Pre-installation preparation

Version information

  • Linux version: CentOS 7.3.1611
$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
  • docker version: 1.12.6
$ docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64
  • kubeadm version: v1.6.4
$ kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
  • kubelet version: v1.6.4
$ kubelet --version
Kubernetes v1.6.4

Back to TOC

Required docker images

  • In mainland China you can use the daocloud accelerator to pull the images, then move the locally downloaded images onto the kubernetes cluster machines with docker save and docker load. The daocloud accelerator link is:

https://www.daocloud.io/mirror#accelerator-doc

  • On the local MacOSX machine, pull the required docker images
$ docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kube-proxy-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
$ docker pull quay.io/coreos/flannel:v0.7.1-amd64
$ docker pull gcr.io/google_containers/heapster-amd64:v1.3.0
$ docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
$ docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
$ docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
$ docker pull gcr.io/google_containers/etcd-amd64:3.0.17
$ docker pull gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
$ docker pull gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
$ docker pull nginx:latest
$ docker pull gcr.io/google_containers/pause-amd64:3.0
  • On the local MacOSX machine, clone the repository and enter its directory
$ git clone https://github.com/cookeem/kubeadm-ha
$ cd kubeadm-ha
  • On the local MacOSX machine, save the docker images to files
$ mkdir -p docker-images
$ docker save -o docker-images/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.6.4
$ docker save -o docker-images/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.4
$ docker save -o docker-images/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4
$ docker save -o docker-images/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.6.4
$ docker save -o docker-images/kubernetes-dashboard-amd64 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
$ docker save -o docker-images/flannel quay.io/coreos/flannel:v0.7.1-amd64
$ docker save -o docker-images/heapster-amd64 gcr.io/google_containers/heapster-amd64:v1.3.0
$ docker save -o docker-images/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
$ docker save -o docker-images/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
$ docker save -o docker-images/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
$ docker save -o docker-images/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17
$ docker save -o docker-images/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
$ docker save -o docker-images/heapster-influxdb-amd64 gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
$ docker save -o docker-images/pause-amd64 gcr.io/google_containers/pause-amd64:3.0
$ docker save -o docker-images/nginx nginx:latest
  • On the local MacOSX machine, copy the code and the docker images to every node (a scripted alternative is sketched after this list)
$ scp -r * root@k8s-master1:/root/kubeadm-ha
$ scp -r * root@k8s-master2:/root/kubeadm-ha
$ scp -r * root@k8s-master3:/root/kubeadm-ha
$ scp -r * root@k8s-node1:/root/kubeadm-ha
$ scp -r * root@k8s-node2:/root/kubeadm-ha
$ scp -r * root@k8s-node3:/root/kubeadm-ha
$ scp -r * root@k8s-node4:/root/kubeadm-ha
$ scp -r * root@k8s-node5:/root/kubeadm-ha
$ scp -r * root@k8s-node6:/root/kubeadm-ha
$ scp -r * root@k8s-node7:/root/kubeadm-ha
$ scp -r * root@k8s-node8:/root/kubeadm-ha
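The per-host commands above can also be scripted; a minimal sketch, assuming passwordless ssh as root and that the target directory may not yet exist:

$ for host in k8s-master1 k8s-master2 k8s-master3 k8s-node{1..8}; do
      ssh root@${host} "mkdir -p /root/kubeadm-ha"
      scp -r * root@${host}:/root/kubeadm-ha
  done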

Back to TOC

System settings

  • All of the following operations are performed as the root user on every kubernetes node

  • On all kubernetes nodes, add the kubernetes yum repository

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
  • On all kubernetes nodes, update the system
$ yum update -y
  • On all kubernetes nodes, disable the firewall
$ systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld
  • On all kubernetes nodes, set SELinux to permissive mode (a scripted alternative for this and the sysctl step is sketched at the end of this section)
$ vi /etc/selinux/config
SELINUX=permissive
  • On all kubernetes nodes, set the iptables bridge parameters, otherwise kubeadm init will report an error
$ vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
  • Reboot all kubernetes nodes
$ reboot
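If you prefer to script the host preparation, the SELinux and sysctl changes above can also be applied non-interactively; a sketch (the reboot still guarantees that everything is picked up cleanly):

$ setenforce 0
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf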

Back to TOC

Kubernetes installation

Installing the kubernetes services

  • On all kubernetes nodes, verify the SELinux mode; it must be permissive, otherwise kubernetes will fail in various ways at startup
$ getenforce
Permissive
  • On all kubernetes nodes, install and start docker and the kubernetes services
$ yum install -y docker kubelet kubeadm kubernetes-cni
$ systemctl enable docker && systemctl start docker
$ systemctl enable kubelet && systemctl start kubelet

Back to TOC

Importing the docker images

  • On all kubernetes nodes, import the docker images
$ docker load -i /root/kubeadm-ha/docker-images/etcd-amd64
$ docker load -i /root/kubeadm-ha/docker-images/flannel
$ docker load -i /root/kubeadm-ha/docker-images/heapster-amd64
$ docker load -i /root/kubeadm-ha/docker-images/heapster-grafana-amd64
$ docker load -i /root/kubeadm-ha/docker-images/heapster-influxdb-amd64
$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-dnsmasq-nanny-amd64
$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-kube-dns-amd64
$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-sidecar-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-apiserver-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-controller-manager-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-proxy-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kubernetes-dashboard-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-scheduler-amd64
$ docker load -i /root/kubeadm-ha/docker-images/pause-amd64
$ docker load -i /root/kubeadm-ha/docker-images/nginx

$ docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kube-apiserver-amd64            v1.6.4              4e3810a19a64        5 weeks ago         150.6 MB
gcr.io/google_containers/kube-proxy-amd64                v1.6.4              e073a55c288b        5 weeks ago         109.2 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.6.4              0ea16a85ac34        5 weeks ago         132.8 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.6.4              1fab9be555e1        5 weeks ago         76.75 MB
gcr.io/google_containers/kubernetes-dashboard-amd64      v1.6.1              71dfe833ce74        6 weeks ago         134.4 MB
quay.io/coreos/flannel                                   v0.7.1-amd64        cd4ae0be5e1b        10 weeks ago        77.76 MB
gcr.io/google_containers/heapster-amd64                  v1.3.0              f9d33bedfed3        3 months ago        68.11 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64           1.14.1              fc5e302d8309        4 months ago        44.52 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64          1.14.1              f8363dbf447b        4 months ago        52.36 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64     1.14.1              1091847716ec        4 months ago        44.84 MB
gcr.io/google_containers/etcd-amd64                      3.0.17              243830dae7dd        4 months ago        168.9 MB
gcr.io/google_containers/heapster-grafana-amd64          v4.0.2              a1956d2a1a16        5 months ago        131.5 MB
gcr.io/google_containers/heapster-influxdb-amd64         v1.1.1              d3fccbedd180        5 months ago        11.59 MB
nginx                                                    latest              01f818af747d        6 months ago        181.6 MB
gcr.io/google_containers/pause-amd64                     3.0                 99e59f495ffa        14 months ago       746.9 kB

Back to TOC

Initializing the first master

Deploying the standalone etcd cluster

  • On k8s-master1, start the etcd cluster member as a docker container
$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
--restart always \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/lib/etcd-cluster:/var/lib/etcd \
-p 4001:4001 \
-p 2380:2380 \
-p 2379:2379 \
--name etcd \
gcr.io/google_containers/etcd-amd64:3.0.17 \
etcd --name=etcd0 \
--advertise-client-urls=http://192.168.60.71:2379,http://192.168.60.71:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--initial-advertise-peer-urls=http://192.168.60.71:2380 \
--listen-peer-urls=http://0.0.0.0:2380 \
--initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
--initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \
--initial-cluster-state=new \
--auto-tls \
--peer-auto-tls \
--data-dir=/var/lib/etcd
  • On k8s-master2, start the etcd cluster member as a docker container
$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
--restart always \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/lib/etcd-cluster:/var/lib/etcd \
-p 4001:4001 \
-p 2380:2380 \
-p 2379:2379 \
--name etcd \
gcr.io/google_containers/etcd-amd64:3.0.17 \
etcd --name=etcd1 \
--advertise-client-urls=http://192.168.60.72:2379,http://192.168.60.72:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--initial-advertise-peer-urls=http://192.168.60.72:2380 \
--listen-peer-urls=http://0.0.0.0:2380 \
--initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
--initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \
--initial-cluster-state=new \
--auto-tls \
--peer-auto-tls \
--data-dir=/var/lib/etcd
  • On k8s-master3, start the etcd cluster member as a docker container
$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
--restart always \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/lib/etcd-cluster:/var/lib/etcd \
-p 4001:4001 \
-p 2380:2380 \
-p 2379:2379 \
--name etcd \
gcr.io/google_containers/etcd-amd64:3.0.17 \
etcd --name=etcd2 \
--advertise-client-urls=http://192.168.60.73:2379,http://192.168.60.73:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--initial-advertise-peer-urls=http://192.168.60.73:2380 \
--listen-peer-urls=http://0.0.0.0:2380 \
--initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
--initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \
--initial-cluster-state=new \
--auto-tls \
--peer-auto-tls \
--data-dir=/var/lib/etcd
  • On k8s-master1, k8s-master2, and k8s-master3, check the etcd cluster status
$ docker exec -ti etcd ash

$ etcdctl member list
1a32c2d3f1abcad0: name=etcd2 peerURLs=http://192.168.60.73:2380 clientURLs=http://192.168.60.73:2379,http://192.168.60.73:4001 isLeader=false
1da4f4e8b839cb79: name=etcd1 peerURLs=http://192.168.60.72:2380 clientURLs=http://192.168.60.72:2379,http://192.168.60.72:4001 isLeader=false
4238bcb92d7f2617: name=etcd0 peerURLs=http://192.168.60.71:2380 clientURLs=http://192.168.60.71:2379,http://192.168.60.71:4001 isLeader=true

$ etcdctl cluster-health
member 1a32c2d3f1abcad0 is healthy: got healthy result from http://192.168.60.73:2379
member 1da4f4e8b839cb79 is healthy: got healthy result from http://192.168.60.72:2379
member 4238bcb92d7f2617 is healthy: got healthy result from http://192.168.60.71:2379
cluster is healthy

$ exit

Back to TOC

kubeadm initialization

  • On k8s-master1, edit the kubeadm-init.yaml file and set the ${HOST_IP} placeholders under etcd.endpoints to the IP addresses of k8s-master1, k8s-master2, and k8s-master3
$ vi /root/kubeadm-ha/kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.6.4
networking:
  podSubnet: 10.244.0.0/16
etcd:
  endpoints:
  - http://192.168.60.71:2379
  - http://192.168.60.72:2379
  - http://192.168.60.73:2379
  • When initializing the cluster with kubeadm, the startup may hang at the following line; a likely cause is that the kubelet cgroup-driver setting does not match docker's
  • [apiclient] Created API client, waiting for the control plane to become ready
  • Check the logs with journalctl -t kubelet -S '2017-06-08'; the following error appears
  • error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd"
  • Change KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs
$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
  • On k8s-master1, initialize the kubernetes cluster with kubeadm, connecting it to the external etcd cluster
$ kubeadm init --config=/root/kubeadm-ha/kubeadm-init.yaml
  • On k8s-master1, set the KUBECONFIG environment variable so that kubectl can talk to the apiserver
$ vi ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf

$ source ~/.bashrc
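A quick sanity check that kubectl can now reach the apiserver (the exact output depends on the cluster state at this point):

$ kubectl get componentstatuses
$ kubectl get nodes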

Back to TOC

Installing the flannel network component

  • On k8s-master1, install the flannel pod network component. Installing a network component is mandatory; without it the kube-dns pod stays in ContainerCreating
$ kubectl create -f /root/kubeadm-ha/kube-flannel
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
  • On k8s-master1, verify that kube-dns started successfully: wait roughly 3 minutes, then check that all pods are in the Running state
$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   kube-apiserver-k8s-master1           1/1       Running   0          3m        192.168.60.71   k8s-master1
kube-system   kube-controller-manager-k8s-master1  1/1       Running   0          3m        192.168.60.71   k8s-master1
kube-system   kube-dns-3913472980-k9mt6            3/3       Running   0          4m        10.244.0.104    k8s-master1
kube-system   kube-flannel-ds-3hhjd                2/2       Running   0          1m        192.168.60.71   k8s-master1
kube-system   kube-proxy-rzq3t                     1/1       Running   0          4m        192.168.60.71   k8s-master1
kube-system   kube-scheduler-k8s-master1           1/1       Running   0          3m        192.168.60.71   k8s-master1

Back to TOC

Installing the dashboard component

  • On k8s-master1, install the dashboard component
$ kubectl create -f /root/kubeadm-ha/kube-dashboard/
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
  • On k8s-master1, start kubectl proxy, bound to 0.0.0.0
$ kubectl proxy --address='0.0.0.0' &
  • On the local MacOSX machine, open the dashboard URL and verify that the dashboard started successfully
http://k8s-master1:30000

dashboard


Back to TOC

Installing the heapster component

  • On k8s-master1, allow pods to be scheduled on the masters, otherwise heapster cannot be deployed
$ kubectl taint nodes --all node-role.kubernetes.io/master-
node "k8s-master1" tainted
  • On k8s-master1, install the heapster component for performance monitoring
$ kubectl create -f /root/kubeadm-ha/kube-heapster
  • On k8s-master1, restart the docker and kubelet services so that heapster metrics show up in the dashboard
$ systemctl restart docker kubelet
  • On k8s-master1, check the pod status
$ kubectl get all --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
kube-system   heapster-783524908-kn6jd                1/1       Running   1          9m        10.244.0.111    k8s-master1
kube-system   kube-apiserver-k8s-master1              1/1       Running   1          15m       192.168.60.71   k8s-master1
kube-system   kube-controller-manager-k8s-master1     1/1       Running   1          15m       192.168.60.71   k8s-master1
kube-system   kube-dns-3913472980-k9mt6               3/3       Running   3          16m       10.244.0.110    k8s-master1
kube-system   kube-flannel-ds-3hhjd                   2/2       Running   3          13m       192.168.60.71   k8s-master1
kube-system   kube-proxy-rzq3t                        1/1       Running   1          16m       192.168.60.71   k8s-master1
kube-system   kube-scheduler-k8s-master1              1/1       Running   1          15m       192.168.60.71   k8s-master1
kube-system   kubernetes-dashboard-2039414953-d46vw   1/1       Running   1          11m       10.244.0.109    k8s-master1
kube-system   monitoring-grafana-3975459543-8l94z     1/1       Running   1          9m        10.244.0.112    k8s-master1
kube-system   monitoring-influxdb-3480804314-72ltf    1/1       Running   1          9m        10.244.0.113    k8s-master1
  • On the local MacOSX machine, open the dashboard URL, verify that heapster started successfully, and check that the pods' CPU and memory information is displayed correctly
http://k8s-master1:30000

heapster

  • At this point the first master has been installed successfully, with flannel, dashboard, and heapster deployed

Back to TOC

Master cluster high-availability setup

Copying the configuration

  • On k8s-master1, copy /etc/kubernetes/ to k8s-master2 and k8s-master3
$ scp -r /etc/kubernetes/ k8s-master2:/etc/
$ scp -r /etc/kubernetes/ k8s-master3:/etc/
  • On k8s-master2 and k8s-master3, restart the kubelet service and check that its status is active (running)
$ systemctl daemon-reload && systemctl restart kubelet

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2017-06-27 16:24:22 CST; 1 day 17h ago
     Docs: http://kubernetes.io/docs/
 Main PID: 2780 (kubelet)
   Memory: 92.9M
   CGroup: /system.slice/kubelet.service
           ├─2780 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-...
           └─2811 journalctl -k -f
  • On k8s-master2 and k8s-master3, set the KUBECONFIG environment variable so that kubectl can talk to the apiserver
$ vi ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf

$ source ~/.bashrc
  • On k8s-master2 and k8s-master3, check the node status; the new master nodes have already joined the cluster
$ kubectl get nodes -o wide
NAME          STATUS    AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION
k8s-master1   Ready     26m       v1.6.4    <none>        CentOS Linux 7 (Core)   3.10.0-514.6.1.el7.x86_64
k8s-master2   Ready     2m        v1.6.4    <none>        CentOS Linux 7 (Core)   3.10.0-514.21.1.el7.x86_64
k8s-master3   Ready     2m        v1.6.4    <none>        CentOS Linux 7 (Core)   3.10.0-514.21.1.el7.x86_64
  • On k8s-master2 and k8s-master3, edit the kube-apiserver.yaml manifest and change ${HOST_IP} to the local host's IP
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
    - --advertise-address=${HOST_IP}
  • On k8s-master2 and k8s-master3, edit kubelet.conf and change ${HOST_IP} to the local host's IP
$ vi /etc/kubernetes/kubelet.conf
server: https://${HOST_IP}:6443
  • On k8s-master2 and k8s-master3, restart the services
$ systemctl daemon-reload && systemctl restart docker kubelet

Back to TOC

Creating certificates

  • After kubelet.conf is modified on k8s-master2 and k8s-master3, the kubelet service exits abnormally because the crt and key referenced by kubelet.conf do not match the local IP address, so the crt and key must be regenerated. Inspecting the apiserver.crt signing information shows that its IP Address and DNS entries are bound to k8s-master1 and must be changed accordingly.
$ openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 9486057293403496063 (0x83a53ed95c519e7f)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jun 22 16:22:44 2017 GMT
            Not After : Jun 22 16:22:44 2018 GMT
        Subject: CN=kube-apiserver,
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b:
                    53:4b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Alternative Name: 
                DNS:k8s-master1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.71
    Signature Algorithm: sha1WithRSAEncryption
         dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91:
         9e:78:ab:ce
  • On k8s-master1, k8s-master2, and k8s-master3, use ca.key and ca.crt to create new apiserver.crt and apiserver.key files
$ mkdir -p /etc/kubernetes/pki-local

$ cd /etc/kubernetes/pki-local
  • On k8s-master1, k8s-master2, and k8s-master3, generate a 2048-bit key pair
$ openssl genrsa -out apiserver.key 2048
  • On k8s-master1, k8s-master2, and k8s-master3, generate a certificate signing request
$ openssl req -new -key apiserver.key -subj "/CN=kube-apiserver," -out apiserver.csr
  • On k8s-master1, k8s-master2, and k8s-master3, edit the apiserver.ext file: change ${HOST_NAME} to the local hostname, ${HOST_IP} to the local IP address, and ${VIRTUAL_IP} to keepalived's virtual IP (192.168.60.80)
$ vi apiserver.ext
subjectAltName = DNS:${HOST_NAME},DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP:10.96.0.1, IP:${HOST_IP}, IP:${VIRTUAL_IP}
  • On k8s-master1, k8s-master2, and k8s-master3, sign the request with ca.key and ca.crt
$ openssl x509 -req -in apiserver.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out apiserver.crt -days 365 -extfile /etc/kubernetes/pki-local/apiserver.ext
  • On k8s-master1, k8s-master2, and k8s-master3, inspect the newly generated certificate:
$ openssl x509 -noout -text -in apiserver.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 9486057293403496063 (0x83a53ed95c519e7f)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jun 22 16:22:44 2017 GMT
            Not After : Jun 22 16:22:44 2018 GMT
        Subject: CN=kube-apiserver,
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b:
                    53:4b
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Alternative Name: 
                DNS:k8s-master3, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.73, IP Address:192.168.60.80
    Signature Algorithm: sha1WithRSAEncryption
         dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91:
         9e:78:ab:ce
  • On k8s-master1, k8s-master2, and k8s-master3, copy apiserver.crt and apiserver.key into the /etc/kubernetes/pki directory
$ cp apiserver.crt apiserver.key /etc/kubernetes/pki/

Back to TOC

Modifying the configuration

  • On k8s-master2 and k8s-master3, edit admin.conf and change ${HOST_IP} to the local IP address (a non-interactive sketch for these three edits follows this list)
$ vi /etc/kubernetes/admin.conf
    server: https://${HOST_IP}:6443
  • On k8s-master2 and k8s-master3, edit controller-manager.conf and change ${HOST_IP} to the local IP address
$ vi /etc/kubernetes/controller-manager.conf
    server: https://${HOST_IP}:6443
  • On k8s-master2 and k8s-master3, edit scheduler.conf and change ${HOST_IP} to the local IP address
$ vi /etc/kubernetes/scheduler.conf
    server: https://${HOST_IP}:6443
  • On k8s-master1, k8s-master2, and k8s-master3, restart all services
$ systemctl daemon-reload && systemctl restart docker kubelet
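The three edits above can also be applied non-interactively; a sketch, assuming HOST_IP is set to this master's own IP address (the value shown is hypothetical):

$ export HOST_IP=192.168.60.72
$ sed -i "s#server: https://.*:6443#server: https://${HOST_IP}:6443#" \
    /etc/kubernetes/admin.conf \
    /etc/kubernetes/controller-manager.conf \
    /etc/kubernetes/scheduler.conf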

Back to TOC

Verifying the high-availability installation

  • On any of k8s-master1, k8s-master2, and k8s-master3, check the service status: apiserver, controller-manager, kube-scheduler, kube-proxy, and flannel are now running successfully on k8s-master1, k8s-master2, and k8s-master3
$ kubectl get pod --all-namespaces -o wide | grep k8s-master2
kube-system   kube-apiserver-k8s-master2              1/1       Running   1          55s       192.168.60.72   k8s-master2
kube-system   kube-controller-manager-k8s-master2     1/1       Running   2          18m       192.168.60.72   k8s-master2
kube-system   kube-flannel-ds-t8gkh                   2/2       Running   4          18m       192.168.60.72   k8s-master2
kube-system   kube-proxy-bpgqw                        1/1       Running   1          18m       192.168.60.72   k8s-master2
kube-system   kube-scheduler-k8s-master2              1/1       Running   2          18m       192.168.60.72   k8s-master2

$ kubectl get pod --all-namespaces -o wide | grep k8s-master3
kube-system   kube-apiserver-k8s-master3              1/1       Running   1          1m        192.168.60.73   k8s-master3
kube-system   kube-controller-manager-k8s-master3     1/1       Running   2          18m       192.168.60.73   k8s-master3
kube-system   kube-flannel-ds-tmqmx                   2/2       Running   4          18m       192.168.60.73   k8s-master3
kube-system   kube-proxy-4stg3                        1/1       Running   1          18m       192.168.60.73   k8s-master3
kube-system   kube-scheduler-k8s-master3              1/1       Running   2          18m       192.168.60.73   k8s-master3
  • On any of k8s-master1, k8s-master2, and k8s-master3, use kubectl logs to check the leader election results of the controller-managers and schedulers; only one instance of each should hold the lease, which indicates the election works correctly
$ kubectl logs -n kube-system kube-controller-manager-k8s-master1
$ kubectl logs -n kube-system kube-controller-manager-k8s-master2
$ kubectl logs -n kube-system kube-controller-manager-k8s-master3

$ kubectl logs -n kube-system kube-scheduler-k8s-master1
$ kubectl logs -n kube-system kube-scheduler-k8s-master2
$ kubectl logs -n kube-system kube-scheduler-k8s-master3
  • On any of k8s-master1, k8s-master2, and k8s-master3, check the deployments
$ kubectl get deploy --all-namespaces
NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   heapster               1         1         1            1           41m
kube-system   kube-dns               1         1         1            1           48m
kube-system   kubernetes-dashboard   1         1         1            1           43m
kube-system   monitoring-grafana     1         1         1            1           41m
kube-system   monitoring-influxdb    1         1         1            1           41m
  • On any of k8s-master1, k8s-master2, and k8s-master3, scale the kube-dns, kubernetes-dashboard, heapster, monitoring-grafana, and monitoring-influxdb deployments to replicas=3 so that a copy runs on every master node
$ kubectl scale --replicas=3 -n kube-system deployment/kube-dns
$ kubectl get pods --all-namespaces -o wide| grep kube-dns

$ kubectl scale --replicas=3 -n kube-system deployment/kubernetes-dashboard
$ kubectl get pods --all-namespaces -o wide| grep kubernetes-dashboard

$ kubectl scale --replicas=3 -n kube-system deployment/heapster
$ kubectl get pods --all-namespaces -o wide| grep heapster

$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-grafana
$ kubectl get pods --all-namespaces -o wide| grep monitoring-grafana

$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-influxdb
$ kubectl get pods --all-namespaces -o wide| grep monitoring-influxdb

Back to TOC

Installing and configuring keepalived

  • On k8s-master1, k8s-master2, and k8s-master3, install keepalived
$ yum install -y keepalived

$ systemctl enable keepalived && systemctl restart keepalived
  • On k8s-master1, k8s-master2, and k8s-master3, back up the keepalived configuration file
$ mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
  • On k8s-master1, k8s-master2, and k8s-master3, create the apiserver health-check script; when the apiserver check fails, the script stops the keepalived service so that the virtual IP address fails over
$ vi /etc/keepalived/check_apiserver.sh
#!/bin/bash
# Probe up to 10 times for a running kube-apiserver process.
# "ps -ef | grep kube-apiserver | wc -l" returns 1 when only the grep
# command itself matches, i.e. no kube-apiserver process is running.
err=0
for k in $( seq 1 10 )
do
    check_code=$(ps -ef|grep kube-apiserver | wc -l)
    if [ "$check_code" = "1" ]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done
# If every probe failed, stop keepalived so the virtual IP fails over
# to another master; otherwise report healthy.
if [ "$err" != "0" ]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

$ chmod a+x /etc/keepalived/check_apiserver.sh
  • On k8s-master1, k8s-master2, and k8s-master3, find the name of the network interface
$ ip a | grep 192.168.60
  • On k8s-master1, k8s-master2, and k8s-master3, configure keepalived. The parameters are:
  • state ${STATE}: MASTER or BACKUP; only one node may be MASTER
  • interface ${INTERFACE_NAME}: the local interface to bind to (shown by the ip a command above)
  • mcast_src_ip ${HOST_IP}: the local IP address
  • priority ${PRIORITY}: the priority, for example 102, 101, 100; the higher the priority, the more likely the node is elected MASTER, and no two nodes may use the same priority
  • ${VIRTUAL_IP}: the virtual IP address, set here to 192.168.60.80
$ vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3  
    rise 2
}
vrrp_instance VI_1 {
    state ${STATE}
    interface ${INTERFACE_NAME}
    mcast_src_ip ${HOST_IP}
    virtual_router_id 51
    priority ${PRIORITY}
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 4be37dc3b4c90194d1600c483e10ad1d
    }
    virtual_ipaddress {
        ${VIRTUAL_IP}
    }
    track_script {
       chk_apiserver
    }
}
  • On k8s-master1, k8s-master2, and k8s-master3, restart the keepalived service and check that the virtual IP address responds
$ systemctl restart keepalived
$ ping 192.168.60.80

Back to TOC

Configuring nginx load balancing

  • On k8s-master1, k8s-master2, and k8s-master3, edit nginx-default.conf and set the ${HOST_IP} entries to the addresses of k8s-master1, k8s-master2, and k8s-master3. nginx listens on port 8443 and load-balances the traffic across the apiservers listening on port 6443
$ vi /root/kubeadm-ha/nginx-default.conf
stream {
    upstream apiserver {
        server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s;
        server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s;
        server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 8443;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass apiserver;
    }
}
  • On k8s-master1, k8s-master2, and k8s-master3, start the nginx container
$ docker run -d -p 8443:8443 \
--name nginx-lb \
--restart always \
-v /root/kubeadm-ha/nginx-default.conf:/etc/nginx/nginx.conf \
nginx
  • On k8s-master1, k8s-master2, and k8s-master3, check that the keepalived virtual IP address answers through nginx
$ curl -L 192.168.60.80:8443 | wc -l
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    14    0    14    0     0  18324      0 --:--:-- --:--:-- --:--:-- 14000
1
  • After the apiserver recovers, be sure to restart keepalived, otherwise keepalived remains stopped
$ systemctl restart keepalived
  • On k8s-master1, k8s-master2, and k8s-master3, check the keepalived logs; the following output indicates which host currently holds the virtual IP address
$ systemctl status keepalived -l
VRRP_Instance(VI_1) Sending gratuitous ARPs on ens160 for 192.168.60.80
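To see which master currently holds the virtual IP, you can also check the interfaces directly; the address only shows up on the node that keepalived has made active. A sketch:

$ ip a | grep 192.168.60.80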

Back to TOC

Configuring kube-proxy

  • On k8s-master1, configure kube-proxy to use keepalived's virtual IP address, so that the kube-proxy on every node can still reach the apiserver when k8s-master1 fails
$ kubectl get -n kube-system configmap
NAME                                 DATA      AGE
extension-apiserver-authentication   6         4h
kube-flannel-cfg                     2         4h
kube-proxy                           1         4h
  • On k8s-master1, edit configmap/kube-proxy so that its server entry points to keepalived's virtual IP address
$ kubectl edit -n kube-system configmap/kube-proxy
        server: https://192.168.60.80:8443
  • On k8s-master1, check the configmap/kube-proxy settings
$ kubectl get -n kube-system configmap/kube-proxy -o yaml
  • On k8s-master1, delete all kube-proxy pods so that they are recreated with the new configuration (a deletion sketch follows the listing)
$ kubectl get pods --all-namespaces -o wide | grep proxy
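A sketch of deleting them in one go, driven by the listing above (the pods are recreated automatically by their DaemonSet):

$ kubectl get pods -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pods -n kube-system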
  • On k8s-master1, k8s-master2, and k8s-master3, restart the docker, kubelet, and keepalived services
$ systemctl restart docker kubelet keepalived

Back to TOC

Verifying master cluster high availability

  • On k8s-master1, check the pod status on each master node. Every master should successfully run heapster, kube-apiserver, kube-controller-manager, kube-dns, kube-flannel, kube-proxy, kube-scheduler, kubernetes-dashboard, monitoring-grafana, and monitoring-influxdb; all pods being in the Running state indicates everything is healthy
$ kubectl get pods --all-namespaces -o wide | grep k8s-master1

$ kubectl get pods --all-namespaces -o wide | grep k8s-master2

$ kubectl get pods --all-namespaces -o wide | grep k8s-master3

Back to TOC

Adding nodes to the high-availability cluster

Joining the cluster with kubeadm

  • On k8s-master1, prevent application pods from being scheduled on any of the master nodes
$ kubectl patch node k8s-master1 -p '{"spec":{"unschedulable":true}}'

$ kubectl patch node k8s-master2 -p '{"spec":{"unschedulable":true}}'

$ kubectl patch node k8s-master3 -p '{"spec":{"unschedulable":true}}'
  • On k8s-master1, look up the cluster token
$ kubeadm token list
TOKEN           TTL         EXPIRES   USAGES                   DESCRIPTION
xxxxxx.yyyyyy   <forever>   <never>   authentication,signing   The default bootstrap token generated by 'kubeadm init'
  • On k8s-node1 through k8s-node8, join the cluster; ${TOKEN} is the token shown on k8s-master1 and ${VIRTUAL_IP} is keepalived's virtual IP address, 192.168.60.80
$ kubeadm join --token ${TOKEN} ${VIRTUAL_IP}:8443
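On a freshly joined node you can confirm that the kubelet was configured against the virtual IP rather than a single master; the server entry in its kubeconfig should show the address and port used for the join. A sketch:

$ grep 'server:' /etc/kubernetes/kubelet.conf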

Back to TOC

Deploying an application to verify the cluster

  • On k8s-node1 through k8s-node8, check the kubelet status; active (running) means the kubelet service started normally
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2017-06-27 16:23:43 CST; 1 day 18h ago
     Docs: http://kubernetes.io/docs/
 Main PID: 1146 (kubelet)
   Memory: 204.9M
   CGroup: /system.slice/kubelet.service
           ├─ 1146 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require...
           ├─ 2553 journalctl -k -f
           ├─ 4988 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl...
           └─14720 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl...
  • On k8s-master1, check the node status; all k8s-node nodes have joined successfully
$ kubectl get nodes -o wide
NAME          STATUS                     AGE       VERSION
k8s-master1   Ready,SchedulingDisabled   5h        v1.6.4
k8s-master2   Ready,SchedulingDisabled   4h        v1.6.4
k8s-master3   Ready,SchedulingDisabled   4h        v1.6.4
k8s-node1     Ready                      6m        v1.6.4
k8s-node2     Ready                      4m        v1.6.4
k8s-node3     Ready                      4m        v1.6.4
k8s-node4     Ready                      3m        v1.6.4
k8s-node5     Ready                      3m        v1.6.4
k8s-node6     Ready                      3m        v1.6.4
k8s-node7     Ready                      3m        v1.6.4
k8s-node8     Ready                      3m        v1.6.4
  • On k8s-master1, deploy a test nginx service; it is scheduled successfully onto k8s-node5
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created

$ kubectl get pod -o wide -l=run=nginx
NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-2662403697-pbmwt   1/1       Running   0          5m        10.244.7.6   k8s-node5
  • On k8s-master1, expose the nginx service so that it is reachable from outside
$ kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service "nginx" exposed

$ kubectl get svc -l=run=nginx
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx     10.105.151.69   <nodes>       80:31639/TCP   43s

$ curl k8s-master2:31639
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
  • At this point the highly available kubernetes cluster has been deployed successfully
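If the test application is no longer needed, it can be removed again; a sketch:

$ kubectl delete service,deployment nginx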

Back to TOC
