Deploying a Highly Available Kubernetes 1.8.3 Cluster

Kubernetes, as a container application management platform, monitors the running state of pods and, when a host or container fails, schedules replacement pods onto other nodes, which gives us high availability at the application layer.

For the Kubernetes cluster itself, high availability also has to cover two more layers:

  • high availability of the etcd store
  • high availability of the master nodes

Before we start, an outline of the architecture:

etcd is the central datastore of Kubernetes and must not be a single point of failure. Deploying an etcd cluster is simple enough that I won't go into it here; I published a one-click deployment script earlier, and interested readers can look back at that post.
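Before moving on it is worth confirming that the etcd cluster is healthy. A quick check against the endpoints used later in this post (a sketch using the etcdctl v2 syntax, which is the default for etcd 3.2):

# etcdctl --endpoints=http://192.168.5.42:2379,http://192.168.5.104:2379,http://192.168.5.105:2379 cluster-health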

Before Kubernetes became fully containerized and grew its various authentication mechanisms, an HA master deployment was reasonably simple. Now that Kubernetes ships with a rather elaborate security model, the operational burden is noticeably higher.

Within Kubernetes the master plays the role of the control center. It runs three services: apiserver, controller-manager, and scheduler. These services keep the whole cluster healthy by continuously talking to the kubelet and kube-proxy on every node; if the master's services cannot reach a node, the node is marked unavailable and no more pods are scheduled onto it.

All three master components run as containers. The tool that launches them is kubelet: they run as static pods, monitored and restarted automatically by kubelet, while kubelet itself is started and supervised by systemd.
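On a kubeadm-built master you can see this arrangement directly (paths assume the default kubeadm layout):

# ls /etc/kubernetes/manifests/
# systemctl status kubelet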

The apiserver is the core of the cluster and handles communication between all of its functional modules. Each module stores its data in etcd via the apiserver, and retrieves or manipulates that data through the apiserver's REST interface; this is how the modules exchange information with one another.

The apiserver's main REST interface is the create/read/update/delete API for resource objects. On top of that it exposes a special class of endpoints, the Kubernetes Proxy API, which proxies REST requests: the apiserver forwards an incoming request to the REST port of the kubelet on a given node, and that kubelet answers it. The proxy API can be used to reach a pod's (HTTP) service from outside the cluster, which is mostly useful for administrative tasks.
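As a quick illustration, a pod's HTTP port can be reached through the proxy API like this (a sketch; <namespace> and <pod-name> are placeholders, and the pod must actually serve HTTP on its default port):

# kubectl proxy --port=8001 &
# curl http://127.0.0.1:8001/api/v1/namespaces/<namespace>/pods/<pod-name>/proxy/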

The kubelet on every node periodically calls the apiserver's REST interface to report its own status, and the apiserver writes the node status into etcd. The kubelet also watches pod information through the apiserver's watch interface: when a new pod replica is scheduled and bound to its node it creates and starts the corresponding containers, when a pod object is deleted it removes the corresponding containers on its node, and when a pod is modified it updates the local containers to match.
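The effect of those heartbeats is visible on the node object itself; the Ready condition carries the last report time (a quick check, using one of the masters from the host list below):

# kubectl describe node uy05-13 | grep -A 6 Conditions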

The controller-manager is the cluster's internal management and control center, responsible for nodes, pod replicas, endpoints, namespaces, service accounts, resource quotas and so on. When a node dies unexpectedly, the controller-manager notices the failure promptly and runs its automated repair flow, keeping the cluster in the desired working state. Internally it consists of multiple controllers, each owning a specific control loop.

The NodeController module inside the controller-manager watches node information in real time through the apiserver's watch interface and reacts accordingly. When the scheduler learns of a newly created pod replica through the same watch interface, it builds the list of nodes that satisfy the pod's requirements, runs the scheduling logic, and on success binds the pod to the chosen node.

Generally speaking, intelligent and automated systems keep correcting their own working state through some kind of "operating system". In a Kubernetes cluster, every controller plays exactly that role: through the interfaces exposed by the apiserver it watches the current state of every resource object in real time, and whenever failures push the system away from that state, it tries to drive the system from its "current state" back to the "desired state".

The scheduler's job is to take the pods awaiting placement (pods newly created through the apiserver, pods created by an RC to restore its replica count, and so on), compute the best target node through a fairly involved scheduling process, and bind each pod to that node.
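The outcome of a scheduling decision is easy to inspect: the pod's node assignment and its Scheduled event (a sketch; <pod-name> is a placeholder):

# kubectl get po <pod-name> -o wide
# kubectl describe po <pod-name> | grep -A 5 Events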

Treating the three master components as one deployment unit, install the master on at least three nodes and make sure that at any moment at least one complete master is working.


Three master nodes and one worker node:

master01,etcd0  uy05-13  192.168.5.42
master02,etcd1  uy08-07  192.168.5.104
master03,etcd2  uy08-08  192.168.5.105
node01          uy02-07  192.168.5.40

Two LVS nodes:

lvs01  uy-s-91  192.168.2.56
lvs02  uy-s-92  192.168.2.57

vip=192.168.6.15

kubernetes version: 1.8.3
docker version: 17.06.2-ce
etcd version: 3.2.9
OS version: debian stretch

LVS + keepalived provides load balancing and failover for the apiserver.

Because controller-manager and scheduler modify cluster state, only one instance at a time may read and write that state, otherwise we would run into synchronization and consistency problems. These two components therefore need leader election enabled so that a single leader is chosen; Kubernetes implements this with a lease lock. In addition, they are expected to run on the same node as the apiserver, so both listen on 127.0.0.1.
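In the static pod manifests that kubeadm generates, this should show up roughly as the following flags (an excerpt based on my understanding of the 1.8 defaults, not the full manifests; check your own files):

# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
    - --address=127.0.0.1
    - --leader-elect=true

# vim /etc/kubernetes/manifests/kube-scheduler.yaml
    - --address=127.0.0.1
    - --leader-elect=true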

1. Install kubeadm, kubectl, and kubelet on the three master nodes.

# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt/ kubernetes-xenial main
EOF

# aptitude update
# aptitude install -y kubelet kubeadm kubectl

2. Prepare the images; you will have to download them yourself (most likely through a proxy)...

k8s-dns-dnsmasq-nanny-amd64.tar
k8s-dns-kube-dns-amd64.tar
k8s-dns-sidecar-amd64.tar
kube-apiserver-amd64.tar
kube-controller-manager-amd64.tar
kube-proxy-amd64.tar
kube-scheduler-amd64.tar
pause-amd64.tar
kubernetes-dashboard-amd64.tar
kubernetes-dashboard-init-amd64.tar
# for i in `ls`; do docker load -i $i; done

3. Deploy the first master node.

a. I used kubeadm directly to initialize the first node. There are a few tricks to using kubeadm; here I drove it with a configuration file:

# cat kubeadm-config.yml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "192.168.5.42"
etcd:
  endpoints:
  - "http://192.168.5.42:2379"
  - "http://192.168.5.104:2379"
  - "http://192.168.5.105:2379"
kubernetesVersion: "v1.8.3"
apiServerCertSANs:
- uy05-13
- uy08-07
- uy08-08
- 192.168.6.15
- 127.0.0.1
- 192.168.5.42
- 192.168.5.104
- 192.168.5.105
- 192.168.122.1
- 10.244.0.1
- 10.96.0.1
- kubernetes
- kubernetes.default
- kubernetes.default.svc
- kubernetes.default.svc.cluster
- kubernetes.default.svc.cluster.local
tokenTTL: 0s
networking:
  podSubnet: 10.244.0.0/16

b. Run the initialization:

# kubeadm init --config=kubeadm-config.yml
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.2-ce. Max validated version: 17.03
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [uy05-13 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local uy05-13 uy08-07 uy08-08 uy-s-91 uy-s-92 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.42 192.168.6.15 127.0.0.1 192.168.5.42 192.168.5.104 192.168.5.105 192.168.122.1 10.244.0.1 10.96.0.1 192.168.2.56 192.168.2.57]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 26.002009 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node uy05-13 as master by adding a label and a taint
[markmaster] Master uy05-13 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5a87e1.b760be788520eee5
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 5a87e1.b760be788520eee5 192.168.6.15:6443 --discovery-token-ca-cert-hash sha256:7f2642ce5b6dd3cb4938d1aa067a3b43b906cdf7815eae095a77e41435bd8369

kubeadm generated a full set of certificates, wrote the config files, brought the three components up as static pods via kubelet, and deployed kube-dns and kube-proxy.

c. Let the master node schedule pods (remove the master taint).

# kubectl taint nodes --all node-role.kubernetes.io/master-

d. Install the network plugin. I use Calico here; download the manifest from the official site and change CALICO_IPV4POOL_CIDR to the pod subnet chosen at init time.

# vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

# kubectl apply -f calico.yaml

At this point all components should be up and running.
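A quick sanity check (output will vary):

# kubectl get no
# kubectl get po -n kube-system -o wide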

e. Install the dashboard add-on; download the manifest from the official site.

Change the service type to NodePort:

# vim kubernetes-dashboard.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

# kubectl apply -f kubernetes-dashboard.yaml

There is a permissions problem here, so grant the permissions manually:

# cat rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-head
  labels:
    k8s-app: kubernetes-dashboard-head
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

# kubectl apply -f rbac.yaml

f. Install heapster; download the manifest from the official site.

# kubectl apply -f heapster.yaml

There is a permissions problem here as well; RBAC really is everywhere...

# kubectl create clusterrolebinding heapster-binding --clusterrole=cluster-admin --serviceaccount=kube-system:heapster

Or, equivalently:

# vim heapster-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heapster-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

At this point the dashboard should be showing graphs.
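Once heapster has been collecting for a couple of minutes, kubectl top should return data as well (it relies on the heapster service being reachable):

# kubectl top nodes
# kubectl top pods -n kube-system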

4. Deploy the second master node.

Because the current version has node authentication enabled, the certificates need to be sorted out first.

a. Copy all the config files and certificates over from the first node.

# scp -r /etc/kubernetes/* 192.168.5.104:`pwd`

b. Use the CA to sign new certificates for this node and replace the copied ones. The apiserver uses a certificate with multiple SANs; the relevant names and IPs were already signed in when the first node was initialized, so it does not need to be re-signed. The files that do not need replacing are: ca.crt, ca.key, front-proxy-ca.crt, front-proxy-ca.key, front-proxy-client.crt, front-proxy-client.key, sa.key, sa.pub, apiserver.crt and apiserver.key. Everything else must be re-signed and replaced.

# apiserver-kubelet-client
openssl genrsa -out apiserver-kubelet-client.key 2048
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"
openssl x509 -req -set_serial $(date +%s%N) -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -out apiserver-kubelet-client.crt -days 365 -extensions v3_req -extfile apiserver-kubelet-client-openssl.cnf

# apiserver-kubelet-client-openssl.cnf (the -extfile referenced above)
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth

# controller-manager
openssl genrsa -out controller-manager.key 2048
openssl req -new -key controller-manager.key -out controller-manager.csr -subj "/CN=system:kube-controller-manager"
openssl x509 -req -set_serial $(date +%s%N) -in controller-manager.csr -CA ca.crt -CAkey ca.key -out controller-manager.crt -days 365 -extensions v3_req -extfile controller-manager-openssl.cnf

# controller-manager-openssl.cnf
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth

# scheduler
openssl genrsa -out scheduler.key 2048
openssl req -new -key scheduler.key -out scheduler.csr -subj "/CN=system:kube-scheduler"
openssl x509 -req -set_serial $(date +%s%N) -in scheduler.csr -CA ca.crt -CAkey ca.key -out scheduler.crt -days 365 -extensions v3_req -extfile scheduler-openssl.cnf

# scheduler-openssl.cnf
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth

# admin
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/O=system:masters/CN=kubernetes-admin"
openssl x509 -req -set_serial $(date +%s%N) -in admin.csr -CA ca.crt -CAkey ca.key -out admin.crt -days 365 -extensions v3_req -extfile admin-openssl.cnf

# admin-openssl.cnf
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth

# node (kubelet client certificate)
openssl genrsa -out $(hostname).key 2048
openssl req -new -key $(hostname).key -out $(hostname).csr -subj "/O=system:nodes/CN=system:node:$(hostname)" -config kubelet-openssl.cnf
openssl x509 -req -set_serial $(date +%s%N) -in $(hostname).csr -CA ca.crt -CAkey ca.key -out $(hostname).crt -days 365 -extensions v3_req -extfile kubelet-openssl.cnf

# kubelet-openssl.cnf
[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth

All of these are client certificates, so a single extension config would in fact work for every one of them. I used a different file name for each mainly because there are also a few server certificates, and the apiserver certificate additionally needs SANs (subjectAltName = @alt_names); keeping the configs separate helps when those server certificates have to be generated by hand.

Of these certificates, some are referenced by path in the config files, while others are embedded directly as key: value data.

The only certificate files that actually need replacing on disk are apiserver-kubelet-client.key and apiserver-kubelet-client.crt; copy them into /etc/kubernetes/pki/ over the originals.

The remaining certificates have their contents embedded into the corresponding config files. /etc/kubernetes contains four conf files: the admin certificate goes into admin.conf, the controller-manager certificate into controller-manager.conf, the scheduler certificate into scheduler.conf, and the node (kubelet) certificate into kubelet.conf.

The certificate content cannot be pasted in as-is, though; it has to be base64-encoded first, like so:

# cat admin.crt | base64 -w 0

Replace the corresponding fields in the config file with the encoded content. Afterwards it should look like this:

# kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64 of ca.crt>
    server: https://192.168.5.42:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:uy08-07
  name: system:node:uy08-07@kubernetes
current-context: system:node:uy08-07@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:uy08-07
  user:
    client-certificate-data: <base64 of the node certificate>
    client-key-data: <base64 of the node key>

Here kubelet acts as the client, so the node name has to be updated; node authentication checks the name. The names were already signed into the apiserver certificate, so the node completes verification automatically after it starts. Once verified it looks like this:

# kubectl get csr
NAME        AGE       REQUESTOR             CONDITION
csr-kwlj5   2d        system:node:uy05-13   Approved,Issued
csr-l9qkz   3d        system:node:uy08-07   Approved,Issued
csr-z9nmd   3d        system:node:uy08-08   Approved,Issued
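If a CSR ever shows up as Pending instead of being auto-approved, it can be approved by hand (<csr-name> is whatever kubectl get csr reports):

# kubectl certificate approve <csr-name>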

The other three config files are similar to kubelet.conf; just swap in the corresponding certificate content.
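If you would rather not paste base64 blobs by hand, kubectl config can build an equivalent file and embed the certificates for you. A sketch for controller-manager.conf, run from the directory holding the freshly signed certs (the other files follow the same pattern):

# kubectl config set-cluster kubernetes --certificate-authority=ca.crt --embed-certs=true --server=https://192.168.5.42:6443 --kubeconfig=controller-manager.conf
# kubectl config set-credentials system:kube-controller-manager --client-certificate=controller-manager.crt --client-key=controller-manager.key --embed-certs=true --kubeconfig=controller-manager.conf
# kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=controller-manager.conf
# kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=controller-manager.conf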

c. Change advertise-address to this node's own address.

# vim manifests/kube-apiserver.yaml
--advertise-address=192.168.5.104

d. With the config files fixed up, kubelet can now be started. One reminder: until the load balancer is in place, the apiserver address in these configs is still the first node's apiserver address.
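For example:

# systemctl daemon-reload
# systemctl enable kubelet
# systemctl restart kubelet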

5. Deploy the third master node by repeating the steps used for the second one.

All three nodes should now be running:

# kubectl get no
NAME      STATUS    ROLES     AGE       VERSION
uy05-13   Ready     master    3d        v1.8.3
uy08-07   Ready     <none>    3d        v1.8.3
uy08-08   Ready     <none>    3d        v1.8.3

6. Scale kube-dns and heapster to three replicas so that every master node runs a copy of each.

# kubectl scale --replicas=3 deployment kube-dns -n kube-system
# kubectl scale --replicas=3 deployment heapster -n kube-system
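Verify that the replicas ended up spread across the nodes:

# kubectl get po -n kube-system -o wide | grep -E 'kube-dns|heapster'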

7. Deploy the load balancer and failover.

a. Install LVS and keepalived.

# aptitude install -y ipvsadm keepalived

b. Edit the configuration files.

On the keepalived master node:

# vim keepalived.conf

global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "/usr/bin/curl -k https://127.0.0.1:6443/api"
    interval 3
    weight -10
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    virtual_router_id 66
    advert_int 1
    state MASTER
    priority 100
    interface eno2
    mcast_src_ip 192.168.2.56
    authentication {
        auth_type PASS
        auth_pass 4743
    }
    unicast_peer {
        192.168.2.56
        192.168.2.57
    }
    virtual_ipaddress {
        192.168.6.15
    }
    track_script {
        CheckK8sMaster
    }
}

virtual_server 192.168.6.15 6443 {
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    delay_loop 20
    protocol TCP

    real_server 192.168.5.42 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }

    real_server 192.168.5.104 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }

    real_server 192.168.5.105 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
}

On the keepalived backup node:

# vim keepalived.conf

global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "/usr/bin/curl -k https://127.0.0.1:6443/api"
    interval 3
    weight -10
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    virtual_router_id 66
    advert_int 1
    state BACKUP
    priority 95
    interface eno2
    mcast_src_ip 192.168.2.57
    authentication {
        auth_type PASS
        auth_pass 4743
    }
    unicast_peer {
        192.168.2.56
        192.168.2.57
    }
    virtual_ipaddress {
        192.168.6.15
    }
    track_script {
        CheckK8sMaster
    }
}

virtual_server 192.168.6.15 6443 {
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    delay_loop 20
    protocol TCP

    real_server 192.168.5.42 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }

    real_server 192.168.5.104 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }

    real_server 192.168.5.105 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
}

c. Configure the VIP on each real server (that is, the three master nodes).

# vim /etc/network/interfaces

auto lo:15
iface lo:15 inet static
address 192.168.6.15
netmask 255.255.255.255

# ifconfig lo:15 192.168.6.15 netmask 255.255.255.255 up

d. Adjust the ARP-related kernel parameters on each real server (these go in /etc/sysctl.conf).

net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.ipv4.nf_conntrack_max = 2048000
net.netfilter.nf_conntrack_max = 2048000

# sysctl -p

e. Start the services.

# systemctl start keepalived
# systemctl enable keepalived
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=1048576)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.6.15:6443 rr
  -> 192.168.5.42:6443            Route   10     0          0
  -> 192.168.5.104:6443           Route   10     0          0
  -> 192.168.5.105:6443           Route   10     0          0

8. Point everything in the cluster that needs to reach the apiserver at the VIP.

The places that need changing are: the four config files admin.conf, controller-manager.conf, scheduler.conf and kubelet.conf, plus the kube-proxy and cluster-info configmaps.
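A sed one-liner takes care of the four kubeconfig files on each master (a sketch; the old address here is the first master's, adjust it per node):

# cd /etc/kubernetes
# sed -i 's#https://192.168.5.42:6443#https://192.168.6.15:6443#g' admin.conf controller-manager.conf scheduler.conf kubelet.conf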

The config files just need their server address replaced (the sed above does it in one go); the configmaps are edited in place like this:

# kubectl edit cm kube-proxy -n kube-system

apiVersion: v1
data:
  kubeconfig.conf: |
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://192.168.6.15:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: 2017-11-22T10:47:19Z
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "9703"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-proxy
  uid: 836ffdfe-cf72-11e7-9b82-34e6d7899e5d
# kubectl edit cm cluster-info -n kube-public

apiVersion: v1
data:
  jws-kubeconfig-2a8d9c: eyJhbGciOiJIUzI1NiIsImtpZCI6IjJhOGQ5YyJ9..nBOva6m8fBYwn8qbe0CUA3pVF-WPXRe1Ynr3sAwPmKI
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01URXlNakV3TkRZME5Wb1hEVEkzTVRFeU1ERXdORFkwTlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3Z4CmFJSkR4UTRjTFo3Y0xIbm1CWXFJY3ZVTENMSXY2ZCtlVGg0SzBnL2NEMXAzNVBaa2JKUE1YSXpOVjJDOVZodXMKMXVpTlQvQ3dOL245WXhtYk9WaHBZbXNySytuMzJ3dTB0TlhUdWhTQ1dFSU1SWGpkeno2TG0xaTNLWEorSXF4KwpTbTVVMXhaY01iTy9UT1ZXWG81TDBKai9PN0ZublB1cFd2SUtpZVRpT1lnckZuMHZsZlY4bVVCK2E5UFNSMnRSCkJDWFBwWFRTOG96ZFQ3alFoRE92N01KRTJKU0pjRHp1enBISVBuejF0RUNYS25SU0xpVm5rVE51L0RNek9LYWEKNFJiaUwvbDY2MDkra1BYL2JNVXNsdEVhTmVyS2tEME13SjBOakdvS0pEOWUvUldoa0ZTZWFKWVFFN0NXZk5nLwo3U01wblF0SGVhbVBCbDVFOTIwQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEUlh2N0V3clVyQ0tyODVGU2pGcCtYd2JTQmsKRlFzcFR3ZEZEeFUvemxERitNVlJLL0QyMzdFQmdNbGg3ZndDV2llUjZnTFYrQmdlVGowU3BQWVl6ZVZJZEZYVQp0Z3lzYmQvVHNVcWNzQUEyeExiSnY4cm1nL2FTL3dScEQ0YmdlMS9Jb1EwTXFUV0FoZno2VklMajVkU0xWbVNOCmQzcXlFb0RDUGJnMGVadzBsdE5LbW9BN0p4VUhLOFhnTWRVNUZnelYvMi9XdUt2NkZodUdlUEt0cjYybUUvNkcKSy9BTTZqUHhKeXYrSm1VVVFCbllUQ2pCbU5nNjR2M0ZPSDhHMVBCdlhlUHNvZW5DQng5M3J6SFM1WWhnNHZ0dAoyelNnUGpHeUw0RkluZlF4MFdwNHJGYUZZMGFkQnV0VkRnbC9VTWI1eFdnSDN2Z0RBOEEvNGpka251dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        server: https://192.168.6.15:6443
      name: ""
    contexts: []
    current-context: ""
    kind: Config
    preferences: {}
    users: []
kind: ConfigMap
metadata:
  creationTimestamp: 2017-11-22T10:47:19Z
  name: cluster-info
  namespace: kube-public
  resourceVersion: "580570"
  selfLink: /api/v1/namespaces/kube-public/configmaps/cluster-info
  uid: 834a18c5-cf72-11e7-9b82-34e6d7899e5d

Of course, after the config files are modified, kubelet has to be restarted for the changes to take effect.
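On each node, restart kubelet; the kube-proxy pods also need to be recreated so they re-read the updated configmap (the label selector assumes the kubeadm default k8s-app=kube-proxy):

# systemctl restart kubelet
# kubectl delete po -n kube-system -l k8s-app=kube-proxy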

9. Verification: join a worker node to the cluster by reaching the apiserver through the VIP.

# kubeadm join --token 2a8d9c.9b5a1c7c05269fb3 192.168.6.15:6443 --discovery-token-ca-cert-hash sha256:ce9e1296876ab076f7afb868f79020aa6b51542291d80b69f2f10cdabf72ca66
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.2-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.6.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.6.15:6443"
[discovery] Requesting info from "https://192.168.6.15:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.6.15:6443"
[discovery] Successfully established connection with API Server "192.168.6.15:6443"
[bootstrap] Detected server version: v1.8.3
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

10. With that, the highly available Kubernetes cluster is complete.

# kubectl get no
NAME      STATUS    ROLES     AGE       VERSION
uy02-07   Ready     <none>    22m       v1.8.3
uy05-13   Ready     master    5d        v1.8.3
uy08-07   Ready     <none>    5d        v1.8.3
uy08-08   Ready     <none>    5d        v1.8.3

# kubectl get po --all-namespaces
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-cnwlt                          1/1       Running   2          5d
kube-system   calico-kube-controllers-55449f8d88-dffp5   1/1       Running   2          5d
kube-system   calico-node-d6v5n                          2/2       Running   4          5d
kube-system   calico-node-fqxl2                          2/2       Running   0          5d
kube-system   calico-node-hbzd4                          2/2       Running   6          5d
kube-system   calico-node-tcltp                          2/2       Running   0          2h
kube-system   heapster-59ff54b574-ct5td                  1/1       Running   2          5d
kube-system   heapster-59ff54b574-d7hwv                  1/1       Running   0          5d
kube-system   heapster-59ff54b574-vxxbv                  1/1       Running   1          5d
kube-system   kube-apiserver-uy05-13                     1/1       Running   2          5d
kube-system   kube-apiserver-uy08-07                     1/1       Running   0          5d
kube-system   kube-apiserver-uy08-08                     1/1       Running   1          4d
kube-system   kube-controller-manager-uy05-13            1/1       Running   2          5d
kube-system   kube-controller-manager-uy08-07            1/1       Running   0          5d
kube-system   kube-controller-manager-uy08-08            1/1       Running   1          5d
kube-system   kube-dns-545bc4bfd4-4xf99                  3/3       Running   0          5d
kube-system   kube-dns-545bc4bfd4-8fv7p                  3/3       Running   3          5d
kube-system   kube-dns-545bc4bfd4-jbj9t                  3/3       Running   6          5d
kube-system   kube-proxy-8c59t                           1/1       Running   1          5d
kube-system   kube-proxy-bdx5p                           1/1       Running   2          5d
kube-system   kube-proxy-dmzm4                           1/1       Running   0          2h
kube-system   kube-proxy-gnfcx                           1/1       Running   0          5d
kube-system   kube-scheduler-uy05-13                     1/1       Running   2          5d
kube-system   kube-scheduler-uy08-07                     1/1       Running   0          5d
kube-system   kube-scheduler-uy08-08                     1/1       Running   1          5d
kube-system   kubernetes-dashboard-69c5c78645-4r8zw      1/1       Running   2          5d
# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
# kubectl cluster-info
Kubernetes master is running at https://192.168.6.15:6443
Heapster is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
