Practical Tips for Configuring a Fault-Tolerant Multi-Master Kubernetes Cluster

This article covers practical techniques for a highly available, multi-master, fault-tolerant Kubernetes deployment, focusing on running kube-apiserver, kube-controller-manager, and kube-scheduler on multiple nodes so that the secondary control-plane nodes can be operated just like the primary one (using K8s 1.13.1 on Ubuntu 18.04 LTS as the example). The overall approach to a highly available Kubernetes architecture was laid out in 《Kubernetes集羣高可用的策略和實踐》, the concrete deployment workflow was described in 《Kubernetes探祕-多master節點容錯部署》, and the process of scaling etcd, Kubernetes' core data store, into a highly available multi-node cluster was covered in detail in 《Kubernetes 1.13.1的etcd集羣擴容實戰技巧》 and 《Kubernetes探祕-etcd節點和實例擴容》.

1. kube-apiserver

Two settings need to be changed.

Open the manifest for editing:

sudo nano /etc/kubernetes/manifests/kube-apiserver.yaml

The resulting kube-apiserver.yaml looks like this:

# /etc/kubernetes/manifests/kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=10.1.1.199
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
#    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
#    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
#    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
#    - --etcd-servers=https://127.0.0.1:2379
    - --etcd-cafile=/etc/kubernetes/pki/etcd-certs/ca.pem
    - --etcd-certfile=/etc/kubernetes/pki/etcd-certs/client.pem
    - --etcd-keyfile=/etc/kubernetes/pki/etcd-certs/client-key.pem
    - --etcd-servers=https://10.1.1.201:2379

    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.13.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.1.1.199
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}

Note:

  • The main changes are --advertise-address=10.1.1.199 and --etcd-servers=https://10.1.1.201:2379.
  • The two IP addresses are intentionally different: .199 is the virtual IP of the control plane, while .201 is the etcd service address of the current node.
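
After the manifest is saved, the kubelet recreates the static pod automatically. A minimal sanity check, assuming the virtual IP 10.1.1.199 from this example (the unauthenticated /healthz call may be rejected under some RBAC setups, in which case checking the pod is enough):

# The apiserver static pod should come back in a Running state.
kubectl get pods -n kube-system -o wide | grep kube-apiserver

# Probe the health endpoint through the virtual IP.
curl -k https://10.1.1.199:6443/healthz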

2. kube-controller-manager and kube-scheduler

The kube-controller-manager and kube-scheduler instances obtain cluster state and carry out internal management and maintenance work through the apiserver's API. Multiple instances can run concurrently; they acquire a lock through the apiserver (leader election) to decide which instance is the active one.
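
In Kubernetes 1.13 this lock is recorded as the control-plane.alpha.kubernetes.io/leader annotation on an Endpoints object in the kube-system namespace, so the current leader can be inspected roughly as follows (a sketch; the resource names are the kubeadm defaults):

# Show which node currently holds the controller-manager and scheduler locks.
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity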

  • kube-controller-manager is mainly responsible for keeping node and cluster state consistent. It involves two files: /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/controller-manager.conf.
  • kube-scheduler is mainly responsible for scheduling pods. It involves two files: /etc/kubernetes/manifests/kube-scheduler.yaml and /etc/kubernetes/scheduler.conf.

A default kubeadm installation already sets --leader-elect=true for both kube-controller-manager and kube-scheduler, so multiple instances can run side by side. All that is needed is to copy their files to /etc/kubernetes on the secondary node; a quick way to confirm the flag is shown below.
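
To confirm this on an existing control-plane node, look for the flag in the static pod manifests installed by kubeadm:

# Both manifests should contain --leader-elect=true.
grep leader-elect /etc/kubernetes/manifests/kube-controller-manager.yaml
grep leader-elect /etc/kubernetes/manifests/kube-scheduler.yaml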

具體操做以下:.net

# Copy the controller-manager and scheduler configuration files to the local node.
# Reference: https://my.oschina.net/u/2306127/blog/write/2991361

# Log in to the remote (secondary) node first, then run the commands below.

echo "Clone controller-manager configuration file."
scp root@10.1.1.201:/etc/kubernetes/controller-manager.conf /etc/kubernetes/
scp root@10.1.1.201:/etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/manifests/

echo "Clone scheduler configuration file."
scp root@10.1.1.201:/etc/kubernetes/scheduler.conf /etc/kubernetes/
scp root@10.1.1.201:/etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/

Restart the kubelet; it will automatically restart the controller-manager and scheduler instances, as sketched below.
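
On Ubuntu 18.04 the kubelet runs under systemd, so the restart plus a quick check that the new static pods are up looks like this:

# Restart the kubelet; it recreates the static pods from /etc/kubernetes/manifests.
sudo systemctl restart kubelet

# The controller-manager and scheduler pods should appear on this node shortly after.
kubectl get pods -n kube-system -o wide | grep -E 'kube-controller-manager|kube-scheduler'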

3. admin.conf

If the primary node goes down, kubectl needs to be usable on a secondary node. First copy admin.conf to the secondary node, then set it up for the local user account.

具體操做以下:

# Copy admin.conf
scp root@10.1.1.201:/etc/kubernetes/admin.conf /etc/kubernetes/

# Create the local kubectl config directory and install the config file for the current user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The server address in admin.conf points at the virtual IP, so it needs no modification at all; a quick check is sketched below.
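
A simple way to verify this on the secondary node (10.1.1.199 is the virtual IP in this example; the port may differ if a load balancer sits in front):

# The server entry should reference the virtual IP.
grep server: /etc/kubernetes/admin.conf
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo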

4. kubectl operations

The former secondary node can now perform every operation that the master node can (and so can any other node configured the same way). Try it out:

# Kubernetes version.
kubectl version

# Cluster info and service endpoints.
kubectl cluster-info

# List of cluster nodes.
kubectl get node -o wide

# All pods running in the cluster.
kubectl get pod --all-namespaces -o wide

Check whether the output on the newly promoted secondary node matches what the primary node reports. If it does not, revisit the steps above, in particular the apiserver manifest and the copied configuration files.
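
One further check when the outputs differ is the health of the individual control-plane components; in 1.13 the componentstatuses API is still available for this:

# Overall health of scheduler, controller-manager, and etcd.
kubectl get componentstatuses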

 
