Deploying a Kubernetes 1.14.2 Cluster with kubeadm

(Kubernetes logo)

The name Kubernetes comes from Greek, meaning helmsman or pilot, which is why the k8s logo is a ship's wheel. The 8 in the abbreviation k8s stands for the eight letters "ubernete" in the middle of the name. Quoting the k8s Chinese community documentation: Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud environment. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

Environment and node planning

Node layout

IP address     Role              OS
172.31.76.16   k8s worker node   CentOS 7.6
172.31.76.17   k8s worker node   CentOS 7.6
172.31.76.18   k8s master node   CentOS 7.6

Software versions on every node

Software     Version   Purpose
Docker       18.09.6   Container runtime
Kubernetes   1.14.2    Container orchestration

Kubernetes components to install

Component   Version    Purpose
kubeadm     1.14.2-0   Tool that bootstraps the k8s cluster
kubectl     1.14.2-0   k8s command-line tool for deploying and managing applications and performing CRUD operations on resources
kubelet     1.14.2-0   Runs on every node; responsible for starting containers and Pods

Preparation

Set the hostname on each node

# master node hostname, corresponding to 172.31.76.18
hostnamectl --static set-hostname  k8s-master
# worker node hostnames, corresponding to 172.31.76.16 and 172.31.76.17
hostnamectl --static set-hostname  k8s-node-1
hostnamectl --static set-hostname  k8s-node-2
  • Run hostnamectl to verify that the hostname was set correctly
# hostnamectl output
Static hostname: k8s-node-1
Transient hostname: docker_76_16
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 8919fc90446b48fcbeb2c6cf267caba2
           Boot ID: a684023646094b999b7ace62aed3cd2e
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-327.el7.x86_64
      Architecture: x86-64
  • Add host entries for every node
# Edit /etc/hosts on every machine and append the following

172.31.76.16 k8s-node-1
172.31.76.17 k8s-node-2
172.31.76.18 k8s-master
  • Disable the firewall on every node
# stop firewalld now and keep it from starting on the next boot
systemctl disable firewalld.service
systemctl stop firewalld.service

# flush any remaining iptables rules so the change takes effect immediately
iptables -F

# afterwards you can check the firewall status with
systemctl status firewalld
  • Temporarily disable SELinux (a Linux kernel module that provides a security subsystem); on my machines it was already disabled
setenforce 0                  ## put SELinux into permissive mode (no reboot required)

# make it permanent by editing the config file (takes effect after a reboot)
vim /etc/selinux/config
SELINUX=disabled
  • Disable swap on every node
swapoff -a
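  • swapoff -a only disables swap until the next reboot. To keep swap off permanently, also comment out the swap entry in /etc/fstab (the same command shows up again under Error 3 below):
# comment out the swap line so swap stays disabled after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab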

Installing the components on every node

  • With the preparation done, we can start installing the components. Note that everything in this section must be installed on every node.

Install Docker

  • See my earlier article on Docker for the detailed steps.
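  • The short version, as a minimal sketch (the Aliyun docker-ce mirror and the exact package release strings are assumptions; check what is available with yum list docker-ce --showduplicates):
# add the docker-ce repo (Aliyun mirror assumed) and install a pinned Docker version
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.09.6-3.el7 docker-ce-cli-18.09.6-3.el7 containerd.io
systemctl enable docker && systemctl start docker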

Install kubeadm, kubectl, and kubelet

  • Before installing these components, set up the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  • Then install the kubeadm, kubectl, and kubelet packages; pin the versions to 1.14.2-0 so they match the images used later
yum install -y kubelet-1.14.2-0 kubeadm-1.14.2-0 kubectl-1.14.2-0
  • kubeadm, kubectl, and kubelet downloaded and installed successfully
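  • To double-check the installed versions (optional):
kubeadm version -o short
kubectl version --client --short
kubelet --version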

Start the Kubernetes components

  • Enable and start the kubelet we just installed (it will keep restarting until kubeadm init or join supplies its configuration; this is expected)
systemctl enable kubelet && systemctl start kubelet


k8s master node configuration

Prepare the images

  • Because the official registries are not reachable from networks inside China, we pull the images from mirrors by hand and re-tag them to build the local images kubeadm expects
  • Pull the images on the master node
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.2
docker pull mirrorgooglecontainers/kube-proxy:v1.14.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

  • Tag the pulled images with the names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag mirrorgooglecontainers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
  • Remove the original mirror tags, keeping only the re-tagged k8s.gcr.io images
docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.2           
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.2  
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.2          
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.2               
docker rmi mirrorgooglecontainers/pause:3.1                        
docker rmi mirrorgooglecontainers/etcd:3.3.10                      
docker rmi coredns/coredns:1.3.1
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

# (For reference only; do NOT run these now. They would delete the re-tagged images
#  that kubeadm init needs. Keep them for when you eventually tear the images down.)
docker rmi k8s.gcr.io/kube-apiserver:v1.14.2
docker rmi k8s.gcr.io/kube-controller-manager:v1.14.2
docker rmi k8s.gcr.io/kube-scheduler:v1.14.2
docker rmi k8s.gcr.io/kube-proxy:v1.14.2
docker rmi k8s.gcr.io/pause:3.1
docker rmi k8s.gcr.io/etcd:3.3.10
docker rmi k8s.gcr.io/coredns:1.3.1
docker rmi quay.io/coreos/flannel:v0.10.0-amd64
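  • If you would rather not run each command by hand, a small shell loop (a sketch equivalent to the pull/tag/rmi commands above; the flannel image is handled separately) can do it in one pass:
# pull each image from the mirror, re-tag it as k8s.gcr.io, then drop the mirror tag
for img in kube-apiserver:v1.14.2 kube-controller-manager:v1.14.2 \
           kube-scheduler:v1.14.2 kube-proxy:v1.14.2 pause:3.1 etcd:3.3.10; do
  docker pull mirrorgooglecontainers/${img}
  docker tag  mirrorgooglecontainers/${img} k8s.gcr.io/${img}
  docker rmi  mirrorgooglecontainers/${img}
done
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi  coredns/coredns:1.3.1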

Images after re-tagging

Start installing Kubernetes

  • Run the following command to initialize Kubernetes
# --kubernetes-version=v1.14.2 pins the k8s version to install
# --apiserver-advertise-address sets which address/interface on k8s-master the API server advertises on
# --pod-network-cidr sets the Pod network range; we use the flannel scheme (https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md)
kubeadm init --kubernetes-version=v1.14.2 --apiserver-advertise-address 172.31.76.18 --pod-network-cidr=10.244.0.0/16
  • The Kubernetes initialization log output:
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.76.18 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.31.76.18 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.76.18]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.501690 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: y6awgp.6bvxt8l3rie2du5s
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.76.18:6443 --token y6awgp.6bvxt8l3rie2du5s \
    --discovery-token-ca-cert-hash sha256:9989fe3160fe36c428ab2e05866f8d04a91704c5973dcf8025721c9e5e1b230c 
  • Note: in the init output above, pay special attention to the last command printed; that is exactly what we will run later on the worker nodes to join them to the cluster
kubeadm join 172.31.76.18:6443 --token y6awgp.6bvxt8l3rie2du5s \
    --discovery-token-ca-cert-hash sha256:9989fe3160fe36c428ab2e05866f8d04a91704c5973dcf8025721c9e5e1b230c 

Configure kubectl

# as root, export the kubeconfig environment variable
export KUBECONFIG=/etc/kubernetes/admin.conf

# restart kubelet
systemctl restart kubelet

Install the Pod network (flannel)

  • flannel requires bridged IPv4 traffic to be visible to iptables, so first set the following kernel parameter on each node

sysctl net.bridge.bridge-nf-call-iptables=1
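  • The sysctl setting above does not survive a reboot; to make it persistent you can drop it into a sysctl config file (a sketch):
# persist the bridge settings across reboots
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system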
  • Then apply the kube-flannel.yaml manifest on the k8s-master node. You can download kube-flannel.yaml yourself following the official documentation (a download sketch follows); its full contents are also reproduced further below
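  • If you prefer downloading the manifest instead of copying the contents reproduced below, something like this should work (the upstream path is taken from the flannel documentation of that era and may have moved since):
# download the flannel manifest and save it as kube-flannel.yaml
curl -o kube-flannel.yaml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml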
kubectl apply -f kube-flannel.yaml

The kube-flannel.yaml manifest

  • Full contents of kube-flannel.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
  • Check that the Kubernetes Pods are all running
kubectl get pods --all-namespaces -o wide

All Pods running normally

  • Check that the Kubernetes master node is Ready
kubectl get nodes

The Kubernetes master node is Ready

  • Finally, don't forget to run the following (otherwise kubectl will fail with Error 1 described later)
mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config

Joining k8s worker nodes to the cluster

  • In the preparation steps we already installed kubelet, kubeadm, and kubectl on every node, and the join command was shown in the master setup section above (scroll back up if you missed it)
  • Pull and re-tag the Docker images on each worker node the same way as on the master

Join the cluster

# general form: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

kubeadm join 172.31.76.18:6443 --token pamsj1.4d5funpottlqofs1 --discovery-token-ca-cert-hash sha256:1152aa95b6a45e88211686b44a3080d643fa95b94ebf98c5041a7f88063f2f4e

Joining a node to the cluster

  • Repeat the same operation on the other worker node

  • Check the worker nodes that just joined the cluster

Worker nodes joined the cluster successfully

  • At this point the cluster setup is complete.

Notes on joining worker nodes

  • Make sure the Docker service is running on the worker node before joining
  • Check whether the token has expired (tokens expire after 24 hours by default)
  • Keep the image versions on the worker nodes identical to the master
  • As part of preparation, pull the flannel image on the worker nodes as well
  • If a join attempt fails with an error, run kubeadm reset on the worker node to clear the automatically generated configuration before trying to join again
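  • A quick pre-join check on a worker node might look like this (a sketch; adjust to your environment):
systemctl is-active docker                          # docker must be running
cat /proc/swaps                                     # should list no swap devices
getenforce                                          # should print Permissive or Disabled
cat /proc/sys/net/bridge/bridge-nf-call-iptables    # should print 1
docker images | grep -E 'k8s.gcr.io|flannel'        # image versions should match the master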

Tearing down the k8s cluster

  • Remove a worker node
# list all nodes in the cluster
kubectl get nodes

# drain and delete the worker node; <node name> is the node's name
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
  • Reset a node
# this command resets either a master or a worker node
kubeadm reset
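  • Note that kubeadm reset does not clean up iptables/IPVS rules or the CNI configuration. For a truly clean node you can additionally run something like the following (a sketch; destructive, run with care):
# flush iptables rules and remove leftover CNI and kubectl config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d $HOME/.kube/config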

Installing Dashboard, the k8s web UI

Pull the Dashboard image

  • Official project page
  • The latest official release at the time of writing is v1.10.1. As with the earlier images, we first pull it from a domestic mirror and then re-tag it (note: every node needs to pull this image)
# pull from the domestic mirror
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

# re-tag it with the name the manifest expects
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

# remove the mirror-tagged image
docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

Install the Dashboard

# either apply the manifest directly from the official URL
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# or download kubernetes-dashboard.yaml first and install from the local file
kubectl create -f kubernetes-dashboard.yaml

Dashboard installed successfully

Accessing the Dashboard

  • There are four ways to access the Dashboard (kubectl proxy, NodePort, API Server, and Ingress). The official project suggests running kubectl proxy and then browsing to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/. That works on a desktop machine with a browser, but our k8s cluster runs on remote servers, so that method does not fit here. NodePort and Ingress are also possible, but since the API server is already exposed and reachable from outside, we access the Dashboard through the API Server; for the other access methods see the reference articles on installing Dashboard v1.10.x.

Accessing the Dashboard through the API Server

  • First, find the address and port the API server is running on
# run the following
kubectl cluster-info

# on a healthy cluster you get something like
Kubernetes master is running at https://172.31.76.18:6443
KubeDNS is running at https://172.31.76.18:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

  • Now we can try to access the Dashboard
# URL format
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

https://172.31.76.18:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
  • Opening that URL directly returns an access-denied response. For security reasons, k8s requires the browser to present a certificate to guard against man-in-the-middle attacks (as described in the official docs), so next we generate one.
{
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"",
    "reason": "Forbidden",
    "details": {
        "name": "https:kubernetes-dashboard:",
        "kind": "services"
    },
    "code": 403
}

Generate the certificate (on the master node)

  • Generate the crt file
grep 'client-certificate-data' /etc/kubernetes/admin.conf | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
  • Generate the key file
grep 'client-key-data' /etc/kubernetes/admin.conf | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
  • Generate the p12 certificate file; you will be asked to set an export password
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
  • Import the generated p12 certificate into Chrome. The import asks for the password you set when exporting the p12 file. Restart Chrome after a successful import (the import steps themselves are not covered here)
  • Visit the following URL again; the browser will prompt you to select the imported certificate and then show the sign-in page pictured below
https://172.31.76.18:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

The Dashboard asks for authentication

  • We use token authentication here. Before that, create a dashboard user (a ServiceAccount):
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
  • Create a ClusterRoleBinding for it
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
  • Then fetch the user's token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Token of the newly created dashboard user

  • Paste the token into the sign-in page; the Dashboard setup is now complete

Dashboard installation complete

Removing the deployed Dashboard

  • If the Dashboard deployment goes wrong, you can delete it with the following command and redeploy
kubectl delete -f kubernetes-dashboard.yaml

Errors encountered during setup

Error 1: kubectl get nodes fails

Symptom

  • The connection to the server localhost:8080 was refused - did you specify the right host or port?
  • Running kubectl get nodes on a worker node will most likely hit the same error. In that case, copy /etc/kubernetes/admin.conf from the master to /etc/kubernetes/ on the worker node and then run the commands below.
  • Solution (see reference link):
mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config
  • In fact these exact commands were already shown in the kubeadm init output when we set up the master node; scroll back up to the master init log if in doubt.

Error 2: worker node fails to join the Kubernetes cluster

Symptom

  • FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
  • Solution (see reference link):
  • This error is usually caused by an expired token (tokens are valid for 24 hours by default), so simply create a new token on the k8s master node with kubeadm
# create a new token
kubeadm token create
# compute the CA certificate sha256 hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | 
openssl dgst -sha256 -hex | sed 's/^.* //'
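  • Alternatively, kubeadm can print a complete, ready-to-use join command (fresh token plus CA hash) in one step:
# prints a full "kubeadm join ..." command with a newly created token
kubeadm token create --print-join-command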

Creating a new token on the k8s master node

Error 3: kubeadm init or join fails

Symptom

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
  • Solution: make sure swap is disabled, and keep it disabled across reboots
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
  • Reboot the machine. If Docker is not enabled to start on boot, remember to start the Docker service after the reboot
## enable the docker service to start on boot
systemctl enable docker.service
## start docker now
systemctl start docker
  • Reboot the server
# reboot command
reboot

Error 4: DNS Pods on a newly joined worker node in CrashLoopBackOff

Symptom

DNS Pods on the worker node in CrashLoopBackOff

  • Solution:

Check the logs of the failing Pod:

kubectl --namespace kube-system logs kube-flannel-ds-amd64-g997s

Error log: Error from server: Get https://172.31.76.17:10250/containerLogs/kube-system/kube-flannel-ds-amd64-g997s/kube-flannel: dial tcp 172.31.76.17:10250: connect: no route to host
  • The "no route to host" in the log points to a missing default gateway on the worker node's network interface; adding the appropriate default gateway fixes it. Which gateway to add depends on your own servers (see the sketch below).
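  • A sketch of checking and adding the default route on the worker node (the gateway address and interface name below are placeholders for illustration; substitute your own):
# show the current routing table
ip route show
# add a default gateway (replace the address and interface with your own)
ip route add default via 172.31.76.1 dev eth0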

Error 5: worker node fails preflight checks when joining the cluster

Symptom (bridge traffic is not being passed to iptables)

error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

# run the following command
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

# then run the kubeadm join ... command again

If you find any mistakes in this article, please point them out so we can all learn and improve together. If the article helped you, a like and a follow would be appreciated, and you are welcome to visit my personal blog.
