Docker Learning - Kubernetes - Cluster Deployment

 

Docker Learning series

Docker Learning - VMware Workstation: connecting multiple local VMs to each other and to the host network

Docker Learning - Building a Consul cluster with Docker

Docker Learning - Building a simple private Docker Hub

Docker Learning - Spring Boot on Docker

Docker Learning - Kubernetes - Cluster Deployment

 

Introduction

Kubernetes, abbreviated K8s (the 8 stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, planning, updating, and maintenance.

Kubernetes is an open-source container orchestration engine from Google that supports automated deployment, large-scale scaling, and containerized application management. When an application is deployed in a production environment, multiple instances of it are usually deployed so that requests can be load-balanced.
In Kubernetes, we can create multiple containers, each running one application instance, and then use the built-in load-balancing strategy to manage, discover, and access this group of instances, with none of these details requiring complex manual configuration by operations staff.
 

Basic Concepts

The vast majority of concepts in Kubernetes are abstracted as resource objects managed by Kubernetes:

  • Master: the Master node is the control node of a Kubernetes cluster, responsible for managing and controlling the whole cluster. It runs the following components:

    • kube-apiserver: the entry point for cluster control, exposing an HTTP REST service
    • kube-controller-manager: the automated control center for all resource objects in the cluster
    • kube-scheduler: responsible for scheduling Pods
  • Node: a Node is a worker node in the cluster; its workload is assigned by the Master and consists mainly of running containerized applications. It runs the following components:

    • kubelet: responsible for creating, starting, monitoring, restarting, and destroying Pods, and cooperating with the Master to provide basic cluster management
    • kube-proxy: implements communication and load balancing for Kubernetes Services
    • the containerized (Pod) applications themselves
  • Pod: the Pod is the most basic deployment and scheduling unit in Kubernetes. Each Pod consists of one or more business containers plus one root container (the pause container), and represents a single instance of an application.

  • ReplicaSet: an abstraction over Pod replicas, used to scale the number of Pods out and in
  • Deployment: a Deployment represents a deployment and is implemented internally with a ReplicaSet; creating a Deployment generates the corresponding ReplicaSet, which in turn creates the Pod replicas
  • Service: the Service is the most important resource object in Kubernetes; a Service can correspond to a microservice in a microservice architecture. A Service defines the access entry of a service, and callers reach the backend Pod replicas through that address. The Service is tied to its backend Pods via a Label Selector, while the Deployment keeps the number of backend Pod replicas constant, which is what makes the service elastic. A minimal sketch follows this list.
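To make these relationships concrete, here is a minimal sketch (the names nginx-deploy and nginx-svc are illustrative, not from the original setup) that creates a Deployment with two replicas and exposes them through a Service:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy               # hypothetical name
spec:
  replicas: 2                      # the generated ReplicaSet keeps two Pod replicas alive
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx                 # the label the Service selects on
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc                  # hypothetical name
spec:
  selector:
    app: nginx                     # Label Selector tying the Service to the Pods above
  ports:
  - port: 80
    targetPort: 80
EOF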

Kubernetes is mainly composed of the following core components:

  • etcd stores the state of the entire cluster; it is essentially the cluster database;
  • apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery;
  • controller manager maintains the cluster state: fault detection, auto-scaling, rolling updates, and so on;
  • scheduler handles resource scheduling, placing Pods onto machines according to the configured scheduling policies;
  • kubelet maintains the container lifecycle and also manages volumes (CSI) and networking (CNI);
  • Container runtime manages images and actually runs Pods and containers (CRI);
  • kube-proxy provides in-cluster service discovery and load balancing for Services;

Besides the core components above, there are also some recommended add-ons:

  • kube-dns provides DNS for the whole cluster
  • Ingress Controller provides an external entry point for services
  • Heapster provides resource monitoring
  • Dashboard provides a GUI

Component Communication

How the Kubernetes components communicate with one another:

  • apiserver is responsible for all operations against etcd storage, and it is the only component that talks to the etcd cluster directly
  • apiserver provides a unified REST API both internally (to the other components in the cluster) and externally (to users); all other components communicate through the apiserver

    • controller manager, scheduler, kube-proxy, kubelet, and the rest all watch for resource changes through the apiserver watch API and operate on resources accordingly
    • every operation that updates resource state goes through the apiserver's REST API
  • apiserver also calls the kubelet API directly (for logs, exec, attach, and so on); by default it does not verify the kubelet's certificate, although verification can be enabled with --kubelet-certificate-authority (GKE instead protects this channel with SSH tunnels)

Take the most typical flow, creating a Pod (a kubectl trace follows the list):

  • a user creates a Pod through the REST API
  • apiserver writes it into etcd
  • scheduler detects a Pod that is not yet bound to a Node, schedules it, and updates the Pod's Node binding
  • kubelet detects the new Pod scheduled onto its Node and runs it through the container runtime
  • kubelet fetches the Pod status from the container runtime and updates it in the apiserver
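The flow can be observed with kubectl (a hypothetical example; the Pod name nginx is arbitrary):

kubectl run nginx --image=nginx --restart=Never   # create a single Pod through the REST API
kubectl get pod nginx -o wide                     # the NODE column shows the scheduler's binding
kubectl describe pod nginx                        # the Events section traces scheduling, image pull, and start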

 

Cluster Deployment

 

Installing with the kubeadm tool

1. On the master and the nodes, install kubelet, kubeadm, and docker with yum
2. On the master, initialize the cluster: kubeadm init
3. On the master, start a flannel pod
4. On the nodes, join the cluster: kubeadm join

 

Prepare the environment

CentOS 7  192.168.50.21  k8s-master
CentOS 7  192.168.50.22  k8s-node01
CentOS 7  192.168.50.23  k8s-node02

Change the hostnames (on all 3 machines)

hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

Turn off the firewall

systemctl stop firewalld.service
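Optionally, firewalld can also be disabled at boot, so it stays off after the reboots later in this walkthrough:

systemctl disable firewalld.service   # do not start the firewall on boot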

Configure the docker yum repository

yum install -y yum-utils device-mapper-persistent-data lvm2 wget
cd /etc/yum.repos.d
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Configure the kubernetes yum repository

cd /opt/
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg
cd /etc/yum.repos.d
vi kubernetes.repo
Enter the following:
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

yum repolist

Install kubelet, kubeadm, and docker on the master and the nodes

yum install docker 
yum install kubelet-1.13.1
yum install kubeadm-1.13.1

Install kubectl on the master

yum install kubectl-1.13.1

Docker configuration

Configure the private registry and the registry mirror address. For setting up a private registry, see http://www.javashuo.com/article/p-elpnlkxi-bb.html

vi /etc/docker/daemon.json

 

{
    "registry-mirrors":[
        "http://hub-mirror.c.163.com"
    ],
    "insecure-registries":[
        "192.168.50.24:5000"
    ]
}

 

Start docker

systemctl daemon-reload
systemctl start docker 
docker info

Initialize on the master: kubeadm init

vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
kubeadm init \
    --apiserver-advertise-address=192.168.50.21 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.1 \
    --pod-network-cidr=10.244.0.0/16

Explanation of the initialization flags:

--apiserver-advertise-address

Specifies which of the Master's interfaces to use when communicating with the other cluster nodes. If the Master has more than one interface, it is best to specify one explicitly; otherwise kubeadm picks the interface that has the default gateway.

--pod-network-cidr

Specifies the Pod network range. Kubernetes supports several network solutions, and each has its own requirements for --pod-network-cidr; it is set to 10.244.0.0/16 here because we will use the flannel network add-on, which requires exactly this CIDR.

--image-repository

The default Kubernetes registry address is k8s.gcr.io, and gcr.io is not reachable from inside China. Since version 1.13 we can pass the --image-repository flag (default k8s.gcr.io) to point it at the Aliyun mirror instead: registry.aliyuncs.com/google_containers.

--kubernetes-version=v1.13.1 

Turns off version detection. The default value, stable-1, makes kubeadm download the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (the latest at the time: v1.13.1) skips that network request.

During initialization you will see:

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
(this is the image download step; it is quite slow)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.002300 seconds
(this step is also slow and can be ignored)
 
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.21]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.002300 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7ax0k4.nxpjjifrqnbrpojv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.50.21:6443 --token 7ax0k4.nxpjjifrqnbrpojv --discovery-token-ca-cert-hash sha256:95942f10859a71879c316e75498de02a8b627725c37dee33f74cd040e1cd9d6b

Explanation of the initialization steps:

1) [preflight] kubeadm runs pre-flight checks.
2) [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
3) [certificates] generates the various tokens and certificates
4) [kubeconfig] generates the KubeConfig files, which the kubelet needs to communicate with the Master
5) [control-plane] installs the Master components, pulling their Docker images from the specified registry.
6) [bootstraptoken] generates the token; write it down, as it is needed later when adding nodes with kubeadm join
7) [addons] installs the add-ons kube-proxy and kube-dns.
8) The Kubernetes Master initialized successfully; the output shows how to configure a regular user to access the cluster with kubectl.
9) It shows how to install a Pod network.
10) It shows how to register other nodes into the cluster.

 

Possible problems:

 [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Hostname]: hostname "k8s-master" could not be reached
        [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 114.114.114.114:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

Run:

systemctl enable docker.service
systemctl enable kubelet.service

The following errors are still reported:

  [WARNING Hostname]: hostname "k8s-master" could not be reached
        [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 114.114.114.114:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:

Configure /etc/hosts

cat >> /etc/hosts << EOF
192.168.50.21 k8s-master
192.168.50.22 k8s-node01
192.168.50.23 k8s-node02
EOF

Running the initialization command again produces:

  [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2   -- give the VM at least 2 CPUs
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Fix the second error by enabling the bridge settings:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
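These echo settings do not survive a reboot; to make them persistent they can also be written to a sysctl drop-in (a sketch; the file name k8s.conf is arbitrary):

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # reload settings from all sysctl configuration files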

After increasing the VM CPU count and rebooting, run again:

kubeadm init \
    --apiserver-advertise-address=192.168.50.21 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.1 \
    --pod-network-cidr=10.244.0.0/16

 

[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

Workaround: the docker.io registry hosts mirrors of the Google containers, and the required images can be pulled in advance with the commands below.

First check which images are needed:

kubeadm config images list

Create a kubeadm configuration file

[root@k8s-master opt]# vi kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.50.21
controlPlaneEndpoint: "192.168.50.20:16443"
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "172.168.0.0/16"

Then pull the images using this config:

kubeadm config images pull --config /opt/kubeadm-config.yaml

Initialize the master

kubeadm init --config=kubeadm-config.yaml  --upload-certs

 

error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use


kubeadm automatically checks the current environment for residue left by a previous run. If any is found, it must be cleaned up before init can run again; the environment can be reset with kubeadm reset, as sketched below.
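A minimal cleanup sketch before re-running init:

kubeadm reset          # clears /etc/kubernetes/manifests, certificates, and local etcd data
rm -rf $HOME/.kube     # optional: drop any stale kubectl config left by a failed attempt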

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

==Cause==

This happens because the kubelet has not started.

==Solution==

systemctl restart kubelet

If the kubelet cannot be started

kubelet.service - kubelet: The Kubernetes Node Agent

 

then it is probably because the swap partition is still enabled.
-Disable swap

swapoff -a
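swapoff -a only lasts until the next reboot; to keep swap off permanently, the swap entry in /etc/fstab can be commented out as well (a sketch):

sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap mount line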

-Configure the kubelet

vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

Run again:

kubeadm init \
    --apiserver-advertise-address=192.168.50.21 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.1 \
    --pod-network-cidr=10.244.0.0/16

 

 

Configure kubectl

kubectl is the command-line tool for managing a Kubernetes cluster; we already installed it on all the nodes earlier. After the Master initialization finishes, some configuration is required before kubectl can be used.
Following the final hint in the kubeadm init output, it is recommended to run kubectl as a regular Linux user.

  • Create a regular user, centos
#Create the user and set its password to 123456
useradd centos && echo "centos:123456" | chpasswd

#Grant sudo rights, with passwordless sudo
sed -i '/^root/a\centos  ALL=(ALL)       NOPASSWD:ALL' /etc/sudoers

#Save the cluster security configuration file into the user's .kube directory
su - centos
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#Enable kubectl command auto-completion (log out and back in for it to take effect)
echo "source <(kubectl completion bash)" >> ~/.bashrc

These commands are needed because a Kubernetes cluster is accessed over an encrypted channel by default. They save the security configuration file generated during deployment into the current user's .kube directory, which is where kubectl looks by default for the credentials it uses to access the cluster.
Without this, we would have to tell kubectl where the security configuration file lives every time via the KUBECONFIG environment variable; a sketch of that route follows.
Once configured, the centos user can manage the cluster with kubectl.
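For completeness, the environment-variable route looks like this (for example as root, without copying the file):

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes   # kubectl now reads the admin credentials directly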

Check the cluster status:

kubectl get cs


Deploy the network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


Re-check the Pod status with kubectl get:
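For example (the flannel pods land in the kube-system namespace):

kubectl get pods --all-namespaces   # wait until every pod, including the flannel ones, is Running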


Deploy the worker nodes

On the master, save the downloaded images into a tar file

docker save -o master.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.13.1 registry.aliyuncs.com/google_containers/kube-apiserver:v1.13.1 registry.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1 registry.aliyuncs.com/google_containers/kube-scheduler:v1.13.1  registry.aliyuncs.com/google_containers/coredns:1.2.6  registry.aliyuncs.com/google_containers/etcd:3.2.24 registry.aliyuncs.com/google_containers/pause:3.1

Note: the version numbers must match.

Copy the saved images from the master to the nodes

scp master.tar node01:/root/
scp master.tar node02:/root/

Import the images on node01 and node02

docker load < master.tar
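A quick sanity check that the images were imported (grepping for the Aliyun prefix used when saving):

docker images | grep registry.aliyuncs.com/google_containers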

Configure /etc/hosts on node01 and node02

cat >> /etc/hosts << EOF
192.168.50.21 k8s-master
192.168.50.22 k8s-node01
192.168.50.23 k8s-node02
EOF

Configure the iptables bridge settings on node01 and node02

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

-Disable swap on node01 and node02

swapoff -a

-Configure the kubelet on node01 and node02

vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false" 
systemctl enable docker.service
systemctl enable kubelet.service

Start docker on node01 and node02

service docker start

Deploy the network plugin on node01 and node02

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Get the join command on the master

kubeadm token create --print-join-command

This prints a fresh join command such as:

kubeadm join 192.168.50.21:6443 --token n9g4nq.kf8ppgpgb3biz0n5 --discovery-token-ca-cert-hash sha256:95942f10859a71879c316e75498de02a8b627725c37dee33f74cd040e1cd9d6b

 

Run the join command on the worker nodes (node01, node02)

kubeadm join 192.168.50.21:6443 --token n9g4nq.kf8ppgpgb3biz0n5 --discovery-token-ca-cert-hash sha256:95942f10859a71879c316e75498de02a8b627725c37dee33f74cd040e1cd9d6b
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.50.21:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.50.21:6443"
[discovery] Requesting info from "https://192.168.50.21:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.50.21:6443"
[discovery] Successfully established connection with API Server "192.168.50.21:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 4]
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Check the node status on the master

kubectl get nodes

 

This state is wrong; only one node has joined correctly.

Inspecting node01 and node02 shows that some processes on node01 did not start completely.

Remove all running containers on node01

docker stop $(docker ps -q) && docker rm $(docker ps -aq)

Reset kubeadm on node01

kubeadm reset

Get a fresh join command on the master

kubeadm token create --print-join-command

Run the join command on node01 again


Check the container status on node01


Check the status on the master


All node statuses should now be Ready. Since every node needs to start several components, if a node stays NotReady you can list the Pods across all namespaces and make sure each one has pulled its image and is Running:

kubectl get pod --all-namespaces -o wide


Configure the Kubernetes Dashboard (web UI)

Create kubernetes-dashboard.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1  
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

 

Run the following command to create the dashboard:

kubectl create -f kubernetes-dashboard.yaml

If the following appears:

Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": deployments.apps "kubernetes-dashboard" already exists

run delete to clean up first:

kubectl delete -f kubernetes-dashboard.yaml

Check the component status

kubectl get pods --all-namespaces


ErrImagePull: pulling the image failed

Pull the image manually and re-tag it

docker pull registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

Recreate the dashboard


ImagePullBackOff

By default, Kubernetes pulls the image from the address given in the manifest; when imagePullPolicy is set to IfNotPresent or Never, the local image is used instead.

IfNotPresent: prefer the local image if it exists; pull only when it is missing.
Never: never pull; use the local image, and fail if it does not exist.

 spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1  
        imagePullPolicy: IfNotPresent

 

Check the service port mapping

kubectl get service -n kube-system


Create a user that can access the Dashboard

Create a new file account.yaml with the following content:

# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
# Create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply the file, then retrieve the login token of the admin-user service account:

kubectl create -f account.yaml
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')


Copy the token and use it to log in.


configmaps is forbidden: User "system:serviceaccount:kube-system:admin-user" cannot list resource "configmaps" in API group "" in the namespace "default" 

Authorize the user

kubectl create clusterrolebinding test:admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user
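Whether the binding took effect can be sanity-checked by impersonating the service account with kubectl auth can-i:

kubectl auth can-i list configmaps --as=system:serviceaccount:kube-system:admin-user -n default   # expected output: yes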


This article references:

http://www.javashuo.com/article/p-bshlgosu-hn.html

http://www.javashuo.com/article/p-raajjdoi-mq.html
