Building a Kubernetes cluster by hand is tedious, so a number of tools have emerged to simplify the installation and configuration, such as Kubeadm, Kubespray, and RKE. I ultimately chose the official Kubeadm, mainly because different Kubernetes versions differ in the details and Kubeadm is updated and supported in step with them. Kubeadm is the official tool for quickly installing and initializing a Kubernetes cluster. It is still in an incubating state and is updated alongside each new Kubernetes release. I strongly recommend reading the official documentation first to understand what each component and object does.
關於其餘部署方式參考以下:node
Prepare three servers: 1 Master node and 2 Node (worker) nodes; run the following steps on all of them. For production, 3 Master nodes and N Node nodes are recommended, to make scaling, migration, and disaster recovery easier.
System version
$ cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
Set the hostnames
$ sudo hostnamectl set-hostname kubernetes-master
$ sudo hostnamectl set-hostname kubernetes-node-1
$ sudo hostnamectl set-hostname kubernetes-node-2
Disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld
Note: if you prefer to keep the firewall enabled, open the required ports instead (Master: 6443, 2379-2380, 10250-10252; Nodes: 10250, 30000-32767).
Disable SELinux
$ setenforce 0
$ sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Disable swap
$ swapoff -a

(To keep swap off after a reboot, also comment out the swap entry in /etc/fstab.)
Fix the bridged-traffic routing issue
$ echo "net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0" >> /etc/sysctl.d/k8s.conf
$ sysctl -p /etc/sysctl.d/k8s.conf
or
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Otherwise kubeadm's preflight checks fail with:

[preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Install Docker (Aliyun mirror)
$ curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
$ systemctl enable docker && systemctl start docker
Install kubelet, kubeadm, and kubectl (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Note:
You can list the available versions with yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet' and install a specific one.
Check the versions
$ docker --version
Docker version 18.06.1-ce, build e68fc7a

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

$ kubelet --version
Kubernetes v1.12.1
Pull the images
Because k8s.gcr.io is unreachable from mainland China, synchronized copies of the images are maintained on GitHub. They can be pulled with the shell script below (the image versions required differ between Kubernetes releases; the versions below are already matched to this one).
$ touch pull_k8s_images.sh

#!/bin/bash
images=(
    kube-proxy:v1.12.1
    kube-scheduler:v1.12.1
    kube-controller-manager:v1.12.1
    kube-apiserver:v1.12.1
    kubernetes-dashboard-amd64:v1.10.0
    heapster-amd64:v1.5.4
    heapster-grafana-amd64:v5.0.4
    heapster-influxdb-amd64:v1.5.2
    etcd:3.2.24
    coredns:1.2.2
    pause:3.1
)
for imageName in ${images[@]} ; do
    docker pull anjia0532/google-containers.$imageName
    docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
    docker rmi anjia0532/google-containers.$imageName
done

$ sh pull_k8s_images.sh
其餘同步鏡像源:
registry.cn-hangzhou.aliyuncs.com/google_containers
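As a sketch, the same pull/tag/rmi loop can be pointed at the aliyun registry above instead. This script only generates the docker commands (pipe its output to sh to execute them), and it assumes the mirror carries the same tags as k8s.gcr.io; the image list below is the core set for v1.12.1, not the full list from the script above:

```shell
#!/bin/sh
# Emit docker commands that pull each core image from the aliyun mirror and
# retag it under the k8s.gcr.io name kubeadm expects. Execute them with:
#   sh gen_mirror_pull.sh | sh
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
IMAGES="kube-apiserver:v1.12.1 kube-controller-manager:v1.12.1 \
kube-scheduler:v1.12.1 kube-proxy:v1.12.1 etcd:3.2.24 coredns:1.2.2 pause:3.1"

for img in $IMAGES; do
    echo "docker pull $MIRROR/$img"
    echo "docker tag $MIRROR/$img k8s.gcr.io/$img"
    echo "docker rmi $MIRROR/$img"
done
```

Generating the commands instead of running them directly makes the script easy to review before touching the local image cache.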
Check the container image versions required by this release
$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.12.1
k8s.gcr.io/kube-controller-manager:v1.12.1
k8s.gcr.io/kube-scheduler:v1.12.1
k8s.gcr.io/kube-proxy:v1.12.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

# List the images already pulled
$ docker images
Note:
The official docs point out that the images to pull differ across Kubernetes versions. For example, since 1.12 the platform suffix (amd64, arm, arm64, ppc64le or s390x) is no longer needed, and the new CoreDNS component (the replacement for kube-dns) is included by default, so the feature-gates=CoreDNS=true flag is no longer required.
Here v1.10.x means the "latest patch release of the v1.10 branch". ${ARCH} can be one of: amd64, arm, arm64, ppc64le or s390x. If you run Kubernetes version 1.10 or earlier, and if you set --feature-gates=CoreDNS=true, you must also use the coredns/coredns image, instead of the three k8s-dns-* images. In Kubernetes 1.11 and later, you can list and pull the images using the kubeadm config images sub-command:

kubeadm config images list
kubeadm config images pull

Starting with Kubernetes 1.12, the k8s.gcr.io/kube-*, k8s.gcr.io/etcd and k8s.gcr.io/pause images don't require an -${ARCH} suffix.
Basic Kubeadm commands
# Create a Master node
$ kubeadm init

# Join a Node to an existing cluster
$ kubeadm join <Master IP and port>
kubeadm init and kubeadm join are the two key steps when deploying a Kubernetes cluster with Kubeadm. To customize the cluster components' parameters, create a kubeadm.yaml configuration file.
$ touch kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "v1.12.1"
Note:
If you see the following error:
your configuration file uses an old API spec: "kubeadm.k8s.io/v1alpha1". Please use kubeadm v1.11 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
check whether you are running a 1.1x release; the official recommendation is the v1alpha3 API version. See the official change notes for details (https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
In Kubernetes 1.11 and later, the default configuration can be printed out using the kubeadm config print-default command. It is recommended that you migrate your old v1alpha2 configuration to v1alpha3 using the kubeadm config migrate command, because v1alpha2 will be removed in Kubernetes 1.13. For more details on each field in the v1alpha3 configuration you can navigate to our API reference pages.
Create the Master node
$ kubeadm init --config kubeadm.yaml
[init] using Kubernetes version: v1.12.1
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [021rjsh216048s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.23.216.48]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [021rjsh216048s localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [021rjsh216048s localhost] and IPs [172.23.216.48 127.0.0.1 ::1]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 24.503270 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node 021rjsh216048s as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node 021rjsh216048s as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "021rjsh216048s" as an annotation
[bootstraptoken] using token: zbnjyn.d5ntetgw5mpp9blv
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.23.216.48:6443 --token zbnjyn.d5ntetgw5mpp9blv --discovery-token-ca-cert-hash sha256:3dff1b750972001675fb8f5284722733f014f60d4371cdffb36522cbda6acb98
The kubeadm join command printed at the end is what you use to add more Nodes to this Master. Kubeadm also prints the configuration commands needed before using the Kubernetes cluster for the first time:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
By default a Kubernetes cluster must be accessed over an encrypted, authenticated channel. These commands save the cluster's security configuration under the current user's .kube/config, which is where kubectl looks for its credentials by default.
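For reference, admin.conf (and hence ~/.kube/config) is a kubeconfig file with roughly the following shape; the placeholder values here are illustrative, not taken from this cluster:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://172.23.216.48:6443
    certificate-authority-data: <base64-encoded CA certificate>
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
current-context: kubernetes-admin@kubernetes
```

The clusters/users/contexts structure is why one kubeconfig can hold credentials for several clusters and switch between them with kubectl config use-context.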
Deploy the network plugin
Container Network Interface (CNI), a container-network specification originally initiated by CoreOS, is the foundation of Kubernetes network plugins. The basic idea: when creating a container, the container runtime first creates the network namespace, then invokes a CNI plugin to configure networking for that netns, and only then starts the processes inside the container. CNI has since joined the CNCF and is the network model the CNCF promotes.
There are many common CNI network plugins to choose from:
Here the Weave plugin is used (https://www.weave.works/docs/net/latest/kubernetes/kube-addon/).
$ kubectl apply -f https://git.io/weave-kube-1.6

or

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

or

$ kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.5.0/weave-daemonset-k8s-1.8.yaml
備註:其餘功能:
Or choose flannel:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note: CNI-Genie, from Huawei's PaaS team, is a CNI plugin that can drive several network plugins at once (calico, canal, romana, weave, and so on).
Check the Pod status
$ kubectl get pods -n kube-system -l name=weave-net -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP              NODE                NOMINATED NODE
weave-net-j9s27   2/2     Running   0          24h   172.23.216.49   kubernetes-node-1   <none>
weave-net-p22s2   2/2     Running   0          24h   172.23.216.50   kubernetes-node-2   <none>
weave-net-vnq7p   2/2     Running   0          24h   172.23.216.48   kubernetes-master   <none>

$ kubectl logs -n kube-system weave-net-j9s27 weave
$ kubectl logs weave-net-j9s27 -n kube-system weave-npc
Add the Node nodes
$ kubeadm join 172.23.216.48:6443 --token zbnjyn.d5ntetgw5mpp9blv --discovery-token-ca-cert-hash sha256:3dff1b750972001675fb8f5284722733f014f60d4371cdffb36522cbda6acb98
To control the cluster from any other node, copy the Master's security configuration to that machine:
$ mkdir -p $HOME/.kube
$ scp root@172.23.216.48:/etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
List all nodes
$ kubectl get nodes
NAME             STATUS   ROLES    AGE     VERSION
021rjsh216048s   Ready    master   2d23h   v1.12.1
021rjsh216049s   Ready    <none>   2d23h   v1.12.1
021rjsh216050s   Ready    <none>   2d23h   v1.12.1

$ kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-ps2s2                 1/1     Running   0          2d23h
coredns-576cbf47c7-qsxdx                 1/1     Running   0          2d23h
etcd-021rjsh216048s                      1/1     Running   0          2d23h
heapster-684777c4cb-qzz8f                1/1     Running   0          2d16h
kube-apiserver-021rjsh216048s            1/1     Running   0          2d23h
kube-controller-manager-021rjsh216048s   1/1     Running   1          2d23h
kube-proxy-5fgf9                         1/1     Running   0          2d23h
kube-proxy-hknws                         1/1     Running   0          2d23h
kube-proxy-qc6xj                         1/1     Running   0          2d23h
kube-scheduler-021rjsh216048s            1/1     Running   1          2d23h
kubernetes-dashboard-77fd78f978-pqdvw    1/1     Running   0          2d18h
monitoring-grafana-56b668bccf-tm2cl      1/1     Running   0          2d16h
monitoring-influxdb-5c5bf4949d-85d5c     1/1     Running   0          2d16h
weave-net-5fq89                          2/2     Running   0          2d23h
weave-net-flxgg                          2/2     Running   0          2d23h
weave-net-vvdkq                          2/2     Running   0          2d23h

$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE     IP              NODE             NOMINATED NODE
kube-system   coredns-576cbf47c7-ps2s2                 1/1     Running   0          2d23h   10.32.0.3       021rjsh216048s   <none>
kube-system   coredns-576cbf47c7-qsxdx                 1/1     Running   0          2d23h   10.32.0.2       021rjsh216048s   <none>
kube-system   etcd-021rjsh216048s                      1/1     Running   0          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   heapster-684777c4cb-qzz8f                1/1     Running   0          2d16h   10.44.0.2       021rjsh216049s   <none>
kube-system   kube-apiserver-021rjsh216048s            1/1     Running   0          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   kube-controller-manager-021rjsh216048s   1/1     Running   1          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   kube-proxy-5fgf9                         1/1     Running   0          2d23h   172.23.216.49   021rjsh216049s   <none>
kube-system   kube-proxy-hknws                         1/1     Running   0          2d23h   172.23.216.50   021rjsh216050s   <none>
kube-system   kube-proxy-qc6xj                         1/1     Running   0          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   kube-scheduler-021rjsh216048s            1/1     Running   1          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   kubernetes-dashboard-77fd78f978-pqdvw    1/1     Running   0          2d18h   10.36.0.1       021rjsh216050s   <none>
kube-system   monitoring-grafana-56b668bccf-tm2cl      1/1     Running   0          2d16h   10.44.0.1       021rjsh216049s   <none>
kube-system   monitoring-influxdb-5c5bf4949d-85d5c     1/1     Running   0          2d16h   10.36.0.2       021rjsh216050s   <none>
kube-system   weave-net-5fq89                          2/2     Running   0          2d23h   172.23.216.48   021rjsh216048s   <none>
kube-system   weave-net-flxgg                          2/2     Running   0          2d23h   172.23.216.50   021rjsh216050s   <none>
kube-system   weave-net-vvdkq                          2/2     Running   0          2d23h   172.23.216.49   021rjsh216049s   <none>
Check cluster health
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
其餘命令
# Show the Master's tokens
$ kubeadm token list | grep authentication,signing | awk '{print $1}'

# Compute the discovery-token-ca-cert-hash
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
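The hash pipeline above can be exercised without a cluster: generate a throwaway CA certificate with openssl and derive the hash the same way. The demo-ca.key/demo-ca.crt files here are self-generated stand-ins for /etc/kubernetes/pki/ca.crt, not real cluster material:

```shell
#!/bin/sh
# Create a throwaway CA certificate, then compute the same kind of value as
# kubeadm's discovery-token-ca-cert-hash: the SHA-256 digest of the
# certificate's DER-encoded public key.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout demo-ca.key -out demo-ca.crt -days 1 2>/dev/null

hash=$(openssl x509 -pubkey -in demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$hash"
```

Because the hash is derived purely from the CA's public key, any node that knows it can verify it is talking to the right cluster during kubeadm join.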
Deploy the Dashboard plugin
The latest official Dashboard release is v1.10.0, and the pull script above has already fetched this image.
Edit kubernetes-dashboard.yaml and add type: NodePort to the Dashboard Service to expose it.
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Note:
There are several ways to expose a service:
Install the plugin
# Install the Dashboard plugin
$ kubectl create -f kubernetes-dashboard.yaml

# Replace the configuration in place
$ kubectl replace --force -f kubernetes-dashboard.yaml
Note: use kubectl proxy for local access; access from outside the cluster requires an Ingress. Here NodePort is used for now.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl proxy
Now access Dashboard at: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.
Grant the Dashboard account cluster-admin permissions
Create a kubernetes-dashboard-admin ServiceAccount, grant it the cluster-admin role, and save the manifest as kubernetes-dashboard-admin.rbac.yaml.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
Apply it:
$ kubectl create -f kubernetes-dashboard-admin.rbac.yaml
or
$ kubectl apply -f https://raw.githubusercontent.com/batizhao/dockerfile/master/k8s/kubernetes-dashboard/kubernetes-dashboard-admin.rbac.yaml
Check the Dashboard service port
[root@kubernetes-master ~]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10    <none>        53/UDP,53/TCP   6h40m
kubernetes-dashboard   NodePort    10.98.73.56   <none>        443:30828/TCP   63m
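The NodePort (30828 above) can also be extracted from the PORT(S) column programmatically. A sketch that parses a pasted line from the output above, since this snippet runs outside the cluster:

```shell
#!/bin/sh
# For a NodePort service, the PORT(S) column has the form
# <port>:<nodePort>/<proto>; strip the cluster port and protocol.
# The sample line is copied from the kubectl output above.
line="kubernetes-dashboard   NodePort    10.98.73.56   <none>        443:30828/TCP   63m"

node_port=$(echo "$line" | awk '{print $5}' | sed 's/^[0-9]*://; s|/.*||')
echo "$node_port"
```

On a live cluster the same value comes straight from the API with kubectl get svc -n kube-system kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'.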
Get the kubernetes-dashboard-admin token
$ kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-4k82b   kubernetes.io/service-account-token   3      75m

$ kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-4k82b
Name:         kubernetes-dashboard-admin-token-4k82b
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: a904fbf5-d3aa-11e8-945d-0050569f4a19

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi00azgyYiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE5MDRmYmY1LWQzYWEtMTFlOC05NDVkLTAwNTA1NjlmNGExOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.DGajGHRfLmtFpCyoHKn4wS0ZHKALfwMgTUTjmGSzBM3u1rr4hF51KFWBVwBPCkFQ1e1A5v6ENdhCNUQ_b66XohehJqKdgF_OBx5MXe0den_XVquJVlQRHVssL2BW-MjLXccuJ4LrKf4Q7sjGOqr4ivd6D39Bqjv7e6BxFUGO6vRPFzAme5dbJ7u28_DJZ1RGgVz-ylz3wCRZC89bP_3qqd1RK5G-gF2--RPA3atoCfrTIPzynu-y3qLQl6EWtC-hYywGb1oJPRa1it7EqTsLXmuOHqR_9tpDfJwiN9oDcnjU0ZHe6ifLcHWwRRka5tuSnKD6S3iRgaM47xtQe8yn4A
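The token is a JWT: three dot-separated base64url segments, where the middle (payload) segment is JSON naming the ServiceAccount. A sketch of decoding it in the shell; a tiny made-up token is constructed here rather than pasting a real credential:

```shell
#!/bin/sh
# A service-account token looks like header.payload.signature; decode the
# payload to see which ServiceAccount and namespace it belongs to.
payload_json='{"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"kube-system"}'

# Build a made-up token with that payload (base64url alphabet, padding stripped).
seg=$(printf '%s' "$payload_json" | base64 | tr -d '=\n' | tr '+/' '-_')
token="header.$seg.signature"

# Decode: take the 2nd segment, restore the '=' padding, undo the URL-safe alphabet.
p=$(printf '%s' "$token" | cut -d. -f2)
case $(( ${#p} % 4 )) in 2) p="$p==" ;; 3) p="$p=" ;; esac
decoded=$(printf '%s' "$p" | tr '_-' '/+' | base64 -d)
echo "$decoded"
```

The same decoding applied to the second segment of the real token above shows the kubernetes-dashboard-admin ServiceAccount in the kube-system namespace.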
Deploy the Heapster plugin (it collects CPU, memory, and load statistics for Nodes and Pods; the official docs mark it as deprecated, so I did not install it)
mkdir -p ~/k8s/heapster
cd ~/k8s/heapster
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f ./
Note:
Heapster is deprecated and will be removed in Kubernetes 1.13 (https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md); metrics-server and Prometheus are the recommended replacements.
Finally, open https://172.23.216.48:30828 and enter the token above to log in. Newer Chrome versions enforce HTTPS certificate validation and may refuse the self-signed certificate, so use Firefox.
REFER:
https://kubernetes.io/docs/setup/independent/high-availability/
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/
https://opsx.alibaba.com/mirror?lang=zh-CN
https://jimmysong.io/posts/kubernetes-dashboard-upgrade/
https://blog.frognew.com/2018/10/kubeadm-install-kubernetes-1.12.html
https://github.com/opsnull/follow-me-install-kubernetes-cluster
https://github.com/kubernetes/examples
https://github.com/kubernetes/dashboard