Officially recommended method
Link: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
Run the recommended YAML:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
However, because Google is unreachable from mainland China, this approach usually fails with errors. The following method is recommended instead:
DaoCloud image registry: https://dashboard.daocloud.io/packages
1. Write the YAML files
dashboard.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        # The DaoCloud mirror is used here; the default image is
        # gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
        image: daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64:v1.5.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        # Change this to the address of your own master
        - --apiserver-host=http://192.168.50.131:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
dashboardsvc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
2. Create the Deployment and Service
Under normal circumstances, simply create the Deployment and Service:
kubectl create -f dashboard.yaml
kubectl create -f dashboardsvc.yaml
3. Verify
Run on the master:
[root@test03 pods]# kubectl get deploy --namespace=kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard-latest 1 1 1 1 45m
[root@test03 pods]# kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard 10.254.211.210 <none> 80/TCP 45m
[root@test03 pods]# kubectl get pod -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default nginx01 1/1 Running 0 8d 172.30.42.2 node01
default nginx03 1/1 Running 0 10h 172.30.53.2 node02
kube-system kubernetes-dashboard-latest-713129511-8p8gn 1/1 Running 0 46m 172.30.53.3 node02
Access it through the browser: http://192.168.50.131:8080/ui
PS: this redirects to http://192.168.50.131:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default
What is ultimately being accessed is http://172.30.53.3:9090/, i.e. the pod on the node.
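To see that the proxy really terminates at the pod, you can fetch the pod IP directly from any machine on the flannel network (the IP is the one from the listing above):
curl http://172.30.53.3:9090/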
4. Problems encountered
At this point the dashboard can be accessed via http://192.168.50.131:8080/ui, but there are a few pitfalls along the way.
(1) Images on gcr.io cannot be pulled from mainland China, so dashboard.yaml uses the DaoCloud mirror instead. It is worth running docker pull on each node first to confirm the image can actually be fetched.
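For example, on each node:
docker pull daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64:v1.5.1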
My environment was not very stable: node02 pulled the image successfully, node01 failed, and the newly created pod happened to land on the failing node.
[root@test03 pods]# kubectl get pod -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default nginx01 1/1 Running 0 7d 172.30.42.2 node01
default nginx03 1/1 Running 0 3h 172.30.53.2 node02
kube-system kubernetes-dashboard-latest-2290711670-q5kzx 0/1 ImagePullBackOff 0 34m 172.30.42.3 node01
This means pod creation failed. Use kubectl describe pods --namespace=kube-system to check the error events; the cause is usually an image pull failure.
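To narrow the output to the failing pod from the listing above:
kubectl describe pod kubernetes-dashboard-latest-2290711670-q5kzx --namespace=kube-system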
The image now needs to be present on node01 as well; the easiest way is to save it from node02 and load it on node01.
node02:
[root@test01 ~]# docker images
REPOSITORY                                                TAG      IMAGE ID       CREATED         SIZE
docker.io/nginx                                           latest   e548f1a579cf   3 weeks ago     108.6 MB
registry.access.redhat.com/rhel7/pod-infrastructure       latest   99965fb98423   5 months ago    208.6 MB
daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64   v1.5.1   1180413103fd   14 months ago   103.6 MB
ansible/centos7-ansible-tag                               latest   688353a31fde   15 months ago   447.2 MB
ansible/centos7-ansible                                   latest   688353a31fde   15 months ago   447.2 MB
docker.io/kubernetes/pause                                latest   f9d5de079539   3 years ago     239.8 kB
gcr.io/google_containers/pause                            2.0      f9d5de079539   3 years ago     239.8 kB
[root@test01 ~]# docker save daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64:v1.5.1 > bashborad.tar
[root@test01 ~]# ll
total 160592
-rw-------. 1 root root      1026 Feb  5 23:08 anaconda-ks.cfg
-rw-r--r--. 1 root root 103781888 Mar 15 00:20 bashborad.tar
-r--r--r--. 1 root root  60654199 Aug 27  2013 VMwareTools-9.6.0-1294478.tar.gz
scp bashborad.tar to node01 and load it there; running docker images afterwards shows the image has been imported.
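A sketch of the copy step (the destination user and path are assumptions):
scp bashborad.tar root@node01:/root/
Then on node01: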
docker load < bashborad.tar
At this point the image has been pulled successfully on every node; the Deployment and Service need to be destroyed and recreated.
Run on the master:
kubectl delete deployment kubernetes-dashboard-latest --namespace=kube-system
kubectl delete svc kubernetes-dashboard --namespace=kube-system
Then recreate:
kubectl create -f dashboard.yaml
kubectl create -f dashboardsvc.yaml
(2) Flannel network problem: installation succeeded, but the page cannot be reached
Accessing http://192.168.50.131:8080/ui gets no response.
Accessing http://192.168.50.131:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default
returns the following error:
Error: 'dial tcp 172.30.53.3:9090: getsockopt: connection timed out' Trying to reach: 'http://172.30.53.3:9090/'
From the message it is clear that 192.168.50.131 cannot reach 172.30.53.3:9090. Checking the master's interfaces with ifconfig showed no flannel network at all; it had probably not been restarted earlier. After systemctl restart flanneld, the flannel.1 interface is back:
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.30.10.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::e051:bfff:fefe:d009  prefixlen 64  scopeid 0x20<link>
        ether e2:51:bf:fe:d0:09  txqueuelen 0  (Ethernet)
        RX packets 1249  bytes 1625343 (1.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 481  bytes 42378 (41.3 KiB)
        TX errors 0  dropped 10  overruns 0  carrier 0  collisions 0
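With flannel.1 back up, a quick sanity check is to ping the pod IP from the master (the pod IP from this example):
ping -c 3 172.30.53.3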
(3) The image pulls successfully on the node, but installation still fails
In dashboard.yaml, the image must be written with an explicit version tag:
daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64:v1.5.1
If no version is specified, i.e. the image is written as daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64, Docker pulls the latest tag, i.e.
daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64:latest
But because we loaded the image offline, no latest tag exists locally, so the pull fails.
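You can confirm which tags actually exist on the node:
docker images daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64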
(4) How to avoid modifying dashboard.yaml
After loading the image, simply tag it as
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1. (To be expanded later.)
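The tagging step would look like this on each node:
docker tag daocloud.io/gfkchinanetquest/kubernetes-dashboard-amd64:v1.5.1 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1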
(5) A few days later, after rebooting the nodes, the dashboard could not be reached
The error:
Error: 'dial tcp 172.30.43.3:9090: getsockopt: connection timed out' Trying to reach: 'http://172.30.43.3:9090/'
172.30.43.3 is the pod's IP from before the reboot; the pod's IP has since changed, yet requests are still redirected to the old IP!
Searching turned up no relevant answers, so the plan was to delete the deploy and svc and recreate them.
Deleting the deploy failed with an error:
kubectl delete deploy kubernetes-dashboard-latest --namespace=kube-system
error: timed out waiting for the condition
The deploy could not be deleted!
Googling suggested checking the logs with
journalctl -u kube-controller-manager
and restarting the services:
service kube-apiserver restart
service kube-controller-manager restart
service kube-scheduler restart
That solved it. If it still fails, delete the pod first and then delete the deploy.
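A sketch of that fallback, with the pod name from the earlier listing standing in for whatever kubectl get pod reports in your cluster:
kubectl get pod --namespace=kube-system
kubectl delete pod kubernetes-dashboard-latest-713129511-8p8gn --namespace=kube-system
kubectl delete deploy kubernetes-dashboard-latest --namespace=kube-system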