Kubernetes Dashboard is a web UI for managing a k8s cluster. Its code is hosted on GitHub at https://github.com/kubernetes...
Access over HTTPS requires a certificate and key, which Kubernetes can provide through a TLS secret. Since this setup is only for my own use, I will create a self-signed certificate.
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout ./tls.key -out ./tls.crt -subj "/CN=18.16.202.163"
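Before moving on, it can be worth double-checking what was generated. A quick inspection of the certificate produced by the command above:

```shell
# Inspect the self-signed cert: the subject should show CN=18.16.202.163
# and the validity window should span roughly ten years (-days 3650)
openssl x509 -in ./tls.crt -noout -subject -dates
```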
This produces two files, tls.key and tls.crt; you can rename them or put them in a specific directory (if you are creating these for a public server, make sure they cannot be accessed by others). The 18.16.202.163 in the command above is my server's IP address; replace it with your own.
Next, create a Kubernetes secret from these two files. I name it hongda-com-tls-secret, which will be referenced later in the Ingress configuration. If you change this name, remember to update the corresponding yaml configuration as well.
kubectl -n kube-system create secret tls hongda-com-tls-secret --key ./tls.key --cert ./tls.crt
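If you want to confirm that the secret really carries the certificate, a quick round-trip sanity check is possible (note the `tls\.crt` jsonpath, which escapes the dot inside the data key):

```shell
# Decode the cert stored in the secret and diff it against the local file;
# empty diff output means the round-trip is intact
kubectl -n kube-system get secret hongda-com-tls-secret \
  -o jsonpath='{.data.tls\.crt}' | base64 --decode | diff - ./tls.crt
```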
Verify it:
kubectl get secret -n kube-system | grep hongda
hongda-com-tls-secret   kubernetes.io/tls   2      43s
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.hongda.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: hongda-com-tls-secret
      hosts:
        - k8s.hongda.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
rbac:
  clusterAdminRole: true
Compared to the default configuration, the following items were changed:
The ingress class is nginx, so that the Nginx Ingress Controller we installed reverse-proxies the Kubernetes Dashboard service. Since the Dashboard backend listens over HTTPS while the Nginx Ingress Controller forwards requests to backends over plain HTTP by default, the nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" annotation (the successor of the older secure-backends annotation) instructs the controller to forward requests to the backend over HTTPS.

helm install stable/kubernetes-dashboard \
  -n kubernetes-dashboard \
  --namespace kube-system \
  -f kubernetes-dashboard.yaml
[root@master /]# helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml
NAME:   kubernetes-dashboard
LAST DEPLOYED: Tue Aug  6 16:11:37 2019
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                  READY  UP-TO-DATE  AVAILABLE  AGE
kubernetes-dashboard  0/1    1           0          <invalid>

==> v1/Pod(related)
NAME                                   READY  STATUS             RESTARTS  AGE
kubernetes-dashboard-848b8dd798-gtddg  0/1    ContainerCreating  0         <invalid>

==> v1/Secret
NAME                  TYPE    DATA  AGE
kubernetes-dashboard  Opaque  0     <invalid>

==> v1/Service
NAME                  TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
kubernetes-dashboard  ClusterIP  10.108.244.10  <none>       443/TCP  <invalid>

==> v1/ServiceAccount
NAME                  SECRETS  AGE
kubernetes-dashboard  1        <invalid>

==> v1beta1/ClusterRoleBinding
NAME                  AGE
kubernetes-dashboard  <invalid>

==> v1beta1/Ingress
NAME                  HOSTS           ADDRESS  PORTS    AGE
kubernetes-dashboard  k8s.hongda.com           80, 443  <invalid>

NOTES:
*********************************************************************************
*** PLEASE BE PATIENT: kubernetes-dashboard may take a few minutes to install ***
*********************************************************************************
From outside the cluster, the server URL(s) are:
     https://k8s.hongda.com
[root@master /]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS             RESTARTS   AGE    IP              NODE      NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-gts57                1/1     Running            1          3d6h   10.244.2.2      slaver2   <none>           <none>
coredns-5c98db65d4-qhwrw                1/1     Running            1          3d6h   10.244.1.2      slaver1   <none>           <none>
etcd-master                             1/1     Running            2          3d6h   18.16.202.163   master    <none>           <none>
kube-apiserver-master                   1/1     Running            2          3d6h   18.16.202.163   master    <none>           <none>
kube-controller-manager-master          1/1     Running            6          3d6h   18.16.202.163   master    <none>           <none>
kube-flannel-ds-amd64-2lwl8             1/1     Running            0          3d1h   18.16.202.227   slaver1   <none>           <none>
kube-flannel-ds-amd64-9bjck             1/1     Running            0          3d1h   18.16.202.95    slaver2   <none>           <none>
kube-flannel-ds-amd64-gxxqg             1/1     Running            0          3d1h   18.16.202.163   master    <none>           <none>
kube-proxy-8cwj4                        1/1     Running            0          107m   18.16.202.163   master    <none>           <none>
kube-proxy-j9zpz                        1/1     Running            0          107m   18.16.202.227   slaver1   <none>           <none>
kube-proxy-vfgjv                        1/1     Running            0          107m   18.16.202.95    slaver2   <none>           <none>
kube-scheduler-master                   1/1     Running            6          3d6h   18.16.202.163   master    <none>           <none>
kubernetes-dashboard-64f97ccb4f-nbpkx   0/1     ImagePullBackOff   0          33m    10.244.0.4      master    <none>           <none>
tiller-deploy-6787c946f8-6b5tv          1/1     Running            0          44m    10.244.1.4      slaver1   <none>           <none>
Check the version available in the chart repository:
[root@master /]# helm search kubernetes-dashboard
NAME                         CHART VERSION   APP VERSION   DESCRIPTION
stable/kubernetes-dashboard  0.6.0           1.8.3         General-purpose web UI for Kubernetes clusters
It looks like a version mismatch: the latest app version in the Aliyun mirror is 1.8.3, while the Helm values specify 1.10.1, which is why the image could not be pulled.
[root@master /]# helm repo add stable http://mirror.azure.cn/kubernetes/charts/
"stable" has been added to your repositories
[root@master /]# helm search kubernetes-dashboard
NAME                         CHART VERSION   APP VERSION   DESCRIPTION
stable/kubernetes-dashboard  1.8.0           1.10.1        General-purpose web UI for Kubernetes clusters
After switching repositories and reinstalling, the same problem persists. Inspect the pod:
[root@master /]# kubectl get namespace
NAME              STATUS   AGE
default           Active   3d8h
ingress-nginx     Active   152m
kube-node-lease   Active   3d8h
kube-public       Active   3d8h
kube-system       Active   3d8h
[root@master /]# kubectl describe pod kubernetes-dashboard-7ffdf885d6-t4htt -n kube-system
Name:           kubernetes-dashboard-7ffdf885d6-t4htt
Namespace:      kube-system
Priority:       0
Node:           master/18.16.202.163
Start Time:     Wed, 31 Jul 2019 16:46:40 +0800
Labels:         app=kubernetes-dashboard
                kubernetes.io/cluster-service=true
                pod-template-hash=7ffdf885d6
                release=kubernetes-dashboard
Annotations:    <none>
Status:         Pending
IP:             10.244.0.20
Controlled By:  ReplicaSet/kubernetes-dashboard-7ffdf885d6
Containers:
  kubernetes-dashboard:
    Container ID:
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
    Image ID:
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        100m
      memory:     50Mi
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-pph4g (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-pph4g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-pph4g
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  node-role.kubernetes.io/edge=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node-role.kubernetes.io/master:PreferNoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m47s                default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-7ffdf885d6-t4htt to master
  Normal   Pulling    89s (x4 over 3m45s)  kubelet, master    Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3"
  Warning  Failed     74s (x4 over 3m30s)  kubelet, master    Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     74s (x4 over 3m30s)  kubelet, master    Error: ErrImagePull
  Normal   BackOff    61s (x6 over 3m30s)  kubelet, master    Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3"
  Warning  Failed     46s (x7 over 3m30s)  kubelet, master    Error: ImagePullBackOff
The image is clearly being pulled from the damn k8s.gcr.io registry, which is unreachable from here. The workaround is to pull the same version from Docker Hub and retag it:
docker pull sacred02/kubernetes-dashboard-amd64:v1.10.1
docker tag sacred02/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi sacred02/kubernetes-dashboard-amd64:v1.10.1
helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml
[root@master /]# helm ls
NAME                  REVISION  UPDATED                   STATUS    CHART                       APP VERSION  NAMESPACE
kubernetes-dashboard  1         Wed Jul 31 17:11:35 2019  DEPLOYED  kubernetes-dashboard-1.8.0  1.10.1       kube-system
nginx-ingress         1         Wed Jul 31 13:59:14 2019  DEPLOYED  nginx-ingress-1.11.5        0.25.0       ingress-nginx
Check the pods and services:
[root@master /]# kubectl get po,svc --all-namespaces -o wide
NAMESPACE       NAME                                                 READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
default         pod/curl-6bf6db5c4f-vhsqc                            1/1     Running   1          10d   10.244.2.3      slaver2   <none>           <none>
ingress-nginx   pod/nginx-ingress-controller-b89575c7f-2xtkk         1/1     Running   0          26m   18.16.202.163   master    <none>           <none>
ingress-nginx   pod/nginx-ingress-default-backend-7b8b45bd49-g4mbz   1/1     Running   0          26m   10.244.0.23     master    <none>           <none>
kube-system     pod/coredns-5c98db65d4-gts57                         1/1     Running   7          11d   10.244.2.2      slaver2   <none>           <none>
kube-system     pod/coredns-5c98db65d4-qhwrw                         1/1     Running   6          11d   10.244.1.2      slaver1   <none>           <none>
kube-system     pod/etcd-master                                      1/1     Running   4          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kube-apiserver-master                            1/1     Running   4          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kube-controller-manager-master                   1/1     Running   8          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kube-flannel-ds-amd64-2lwl8                      1/1     Running   0          11d   18.16.202.227   slaver1   <none>           <none>
kube-system     pod/kube-flannel-ds-amd64-9bjck                      1/1     Running   0          11d   18.16.202.95    slaver2   <none>           <none>
kube-system     pod/kube-flannel-ds-amd64-gxxqg                      1/1     Running   3          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kube-proxy-8cwj4                                 1/1     Running   3          8d    18.16.202.163   master    <none>           <none>
kube-system     pod/kube-proxy-j9zpz                                 1/1     Running   0          8d    18.16.202.227   slaver1   <none>           <none>
kube-system     pod/kube-proxy-vfgjv                                 1/1     Running   0          8d    18.16.202.95    slaver2   <none>           <none>
kube-system     pod/kube-scheduler-master                            1/1     Running   8          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kubernetes-dashboard-848b8dd798-gtddg            1/1     Running   0          40s   10.244.0.24     master    <none>           <none>
kube-system     pod/tiller-deploy-6787c946f8-6b5tv                   1/1     Running   0          8d    10.244.1.4      slaver1   <none>           <none>

NAMESPACE       NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
default         service/kubernetes                      ClusterIP      10.96.0.1        <none>        443/TCP                      11d   <none>
ingress-nginx   service/nginx-ingress-controller        LoadBalancer   10.111.25.193    <pending>     80:31577/TCP,443:31246/TCP   26m   app=nginx-ingress,component=controller,release=nginx-ingress
ingress-nginx   service/nginx-ingress-default-backend   ClusterIP      10.106.126.222   <none>        80/TCP                       26m   app=nginx-ingress,component=default-backend,release=nginx-ingress
kube-system     service/kube-dns                        ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       11d   k8s-app=kube-dns
kube-system     service/kubernetes-dashboard            ClusterIP      10.108.244.10    <none>        443/TCP                      40s   app=kubernetes-dashboard,release=kubernetes-dashboard
kube-system     service/tiller-deploy                   ClusterIP      10.98.116.74     <none>        44134/TCP                    8d    app=helm,name=tiller
[root@master /]# kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-4v624   kubernetes.io/service-account-token   3      5m42s
[root@master /]# kubectl describe -n kube-system secret/kubernetes-dashboard-token-4v624
Name:         kubernetes-dashboard-token-4v624
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 6688cc3b-5f28-4e38-a37a-67c0927752ab

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi00djYyNCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY2ODhjYzNiLTVmMjgtNGUzOC1hMzdhLTY3YzA5Mjc3NTJhYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.Wq6xvzLSJNnt9Zg9u5J-85RB0-Slf6HMFfHzNwDGJDn3Yc2lfxL88YXi0ForX4Q9F0v96nt_GNKOm6DB8FGoKR3cALeWpeuoXSSY_ryY8tj6KFN1mrOlvVnRRgsk_lReOxLZexvR58OQ7N04pDrZ6Okr3PDB22i-31xPaVPBt6BhZU5ee6VZyXr7y3pj8VAJSki7tnr7ZRlG6WJizrMf25sZ9xdznwcGJ7yGz2gD3moYhNKQa5KPwcLOGTfg3GuLUNoQjdz5wUmvx4X2YMhfj6Fx7I3mZzr9whrfhO2PWuNtFheaKscSg2UyIPH5Zav9WTSzXxDedORh8BjX3cUJcQ
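Copying the token out of the describe output by hand is error-prone. A sketch of a one-liner that prints just the decoded token (the secret name suffix differs per cluster, hence the pattern match):

```shell
# Find the dashboard ServiceAccount token secret and print the decoded token;
# .data.token in a service-account-token secret is base64-encoded
SECRET=$(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
```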
Now test access to k8s.hongda.com:
[root@master /]# ping k8s.hongda.com
PING k8s.hongda.com (13.209.58.121) 56(84) bytes of data.
From 18.16.202.169 (18.16.202.169): icmp_seq=2 Redirect Network(New nexthop: 18.16.202.1 (18.16.202.1))
From 18.16.202.169 (18.16.202.169): icmp_seq=3 Redirect Network(New nexthop: 18.16.202.1 (18.16.202.1))
^C
--- k8s.hongda.com ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2002ms
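The ping output shows the underlying problem: k8s.hongda.com is a made-up hostname, so public DNS resolves it to an unrelated address (13.209.58.121). A minimal fix, assuming the Nginx Ingress Controller runs on the master node (18.16.202.163, as shown in the pod listing above), is a local hosts entry on whatever machine the browser runs on:

```shell
# Map the dashboard hostname to the node hosting the ingress controller
# (run as root); skip if an entry already exists
grep -q 'k8s.hongda.com' /etc/hosts || \
  echo '18.16.202.163 k8s.hongda.com' >> /etc/hosts
```

After this, https://k8s.hongda.com should reach the Ingress instead of the bogus public address.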