Table of Contents
Installing K8S in China without a VPN, Part 1: Install Docker
Installing K8S in China without a VPN, Part 2: Install Kubernetes
Installing K8S in China without a VPN, Part 3: Install kubernetes-dashboard with Helm
Installing K8S in China without a VPN, Part 4: Problems encountered during installation and their solutions
This article follows the blog of 青蛙小白 step by step (no problems along the way):
https://blog.frognew.com/2019/07/kubeadm-install-kubernetes-1.15.html
Install the Helm client:

$ curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
$ tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
$ cd linux-amd64/
$ cp helm /usr/local/bin/
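A quick sanity check that the client binary is on the PATH; only the client version can be printed until Tiller is deployed in a later step:

$ helm version --client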
Create the helm-rbac.yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Create the service account tiller that Tiller will use and grant it a suitable role:
$ kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Deploy Tiller with helm:
helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
By default, Tiller is deployed into the kube-system namespace of the cluster:
kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s

helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Note that for well-known reasons this step needs network access to gcr.io and kubernetes-charts.storage.googleapis.com. If they are unreachable, you can point helm init at a Tiller image in a private registry, i.e. helm init --service-account tiller --tiller-image <private-registry>/tiller:v2.13.1 --skip-refresh, for example:
helm init --service-account tiller --tiller-image gcr.azk8s.cn/kubernetes-helm/tiller:v2.14.1 --skip-refresh
What if you missed that step? Run "kubectl edit deployment tiller-deploy -n kube-system" and change the default gcr.io image to a reachable mirror; the same approach works for any other gcr.io image.
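Editing works, but the image can also be swapped in a single command with kubectl set image; a minimal sketch, assuming the container inside the tiller-deploy Deployment is named tiller (the default created by helm init):

$ kubectl -n kube-system set image deployment/tiller-deploy \
    tiller=gcr.azk8s.cn/kubernetes-helm/tiller:v2.14.1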
Finally, on node1, switch the helm chart repository to the mirror provided by Azure:
helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories

helm repo list
NAME    URL
stable  http://mirror.azure.cn/kubernetes/charts
local   http://127.0.0.1:8879/charts
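To confirm the mirrored stable repo is usable, you can search it for the chart we are about to install (Helm 2 syntax):

$ helm search nginx-ingress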
We will use kub1 (192.168.15.174) as the edge node and give it a label:
$ kubectl label node kub1 node-role.kubernetes.io/edge=
node/kub1 labeled

$ kubectl get node
NAME   STATUS   ROLES         AGE     VERSION
kub1   Ready    edge,master   6h43m   v1.15.2
kub2   Ready    <none>        6h36m   v1.15.2
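For reference, if you later want to move the edge role to another node, the label can be removed with the standard trailing-dash syntax:

$ kubectl label node kub1 node-role.kubernetes.io/edge-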
The values file ingress-nginx.yaml for the stable/nginx-ingress chart is as follows:
controller:
  replicaCount: 1
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - nginx-ingress
              - key: component
                operator: In
                values:
                  - controller
          topologyKey: kubernetes.io/hostname
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: PreferNoSchedule

defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: PreferNoSchedule
Install nginx-ingress:
$ helm repo update

$ helm install stable/nginx-ingress \
    -n nginx-ingress \
    --namespace ingress-nginx \
    -f ingress-nginx.yaml
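Before testing in a browser you can watch the controller and default-backend pods come up; the ingress-nginx namespace and the nginx-ingress release name here are simply the ones used in the install command above:

$ kubectl get pods -n ingress-nginx -o wide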
If visiting http://192.168.15.174 returns default backend, the deployment is complete.
If the default backend's pod cannot pull its image, fix it the same way as the missing Tiller image above; I won't repeat the steps here.
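For completeness, a hedged sketch of that fix: first find which image the pod is failing to pull, then edit the Deployment and point it at a reachable mirror. The label selector and Deployment name below follow the chart's usual naming and are assumptions; adjust them to whatever your cluster actually shows:

$ kubectl -n ingress-nginx describe pod -l app=nginx-ingress,component=default-backend
$ kubectl -n ingress-nginx edit deployment nginx-ingress-default-backend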
Next, install the dashboard. The values file kubernetes-dashboard.yaml:
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
        - k8s.frognew.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
rbac:
  clusterAdminRole: true
Note the hosts entries above. Since I am only testing on a LAN, I simply deleted both hosts sections; after the install, accessing the dashboard by IP works just the same.
$ helm install stable/kubernetes-dashboard \
    -n kubernetes-dashboard \
    --namespace kube-system \
    -f kubernetes-dashboard.yaml
Find the dashboard service account's token, which is used to log in:

$ kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-5d5b2   kubernetes.io/service-account-token   3      4h24m

$ kubectl describe -n kube-system secret/kubernetes-dashboard-token-5d5b2
Name:         kubernetes-dashboard-token-5d5b2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 82c89647-1a1c-450f-b2bb-8753de12f104

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi01ZDViMiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjgyYzg5NjQ3LTFhMWMtNDUwZi1iMmJiLTg3NTNkZTEyZjEwNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.UF2Fnq-SnqM3oAIwJFvXsW64SAFstfHiagbLoK98jWuyWDPoYyPQvdB1elRsJ8VWSzAyTyvNw2MD9EgzfDdd9_56yWGNmf4Jb6prbA43PE2QQHW69kLiA6seP5JT9t4V_zpjnhpGt0-hSfoPvkS4aUnJBllldCunRGYrxXq699UDt1ah4kAmq5MqhH9l_9jMtcPwgpsibBgJY-OD8vElITv63fP4M16DFtvig9u0EnIwhAGILzdLSkfwBJzLvC_ukii_2A9e-v2OZBlTXYgNQ1MnS7CvU8mu_Ycoxqs0r1kZ4MjlNOUOt6XFjaN8BlPwfEPf2VNx0b1ZgZv-euQQtA
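If you prefer not to copy the token out of the describe output, a one-liner can print only the decoded token; a sketch that simply matches the secret name prefix shown above:

$ kubectl -n kube-system get secret \
    $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}') \
    -o jsonpath='{.data.token}' | base64 -d; echo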
Use the token above to log in at the dashboard login screen.
Open https://192.168.15.174, choose the Token sign-in method, and paste in the token above.
Finally, deploy metrics-server. The values file metrics-server.yaml:
args:
  - --logtostderr
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
$ helm install stable/metrics-server \
    -n metrics-server \
    --namespace kube-system \
    -f metrics-server.yaml
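metrics-server registers itself as an aggregated API, so a quick way to confirm it is serving before running kubectl top is to check that its APIService reports Available:

$ kubectl get apiservice v1beta1.metrics.k8s.io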
Basic metrics for the cluster's nodes and pods can now be retrieved with the following commands:
$ kubectl top node
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kub1   433m         5%     2903Mi          37%
kub2   101m         1%     1446Mi          18%

$ kubectl top pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
coredns-5c98db65d4-7n4gm                7m           14Mi
coredns-5c98db65d4-s5zfr                7m           14Mi
etcd-kub1                               49m          72Mi
kube-apiserver-kub1                     61m          219Mi
kube-controller-manager-kub1            36m          47Mi
kube-flannel-ds-amd64-mssbt             5m           17Mi
kube-flannel-ds-amd64-pb4dz             5m           15Mi
kube-proxy-hc4kh                        1m           17Mi
kube-proxy-rp4cx                        1m           18Mi
kube-scheduler-kub1                     3m           15Mi
kubernetes-dashboard-77f9fd6985-ctwmc   1m           23Mi
metrics-server-75bfbbbf76-6blkn         4m           17Mi
tiller-deploy-7dd9d8cd47-ztl7w          1m           12Mi