Before we start, it is recommended that you set up a virtual machine or rent a Linux cloud server. The focus this time is hands-on practice: 18 practical tasks have been prepared for you.
First, let's review Docker, and give those who haven't installed it yet a chance to set up the environment.
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
Next, configure the Docker daemon (registry mirrors, cgroup driver, log rotation):
vim /etc/docker/daemon.json
{ "registry-mirrors": [ "https://dockerhub.azk8s.cn", "https://reg-mirror.qiniu.com" ], "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "insecure-registries": ["0.0.0.0/0"], "storage-driver": "overlay2" }
Start a container running the web version of 2048:
docker run -d -P daocloud.io/daocloud/dao-2048
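A quick way to verify it is running — a minimal check, keeping in mind that -P maps the exposed port to a random host port, so read it from docker ps first:
docker ps --filter ancestor=daocloud.io/daocloud/dao-2048   # note the mapped host port
curl -I http://127.0.0.1:<host-port>                        # substitute the port shown above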
Start your own image registry service (note: docker service requires swarm mode, so run docker swarm init first if this host is not yet a swarm manager):
docker service create --name registry --publish 5000:5000 registry:2
Pull an Nginx image and push it to your own registry:
docker pull nginx
docker tag nginx 127.0.0.1:5000/nginx
docker push 127.0.0.1:5000/nginx
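To confirm the push worked, the registry's catalog endpoint can be queried (part of the standard Docker Registry HTTP API v2):
curl http://127.0.0.1:5000/v2/_catalog   # expect: {"repositories":["nginx"]}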
This time, let's build a web app based on OpenAPI:
pip3 install connexion flask_cors swagger-ui-bundle -i https://mirrors.aliyun.com/pypi/simple/
openapi: "3.0.0" info: title: Hello Pet version: "1.0" servers: - url: http://localhost:9090/v1.0 paths: /pets/{pet_id}: get: description: Returns pets based on ID summary: Find pets by ID operationId: app.get_pets_by_id responses: '200': description: pet response content: 'application/json' : schema: type: object $ref: '#/components/schemas/Pet' parameters: - name: pet_id in: path description: ID of pet to use required: true schema: type: integer components: schemas: Pet: type: object required: - petType properties: petType: type: string discriminator: propertyName: petType mapping: dog: Dog cat: Cat lizard: Lizard Cat: allOf: - $ref: '#/components/schemas/Pet' - type: object
app.py:
import connexion
from flask_cors import CORS

def get_pets_by_id(pet_id: int) -> list:
    return [{
        'name': 'lucky',
        'petType': 'dog',
        'bark': 'woof!'
    }, {
        'name': 'kitty',
        'petType': 'cat',
        'meow': 'meow!'
    }, {
        'name': 'jack',
        'petType': 'lizard',
        'loveRocks': True
    }][pet_id % 3]

if __name__ == '__main__':
    app = connexion.FlaskApp(__name__, port=9090, specification_dir='./')
    app.add_api('spec.yaml', arguments={'title': 'Simple Pet'})
    CORS(app.app)
    app.run()
python app.py
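Once the server is up, the endpoint defined in the spec can be exercised (a quick smoke test; pet_id is taken modulo 3, so any integer works):
curl http://localhost:9090/v1.0/pets/1   # expect the cat entry
The generated Swagger UI should also be available at http://localhost:9090/v1.0/ui/.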
Building on the previous task, let's package the Pet app as an image:
FROM python:3.8-alpine
COPY ./ /opt/app
RUN pip3 install connexion swagger-ui-bundle flask_cors -i https://mirrors.aliyun.com/pypi/simple/
ENV PYTHONPATH /opt/app
WORKDIR /opt/app
CMD ["python", "app.py"]
docker build -t petapp -f ./Dockerfile .
docker run -p 9090:9090 petapp   # publish the app's port 9090 so it is reachable from the host
Next, let's publish the Pet app by pushing it to the private registry we started earlier.
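A minimal sketch, following the same tag-and-push flow as the nginx task above:
docker tag petapp 127.0.0.1:5000/petapp
docker push 127.0.0.1:5000/petapp
curl http://127.0.0.1:5000/v2/_catalog   # petapp should now appear in the catalog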
Try managing containers with Portainer:
docker run -d -p 9000:9000 \
    --name portainer --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    portainer/portainer
Next, let's review Kubernetes, starting with its basic components: on the control plane, kube-apiserver, etcd, kube-scheduler, and kube-controller-manager; on every node, kubelet, kube-proxy, and the container runtime.
Now let's install Kubernetes with kubeadm.
Add the machines to /etc/hosts on every node:
172.19.0.21 debian-21
172.19.0.22 debian-22
172.19.0.23 debian-23
apt install openssh-server
vim /etc/ssh/sshd_config
PermitRootLogin yes
Then restart sshd so the change takes effect:
systemctl restart ssh
Enable IP forwarding:
vim /etc/sysctl.conf
net.ipv4.ip_forward=1
sysctl --system
Disable swap (kubelet refuses to run with swap enabled). Comment out the swap line in /etc/fstab so it stays off across reboots, then turn it off immediately:
vim /etc/fstab
swapoff -a
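To confirm swap is fully off (the Swap line should read all zeros):
free -h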
Install the container runtime (Docker) on every node — for the details, see the Docker section above.
Install the control plane using kubeadm:
apt-get update && apt-get install -y apt-transport-https curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - cat <<EOF >/etc/apt/sources.list.d/kubernetes.list deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main EOF apt-get update apt-get install -y kubelet kubeadm kubectl
Since k8s.gcr.io is not reachable from mainland China, pre-pull the control-plane images from the Aliyun mirror and re-tag them with the following script:
images=(
  kube-apiserver:v1.17.0
  kube-controller-manager:v1.17.0
  kube-scheduler:v1.17.0
  kube-proxy:v1.17.0
  pause:3.1
  etcd:3.4.3-0
  coredns:1.6.5
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
kubeadm init --apiserver-advertise-address 172.19.0.31 --pod-network-cidr 10.30.0.0/16 --service-cidr 10.31.0.0/16
At this point everything should go fairly smoothly; part of the output looks like this:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.19.0.31:6443 --token ay1y19.hyemvesfsl66wiic \
    --discovery-token-ca-cert-hash sha256:b98a644c2f163ec19996599856b2d9b537ec78a0e61d920251239ec3131e5430
On the master node, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On each worker node, run (after the node itself has been set up):
kubeadm join 172.19.0.31:6443 --token ay1y19.hyemvesfsl66wiic \
    --discovery-token-ca-cert-hash sha256:b98a644c2f163ec19996599856b2d9b537ec78a0e61d920251239ec3131e5430
Deploy the flannel pod network:
kubectl apply -f flannel.yaml
flannel.yaml:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.30.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
The manifest continues with four more DaemonSets — kube-flannel-ds-arm64, kube-flannel-ds-arm, kube-flannel-ds-ppc64le, and kube-flannel-ds-s390x — identical to the amd64 one above except for the beta.kubernetes.io/arch selector value and the matching image tag (quay.io/coreos/flannel:v0.11.0-arm64, -arm, -ppc64le, and -s390x respectively).
flannel images via a domestic (China) mirror:
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0 quay.io/coreos/flannel:v0.11.0-amd64
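Once the network addon is applied, a quick way to watch the cluster come up (nodes stay NotReady until the flannel pods are running):
kubectl get nodes
kubectl get pods -n kube-system -w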
Part of the worker-node setup is the same as on the master node, e.g. installing and configuring the container runtime and installing kubelet.
Deploy petapp to the k8s cluster.
(Extension: put it behind nginx, scale to 4 pods, and create a NodePort Service — see the sketch below; the nginx part can reuse the ingress setup shown later.)
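A minimal sketch, assuming the petapp image pushed earlier is pullable by every node (the address 127.0.0.1:5000 only resolves to the registry if each node can reach it locally; otherwise re-tag against a reachable registry host):
kubectl create deployment petapp --image=127.0.0.1:5000/petapp
kubectl scale deployment petapp --replicas=4
kubectl expose deployment petapp --type=NodePort --port=9090
kubectl get svc petapp   # note the allocated NodePort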
Run a single nginx pod directly:
kubectl run --generator=run-pod/v1 nginx --image=nginx --image-pull-policy=IfNotPresent
Or create it as a deployment:
kubectl create deployment --image=nginx nginx
# deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
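To check the rollout (both replicas should reach Running):
kubectl get deployments nginx-deployment
kubectl get pods -l app=nginx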
When a Pod is assigned to a node, the emptyDir volume is created first, and it exists as long as the Pod is running on that node. As its name suggests, the volume is initially empty. The containers in the Pod may mount the emptyDir volume at the same or at different paths, but they can all read and write the same files in it. When the Pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
Set up NFS (Network File System)
Install the packages:
sudo apt-get install nfs-kernel-server nfs-common
Create the shared directory:
sudo mkdir -p /var/nfsshare/xxx
sudo chmod -R 777 /var/nfsshare/xxx
Add to /etc/exports:
/var/nfsshare/xxx xxx.xxx.xxx.*(rw,sync,no_root_squash,no_all_squash)
Or something similar:
/var/nfsshare/xxx *(rw,sync,no_root_squash,no_all_squash)
Start the service:
/etc/init.d/nfs-kernel-server restart
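To check that the directory is exported:
showmount -e localhost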
Binding PVCs to PVs
hostPath mount:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data
  namespace: data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
  namespace: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: "data"
Mounting with an NFS volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data
  namespace: data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 172.19.0.11
    path: "/var/nfsshare/data"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
  namespace: data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: "data"
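After applying, both objects should show STATUS Bound:
kubectl get pv,pvc -n data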
Installing the nginx ingress controller
Here we use a standalone ingress controller; nginx-ingress will do. The nginx ingress controller can be deployed in several ways (e.g. as plain pods or as a daemonset), and it can also be installed with helm.
Here is the daemonset-based deployment using the raw manifests (the paths below follow the layout of the nginxinc/kubernetes-ingress repository's deployments directory):
kubectl apply -f common/ns-and-sa.yaml
kubectl apply -f common/default-server-secret.yaml
kubectl apply -f common/nginx-config.yaml
kubectl apply -f rbac/rbac.yaml
kubectl apply -f daemon-set/nginx-ingress.yaml
For custom settings, edit common/nginx-config.yaml, for example:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-connect-timeout: "10s"
  proxy-read-timeout: "10s"
  client-max-body-size: "100m"
Direct traffic to the designated app based on the incoming request — the ingress below routes by URL path:
apple-app
kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
    - name: apple-app
      image: hashicorp/http-echo
      args:
        - "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
    - port: 5678 # Default port for image
banana-app
kind: Pod
apiVersion: v1
metadata:
  name: banana-app
  labels:
    app: banana
spec:
  containers:
    - name: banana-app
      image: hashicorp/http-echo
      args:
        - "-text=banana"
---
kind: Service
apiVersion: v1
metadata:
  name: banana-service
spec:
  selector:
    app: banana
  ports:
    - port: 5678 # Default port for image
kubectl apply -f apple.yaml
kubectl apply -f banana.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
      - path: /banana
        backend:
          serviceName: banana-service
          servicePort: 5678
kubectl apply -f ingress.yaml
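A hypothetical smoke test, assuming the ingress controller daemonset exposes port 80 on each node's host network:
curl http://<node-ip>/apple    # expect: apple
curl http://<node-ip>/banana   # expect: banana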
Installing metrics-server
git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server/
kubectl create -f deploy/1.8+/
As before, pull the image via the Aliyun mirror and re-tag it to match the manifest (k8s.gcr.io/metrics-server-amd64:v0.3.6):
images=(
  metrics-server-amd64:v0.3.6
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
Before deploying, the arguments need adjusting; in the metrics-server deployment manifest, find:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.6
  args:
    - --cert-dir=/tmp
    - --secure-port=4443
and be sure to add the following args (so metrics-server can talk to kubelets without verifiable certificates):
    - --kubelet-insecure-tls=true
    - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
Installing the kube dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
Observe the platform's metrics with kubectl top and the kube dashboard.
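With metrics-server running, these should return live usage numbers:
kubectl top nodes
kubectl top pods -A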
To log in to the dashboard, create an admin ServiceAccount and bind it to the cluster-admin role.

admin-user-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
admin-user-crb.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Get the secret token:
kubectl -n kubernetes-dashboard describe secret `kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}'`
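The dashboard can then be reached through kubectl proxy (the URL below is the standard proxy path for the recommended deployment; paste the token at the login screen):
kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/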
Create a new worker node and join it to the kube cluster, configuring TLS by hand.
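If the original bootstrap token has expired (tokens are valid for 24 hours by default), a fresh join command — including the CA cert hash — can be printed on the master:
kubeadm token create --print-join-command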