Host list:
Hostname  CentOS version  IP               Docker version  flannel version  Spec  k8s version
master    7.8.2003        192.168.214.128  19.03.13        v0.13.0-rc2      2C2G  v1.19.8
node01    7.8.2003        192.168.214.129  19.03.13        /                2C2G  v1.19.8
node02    7.8.2003        192.168.214.130  19.03.13        /                2C2G  v1.19.8
There are three servers in total: one master and two nodes.
# Stop and disable the firewall
[root@centos7 ~] systemctl stop firewalld && systemctl disable firewalld
# Permanently disable SELinux
[root@centos7 ~] sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config;cat /etc/selinux/config
# Temporarily disable SELinux
[root@centos7 ~] setenforce 0
Run the corresponding command on each of the three machines:
[root@centos7 ~] hostnamectl set-hostname master
[root@centos7 ~] hostnamectl set-hostname node01
[root@centos7 ~] hostnamectl set-hostname node02
Log out and log back in for the prompt to show the newly set hostname.
[root@master ~] cat >> /etc/hosts << EOF
192.168.214.128 master
192.168.214.129 node01
192.168.214.130 node02
EOF
[root@master ~] cat /sys/class/dmi/id/product_uuid
Make sure each node's MAC address and product UUID are unique.
[root@master ~] swapoff -a
To keep swap disabled after a reboot, also edit the /etc/fstab configuration file and comment out the swap entry:
[root@master ~] sed -i.bak '/swap/s/^/#/' /etc/fstab
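A quick way to confirm swap is really off is free; after swapoff the Swap line should show all zeros:

[root@master ~] free -m | grep -i swap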
This article uses flannel for the k8s network, which requires the kernel parameter bridge-nf-call-iptables=1.
[root@master ~] sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~] sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
[root@master ~] cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master ~] sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
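If sysctl complains that these keys do not exist, the br_netfilter module is probably not loaded yet; loading it (and persisting it across reboots) is the usual fix — a minimal sketch:

[root@master ~] modprobe br_netfilter
[root@master ~] echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # reload the module on boot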
[root@master ~] cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[] The string inside the square brackets is the repository id; it must be unique and identifies the repository
name Repository name, free-form
baseurl Repository URL
enabled Whether the repository is enabled; 1 (the default) means enabled
gpgcheck Whether to verify the signatures of packages obtained from this repository; 1 means verify
repo_gpgcheck Whether to verify the metadata (that is, the package list); 1 means verify
gpgkey=URL Location of the public key file used for signature checking; required when gpgcheck is 1, unnecessary when gpgcheck is 0
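Once the repo file is in place, you can confirm yum sees it before rebuilding the cache:

[root@master ~] yum repolist | grep kubernetes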
[root@master ~] yum clean all
[root@master ~] yum -y makecache
Run this part on both the master and the node machines.
[root@master ~] yum install -y yum-utils device-mapper-persistent-data lvm2
Set up the Docker package repository
# The default repo is hosted overseas; not recommended from within China
[root@master ~] yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# Recommended: use the Aliyun mirror instead
[root@master ~] yum-config-manager \
    --add-repo \
    https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~] yum list docker-ce --showduplicates | sort -r
[root@master ~] yum install docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io -y
This pins the installed Docker version to 19.03.13.
[root@master ~] systemctl start docker
[root@master ~] systemctl enable docker
[root@master ~] yum -y install bash-completion
[root@master ~] source /etc/profile.d/bash_completion.sh
Because Docker Hub's servers are overseas, pulling images can be slow, so it helps to configure a registry mirror. The main accelerators are the official Docker China registry mirror, the Aliyun accelerator, and the DaoCloud accelerator; this article uses the Aliyun accelerator as the example.
Log in at https://account.aliyun.com (an Alipay account works) to obtain your personal accelerator address.
Configure the daemon.json file:
[root@master ~] mkdir -p /etc/docker
[root@master ~] tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://23h04een.mirror.aliyuncs.com"]
}
EOF
Restart the service:
[root@master ~] systemctl daemon-reload
[root@master ~] systemctl restart docker
The accelerator is now configured.
[root@master ~] docker --version
[root@master ~] docker run hello-world
Checking the Docker version and running the hello-world container verifies that Docker was installed successfully.
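To double-check that the accelerator is actually in use, docker info lists the configured registry mirrors:

[root@master ~] docker info | grep -A1 "Registry Mirrors"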
Modify daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:
[root@master ~] vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://23h04een.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master ~] systemctl daemon-reload
[root@master ~] systemctl restart docker
Changing the cgroup driver eliminates this kubeadm warning:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
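After the restart, docker info should report the new driver:

[root@master ~] docker info | grep -i "cgroup driver"
 Cgroup Driver: systemd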
Run this part on both the master and the node machines.
[root@master ~] yum list kubelet --showduplicates | sort -r
The kubelet version installed in this article is 1.19.8; make sure the version you pick from the list above is compatible with the Docker version installed earlier (19.03.13 here).
[root@master ~] yum install -y kubelet-1.19.8 kubeadm-1.19.8 kubectl-1.19.8
kubelet runs on every node in the cluster and is responsible for starting Pods, containers, and other objects
kubeadm the command-line tool for initializing and bootstrapping the cluster
kubectl the command line for talking to the cluster; with kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components
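A quick sanity check that all three tools landed at the same version (flags as in the 1.19-era binaries):

[root@master ~] kubeadm version -o short
[root@master ~] kubectl version --client --short
[root@master ~] kubelet --version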
Start kubelet and enable it at boot:
[root@master ~] systemctl enable kubelet && systemctl start kubelet
[root@master ~] echo "source <(kubectl completion bash)" >> ~/.bash_profile [root@master ~] source .bash_profile
On the master, kubelet needs to be running without errors before you continue; on the nodes the status will not yet be active at this point, which is fine as long as there are no errors.
[root@master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2021-03-03 07:56:55 EST; 52min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 1350 (kubelet)
    Tasks: 19
   Memory: 139.0M
   CGroup: /system.slice/kubelet.service
           └─1350 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2

Mar 03 07:59:58 master kubelet[1350]: E0303 07:59:58.872371 1350 pod_workers.go:191] Error syncing pod 6e1a9d9d-148c-4866-b6f2-4c1fbfaef8fa ("coredns-f9fd979d6-tgwj5_kube-system(6e1a9d9d-148c-4866-b6f2-4c1fbfaef8fa)"), skipping: failed to "C...148c-4866-b6f2-4c1fb
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.292810 1350 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-ttb6f_kube-system": CNI failed to retrieve network namespace...
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.339552 1350 pod_container_deletor.go:79] Container "81bc507c034f6e55212e82d1f3f4d12ecd28c3634095d3230e714eb6b3d03be3" not found in pod's containers
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.341532 1350 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "81bc507c034f6e55212e82d1f3f4d12ecd28c3634095d3230e714eb6b3d03be3"
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.347622 1350 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-tgwj5_kube-system": CNI failed to retrieve network namespace...
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.383585 1350 pod_container_deletor.go:79] Container "b0c032c78d077d21930da33ea95a971251c97e8c64bde258cc462da3bb5fe3c3" not found in pod's containers
Mar 03 07:59:59 master kubelet[1350]: W0303 07:59:59.384676 1350 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "b0c032c78d077d21930da33ea95a971251c97e8c64bde258cc462da3bb5fe3c3"
Mar 03 08:00:04 master kubelet[1350]: E0303 08:00:04.605581 1350 docker_sandbox.go:572] Failed to retrieve checkpoint for sandbox "9fca6d888406d4620913e3ccf7bc8cca41fb44cc56158dc8daa6e78479724e5b": checkpoint is not found
Mar 03 08:00:04 master kubelet[1350]: E0303 08:00:04.661462 1350 kuberuntime_manager.go:951] PodSandboxStatus of sandbox "55306b6580519c06a5b0c78a6eeaccdf38fe8331170aa7c011c0aec567dfd16b" for pod "coredns-f9fd979d6-tgwj5_kube-system(6e1a9d9d-148c-4866-b6f2-4c1f...
Mar 03 08:00:05 master kubelet[1350]: E0303 08:00:05.695207 1350 kuberuntime_manager.go:951] PodSandboxStatus of sandbox "5380257f06e5f7ea0cc39094b97825d737afa1013f003ff9097a5d5ef34c78ae" for pod "coredns-f9fd979d6-ttb6f_kube-system(63f24147-b4d2-4635-bc61-f7da...
Hint: Some lines were ellipsized, use -l to show in full.
Almost all of the Kubernetes components and Docker images are hosted on Google's own servers, which may be unreachable directly; the workaround here is to pull the images from the Aliyun mirror registry and then retag them back to their default names. This article pulls the images by running the image.sh script.
[root@master ~] vim image.sh
#!/bin/bash
url=registry.aliyuncs.com/google_containers
version=v1.19.8
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done
url is the Aliyun mirror registry address; version is the Kubernetes version being installed.
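For reference, kubeadm config images list prints one image per line in the k8s.gcr.io/<name>:<tag> form, which is what the awk -F '/' '{print $2}' above splits on; for v1.19.8 the output looks roughly like:

k8s.gcr.io/kube-apiserver:v1.19.8
k8s.gcr.io/kube-controller-manager:v1.19.8
k8s.gcr.io/kube-scheduler:v1.19.8
k8s.gcr.io/kube-proxy:v1.19.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0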
Run the image.sh script to download the images for the specified version:
[root@master ~] chmod 775 image.sh
[root@master ~] ./image.sh
[root@master ~] docker images
Run this part on the master node.
[root@master ~] kubeadm init --apiserver-advertise-address=192.168.214.128 --kubernetes-version v1.19.8 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address is the master's IP address.
The --pod-network-cidr here must stay consistent with the Network address in kube-flannel.yml:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
Record the kubeadm join command from the output; it is needed later to join the node machines (and any additional control-plane machines) to the cluster.
kubeadm join 192.168.214.128:6443 --token 4xpmwx.nw6psmvn9qi4d3cj \
    --discovery-token-ca-cert-hash sha256:c7cbe95a66092c58b4da3ad20874f0fe2b6d6842d28b2762ffc8d36227d
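Note that the bootstrap token is only valid for 24 hours by default; if it has expired by the time you join a node, generate a fresh join command on the master:

[root@master ~] kubeadm token create --print-join-command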
coredns will be in Pending state and not running yet; that is fine, it depends on the flannel DaemonSet deployed later.
kube-system   coredns-66bff467f8-c79f7   0/1   Pending   0   24h
kube-system   coredns-66bff467f8-ncpd6   0/1   Pending   0   24h
# once flannel is running, these two will change to Running
[root@master ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP               NODE     NOMINATED NODE   READINESS GATES
coredns-f9fd979d6-tgwj5          1/1     Running   4          9h     10.244.0.11      master   <none>           <none>
coredns-f9fd979d6-ttb6f          1/1     Running   4          9h     10.244.0.10      master   <none>           <none>
etcd-master                      1/1     Running   8          9h     172.17.114.109   master   <none>           <none>
kube-apiserver-master            1/1     Running   5          132m   172.17.114.109   master   <none>           <none>
kube-controller-manager-master   1/1     Running   8          130m   172.17.114.109   master   <none>           <none>
kube-flannel-ds-hs79q            1/1     Running   5          9h     172.17.114.109   master   <none>           <none>
kube-flannel-ds-qtt7s            1/1     Running   2          9h     172.17.114.103   node01   <none>           <none>
kube-proxy-24dnq                 1/1     Running   2          9h     172.17.114.103   node01   <none>           <none>
kube-proxy-vbdg2                 1/1     Running   4          9h     172.17.114.109   master   <none>           <none>
kube-scheduler-master            1/1     Running   7          130m   172.17.114.109   master   <none>           <none>
Health check:
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

[root@master ~]# kubectl get ns
NAME                   STATUS   AGE
default                Active   9h
kube-node-lease        Active   9h
kube-public            Active   9h
kube-system            Active   9h
kubernetes-dashboard   Active   7h6m
If initialization fails, run kubeadm reset and then initialize again:
[root@master ~] kubeadm reset
[root@master ~] rm -rf $HOME/.kube/config
1 [root@master ~] echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile 2 [root@master ~] source .bash_profile
本文全部操做都在root用戶下執行,若爲非root用戶,則執行以下操做:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Because networks inside China often cannot resolve raw.githubusercontent.com, first visit https://tool.chinaz.com/dns/?type=1&host=raw.githubusercontent.com&ip= to look up the real IP of raw.githubusercontent.com, and add a matching hosts entry:
cat >> /etc/hosts << EOF
151.101.108.133 raw.githubusercontent.com
EOF
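With the hosts entry in place, the manifest can also be fetched directly rather than typed in by hand (a sketch, assuming the path flannel documented at the time):

[root@master ~] wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml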
vi kube-flannel.yml; the file content is as follows:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
[root@master ~] kubectl apply -f kube-flannel.yml
Run the join command recorded earlier on node01 and node02:

kubeadm join 192.168.214.128:6443 --token 4xpmwx.nw6psmvn9qi4d3cj \
    --discovery-token-ca-cert-hash sha256:c7cbe95a66092c58b4da3ad20874f0fe2b6d6842d28b2762ffc8d36227d7a0a7
[root@master ~] kubectl get nodes
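All three nodes should eventually show Ready; if a node lingers in NotReady, watching the kube-system pods usually reveals the flannel pod still pulling its image:

[root@master ~] kubectl get pods -n kube-system -o wide -w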
The Dashboard provides cluster management, workloads, service discovery and load balancing, storage, ConfigMaps, log viewing, and related features.
vi recommended.yaml
The file content is as follows:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.2.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
[root@client ~] sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml
Because the default image registry is not reachable from this network, the images are switched to an Aliyun mirror.
[root@client ~] sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
This configures a NodePort so the Dashboard can be accessed externally at https://NodeIp:NodePort; here the port is 30001. (The file listed above already includes this change, so the sed is only needed when starting from the upstream manifest.)
[root@client ~] kubectl apply -f recommended.yaml
[root@client ~] kubectl get all -n kubernetes-dashboard
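The Service listing should show the NodePort mapping; illustrative output (the CLUSTER-IP will differ in your cluster):

[root@client ~] kubectl get svc -n kubernetes-dashboard kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.98.123.45   <none>        443:30001/TCP   1m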
Create a service account and bind it to the built-in cluster-admin cluster role:
# Create the user
$ kubectl create serviceaccount dashboard-admin -n kube-system
# Grant it permissions
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@client ~] kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
The token is:
eyJhbGciOiJSUzI1NiIsImtpZCI6ImhPdFJMQVZCVmRlZ25HOUJ5enNUSFV5M3Z6NnVkSTBQR0gzbktqbUl3bGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tczhtbTciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2JjNWY0ZTktNmQwYy00MjYxLWFiNDItMDZhM2QxNjVkNmI4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.N5eoH_rjPVX4a4BJwi2P-f3ohVLq2W0afzkaWomFk5iBe1KMczv5zAzz0EYZwnHkNOZ-Ts7Z6Z778vo7fR5_B-6KRNvdE5rg5Cq6fZKBlRvqB9FqYA0a_3JXCc0FK1et-97ycM20XtekM1KBt0uyHkKnPzJJBkqAedtALbJ0ccWU_WmmKVHKzfgXTiQNdiM-mqyelIxQa7i0KFFdnyN7Euhh4uk_ueXeLlZeeijxWTOpu9p91jMuN45xFuny0QkxxQcWlnjL8Gz7mELemEMOQEbhZRcKSHleZ72FVpjvHwn0gg7bQuaNc-oUKUxB5VS-h7CF8aOjk-yrLvoaY3Af-g
Open https://192.168.214.128:30001/ in a browser.
Accept the certificate risk and enter the token, and you can manage the k8s cluster graphically.
Fixing the error that appears when running kubectl on a k8s worker node:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This error occurs because kubectl needs the kubernetes-admin credentials to run.
Copy the /etc/kubernetes/admin.conf file from the master node to the same directory on the worker node:
scp -r /etc/kubernetes/admin.conf ${node1}:/etc/kubernetes/admin.conf
Configure the environment variable:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
Make it take effect immediately:
source ~/.bash_profile
After deploying a service, curl https://localhost:30001 succeeds on both the master and the node machines, yet from outside the cluster only the nodes' port 30001 is reachable: the master's port 30001 cannot be reached externally, and the nodes cannot reach it either, even though the firewall on the master is off. An HTTP server started directly on the master is reachable from outside, and everything in kubectl appears to be running normally.
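In this situation it is worth checking whether kube-proxy actually programmed the NodePort rule on the master; a diagnostic sketch (the KUBE-NODEPORTS chain exists when kube-proxy runs in iptables mode):

[root@master ~] kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20
[root@master ~] iptables -t nat -L KUBE-NODEPORTS -n | grep 30001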
Modify kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests (the usual fix when kubectl get cs reports the scheduler and controller-manager as Unhealthy):
Comment out the --port=0 line.
Just save the files; no service restart is needed, because the kubelet watches the manifests directory and recreates the static pods automatically.
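A minimal sketch of the edit with sed, making the same change in both manifests:

[root@master ~] sed -i 's/- --port=0/#&/' /etc/kubernetes/manifests/kube-scheduler.yaml
[root@master ~] sed -i 's/- --port=0/#&/' /etc/kubernetes/manifests/kube-controller-manager.yaml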