1. Deployment environment
2. Host configuration (do this on every host)
Hostname | IP             | CPU | RAM |
master   | 192.168.137.10 | 2   | 3G  |
node1    | 192.168.137.11 | 1   | 3G  |
1. On every host, add the following entries to /etc/hosts:
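Based on the host table above, the entries are:

192.168.137.10 master
192.168.137.11 node1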
2. Disable the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux by editing /etc/selinux/config and setting SELINUX=disabled (vim /etc/selinux/config):
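One way to apply the change without opening an editor (turns enforcement off immediately and sets SELINUX=disabled permanently):

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config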
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
3. Set up passwordless SSH between the two hosts
1) CentOS 7 does not enable key-based passwordless SSH login by default. Uncomment the following line in /etc/ssh/sshd_config (do this on every server):
#PubkeyAuthentication yes
Then restart the ssh service:
systemctl restart sshd
2) In /root on the master, run ssh-keygen -t rsa and press Enter at every prompt. Do this on both machines.
[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:aMUO8b/EkylqTMb9+71ePnQv0CWQohsaMeAbMH+t87M root@master
The key's randomart image is:
+---[RSA 2048]----+
| o ... .         |
| = o= . o        |
| + oo=. . .      |
| =.Boo o . .     |
| . OoSoB . o     |
| =.+.+ o. ...    |
| + o o .. +      |
| . o . ..+.      |
| E ....+oo       |
+----[SHA256]-----+
3) On the master, append the public key to the authorized_keys file:
[root@master ~]# cd /root/.ssh/
[root@master .ssh]# cat id_rsa.pub>> authorized_keys
4) Copy the master's authorized_keys to node1 (repeat for any additional nodes):
scp /root/.ssh/authorized_keys root@192.168.137.11:/root/.ssh/
Test it: from the master you can log in by IP without a password; when logging in by hostname you still have to type "yes" once, after which it is no longer required.
[root@master]# ssh master
The authenticity of host 'master (192.168.137.10)' can't be established.
ECDSA key fingerprint is 5c:c6:69:04:26:65:40:7c:d0:c6:24:8d:ff:bd:5f:ef.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.137.10' (ECDSA) to the list of known hosts.
Last login: Mon Dec 10 15:34:51 2018 from 192.168.137.1
[root@master]# ssh node1
The authenticity of host 'node1 (192.168.137.11)' can't be established.
ECDSA key fingerprint is 8f:73:57:db:d8:3e:9e:22:52:ba:10:7a:6b:aa:5e:e2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1' (ECDSA) to the list of known hosts.
Last login: Mon Dec 10 16:25:53 2018 from master
4. Load the bridge kernel module (depending on the kernel, the br_netfilter module may also need to be loaded so that the bridge sysctls in the next step are available):
modprobe bridge
5. Configure kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
Apply the settings:
sysctl -p /etc/sysctl.d/k8s.conf
6. Edit the Linux resource limit files: raise the maximum open-file ulimit and the open-file limit for services managed by systemd
echo "* soft nofile 655360" >> /etc/security/limits.conf echo "* hard nofile 655360" >> /etc/security/limits.conf echo "* soft nproc 655360" >> /etc/security/limits.conf echo "* hard nproc 655360" >> /etc/security/limits.conf echo "* soft memlock unlimited" >> /etc/security/limits.conf echo "* hard memlock unlimited" >> /etc/security/limits.conf echo "DefaultLimitNOFILE=1024000" >> /etc/systemd/system.conf echo "DefaultLimitNPROC=1024000" >> /etc/systemd/system.conf
Hard limits were introduced in AIX 4.1. They should be set by the system administrator; only members of the security group can raise them, a user can only lower the value, and such a change is lost once the user logs out.
Soft limits are the upper bounds the kernel actually enforces on a process's resource usage. Anyone can change them, but not beyond the hard limit; only changes made by members of the security group persist, while an ordinary user's change is lost after logging out.
1) soft nofile and hard nofile are the per-user soft and hard limits on open files. For example, a soft limit of 1000 and a hard limit of 1200 means a single user can open at most 1000 files, no matter how many shells it starts.
2) soft nproc and hard nproc are the soft and hard limits on the number of processes a single user may run.
3) memlock is the maximum amount of physical memory a task may lock (set to unlimited here).
7. Configure domestic (Aliyun) mirrors for the base yum repo, the EPEL repo, and the Kubernetes repo
cp -r /etc/yum.repos.d/ /etc/yum-repos-d-bak
yum install -y wget
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
9. Install the remaining dependency packages:
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl
10. Configure time synchronization:
yum install chrony -y
Edit /etc/chrony.conf (vim /etc/chrony.conf):
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.137.10 iburst
Comment out the original server lines and point time synchronization at the master node instead.
rm -rf /etc/localtime
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
systemctl enable chronyd.service
systemctl start chronyd.service
chronyc sources
3. Install Docker (on both hosts)
1. Remove any old Docker installation
1) Check for installed Docker packages:
yum list installed | grep docker
2) If any are found, remove them with yum remove, for example as shown below.
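A typical removal command; the exact package names to pass are whatever the previous step reported (the list below is the one used in Docker's own CentOS uninstall instructions):

yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine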
3) Delete Docker's data directory:
rm -rf /var/lib/docker
2. Configure the Docker yum repository:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. List the available versions:
yum list docker-ce --showduplicates | sort -r
4. Install version 18.06.1 (note: it is best not to install the latest version, and especially not 18.06.3, which causes an error later when initializing the master):
yum install -y docker-ce-18.06.1.ce-3.el7
5. Configure a registry mirror and the Docker data directory
Create /etc/docker/daemon.json:
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://s5klxlmp.mirror.aliyuncs.com"],
  "graph": "/home/docker-data"
}
EOF
Note: the mirror address https://s5klxlmp.mirror.aliyuncs.com is the one shown in your Aliyun account after logging in.
6. Start Docker:
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker
If you see the following error:
[root@node1 ~]# journalctl -xe
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21+08:00" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21+08:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock"
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21+08:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock"
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21+08:00" level=info msg="containerd successfully booted in 0.006065s"
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.620543305+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4203c3870, READY" module=grpc
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.621314464+08:00" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.621323002+08:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.621345935+08:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0 <nil>}]" module=grpc
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.621352865+08:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.621374447+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42017e3c0, CONNECTING" module=grpc
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.621481017+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42017e3c0, READY" module=grpc
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.629882317+08:00" level=warning msg="Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man dockerd` to refer to dm.thinpooldev section." s
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.775919807+08:00" level=info msg="Creating filesystem xfs on device docker-253:1-201421627-base, mkfs args: [-m crc=0,finobt=0 /dev/mapper/docker-253:1-201421627-base]" storage-driver=devicemapper
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.776837868+08:00" level=info msg="Error while creating filesystem xfs on device docker-253:1-201421627-base: exit status 1" storage-driver=devicemapper
Mar 04 21:22:21 node1 dockerd[3925]: Error starting daemon: error initializing graphdriver: exit status 1
Mar 04 21:22:21 node1 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 04 21:22:21 node1 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
Mar 04 21:22:21 node1 systemd[1]: Unit docker.service entered failed state.
Mar 04 21:22:21 node1 systemd[1]: docker.service failed.
Mar 04 21:22:22 node1 systemd[1]: docker.service holdoff time over, scheduling restart.
Mar 04 21:22:22 node1 systemd[1]: Stopped Docker Application Container Engine.
-- Subject: Unit docker.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has finished shutting down.
Mar 04 21:22:22 node1 systemd[1]: start request repeated too quickly for docker.service
Mar 04 21:22:22 node1 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
Mar 04 21:22:22 node1 systemd[1]: Unit docker.service entered failed state.
Mar 04 21:22:22 node1 systemd[1]: docker.service failed.
Mar 04 21:30:01 node1 systemd[1]: Started Session 6 of user root.
-- Subject: Unit session-6.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-6.scope has finished starting up.
--
-- The start-up result is done.
Mar 04 21:30:01 node1 CROND[3961]: (root) CMD (/usr/lib64/sa/sa1 1 1)
then run the following:
yum update xfsprogs -y
systemctl start docker.service
systemctl enable docker.service
systemctl status docker.service
4. Install kubeadm, kubelet, and kubectl (on both hosts)
yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 --disableexcludes=kubernetes
--disableexcludes=kubernetes tells yum to ignore any exclude rules configured for the kubernetes repo, so these packages can be installed even if the repo excludes them by default.
Edit the kubelet configuration file:
sed -i "s/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS=\"--fail-swap-on=false\"/" /etc/sysconfig/kubelet
Start the kubelet:
systemctl enable kubelet
systemctl start kubelet
The kubelet service will not stay up for now; ignore that for the moment.
5. Download the images (run only on the master)
1. Generate the default configuration:
kubeadm config print init-defaults > /root/kubeadm.conf
2. Edit /root/kubeadm.conf and switch to the domestic Aliyun image repository, imageRepository: registry.aliyuncs.com/google_containers. See the sketch below.
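After the edit, the relevant lines in /root/kubeadm.conf should look roughly like this (field names as generated by kubeadm config print init-defaults for 1.13; the kubernetesVersion line is an assumption and only needs changing if it is not already v1.13.3):

imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.13.3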
3. Pull the images:
kubeadm config images pull --config /root/kubeadm.conf
[root@master ~]# docker images | grep ali
registry.aliyuncs.com/google_containers/kube-proxy                v1.13.3   8fa56d18961f   3 months ago    80.2MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.13.3   9508b7d8008d   3 months ago    79.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.13.3   d82530ead066   3 months ago    146MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.13.3   f1ff9b7e3d6e   3 months ago    181MB
registry.aliyuncs.com/google_containers/coredns                   1.2.6     f59dcacceff4   4 months ago    40MB
registry.aliyuncs.com/google_containers/etcd                      3.2.24    3cab8e1b9802   5 months ago    220MB
registry.aliyuncs.com/google_containers/pause                     3.1       da86e6ba6ca1   14 months ago   742kB
Re-tag the images with their k8s.gcr.io names:
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag registry.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi -f registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/kube-controller-manager:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/kube-apiserver:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/kube-scheduler:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/coredns:1.2.6
docker rmi -f registry.aliyuncs.com/google_containers/etcd:3.2.24
docker rmi -f registry.aliyuncs.com/google_containers/pause:3.1
6. Deploy the master (run only on the master)
1. Initialize the master node:
kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16
The init output indicates that the deployment succeeded.
2. To use kubectl as a regular user, run the following:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Note the last line of the init output; it is used later when joining nodes to the master:
kubeadm join 192.168.137.10:6443 --token v6zife.f06w6ub82vsmi0ql --discovery-token-ca-cert-hash sha256:29a613c18f8f9aa655de7f59149757b0ee844ae1a3650e9cdf4875fddc080c76
You do not actually have to save that line; the token and hash can also be retrieved as follows.
1) Get the token:
[root@master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
v6zife.f06w6ub82vsmi0ql   23h   2019-03-12T20:49:26Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
By default a token expires after 24 hours. If the token has expired, generate a new one with:
kubeadm token create
2) Get the hash:
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
29a613c18f8f9aa655de7f59149757b0ee844ae1a3650e9cdf4875fddc080c76
4. Verify:
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-78d4cf999f-99fpq          0/1     Pending   0          22m
kube-system   coredns-78d4cf999f-cz8b6          0/1     Pending   0          22m
kube-system   etcd-master                       1/1     Running   0          21m
kube-system   kube-apiserver-master             1/1     Running   0          21m
kube-system   kube-controller-manager-master    1/1     Running   0          21m
kube-system   kube-proxy-56pxn                  1/1     Running   0          22m
kube-system   kube-scheduler-master             1/1     Running   0          21m
The coredns pods are in the Pending state; ignore that for now (they need the pod network installed in the next section).
7. Deploy the Calico network (run only on the master)
1. Download the required files
1) Download rbac-kdd.yaml and apply it:
curl https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml -O
Note: a file downloaded this way may be a newer version than the one used in this walkthrough and is not guaranteed to be compatible, so make sure it is the Calico v3.3 version of the file.
Then run:
kubectl apply -f rbac-kdd.yaml
2) Download calico.yaml, modify its configuration, and then deploy it:
curl https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml -O
Modify typha_service_name.
The Calico network defaults to IPIP mode (it creates a tunl0 interface on every node host and tunnels all container traffic through it; the official docs recommend it when nodes sit in different IP subnets, for example AWS hosts in different regions).
Change it to BGP mode: Calico runs as a DaemonSet on every node host, each host starts bird (a BGP client) that announces the IP ranges allocated to each node to the rest of the cluster, and traffic is forwarded directly through the host NIC (eth0 or ens160).
Modify replicas.
Modify the pod CIDR so that it matches the pod network passed to kubeadm init (--pod-network-cidr=10.244.0.0/16).
The sketch below shows roughly where these edits land in calico.yaml.
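A rough sketch of the edited values, assuming the structure of the Calico v3.3 hosted manifest (verify the key names against the calico.yaml you actually downloaded):

# in the calico-config ConfigMap
typha_service_name: "calico-typha"

# in the calico-typha Deployment
replicas: 1

# in the calico-node DaemonSet container env
- name: CALICO_IPV4POOL_IPIP
  value: "off"              # turn IPIP off to run plain BGP
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"    # must match --pod-network-cidr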
2. Pull the Docker images Calico needs (the versions can be found inside calico.yaml):
docker pull calico/node:v3.3.4
docker pull calico/cni:v3.3.4
docker pull calico/typha:v3.3.4
3. Apply calico.yaml:
kubectl apply -f calico.yaml
[root@master ~]# kubectl get po --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   calico-node-mnzxh                 1/2     Running   0          5m51s
kube-system   calico-typha-64f566d6c9-j4rwc     0/1     Pending   0          5m51s
kube-system   coredns-86c58d9df4-67xbh          1/1     Running   0          36m
kube-system   coredns-86c58d9df4-t9xgt          1/1     Running   0          36m
kube-system   etcd-master                       1/1     Running   0          35m
kube-system   kube-apiserver-master             1/1     Running   0          35m
kube-system   kube-controller-manager-master    1/1     Running   0          35m
kube-system   kube-proxy-8xg28                  1/1     Running   0          36m
kube-system   kube-scheduler-master             1/1     Running   0          35m
calico-typha has not come up yet because the node has not been set up; ignore it for now.
8. Deploy the node (run only on the node)
1. Pull the images the node needs:
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker pull calico/node:v3.3.4
docker pull calico/cni:v3.3.4
docker pull calico/typha:v3.3.4
2. Re-tag the images:
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi -f registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/pause:3.1
3. Join the node to the cluster (this is the command noted in section 6, subsection 3):
kubeadm join 192.168.137.10:6443 --token v6zife.f06w6ub82vsmi0ql --discovery-token-ca-cert-hash sha256:29a613c18f8f9aa655de7f59149757b0ee844ae1a3650e9cdf4875fddc080c76
[root@node1 ~]# kubeadm join 192.168.137.10:6443 --token v6zife.f06w6ub82vsmi0ql --discovery-token-ca-cert-hash sha256:29a613c18f8f9aa655de7f59149757b0ee844ae1a3650e9cdf4875fddc080c76
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.137.10:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.10:6443"
[discovery] Requesting info from "https://192.168.137.10:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.137.10:6443"
[discovery] Successfully established connection with API Server "192.168.137.10:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
If you see the output above, the node has joined the cluster successfully. Go to the master and run the following command (the one the join output itself points to):
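kubectl get nodes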
Both nodes show a status of Ready, which means the cluster has been deployed successfully.
9. Deploy the Dashboard (run only on the master)
Since version 1.7, the Dashboard no longer has full admin privileges granted by default. All permissions are revoked, and only the minimum permissions required for the Dashboard to work are granted.
1. Before deploying the Dashboard we need to generate a certificate; otherwise the HTTPS login will fail later.
mkdir -p /etc/kubernetes/certs
cd /etc/kubernetes/certs
[root@master certs]# openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
Generating RSA private key, 2048 bit long modulus
......+++
............+++
e is 65537 (0x10001)
[root@master certs]# openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
writing RSA key
For the next step just press Enter at every prompt:
[root@master certs]# openssl req -new -key dashboard.key -out dashboard.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[root@master certs]# openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd
Getting Private key
2. Create the secret:
kubectl create secret generic kubernetes-dashboard-certs --from-file=/etc/kubernetes/certs -n kube-system
3. Download kubernetes-dashboard.yaml:
curl https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml -O
4. Comment out the Secret in kubernetes-dashboard.yaml; we created our own above, so the bundled one is not needed (see the snippet below).
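For reference, the block to comment out looks roughly like this (as it appears in the v1.10.1 recommended manifest; verify against your downloaded file):

# apiVersion: v1
# kind: Secret
# metadata:
#   labels:
#     k8s-app: kubernetes-dashboard
#   name: kubernetes-dashboard-certs
#   namespace: kube-system
# type: Opaque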
5. Modify the image field in the yaml so that the image is pulled from the Aliyun registry instead:
image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
6. Change the Service in the yaml to NodePort, as sketched below.
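The Service section should end up roughly like this (layout from the v1.10.1 manifest; type: NodePort is the added line, and nodePort: 30001 is only an example value, you can also omit it and let Kubernetes pick a port):

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard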
7. Apply kubernetes-dashboard.yaml:
kubectl apply -f kubernetes-dashboard.yaml
Check whether the deployment succeeded, and check the Service, with the commands shown below.
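Two commands that cover both checks (the grep filter is simply a convenient way to pick out the Dashboard objects):

kubectl get pods -n kube-system -o wide | grep dashboard
kubectl get svc -n kube-system | grep dashboard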
8. Open the Dashboard in a Google Chrome browser (https://<node-IP>:<NodePort>).
The Dashboard supports two authentication methods, Kubeconfig and Token. We log in with a token here; to do that we must first create a service account called admin-user.
1) On the master node, create dashboard-adminuser.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Then run:
kubectl create -f dashboard-adminuser.yaml
Note: this creates a service account called admin-user in the kube-system namespace and binds the cluster-admin role to it, giving the admin-user account administrator privileges. The cluster-admin role already exists in a kubeadm-created cluster, so we simply bind to it.
2) Retrieve the admin-user account's token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Paste the token into the "Token" field in the browser and log in.
Note: for safety, Kubernetes does not schedule Pods onto the master node by default. If you want to use the master as a worker node as well, run:
kubectl taint node master node-role.kubernetes.io/master-
To restore the master-only behaviour, run:
kubectl taint node master node-role.kubernetes.io/master="":NoSchedule