I am using five CentOS 7.7 virtual machines here; the details are listed in the table below:
OS Version | IP Address | Node Role | CPU | Memory | Hostname |
---|---|---|---|---|---|
CentOS-7.7 | 192.168.243.138 | master | >=2 | >=2G | m1 |
CentOS-7.7 | 192.168.243.136 | master | >=2 | >=2G | m2 |
CentOS-7.7 | 192.168.243.141 | master | >=2 | >=2G | m3 |
CentOS-7.7 | 192.168.243.139 | worker | >=2 | >=2G | s1 |
CentOS-7.7 | 192.168.243.140 | worker | >=2 | >=2G | s2 |
All five machines need Docker installed in advance. The installation is straightforward and is not covered here; refer to the official Docker documentation.
1. Every node must have a unique hostname, and all nodes must be able to reach each other by hostname. Set the hostname:
# Check the current hostname
$ hostname
# Change the hostname
$ hostnamectl set-hostname <your_hostname>
Configure /etc/hosts so that all nodes can reach each other by hostname:
$ vim /etc/hosts
192.168.243.138 m1
192.168.243.136 m2
192.168.243.141 m3
192.168.243.139 s1
192.168.243.140 s2
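Once /etc/hosts is in place on every node, a quick connectivity check can save debugging later. The loop below is just a convenience sketch (hostnames taken from the table above) and can be run from any node:

```bash
# Optional sanity check: verify that every node resolves and answers by hostname.
for h in m1 m2 m3 s1 s2; do
  ping -c 1 -W 1 "$h" > /dev/null && echo "$h: reachable" || echo "$h: NOT reachable"
done
```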
2. Install dependency packages:
# Update yum
$ yum update
# Install dependencies
$ yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
3. Disable the firewall and swap, and reset iptables:
# Disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld
# Reset iptables
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Turn off swap
$ swapoff -a
$ sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux
$ setenforce 0
# Stop dnsmasq (otherwise Docker containers may fail to resolve domain names)
$ service dnsmasq stop && systemctl disable dnsmasq
# Restart the docker service
$ systemctl restart docker
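Note that setenforce 0 and swapoff -a only affect the running system. If you also want these changes to survive a reboot, something along the following lines is needed as well (a sketch, adjust to your environment):

```bash
# Keep SELinux out of enforcing mode after a reboot
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Verify that swap is off and that the fstab entry was commented out
free -h | grep -i swap
grep swap /etc/fstab
```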
4. Configure kernel parameters:
# Create the config file
$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
# Apply the config
$ sysctl -p /etc/sysctl.d/kubernetes.conf
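The two net.bridge.bridge-nf-call-* settings only exist once the br_netfilter kernel module is loaded, so if sysctl -p complains about missing keys, load the module first and make it load on boot. A minimal sketch:

```bash
# Load br_netfilter now and on every boot, then re-apply the sysctl file
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
```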
Next, install the Kubernetes tooling (kubeadm, kubelet and kubectl).

1. First, add the Kubernetes yum repository:
$ bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF'
2. Install the Kubernetes components:
$ yum install -y kubelet kubeadm kubectl
$ systemctl enable --now kubelet.service
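The command above installs whatever the latest version in the repository is. If you want the tooling to match the cluster version used throughout this article (v1.19.0) and avoid accidental upgrades, you can pin the package versions instead; a sketch:

```bash
# Install a specific version of the Kubernetes tooling instead of the latest
yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
systemctl enable --now kubelet.service
```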
kubectl is the command-line tool used to interact with a Kubernetes cluster; almost everything you do with Kubernetes goes through it, so it supports a large number of commands. Fortunately, kubectl supports shell completion, and kubectl completion -h shows setup examples for each platform. Taking Linux as an example, here is how to configure completion; after the following steps you can complete commands with the Tab key:
[root@m1 ~]# yum install bash-completion -y
[root@m1 ~]# source /usr/share/bash-completion/bash_completion
[root@m1 ~]# source <(kubectl completion bash)
[root@m1 ~]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@m1 ~]# printf "
# Kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
[root@m1 ~]# source $HOME/.bash_profile
1. Install keepalived on two of the master nodes (one MASTER, one BACKUP). I chose the m1 and m2 nodes:
$ yum install -y keepalived
2. On both machines, create the directory that holds the keepalived configuration:
$ mkdir -p /etc/keepalived
3. On m1 (the MASTER role), create the following configuration file:
[root@m1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id keepalive-master
}

vrrp_script check_apiserver {
    # Path of the health-check script
    script "/etc/keepalived/check-apiserver.sh"
    # Check interval in seconds
    interval 3
    # Reduce priority by 2 on failure
    weight -2
}

vrrp_instance VI-kube-master {
    state MASTER        # Role of this node
    interface ens32     # Network interface name
    virtual_router_id 68
    priority 100
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        # Custom virtual IP
        192.168.243.100
    }
    track_script {
        check_apiserver
    }
}
4. On m2 (the BACKUP role), create the following configuration file:
[root@m2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id keepalive-backup
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -2
}

vrrp_instance VI-kube-master {
    state BACKUP
    interface ens32
    virtual_router_id 68
    priority 99
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        192.168.243.100
    }
    track_script {
        check_apiserver
    }
}
5. On both m1 and m2, create the keepalived health-check script. This script is deliberately simple; feel free to extend it for your own needs (a stricter variant is sketched after the snippet below):
$ vim /etc/keepalived/check-apiserver.sh
#!/bin/sh
netstat -ntlp |grep 6443 || exit 1
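For instance, instead of only checking that port 6443 is open, you could require the local kube-apiserver to actually answer its health endpoint. A possible variant (assumes curl is installed, which it is per the dependency step above):

```sh
#!/bin/sh
# Stricter check: fail unless the local kube-apiserver health endpoint responds
curl -sfk --max-time 2 https://127.0.0.1:6443/healthz -o /dev/null || exit 1
```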
6. With the above in place, start keepalived:
# Start the keepalived service on both MASTER and BACKUP
$ systemctl enable keepalived && service keepalived start
# Check its status
$ service keepalived status
# Watch the logs
$ journalctl -f -u keepalived
# Check the virtual IP
$ ip a
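Before moving on, it is worth a quick failover test of keepalived itself: stop the service on the MASTER node, confirm the virtual IP shows up on the BACKUP node, then start it again. For example:

```bash
[root@m1 ~]# systemctl stop keepalived     # the VIP should disappear from m1
[root@m2 ~]# ip a | grep 192.168.243.100   # ...and appear on m2 shortly after
[root@m1 ~]# systemctl start keepalived    # with higher priority, m1 takes the VIP back
```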
In a cluster created with kubeadm, most components run as Docker containers, so kubeadm needs to pull the corresponding component images when it initializes a master node. By default kubeadm pulls from Google's k8s.gcr.io registry, which is not reachable from mainland China, so the required images cannot be pulled directly.
To get around this you can either find some way to reach k8s.gcr.io, or manually pull the corresponding images from a domestic registry and re-tag them locally. I went with the latter. First, list the images kubeadm needs to pull:
[root@m1 ~]# kubeadm config images list
W0830 19:17:13.056761   81487 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.0
k8s.gcr.io/kube-controller-manager:v1.19.0
k8s.gcr.io/kube-scheduler:v1.19.0
k8s.gcr.io/kube-proxy:v1.19.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.9-1
k8s.gcr.io/coredns:1.7.0
[root@m1 ~]#
I pull from Alibaba Cloud's container registry. One catch is that the tags there may not exactly match the versions kubeadm expects, so you have to check the registry yourself to confirm:
For example, kubeadm lists v1.19.0 here, but the Alibaba Cloud registry has v1.19.0-rc.1. Once the matching tags are found, to avoid repetitive work I wrote a small shell script that pulls the images and re-tags them:
[root@m1 ~]# vim pullk8s.sh
#!/bin/bash
ALIYUN_KUBE_VERSION=v1.19.0-rc.1
KUBE_VERSION=v1.19.0
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.9-1
DNS_VERSION=1.7.0
username=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(
    kube-proxy-amd64:${ALIYUN_KUBE_VERSION}
    kube-scheduler-amd64:${ALIYUN_KUBE_VERSION}
    kube-controller-manager-amd64:${ALIYUN_KUBE_VERSION}
    kube-apiserver-amd64:${ALIYUN_KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd-amd64:${ETCD_VERSION}
    coredns:${DNS_VERSION}
)

for image in ${images[@]}
do
    docker pull ${username}/${image}
    # The "-amd64" suffix must be stripped here, otherwise kubeadm cannot find the local image
    new_image=`echo $image|sed 's/-amd64//g'`
    if [[ $new_image == *$ALIYUN_KUBE_VERSION* ]]
    then
        new_kube_image=`echo $new_image|sed "s/$ALIYUN_KUBE_VERSION//g"`
        docker tag ${username}/${image} k8s.gcr.io/${new_kube_image}$KUBE_VERSION
    else
        docker tag ${username}/${image} k8s.gcr.io/${new_image}
    fi
    docker rmi ${username}/${image}
done

[root@m1 ~]# sh pullk8s.sh
After the script finishes, the local Docker image list should look like this:
[root@m1 ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy                v1.19.0   b2d80fe68e4f   6 weeks ago    120MB
k8s.gcr.io/kube-controller-manager   v1.19.0   a7cd7b6717e8   6 weeks ago    116MB
k8s.gcr.io/kube-apiserver            v1.19.0   1861e5423d80   6 weeks ago    126MB
k8s.gcr.io/kube-scheduler            v1.19.0   6d4fe43fdd0d   6 weeks ago    48.4MB
k8s.gcr.io/etcd                      3.4.9-1   d4ca8726196c   2 months ago   253MB
k8s.gcr.io/coredns                   1.7.0     bfe3a36ebd25   2 months ago   45.2MB
k8s.gcr.io/pause                     3.2       80d28bedfe5d   6 months ago   683kB
[root@m1 ~]#
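As an alternative to the re-tagging script, kubeadm can also pull directly from a mirror via the --image-repository flag; the snippet below assumes the commonly used registry.aliyuncs.com/google_containers mirror. Note that the pulled images then keep the mirror's repository name, so you would also need to set imageRepository accordingly in the kubeadm configuration or re-tag them.

```bash
# Pull the control-plane images from a mirror instead of k8s.gcr.io
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.0
```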
Create the configuration file kubeadm will use to initialize the master node:
[root@m1 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
# The control-plane endpoint; this IP is the keepalived virtual IP
controlPlaneEndpoint: "192.168.243.100:6443"
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "172.22.0.0/16"   # The CIDR used for Pods
Then run the following command to initialize the cluster:
[root@m1 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs W0830 20:05:29.447773 88394 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [init] Using Kubernetes version: v1.19.0 [preflight] Running pre-flight checks [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service' [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m1] and IPs [10.96.0.1 192.168.243.138 192.168.243.100] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost m1] and IPs [192.168.243.138 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost m1] and IPs [192.168.243.138 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. 
[apiclient] All control plane components are healthy after 173.517640 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace [upload-certs] Using certificate key: a455fb8227dd15882b57b11f3587187181b972d95524bb3ef43e78f76360121e [mark-control-plane] Marking the node m1 as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node m1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] [bootstrap-token] Using token: 5l7pv5.5iiq4atzlazq0b7x [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of the control-plane node running the following command on each as root: kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \ --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140 \ --control-plane --certificate-key a455fb8227dd15882b57b11f3587187181b972d95524bb3ef43e78f76360121e Please note that the certificate-key gives access to cluster sensitive data, keep it secret! As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward. Then you can join any number of worker nodes by running the following on each as root: kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \ --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140 [root@m1 ~]#
Save the kubeadm join commands from the output above; they are needed later when adding the other master nodes and the worker nodes. Then copy the kubeconfig file on the master node with the following commands:
[root@m1 ~]# mkdir -p $HOME/.kube
[root@m1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@m1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Check the current Pods:
[root@m1 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-kg4lf      0/1     Pending   0          9m9s
kube-system   coredns-f9fd979d6-t8xzj      0/1     Pending   0          9m9s
kube-system   etcd-m1                      1/1     Running   0          9m22s
kube-system   kube-apiserver-m1            1/1     Running   1          9m22s
kube-system   kube-controller-manager-m1   1/1     Running   1          9m22s
kube-system   kube-proxy-rjgnw             1/1     Running   0          9m9s
kube-system   kube-scheduler-m1            1/1     Running   1          9m22s
[root@m1 ~]#
Use curl to hit the health-check endpoint; an "ok" response means everything is fine:
[root@m1 ~]# curl -k https://192.168.243.100:6443/healthz
ok
[root@m1 ~]#
Create a directory for the addon configuration files:
[root@m1 ~]# mkdir -p /etc/kubernetes/addons
Create a calico-rbac-kdd.yaml configuration file in that directory:
[root@m1 ~]# vi /etc/kubernetes/addons/calico-rbac-kdd.yaml
# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - patch
  - apiGroups: [""]
    resources:
      - services
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - create
      - get
      - list
      - update
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
  - kind: ServiceAccount
    name: calico-node
    namespace: kube-system
Then run the following commands to install Calico:
[root@m1 ~]# kubectl apply -f /etc/kubernetes/addons/calico-rbac-kdd.yaml
[root@m1 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
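One thing to watch: the podSubnet configured earlier (172.22.0.0/16) is not the pool baked into calico.yaml. Recent Calico manifests can pick the CIDR up from the cluster, but if your version requires it to be set explicitly, you can download the manifest and enable CALICO_IPV4POOL_CIDR before applying it. The sed patterns below are only illustrative, since the comment layout varies between Calico versions:

```bash
# Download the manifest, uncomment CALICO_IPV4POOL_CIDR and point it at our podSubnet
curl -sO https://docs.projectcalico.org/manifests/calico.yaml
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "172.22.0.0/16"|' calico.yaml
kubectl apply -f calico.yaml
```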
Check the status:
[root@m1 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5bc4fc6f5f-pdjls   1/1     Running   0          2m47s
kube-system   calico-node-tkdmv                          1/1     Running   0          2m47s
kube-system   coredns-f9fd979d6-kg4lf                    1/1     Running   0          23h
kube-system   coredns-f9fd979d6-t8xzj                    1/1     Running   0          23h
kube-system   etcd-m1                                    1/1     Running   1          23h
kube-system   kube-apiserver-m1                          1/1     Running   2          23h
kube-system   kube-controller-manager-m1                 1/1     Running   2          23h
kube-system   kube-proxy-rjgnw                           1/1     Running   1          23h
kube-system   kube-scheduler-m1                          1/1     Running   2          23h
[root@m1 ~]#
Join the cluster using the kubeadm join command saved earlier. Note that the join commands for master and worker nodes are different, so be careful not to mix them up. Run the following on both m2 and m3:
$ kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140 \
    --control-plane --certificate-key a455fb8227dd15882b57b11f3587187181b972d95524bb3ef43e78f76360121e
The join command for master nodes includes the --control-plane and --certificate-key parameters. Wait a moment; on success the command prints output like the following:
[preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [preflight] Running pre-flight checks before initializing the new control plane instance [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m3] and IPs [10.96.0.1 192.168.243.141 192.168.243.100] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [localhost m3] and IPs [192.168.243.141 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [localhost m3] and IPs [192.168.243.141 127.0.0.1 ::1] [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" [certs] Using the existing "sa" key [kubeconfig] Generating kubeconfig files [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [check-etcd] Checking that the etcd cluster is healthy [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [etcd] Announced new etcd member joining to the existing etcd cluster [etcd] Creating static Pod manifest for "etcd" [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [mark-control-plane] Marking the node m3 as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node m3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] This node has joined the cluster and a new control plane instance was created: * Certificate signing request was sent to apiserver and approval was received. * The Kubelet was informed of the new secure connection details. 
* Control plane (master) label and taint were applied to the new node. * The Kubernetes control plane instances scaled up. * A new etcd member was added to the local/stacked etcd cluster. To start administering your cluster from this node, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Run 'kubectl get nodes' to see this node join the cluster.
Then follow the prompt and copy the kubectl configuration file:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
At this point port 6443 should also be listening:
[root@m2 ~]# netstat -lntp |grep 6443
tcp6       0      0 :::6443        :::*        LISTEN      31910/kube-apiserve
[root@m2 ~]#
However, a successful join command does not necessarily mean the node has fully joined the cluster. Go back to the m1 node and check whether the new nodes are in the Ready state:
[root@m1 ~]# kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
m1     Ready      master   24h     v1.19.0
m2     NotReady   master   3m47s   v1.19.0
m3     NotReady   master   3m31s   v1.19.0
[root@m1 ~]#
As you can see, m2 and m3 are both NotReady, which means they have not joined the cluster successfully. So I checked the logs with:
$ journalctl -f
It turned out to be the same network restriction again: the pause image could not be pulled:
8月 31 20:09:11 m2 kubelet[10122]: W0831 20:09:11.713935   10122 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
8月 31 20:09:12 m2 kubelet[10122]: E0831 20:09:12.442430   10122 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
8月 31 20:09:17 m2 kubelet[10122]: E0831 20:09:17.657880   10122 kuberuntime_manager.go:730] createPodSandbox for pod "calico-node-jksvg_kube-system(5b76b6d7-0bd9-4454-a674-2d2fa4f6f35e)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
So on m2 and m3, copy the image-pulling script used earlier on m1 and run it:
$ scp -r m1:/root/pullk8s.sh /root/pullk8s.sh
$ sh /root/pullk8s.sh
After the script finishes and a few minutes have passed, check the node information on m1 again; this time all nodes are Ready:
[root@m1 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
m1     Ready    master   24h   v1.19.0
m2     Ready    master   14m   v1.19.0
m3     Ready    master   13m   v1.19.0
[root@m1 ~]#
The steps are basically the same as in the previous section, except that they are run on the s1 and s2 nodes and use the worker version of the kubeadm join command, so this is only covered briefly:
# Join the cluster with the worker join command saved earlier
$ kubeadm join 192.168.243.100:6443 --token 5l7pv5.5iiq4atzlazq0b7x \
    --discovery-token-ca-cert-hash sha256:0fdc9947984a1c655861349dbd251d581bd6ec336c1ab8d9013cf302412b2140
# Be patient for a while; you can watch the logs in the meantime
$ journalctl -f
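If you add nodes later rather than right away, keep in mind that the bootstrap token expires after 24 hours and the uploaded certificates after 2 hours (as noted in the kubeadm init output). Both can be regenerated from any existing master node; a sketch:

```bash
# Print a fresh worker join command (creates a new bootstrap token)
kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new --certificate-key
kubeadm init phase upload-certs --upload-certs
```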
Once all the worker nodes have joined successfully, the highly available Kubernetes cluster is complete. The cluster's node information now looks like this:
[root@m1 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
m1     Ready    master   24h     v1.19.0
m2     Ready    master   60m     v1.19.0
m3     Ready    master   60m     v1.19.0
s1     Ready    <none>   9m45s   v1.19.0
s2     Ready    <none>   119s    v1.19.0
[root@m1 ~]#
And the Pod information:
[root@m1 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5bc4fc6f5f-pdjls   1/1     Running   0          73m
kube-system   calico-node-8m8lz                          1/1     Running   0          9m43s
kube-system   calico-node-99xps                          1/1     Running   0          60m
kube-system   calico-node-f48zw                          1/1     Running   0          117s
kube-system   calico-node-jksvg                          1/1     Running   0          60m
kube-system   calico-node-tkdmv                          1/1     Running   0          73m
kube-system   coredns-f9fd979d6-kg4lf                    1/1     Running   0          24h
kube-system   coredns-f9fd979d6-t8xzj                    1/1     Running   0          24h
kube-system   etcd-m1                                    1/1     Running   1          24h
kube-system   kube-apiserver-m1                          1/1     Running   2          24h
kube-system   kube-controller-manager-m1                 1/1     Running   2          24h
kube-system   kube-proxy-22h6p                           1/1     Running   0          9m43s
kube-system   kube-proxy-khskm                           1/1     Running   0          60m
kube-system   kube-proxy-pkrgm                           1/1     Running   0          60m
kube-system   kube-proxy-rjgnw                           1/1     Running   1          24h
kube-system   kube-proxy-t4pxl                           1/1     Running   0          117s
kube-system   kube-scheduler-m1                          1/1     Running   2          24h
[root@m1 ~]#
On the m1 node, create a nginx-ds.yml configuration file with the following content:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then create the nginx DaemonSet and Service:
[root@m1 ~]# kubectl create -f nginx-ds.yml
service/nginx-ds created
daemonset.apps/nginx-ds created
[root@m1 ~]#
After a short wait, check that the Pods are running normally:
[root@m1 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP               NODE   NOMINATED NODE   READINESS GATES
nginx-ds-6nnpm   1/1     Running   0          2m32s   172.22.152.193   s1     <none>           <none>
nginx-ds-bvpqj   1/1     Running   0          2m32s   172.22.78.129    s2     <none>           <none>
[root@m1 ~]#
On each node, try to ping the Pod IPs:
[root@s1 ~]# ping 172.22.152.193
PING 172.22.152.193 (172.22.152.193) 56(84) bytes of data.
64 bytes from 172.22.152.193: icmp_seq=1 ttl=63 time=0.269 ms
64 bytes from 172.22.152.193: icmp_seq=2 ttl=63 time=0.240 ms
64 bytes from 172.22.152.193: icmp_seq=3 ttl=63 time=0.228 ms
64 bytes from 172.22.152.193: icmp_seq=4 ttl=63 time=0.229 ms
^C
--- 172.22.152.193 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.228/0.241/0.269/0.022 ms
[root@s1 ~]#
Then check the Service status:
[root@m1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        2d1h
nginx-ds     NodePort    10.105.139.228   <none>        80:31145/TCP   3m21s
[root@m1 ~]#
Try accessing the Service from each node; if it responds normally, the Service IP is reachable as well:
[root@m1 ~]# curl 10.105.139.228:80 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> [root@m1 ~]#
Then check NodePort accessibility from each node; nginx-ds was assigned NodePort 31145. If a request like the following succeeds, the NodePort is working too:
[root@m3 ~]# curl 192.168.243.140:31145 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> [root@m3 ~]#
We need an Nginx Pod for this test. First define a pod-nginx.yaml configuration file with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
Then create the Pod from that configuration:
[root@m1 ~]# kubectl create -f pod-nginx.yaml
pod/nginx created
[root@m1 ~]#
Enter the Pod with the following command:
[root@m1 ~]# kubectl exec nginx -i -t -- /bin/bash
Check the DNS configuration:
root@nginx:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local localdomain
options ndots:5
root@nginx:/#
Next, test whether Service names resolve correctly. If the name nginx-ds resolves to its IP 10.105.139.228 as shown below, DNS is working (an alternative test is sketched after the output):
root@nginx:/# ping nginx-ds
PING nginx-ds.default.svc.cluster.local (10.105.139.228): 48 data bytes
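Another way to exercise cluster DNS, without relying on an existing Pod, is to run a throwaway busybox Pod and resolve the Service name from it (busybox:1.28 is commonly used here because nslookup is broken in some newer busybox images); a sketch:

```bash
# One-off DNS check: resolve the nginx-ds Service from a temporary Pod
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup nginx-ds
```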
On the m1 node, run the following command to shut it down:
[root@m1 ~]# init 0
Then check whether the virtual IP successfully failed over to the m2 node:
[root@m2 ~]# ip a |grep 192.168.243.100
    inet 192.168.243.100/32 scope global ens32
[root@m2 ~]#
Next, test whether you can still use kubectl to interact with the cluster from m2 or m3; if so, the cluster has a reasonable degree of high availability:
[root@m2 ~]# kubectl get nodes
NAME   STATUS     ROLES    AGE   VERSION
m1     NotReady   master   3d    v1.19.0
m2     Ready      master   16m   v1.19.0
m3     Ready      master   13m   v1.19.0
s1     Ready      <none>   2d    v1.19.0
s2     Ready      <none>   47h   v1.19.0
[root@m2 ~]#
The dashboard is a web UI provided for Kubernetes that simplifies operating and managing the cluster: in the UI you can conveniently inspect all kinds of information, work with resources such as Pods and Services, and create new resources. The dashboard project lives at https://github.com/kubernetes/dashboard.
Deploying the dashboard is also fairly simple. First define a dashboard-all.yaml configuration file with the following content:
apiVersion: v1 kind: Namespace metadata: name: kubernetes-dashboard --- apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard --- kind: Service apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard spec: ports: - port: 443 targetPort: 8443 nodePort: 30005 type: NodePort selector: k8s-app: kubernetes-dashboard --- apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-certs namespace: kubernetes-dashboard type: Opaque --- apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-csrf namespace: kubernetes-dashboard type: Opaque data: csrf: "" --- apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-key-holder namespace: kubernetes-dashboard type: Opaque --- kind: ConfigMap apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-settings namespace: kubernetes-dashboard --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard rules: # Allow Dashboard to get, update and delete Dashboard exclusive secrets. - apiGroups: [""] resources: ["secrets"] resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"] verbs: ["get", "update", "delete"] # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. - apiGroups: [""] resources: ["configmaps"] resourceNames: ["kubernetes-dashboard-settings"] verbs: ["get", "update"] # Allow Dashboard to get metrics. - apiGroups: [""] resources: ["services"] resourceNames: ["heapster", "dashboard-metrics-scraper"] verbs: ["proxy"] - apiGroups: [""] resources: ["services/proxy"] resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"] verbs: ["get"] --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard rules: # Allow Metrics Scraper to get metrics from the Metrics server - apiGroups: ["metrics.k8s.io"] resources: ["pods", "nodes"] verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubernetes-dashboard subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: kubernetes-dashboard subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard --- kind: Deployment apiVersion: apps/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard spec: containers: - name: kubernetes-dashboard image: kubernetesui/dashboard:v2.0.3 imagePullPolicy: Always ports: - containerPort: 8443 protocol: TCP args: - --auto-generate-certificates - --namespace=kubernetes-dashboard # Uncomment 
the following line to manually specify Kubernetes API server Host # If not specified, Dashboard will attempt to auto discover the API server and connect # to it. Uncomment only if the default does not work. # - --apiserver-host=http://my-address:port volumeMounts: - name: kubernetes-dashboard-certs mountPath: /certs # Create on-disk volume to store exec logs - mountPath: /tmp name: tmp-volume livenessProbe: httpGet: scheme: HTTPS path: / port: 8443 initialDelaySeconds: 30 timeoutSeconds: 30 securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 2001 volumes: - name: kubernetes-dashboard-certs secret: secretName: kubernetes-dashboard-certs - name: tmp-volume emptyDir: {} serviceAccountName: kubernetes-dashboard nodeSelector: "kubernetes.io/os": linux # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule --- kind: Service apiVersion: v1 metadata: labels: k8s-app: dashboard-metrics-scraper name: dashboard-metrics-scraper namespace: kubernetes-dashboard spec: ports: - port: 8000 targetPort: 8000 selector: k8s-app: dashboard-metrics-scraper --- kind: Deployment apiVersion: apps/v1 metadata: labels: k8s-app: dashboard-metrics-scraper name: dashboard-metrics-scraper namespace: kubernetes-dashboard spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: dashboard-metrics-scraper template: metadata: labels: k8s-app: dashboard-metrics-scraper annotations: seccomp.security.alpha.kubernetes.io/pod: 'runtime/default' spec: containers: - name: dashboard-metrics-scraper image: kubernetesui/metrics-scraper:v1.0.4 ports: - containerPort: 8000 protocol: TCP livenessProbe: httpGet: scheme: HTTP path: / port: 8000 initialDelaySeconds: 30 timeoutSeconds: 30 volumeMounts: - mountPath: /tmp name: tmp-volume securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 2001 serviceAccountName: kubernetes-dashboard nodeSelector: "kubernetes.io/os": linux # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule volumes: - name: tmp-volume emptyDir: {}
Create the dashboard resources:
[root@m1 ~]# kubectl create -f dashboard-all.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@m1 ~]#
Check the Deployment status:
[root@m1 ~]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           29s
[root@m1 ~]#
Check the dashboard Pods:
[root@m1 ~]# kubectl --namespace kubernetes-dashboard get pods -o wide |grep dashboard
dashboard-metrics-scraper-7b59f7d4df-q4jqj   1/1     Running   0          5m27s   172.22.152.198   s1   <none>   <none>
kubernetes-dashboard-5dbf55bd9d-nqvjz        1/1     Running   0          5m27s   172.22.202.17    m1   <none>   <none>
[root@m1 ~]#
Check the dashboard Service:
[root@m1 ~]# kubectl get services kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.104.217.178   <none>        443:30005/TCP   5m57s
[root@m1 ~]#
Check that port 30005 is being listened on:
[root@m1 ~]# netstat -ntlp |grep 30005
tcp        0      0 0.0.0.0:30005       0.0.0.0:*       LISTEN      4085/kube-proxy
[root@m1 ~]#
For cluster security, since version 1.7 the dashboard only allows access over HTTPS. Since we expose the service via NodePort, it can be reached at https://NodeIP:NodePort. For example, with curl:
[root@m1 ~]# curl https://192.168.243.138:30005 -k <!-- Copyright 2017 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Kubernetes Dashboard</title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png" /> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="styles.988f26601cdcb14da469.css"></head> <body> <kd-root></kd-root> <script src="runtime.ddfec48137b0abfd678a.js" defer></script><script src="polyfills-es5.d57fe778f4588e63cc5c.js" nomodule defer></script><script src="polyfills.49104fe38e0ae7955ebb.js" defer></script><script src="scripts.391d299173602e261418.js" defer></script><script src="main.b94e335c0d02b12e3a7b.js" defer></script></body> </html> [root@m1 ~]#
The -k flag tells curl to skip certificate verification for the HTTPS request.

About custom certificates: by default the dashboard's certificate is auto-generated and therefore not trusted. If you have a domain name and a proper certificate for it, you can replace the generated one and access the dashboard securely via that domain.
To do so, add startup arguments to the dashboard container in dashboard-all.yaml that point at the certificate files, which are injected via a Secret (one way to create that Secret is sketched after the list):
- --tls-cert-file
- dashboard.cer
- --tls-key-file
- dashboard.key
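The certificate and key referenced by these arguments have to end up inside the kubernetes-dashboard-certs Secret that the Deployment already mounts. One way to do that, assuming your files are named dashboard.cer and dashboard.key as above, is to recreate the Secret from them (a sketch):

```bash
# Replace the auto-created (empty) certs Secret with one built from your own files
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
kubectl create secret generic kubernetes-dashboard-certs \
  --from-file=dashboard.cer --from-file=dashboard.key \
  -n kubernetes-dashboard
```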
The dashboard only supports token authentication by default, so even if you use a kubeconfig file, a token must be specified in it. Here we log in with a token directly.
First create a service account:
[root@m1 ~]# kubectl create sa dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@m1 ~]#
Create the cluster role binding:
[root@m1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@m1 ~]#
Look up the name of the dashboard-admin secret:
[root@m1 ~]# kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}'
dashboard-admin-token-ph7h2
[root@m1 ~]#
Print the secret's token:
[root@m1 ~]# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}') [root@m1 ~]# kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}' eyJhbGciOiJSUzI1NiIsImtpZCI6IkVnaDRYQXgySkFDOGdDMnhXYXJWbkY2WVczSDVKeVJRaE5vQ0ozOG5PanMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcGg3aDIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjA1ZWY3OTAtOWY3OC00NDQzLTgwMDgtOWRiMjU1MjU0MThkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.xAO3njShhTRkgNdq45nO7XNy242f8XVs-W4WBMui-Ts6ahdZECoNegvWjLDCEamB0UW72JeG67f2yjcWohANwfDCHobRYPkOhzrVghkdULbrCCGai_fe60Svwf_apSmlKP3UUdu16M4GxopaTlINZpJY_z5KJ4kLq66Y1rjAA6j9TI4Ue4EazJKKv0dciv6NsP28l7-nvUmhj93QZpKqY3PQ7vvcPXk_sB-jjSSNJ5ObWuGeDBGHgQMRI4F1XTWXJBYClIucsbu6MzDA8yop9S7Ci8D00QSa0u3M_rqw-3UHtSxQee41uVVjIASfnCEVayKDIbJzG3gc2AjqGqJhkQ [root@m1 ~]#
With the token in hand, open https://192.168.243.138:30005 in a browser. Because the dashboard uses a self-signed certificate, the browser will show a warning; ignore it and click "Advanced" -> "Proceed" to continue:
Then enter the token:
After logging in successfully, the home page looks like this:
There is not much more to say about the web UI itself, so it is not covered further here; feel free to explore it on your own.