Building a Kubernetes 1.13.2 Cluster with kubeadm

kubeadm is the official deployment tool. It is designed to lower the barrier to entry for Kubernetes and make cluster deployment more convenient. More and more of the official documentation also assumes a containerized Kubernetes deployment, so deploying Kubernetes itself in containers has become the trend.

1. Environment preparation

1. Basic environment

Kubernetes cluster environment:

Hostname     OS               Container runtime       Kubernetes version   IP address     Role     CPU cores
k8s-master   CentOS Linux 7   Docker version 1.13.1   v1.13.2              192.168.10.5   master   4
k8s-node-1   CentOS Linux 7   Docker version 1.13.1   v1.13.2              192.168.10.8   node     8
k8s-node-2   CentOS Linux 7   Docker version 1.13.1   v1.13.2              192.168.10.9   node     8
2. Initialize the environment

2.1 Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld && setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
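Disabling firewalld outright is the quickest route for a lab cluster. If the firewall must stay on, a sketch of the openings kubeadm needs instead (port list as documented for kubeadm; the flannel VXLAN port is an assumption based on the vxlan backend chosen later):

# Master (control plane):
firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd client API
firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, controller-manager, scheduler
# Workers:
firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
# All nodes (flannel vxlan backend):
firewall-cmd --permanent --add-port=8472/udp          # flannel VXLAN overlay
firewall-cmd --reload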


2.2 Disable swap

swapoff -a && sysctl -w vm.swappiness=0
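swapoff -a only affects the running system; to keep swap disabled after a reboot, the swap entry in /etc/fstab must be commented out as well (a small sketch, assuming the standard CentOS 7 fstab layout):

sed -i '/\sswap\s/ s/^/#/' /etc/fstab    # comment out any swap mount so it is not re-enabled at boot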

2.3 Set the hostnames and update the hosts file

Run the matching command on each machine:

hostnamectl --static set-hostname k8s-master    # on 192.168.10.5
hostnamectl --static set-hostname k8s-node-1    # on 192.168.10.8
hostnamectl --static set-hostname k8s-node-2    # on 192.168.10.9

Then add all three entries to /etc/hosts (vim /etc/hosts):

192.168.10.5 k8s-master
192.168.10.8 k8s-node-1
192.168.10.9 k8s-node-2
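The same three mappings are needed on every machine. With root SSH access between the hosts (an assumption about this environment), the edited file can be pushed out from the master in one go:

for h in k8s-node-1 k8s-node-2; do
  scp /etc/hosts $h:/etc/hosts    # copy the shared hosts file to each node
done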

2.4 Install Docker (all nodes)

yum -y install docker && systemctl start docker
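The node-join step later warns "docker service is not enabled", so it is worth enabling the unit now rather than fixing the warning afterwards:

systemctl enable docker    # silences the kubeadm preflight warning "docker service is not enabled"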

2.四、kubernetes repo源(全部節點)

cat>>/etc/yum.repos.d/kubrenetes.repo<<EOF [kubernetes] name=Kubernetes Repo baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF

2.6 Install kubelet, kubeadm and kubectl (all nodes)

yum install -y kubelet kubeadm kubectl
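Two notes on this command. The repo is unpinned, so yum installs whatever version is current; if the mirror has moved past the 1.13.2 used in this walkthrough, the packages can be pinned explicitly (assuming those package versions are still available in the mirror). Enabling kubelet also avoids the preflight warning shown during init; the service will restart-loop until kubeadm writes its configuration, which is expected:

yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2
systemctl enable kubelet    # preflight otherwise warns "kubelet service is not enabled"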

Verify the installed versions:

[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

(The connection error is expected at this point: the cluster has not been initialized yet, so there is no server to report a version.)

[root@k8s-master ~]# kube
kubeadm  kubectl  kubelet
[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:33:30Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}

2. Installing the master node

1. Download the base cluster-component images (needed on both the master and the nodes):

[root@k8s-master yaml]# cat docker_pull.sh
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.2
docker pull mirrorgooglecontainers/kube-proxy:v1.13.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull docker.io/dockerofwj/flannel
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.2 k8s.gcr.io/kube-apiserver:v1.13.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.2 k8s.gcr.io/kube-controller-manager:v1.13.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.2 k8s.gcr.io/kube-scheduler:v1.13.2
docker tag mirrorgooglecontainers/kube-proxy:v1.13.2 k8s.gcr.io/kube-proxy:v1.13.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag docker.io/dockerofwj/flannel quay.io/coreos/flannel:v0.10.0-amd64
docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6
docker rmi docker.io/dockerofwj/flannel

[root@k8s-node-2 ~]# docker images
REPOSITORY                                        TAG             IMAGE ID       CREATED                  SIZE
k8s.gcr.io/kube-apiserver                         v1.13.2         177db4b8e93a   Less than a second ago   181 MB
k8s.gcr.io/kube-proxy                             v1.13.2         01cfa56edcfc   Less than a second ago   80.3 MB
docker.io/mirrorgooglecontainers/kube-scheduler   v1.13.2         3193be46e0b3   Less than a second ago   79.6 MB
k8s.gcr.io/kube-scheduler                         v1.13.2         3193be46e0b3   Less than a second ago   79.6 MB
k8s.gcr.io/coredns                                1.2.6           f59dcacceff4   2 months ago             40 MB
k8s.gcr.io/etcd                                   3.2.24          3cab8e1b9802   3 months ago             220 MB
quay.io/coreos/flannel                            v0.10.0-amd64   17ccf3fc30e3   4 months ago             44.6 MB
k8s.gcr.io/pause                                  3.1             da86e6ba6ca1   12 months ago            742 kB

2. Run the initialization command on the master:

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.13.2 --apiserver-advertise-address 192.168.10.5 --pod-network-cidr=10.10.0.0/16
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.5]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.10.5 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.10.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.510036 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lwduhf.b20k2ahvgs9akc3o
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.10.5:6443 --token lwduhf.b20k2ahvgs9akc3o --discovery-token-ca-cert-hash sha256:19b470c59e24b46b86372e7b798f0c0fd058169a78b904d841fe8d54115c5e16

This is the command that joins a node to the master's cluster; the node also needs the kube-proxy and flannel images in advance:

kubeadm join 192.168.10.5:6443 --token lwduhf.b20k2ahvgs9akc3o --discovery-token-ca-cert-hash sha256:19b470c59e24b46b86372e7b798f0c0fd058169a78b904d841fe8d54115c5e16
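Bootstrap tokens expire after 24 hours by default, so a node joining later may find the command above rejected. A fresh join command can be printed on the master at any time:

kubeadm token create --print-join-command    # emits a new "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line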

3. Configure kubectl

Run the following as root on the master to configure kubectl:

[root@k8s-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@k8s-master ~]# source /etc/profile
[root@k8s-master ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf

4. Install the Pod network

A Pod network is the prerequisite for Pods to communicate with one another. Kubernetes supports many networking solutions; here we stick with the classic flannel.

4.1 Set the required kernel parameter

[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
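Like swapoff earlier, this sysctl call is not persistent. A sketch of making it survive reboots with a drop-in file (the file name is arbitrary):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system    # reload all sysctl configuration files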

4.2 Apply the flannel manifest (note that the Network value in net-conf.json below, 10.10.0.0/16, matches the --pod-network-cidr passed to kubeadm init):

[root@k8s-master script]# kubectl apply -f kube-flannel.yaml
[root@k8s-master script]# cat kube-flannel.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
# The manifest continues with four more DaemonSets -- kube-flannel-ds-arm64,
# kube-flannel-ds-arm, kube-flannel-ds-ppc64le and kube-flannel-ds-s390x --
# that are identical to the amd64 one above except for the nodeSelector value
# (beta.kubernetes.io/arch) and the matching image tag (v0.10.0-<arch>).

4.3 List the running Pods and check that the CoreDNS Pods have come up:

[root@k8s-master script]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-dwbnz             1/1     Running   0          28m     10.10.0.2      k8s-master   <none>           <none>
kube-system   coredns-86c58d9df4-xqjz2             1/1     Running   0          28m     10.10.0.3      k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          27m     192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          27m     192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          27m     192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-dvggv          1/1     Running   0          4m17s   192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-proxy-sj6sm                     1/1     Running   0          28m     192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          27m     192.168.10.5   k8s-master   <none>           <none>
Check the master node status:

[root@k8s-master script]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   28m   v1.13.2

3. Joining the nodes to the cluster

1. Images the nodes need to download

[root@k8s-node-1 ~]# cat docker_pull.sh
docker pull mirrorgooglecontainers/kube-proxy:v1.13.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull docker.io/dockerofwj/flannel
docker tag mirrorgooglecontainers/kube-proxy:v1.13.2 k8s.gcr.io/kube-proxy:v1.13.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/dockerofwj/flannel quay.io/coreos/flannel:v0.10.0-amd64
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi docker.io/dockerofwj/flannel

2. Run the following on each of the two slave nodes to join them to the cluster that is now ready on the master:

[root@k8s-node-2 ~]# kubeadm join 192.168.10.5:6443 --token lwduhf.b20k2ahvgs9akc3o --discovery-token-ca-cert-hash sha256:19b470c59e24b46b86372e7b798f0c0fd058169a78b904d841fe8d54115c5e16
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "192.168.10.5:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.5:6443"
[discovery] Requesting info from "https://192.168.10.5:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.5:6443"
[discovery] Successfully established connection with API Server "192.168.10.5:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node-2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

View the tokens:

[root@k8s-master script]# kubeadm token list
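kubeadm token list shows the tokens but not the CA hash. If the --discovery-token-ca-cert-hash value needs to be reconstructed by hand, it is the SHA-256 digest of the cluster CA's public key, derived on the master like this:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'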

Check that the nodes have joined:

[root@k8s-master script]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   40m    v1.13.2
k8s-node-1   Ready    <none>   15s    v1.13.2
k8s-node-2   Ready    <none>   116s   v1.13.2

Check the Pod status:

[root@k8s-master script]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-dwbnz             1/1     Running   0          43m     10.10.0.2      k8s-master   <none>           <none>
kube-system   coredns-86c58d9df4-xqjz2             1/1     Running   0          43m     10.10.0.3      k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          42m     192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          42m     192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          42m     192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-dvggv          1/1     Running   0          19m     192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-gxmnk          1/1     Running   0          3m18s   192.168.10.8   k8s-node-1   <none>           <none>
kube-system   kube-flannel-ds-amd64-rbpmq          1/1     Running   0          4m59s   192.168.10.9   k8s-node-2   <none>           <none>
kube-system   kube-proxy-rblt6                     1/1     Running   0          3m18s   192.168.10.8   k8s-node-1   <none>           <none>
kube-system   kube-proxy-sj6sm                     1/1     Running   0          43m     192.168.10.5   k8s-master   <none>           <none>
kube-system   kube-proxy-tx2p9                     1/1     Running   0          4m59s   192.168.10.9   k8s-node-2   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          42m     192.168.10.5   k8s-master   <none>           <none>

4. Installing the dashboard

Kubernetes Dashboard is a project that aims to bring a general-purpose, web-based monitoring and operations UI to Kubernetes.

1. Download the Kubernetes Dashboard image:

[root@k8s-node-2 ~]# docker pull docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3

[root@k8s-node-1 ~]# docker images
REPOSITORY                                                    TAG             IMAGE ID       CREATED                  SIZE
k8s.gcr.io/kube-proxy                                         v1.13.2         01cfa56edcfc   Less than a second ago   80.3 MB
quay.io/coreos/flannel                                        v0.10.0-amd64   17ccf3fc30e3   4 months ago             44.6 MB
docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64   v1.8.3          0c60bcf89900   11 months ago            102 MB
k8s.gcr.io/pause                                              3.1             da86e6ba6ca1   12 months ago            742 kB

Create the certificate directory:

[root@k8s-node-1 ~]# mkdir -p /var/share/certs
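The dashboard manifest below mounts /var/share/certs as a hostPath volume, so the directory (and, later, the certificate files) must exist on every node the Dashboard Pod could be scheduled to, not just one. With SSH access between the hosts (an assumption), it can be created everywhere at once:

for h in k8s-master k8s-node-1 k8s-node-2; do
  ssh $h "mkdir -p /var/share/certs"    # hostPath used by the kubernetes-dashboard-certs volume
done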

2. Install the dashboard (apply on the master):

[root@k8s-master script]# cat kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          - --token-ttl=5400
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        hostPath:
          path: /var/share/certs
          type: Directory
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31234
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort

[root@k8s-master script]# kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Check that the service is running:

[root@k8s-master script]# kubectl get pods --namespace=kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-dwbnz                1/1     Running   0          89m     10.10.0.2      k8s-master   <none>           <none>
coredns-86c58d9df4-xqjz2                1/1     Running   0          89m     10.10.0.3      k8s-master   <none>           <none>
etcd-k8s-master                         1/1     Running   0          88m     192.168.10.5   k8s-master   <none>           <none>
kube-apiserver-k8s-master               1/1     Running   0          88m     192.168.10.5   k8s-master   <none>           <none>
kube-controller-manager-k8s-master      1/1     Running   0          88m     192.168.10.5   k8s-master   <none>           <none>
kube-flannel-ds-amd64-dvggv             1/1     Running   0          65m     192.168.10.5   k8s-master   <none>           <none>
kube-flannel-ds-amd64-gxmnk             1/1     Running   2          49m     192.168.10.8   k8s-node-1   <none>           <none>
kube-flannel-ds-amd64-rbpmq             1/1     Running   0          50m     192.168.10.9   k8s-node-2   <none>           <none>
kube-proxy-rblt6                        1/1     Running   2          49m     192.168.10.8   k8s-node-1   <none>           <none>
kube-proxy-sj6sm                        1/1     Running   0          89m     192.168.10.5   k8s-master   <none>           <none>
kube-proxy-tx2p9                        1/1     Running   0          50m     192.168.10.9   k8s-node-2   <none>           <none>
kube-scheduler-k8s-master               1/1     Running   0          88m     192.168.10.5   k8s-master   <none>           <none>
kubernetes-dashboard-7d9958fcfd-b6z9k   1/1     Running   0          2m43s   10.10.1.2      k8s-node-2   <none>           <none>
Check the port mapping of the kubernetes-dashboard Service:

[root@k8s-master script]# kubectl get service --namespace=kube-system -o wide
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE     SELECTOR
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   90m     k8s-app=kube-dns
kubernetes-dashboard   NodePort    10.105.177.107   <none>        443:31234/TCP   3m27s   k8s-app=kubernetes-dashboard
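A quick reachability check of the NodePort from any machine (-k is needed because the certificate is self-signed):

curl -k https://192.168.10.5:31234/    # any node IP works; a NodePort is opened on every node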

3. Generate a private key and certificate signing request:

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr    # if prompted for input, just press Enter through every prompt

4. Generate the SSL certificate:

openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
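Steps 3 and 4 can also be collapsed into a single self-signed certificate command (an equivalent sketch; the CN value here is arbitrary):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout dashboard.key -out dashboard.crt \
  -subj "/CN=kubernetes-dashboard"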

Then place the generated dashboard.key and dashboard.crt under /var/share/certs:

[root@k8s-master script]# scp dashboard.key dashboard.crt /var/share/certs/

This is the hostPath configured for the kubernetes-dashboard-certs volume in the kubernetes-dashboard.yaml applied above, so the two files must end up on every node the Dashboard Pod can be scheduled to.

5. Apply the dashboard-user-role.yaml configuration:

[root@k8s-master script]# cat dashboard-user-role.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

[root@k8s-master script]# kubectl create -f dashboard-user-role.yaml
clusterrolebinding.rbac.authorization.k8s.io/admin created
serviceaccount/admin created
View the token:

[root@k8s-master script]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name:         admin-token-kcz8s
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: b72b94d9-0b9e-11e9-819f-000c2953a750

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1rY3o4cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI3MmI5NGQ5LTBiOWUtMTFlOS04MTlmLTAwMGMyOTUzYTc1MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.LwNkLys00I3YLH2gXzvCerFN5C2nwnw-w1I6VHNDjByLucG6BXdiKTKGwIJGfznsEhp54SJ6_pgn5WGjhq-hL5Kp9fSQU4HT1DFS5VIjnwv3yBe4JfuBLEpF0AiSyefLgX5oRwiAogrXYMLNlYD4aaPThgfLjRuexUUQJKBaoVs5MikI24kgHv1wA0wFFqZUTxM6KGFhc7JmvyVyLLjDu8SQ2AXCOMefOoV-GKQ3ZVwNsVtricjnZBPX__5AbLBbdXS0KE1R32uSM_M-BgV5W4X0WNQ_vollVRRevX0i-JvXOJunHa1eB6uZXM_X_t5xv_DA8-QAt644wjHejj2LMQ
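To extract only the bearer token, ready for pasting into the login screen, a jsonpath variant of the same lookup works (assuming the secret name still starts with admin-token):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | awk '/^admin-token/{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d; echo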


6. Log in to the web UI from a browser:

Open https://192.168.10.5:31234 (31234 is the NodePort exposed by the kubernetes-dashboard Service; any node's IP works), accept the self-signed certificate warning, choose the Token sign-in option, and paste in the token copied above.

Original article: https://www.cnblogs.com/51wansheng/p/10298349.html
